Pre-screened and vetted.
Mid-level Data Scientist & AI Engineer specializing in NLP, computer vision, and MLOps
Mid-level Data Engineer specializing in cloud data pipelines and streaming analytics
Senior Backend Python Engineer specializing in cloud-native APIs and data platforms
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Mid-level Software Engineer specializing in FinTech and scalable backend systems
Mid-level AI/ML Engineer specializing in financial risk, NLP, and MLOps
Mid-level AI/ML Engineer specializing in NLP, computer vision, and recommender systems
“Built and deployed a production NLP sentiment analysis system at Piper Sandler to turn noisy, finance-specific customer feedback into actionable insights at scale. Demonstrates strong end-to-end MLOps: fine-tuning BERT, improving label quality, monitoring for language drift, and automating retraining/deployment with Airflow and Docker (plus Kubeflow exposure).”
Senior Machine Learning Engineer specializing in LLMs, RAG, and Computer Vision
“Built a production LLM-powered clinical note summarization and retrieval system that structures patient/provider/payer discussions into standardized outputs (symptoms, treatments, clinical codes, and prior-auth decisions) and stores notes as embeddings for hybrid search and proactive prior-authorization prediction. Experienced with LangChain/LangGraph orchestration, RAG, and grounding against medical code databases; has communicated model feasibility and limitations to business stakeholders (Virtusa/Comcast).”
Mid-level Data Scientist specializing in industrial IoT, predictive analytics, and generative AI
“ML/NLP engineer with Industrial IoT experience who built an end-to-end anomaly detection and GenAI explanation system: AWS pipelines (S3, EC2/Lambda) with PySpark feeding dashboards, plus transformer-embedding vector search to connect anomalies to noisy maintenance notes and past events. Demonstrated measurable impact (15% lift in defect detection; ~35% reduction in manual review; 35% fewer preprocessing errors) and strong productionization practices (orchestration, monitoring, rollback, data-quality controls).”
Senior Software Engineer specializing in enterprise platforms and data engineering
“Backend/data platform engineer who owned an enterprise Django REST + PostgreSQL reporting backend and built Python ETL pipelines to normalize 3M+ legacy customer records, improving data reliability by 85%. Strong Kubernetes/GitOps practitioner (Helm, ArgoCD, Jenkins/GitHub Actions) with real-world production debugging experience, plus Kafka streaming at 5M events/day and a zero-downtime monolith-to-event-driven microservices migration on AWS that cut infra costs by 42%.”
Mid-level Data Scientist specializing in GenAI, RAG, and forecasting
“ML/NLP engineer focused on large-scale data linking for e-commerce-style catalogs and customer records, combining transformer embeddings (BERT/Sentence-BERT), NER, and FAISS-based vector search. Has delivered measurable lifts (e.g., +30% matching accuracy, Precision@10 62%→84%) and built production-grade, scalable pipelines in Airflow/PySpark with strong data quality and schema-drift handling.”
Mid-level Data Scientist specializing in credit risk, fraud detection, and ESG analytics
“AI/LLM practitioner who has deployed production chatbots across e-commerce, HRMS, and real estate, focusing on retrieval-first workflows for factual tasks like product and property search. Optimized intent understanding and significantly improved latency by using lightweight embeddings and tuning the inference pipeline on Groq (Llama 3.3), while applying modular orchestration and measurable production evaluation.”
Mid-level GenAI Engineer specializing in RAG systems and AI agents
“LLM/agentic systems builder who has deployed production solutions for a resource management firm, using an MCP-driven architecture with Neo4j + Elasticsearch and a ChatGPT frontend to generate candidate/company “SmartPacks” and answer entity Q&A. Also built a LangGraph/LangSmith-orchestrated multi-agent workflow that automates data-infra change requests end-to-end (impact analysis, SQL + tests, and PR creation), and delivered a ~60% latency reduction through TTL-based context caching while improving accuracy via a business data dictionary.”
Junior Data Engineer specializing in data pipelines and streaming ingestion
“Backend/data platform engineer who owned a near-real-time patient feedback ingestion system, building a FastAPI + Kafka service with Snowflake/Airflow orchestration. Demonstrates strong production Kubernetes/GitOps practices on AWS EKS (Helm, Argo CD, Sealed Secrets) and solved real-time data integrity issues via idempotent processing with Redis.”
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time ~40% while reducing config drift. Also supported a legacy on-prem data warehouse/backend migration to GCP using phased migration and parallel validation to meet strict reliability/SLA needs.”
Junior AI/ML Engineer specializing in LLM agents and RAG systems
“Built and deployed a production, multi-tenant modular agentic AI platform at Easybee AI, using LangChain/LangGraph with Redis-backed durable state to make agents reusable, traceable, and auditable. Emphasizes reliability via strict tool schemas, deterministic controllers, tenant-level policy enforcement, and regression testing derived from real production failures; also delivered AI automation for legal/finance workflows (attorney draw and expense automation) with explainable, deterministic payouts.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics engineering
“Built and deployed a production LLM-powered demand and churn forecasting system for an e-commerce client, combining open-source LLMs (LLaMA/Mistral) and Sentence-BERT embeddings to generate business-friendly explanations of forecast drivers. Strong focus on data quality and model trust (validation, baselines, segmented monitoring) and production reliability via Airflow-orchestrated pipelines with readiness checks, retries, and ongoing drift monitoring and A/B testing.”
Mid-level Data Scientist specializing in Generative AI, RAG systems, and MLOps
Mid-level MLOps/ML Engineer specializing in LLMs and financial risk modeling
Mid-level Data Scientist specializing in ML, data engineering, and real-time analytics
Junior Cloud Software Engineer specializing in AWS serverless and data platforms