Pre-screened and vetted.
Mid-level Machine Learning Engineer specializing in NLP, LLMs, and MLOps
“Built and deployed an LLM-powered financial/regulatory document analysis platform at State Street, combining fine-tuned transformer models with a RAG pipeline over internal knowledge bases. Owned the productionization stack (FastAPI, Docker, SageMaker, Terraform, CI/CD) plus monitoring for drift/latency/hallucinations, delivering ~40% faster analyst review and improving reliability through chunking, embeddings, and grounding.”
Mid-level AI/ML Engineer specializing in Generative AI and data engineering
“IBM engineer who built and deployed a production RAG-based LLM assistant using LangChain/FAISS with a fine-tuned LLaMA model, served via FastAPI microservices on Kubernetes, achieving 99%+ uptime. Demonstrates strong practical expertise in reducing hallucinations (semantic chunking + metadata-driven retrieval) and managing latency, plus mature MLOps practices (Airflow/dbt pipelines, MLflow tracking, monitoring, A/B and shadow deployments) and effective collaboration with non-technical stakeholders.”
Mid-level AI/ML Engineer specializing in Generative AI and production ML systems
“At CVS Health, the candidate productionized a RAG-based LLM solution in a regulated healthcare setting, emphasizing reliable data pipelines, LoRA fine-tuning, monitoring, safety guardrails, and A/B testing. They have hands-on experience troubleshooting real-time RAG failures (e.g., chunking/embedding issues) and regularly lead developer-focused demos/workshops while translating technical architecture into business value for stakeholders.”
Mid-level Data Engineer specializing in cloud ETL/ELT and lakehouse architecture
“Data engineer focused on sales/marketing analytics pipelines, owning ingestion from CRMs/ad platforms through warehouse serving and dashboards at a scale of hundreds of thousands of records/day. Built reliability-focused systems including dbt/SQL/Python data quality gates with alerting, a resilient web-scraping pipeline (retries/backoff, anti-bot tactics, schema-change detection, backfills), and a versioned internal REST API with caching and strong developer usability.”
Intern Data Scientist specializing in ML engineering and LLM agentic workflows
“Built an agentic, multi-step LLM system that generates full-stack code for API integrations using LangChain orchestration, Pinecone/SentenceBERT RAG, and a human-in-the-loop feedback loop for iterative code refinement. Also collaborated with non-technical content writers and PMs during a Contentstack internship to deliver a Slack-based AI workflow that generates and brand-checks articles with one-click approvals.”
Mid-level AI/ML & Full-Stack Engineer specializing in LLM agents and medical RAG systems
“Full-stack engineer at an early-stage startup building an agentic AI application for enterprise systems, combining customer-facing Next.js/React UI work (30% faster load times) with backend/workflow orchestration using FastAPI + n8n, Redis, and RabbitMQ. Previously at Deloitte USI, built BDD Selenium/Java automation and managed 200+ defects end-to-end using JIRA/JAMA to support on-time production releases.”
Mid-level AI/ML Engineer specializing in GenAI and financial risk & compliance analytics
“Built and deployed a production LLM-powered financial risk and compliance platform to reduce manual trade exception handling and speed up insights from regulatory documents. Implemented a LangChain multi-agent workflow with structured/unstructured data integration (Redshift + vector DB) and emphasized hallucination reduction for regulatory safety using Amazon Bedrock. Strong MLOps/orchestration background across Kubernetes, Airflow, Jenkins, and monitoring/testing with MLflow, Evidently AI, and PyTest.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and time-series forecasting
“ML/AI engineer with hands-on ownership of production recommendation and RAG systems at Northern Trust. They combine transformer modeling, latency optimization, cloud deployment, and monitoring with measurable business impact, including 14% accuracy gains, 12% engagement improvement, and 19% better query relevance.”
Senior AI/ML Engineer specializing in production ML, LLMs, and MLOps
“Senior AI/ML engineer focused on production ML, LLMs, and MLOps, with concrete experience shipping fraud detection and enterprise RAG systems. They combine strong deployment and monitoring discipline with measurable business impact, including 31% precision improvement in fraud detection and 37% better answer relevance in a financial-document QA system.”
Senior AI/ML Engineer specializing in Generative AI, NLP, and regulated industries
“Built end-to-end ML and GenAI systems at Northern Trust, including a production RAG-based document intelligence platform for financial reports and contracts. Stands out for combining strong MLOps execution with practical product judgment—improving forecast accuracy by 22%, document review accuracy by 38%, and cutting deployment time by 45% while keeping latency and reliability production-ready.”
Senior Software Engineer specializing in AI/ML and LLM platform delivery
“ML/AI engineer with strong production ownership across predictive ML and Generative AI systems. They’ve delivered measurable business impact through real-time churn/drop-off prediction, RAG-based document QA, and scalable LLM optimization, with a consistent focus on reliability, safety, latency, and developer productivity.”
Mid-level AI/ML Engineer specializing in NLP and Generative AI
“Built and deployed a production LLM-powered RAG assistant for healthcare teams (care managers/support) to answer questions from clinical and policy documentation, emphasizing trustworthiness via improved retrieval, reranking, and strict grounding prompts to reduce hallucinations. Also has hands-on orchestration experience with Apache Airflow for end-to-end ETL/ML workflows and applies rigorous testing/metrics (hallucination rate, tool-call accuracy, latency, cost) to ensure reliable AI agent behavior.”
Mid-level Machine Learning Engineer specializing in LLMs, RAG, and MLOps
“LLM/agentic systems engineer who built a production ‘Agentic AI Diagnostic Assistant’ for network engineers, using a multi-agent Llama 2 + LangChain architecture with RAG over telemetry/incident data in DynamoDB and confidence-based deferrals to reduce hallucinations. Also has strong MLOps/orchestration experience (Airflow, EventBridge, Spark, Docker, SageMaker/ECS) at multi-terabyte/day scale and delivered multilingual NLP analytics (fine-tuned BERT/spaCy) for support operations through hands-on stakeholder workshops.”
Mid-level AI/ML Engineer specializing in healthcare NLP and MLOps
“ML/AI engineer with healthcare payer experience (Signal Healthcare, Cigna) who has shipped production fraud/claims prediction systems using Python/TensorFlow and exposed them via FastAPI/Flask microservices integrated with EHR and Salesforce. Emphasizes operational reliability and trust—Airflow-orchestrated pipelines with data quality gates plus SHAP-based interpretability, A/B testing, and drift/debug workflows—backed by reported outcomes of 22% lower false payouts and 17% higher model accuracy.”
Mid-level Data Scientist specializing in predictive modeling, NLP/LLMs, and RAG search systems
“Built production LLM/RAG platforms for financial services to enable natural-language Q&A over large policy/compliance document sets stored in Snowflake and SharePoint. Strong in MLOps and orchestration (Airflow, ADF, Step Functions, MLflow) and in solving real production issues such as stale embeddings and degraded model performance, including an incremental Snowflake Streams sync that cut processing time from hours to minutes.”
Mid-level Machine Learning Engineer specializing in NLP, LLMs, and MLOps
“Built a production internal LLM/RAG assistant at CVS Health to cut time spent searching long policy and clinical guideline PDFs, combining fine-tuned BERT/GPT models with FAISS retrieval and a FastAPI service on AWS. Demonstrates strong real-world reliability work (document cleanup, hallucination controls, monitoring/drift tracking with MLflow) and close collaboration with non-technical clinical operations teams via demos and feedback-driven iteration.”
Mid-level AI/ML Engineer specializing in computer vision, NLP/LLMs, and MLOps
“ML/AI engineer with defense and commercial analytics experience: deployed a real-time aerial object detection system at Dynetics (YOLOv5 + TorchServe in Docker on AWS EC2) with drift-triggered retraining and 99.5% uptime, tackling ambiguous targets and weather degradation. Previously at Fractal Analytics, built and explained a churn prediction model for marketing stakeholders using SHAP and delivered it via a Flask API into dashboards, driving a reported 22% attrition reduction.”
Principal Data Scientist specializing in Generative AI, NLP, and MLOps
“ML/NLP practitioner with banking experience (M&T Bank) who has built a GPT-4 RAG system using LangChain and Pinecone to connect unstructured customer data with internal knowledge bases, improving accuracy and reducing manual lookup time by 50%+. Strong in entity resolution and productionizing scalable Python data workflows, including major performance wins by migrating bottleneck joins from Pandas to Dask.”
Senior Software Engineer specializing in cloud-native microservices and AI-enabled platforms
“Infrastructure/operations engineer with hands-on production IBM Power/AIX (AIX 7.x, VIOS, HMC) and PowerHA/HACMP clustering experience, including DLPAR changes, failover testing, and incident recovery. Also delivers modern cloud DevOps work—GitHub Actions CI/CD for Docker-to-Kubernetes on AWS and Terraform-based provisioning of core AWS infrastructure (VPC/EKS/RDS/IAM) with controlled rollouts and drift checks.”
Mid-level AI Software Engineer specializing in LLM systems and cloud APIs
“Built and productionized an LLM-powered support/knowledge pipeline using embeddings and retrieval (RAG) to deliver more grounded, higher-quality responses while reducing manual effort. Focused on real-world reliability and performance—adding structured validation/guardrails, optimizing vector search and context size for latency/scale, and monitoring failure patterns in production. Experienced with orchestration via LangChain for LLM workflows and Airflow for production data/ML pipelines, and iterates closely with operations stakeholders through demos and feedback.”
Junior ML Data Associate specializing in AI training data and LLM prompt evaluation
“Applied ML/embodied AI practitioner who built an on-device gesture-control system for smart-home lights using Raspberry Pi + camera, focusing on privacy-preserving real-time inference and hardware-constrained optimization (async pipeline + TF Lite INT8). Also made a high-impact architecture decision for an ML content evaluation/QA pipeline processing millions of annotated text samples weekly, reducing batch runtime from ~6 hours to ~40 minutes while lowering compute cost.”
Mid-level AI/ML Engineer specializing in Generative AI, NLP, and healthcare RAG systems
“Built and deployed a production clinical claim validation RAG system at GE HealthCare that automated nurses’ patient-history/claims checks, cutting manual review time by ~65%. Designed the full stack (retrieval, embeddings, Pinecone, prompt/verification guardrails, FastAPI backend) with PHI-compliant anonymization via NER and orchestrated pipelines using Airflow, Azure ML Pipelines, and MLflow with drift monitoring.”
Executive Technology Leader specializing in SaaS platforms, AWS microservices, and security
“Platform/infra engineer with deep Kubernetes (EKS) and VMware vSphere experience who has led a monolith-to-microservices transition for a credit lending decision platform. Built GitOps-driven Terraform delivery with strong governance, and used LaunchDarkly-based progressive rollouts plus Datadog observability to safely ship multiple times per day (reported ~6x throughput and 1/3 the bugfix tickets vs legacy). Also operates hybrid on-prem/AWS networking (firewall + Transit Gateway, BGP) and has handled high-stakes datacenter migrations (100TB Storage vMotion) under Sev1 conditions.”
Senior Data Engineer specializing in cloud data platforms and real-time analytics
“Data engineer (Credit One) who built and owned real-time financial transaction and credit risk/fraud data systems end-to-end on AWS + Snowflake. Delivered high-scale pipelines (150k events/hour; ~2TB/week), raised data accuracy to 99%, and cut Snowflake costs 42% while adding strong observability, schema-drift handling, and production-grade APIs/documentation.”