Pre-screened and vetted.
Mid-level Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“Built a production AI-driven contract/document extraction system combining OCR, normalization, and schema-guided LLM extraction, orchestrated with PySpark and Azure Data Factory, and loaded into PostgreSQL for analytics. Emphasizes reliability at scale, using strict JSON schemas, confidence scoring, targeted retries, and multi-layer validation to control hallucinations while processing thousands of PDFs per hour, and partners closely with non-technical business teams to refine fields and deliver usable dashboards.”
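The schema-plus-targeted-retry pattern described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (field names, the `call_llm` callable, and the retry policy are all assumptions), not the candidate's actual code:

```python
import json

# Hypothetical sketch: validate the LLM's JSON output against required
# fields/types, then retry ONLY the fields that failed validation
# rather than re-extracting the whole document.

REQUIRED_FIELDS = {"party_name": str, "effective_date": str, "total_value": float}

def validate(payload: dict) -> list[str]:
    """Return the names of fields that are missing or mistyped."""
    bad = []
    for field, typ in REQUIRED_FIELDS.items():
        if field not in payload or not isinstance(payload[field], typ):
            bad.append(field)
    return bad

def extract_with_retries(call_llm, document: str, max_retries: int = 2) -> dict:
    """call_llm(document, fields=[...]) is assumed to return a JSON string."""
    result = json.loads(call_llm(document, fields=list(REQUIRED_FIELDS)))
    for _ in range(max_retries):
        failed = validate(result)
        if not failed:
            break
        # Targeted retry: re-ask only for the fields that failed.
        patch = json.loads(call_llm(document, fields=failed))
        result.update(patch)
    return result
```

Targeted retries keep cost and latency bounded at high document volumes, since most retries touch one or two fields rather than the full schema.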
Mid-level AI/ML Engineer specializing in LLMs, NLP, and MLOps
“AI/ML engineer with deep healthcare domain expertise who led a HIPAA-compliant, production LLM system at McKesson to automate clinical document understanding: extracting entities, summarizing provider notes, and supporting authorization decisions. Hands-on across Spark/Python ETL, Hugging Face + LoRA/QLoRA fine-tuning, RAG, and cloud-native MLOps (Airflow/Kubernetes/Step Functions, MLflow, blue-green deployments on EKS/GKE), with explicit work on PHI handling and hallucination reduction.”
Mid-level AI/ML Engineer specializing in Generative AI and LLMOps
“Built and deployed a GPT-based RAG enterprise search system for healthcare clinicians, emphasizing low-latency performance and reduced hallucinations while maintaining end-to-end HIPAA compliance. Demonstrates deep applied experience with PHI-safe data governance (detection/redaction/de-identification), secure Azure ML deployment patterns, and orchestration of production LLM workflows using LangChain and Airflow.”
Mid-level Data Scientist specializing in MLOps, LLM/RAG applications, and deep learning
“Built and deployed a production compliance-automation RAG system at Citi that generates citation-backed, schema-validated risk summaries for regulatory document review. Emphasizes regulated-environment reliability with retrieval-only grounding, abstention, confidence thresholds, and immutable audit logging, plus orchestration using LangChain/LangGraph and Airflow. Reported a ~60% reduction in compliance review effort while maintaining high precision and traceability.”
Mid-level AI/ML Engineer specializing in enterprise ML, MLOps, and Generative AI
“ML/LLM engineer who has shipped production RAG systems (LangChain + HF Transformers + FAISS) with hybrid retrieval and cross-encoder re-ranking, deployed via FastAPI/Docker/Kubernetes and monitored with MLflow. Also partnered with wealth advisors at Edward Jones to deliver a client retention model with SHAP-driven explanations and a dashboard that improved trust and adoption while reducing high-value client churn.”
Mid-level Data & GenAI Engineer specializing in lakehouse, streaming, and RAG platforms
“Built a production internal LLM-powered knowledge assistant using a RAG architecture (Python, LLM APIs, cloud services) that answers employee questions with sourced, grounded responses from internal documents. Demonstrates strong practical depth in retrieval tuning (chunking/metadata filters), orchestration with LangChain, and production reliability practices (latency optimization, automated embedding refresh, evaluation metrics, logging/monitoring) while partnering closely with non-technical operations teams.”
Entry-Level Software Engineer specializing in AI/ML and Full-Stack Development
“Backend engineer who built an NL-to-SQL system at Target, using a multi-step LLM pipeline with vector-store schema retrieval and SQL validation to safely answer business questions. Strong in production FastAPI systems (async, Pydantic, Docker/Uvicorn, load balancing) and security (OAuth2/JWT, scopes, and database row-level security), with experience migrating Flask apps to FastAPI + PostgreSQL using strangler/feature-flagged canary rollouts.”
Mid-level AI Engineer specializing in LLMs, RAG, and content automation
“AI/LLM engineer who built a production autonomous GenAI content ecosystem that generates short-form scripts, extracts viral highlights from long-form video, and dubs content into 33+ languages. Focused on making LLM outputs production-safe via schema enforcement, token-to-time alignment, critic-agent verification, and scalable async orchestration, cutting manual workflow effort by ~90% and saving $200k+ annually.”
Junior Data Scientist specializing in healthcare ML and clinical NLP/LLMs
“Healthcare-focused LLM engineer who has built two production clinical applications: an automated structured clinical report generator from physician-patient conversations and a RAG-based chatbot for retrieving patient history (procedures, allergies, etc.). Demonstrates strong applied RAG expertise (overlapping chunking, entity dependency graphs, temporal filtering, graph RAG) to reduce hallucinations/omissions and partners closely with clinicians to automate hospital workflows.”
Mid-level Machine Learning Engineer specializing in GenAI, LLMs, and real-time ML systems
“Built and deployed a production long-form article summarization system using BART/T5/PEGASUS, tackling real-world constraints like token limits, latency/quality tradeoffs, and factual drift via chunking/merge logic and constrained decoding. Uses pragmatic Python-based pipeline orchestration (scheduled jobs, modular scripts, logging/retries) and iterates with stakeholder feedback to make outputs genuinely useful for content workflows.”
Senior Software Engineer specializing in Python automation and hybrid cloud integration
“Embodied AI / robotics-focused ML engineer with experience at JPMorgan and EY building language-to-robot control systems that connect transformer/LLM intent to safe real-world robotic actions. Designed production-grade, low-latency architectures (Kafka/Redis, monitoring, CI/CD) and applied sim-to-real and model distillation to make research ideas deployable on physical systems.”
Mid-level AI/ML Engineer specializing in fraud detection and risk analytics in Financial Services
“Finance-domain ML/LLM engineer who has shipped production systems including a RAG-based financial insights assistant with a custom post-generation validation layer that verifies atomic claims against retrieved source text to prevent hallucinations in compliance-critical workflows. Also built large-scale MLOps automation on AWS using Kubeflow + MLflow + CI/CD for fraud detection and credit risk models processing 500M+ transactions/day with a 99.99% uptime goal, and partnered closely with JP Morgan risk/compliance stakeholders on NLP-driven compliance monitoring.”
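The post-generation claim verification layer described above can be sketched in miniature. This is an illustrative toy (sentence splitting and token overlap stand in for whatever the candidate actually used; a production verifier would more likely use an NLI model or an LLM judge):

```python
# Hypothetical sketch: split a generated answer into atomic claims and
# keep only claims whose key terms appear in the retrieved source text.

def atomic_claims(answer: str) -> list[str]:
    # Naive split: treat each sentence as one atomic claim.
    return [s.strip() for s in answer.split(".") if s.strip()]

def is_supported(claim: str, sources: list[str], threshold: float = 0.5) -> bool:
    # Keep only informative terms (length > 3) and measure overlap
    # with each retrieved source passage.
    terms = {w.lower() for w in claim.split() if len(w) > 3}
    if not terms:
        return False
    best = max(
        len(terms & {w.lower() for w in src.split()}) / len(terms)
        for src in sources
    )
    return best >= threshold

def filter_unsupported(answer: str, sources: list[str]) -> list[str]:
    """Drop claims that cannot be traced back to retrieved text."""
    return [c for c in atomic_claims(answer) if is_supported(c, sources)]
```

The key design point is that verification runs after generation and against the retrieved text only, so an unsupported claim is dropped (or flagged) rather than shipped into a compliance-critical workflow.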
Junior Full-Stack & AI/ML Engineer specializing in LLMs and multimodal document processing
“Built a production RAG-based NBA player scouting assistant that embeds player profiles into FAISS, orchestrates retrieval and LLM recommendations with LangChain, and surfaces results via embedded Tableau dashboards. Demonstrates strong focus on evaluation/monitoring (batch tests, LLM-as-judge, latency/failure/token metrics) and has experience translating non-technical founder goals into DAPT + fine-tuning plans on curated data.”
Mid-level Generative AI & Machine Learning Engineer specializing in agentic LLM systems
“Built and deployed a production agentic LLM knowledge assistant that answers complex questions over internal documents, APIs, and databases using a RAG architecture (FAISS/Pinecone) and LangChain/LangGraph orchestration. Emphasizes production-grade reliability and hallucination control through grounding, confidence thresholds, validation, retries/fallbacks, and full observability (logging/metrics/traces) with continuous evaluation and feedback loops.”
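The confidence-threshold / retry / fallback pattern described above can be sketched as follows. Names, thresholds, and the `retrieve`/`generate` callables are assumptions for illustration, not the candidate's actual system:

```python
# Hypothetical sketch: answer only when retrieval confidence clears a
# threshold; otherwise broaden the query and retry, and finally abstain
# rather than risk an ungrounded (hallucinated) answer.

def answer_with_fallback(retrieve, generate, question: str,
                         threshold: float = 0.7, max_attempts: int = 2) -> dict:
    """retrieve(query) -> (docs, confidence); generate(question, docs) -> str."""
    query = question
    for _ in range(max_attempts):
        docs, confidence = retrieve(query)
        if confidence >= threshold:
            # Grounded generation: the model sees only retrieved text.
            return {"answer": generate(question, docs), "grounded": True}
        # Fallback: broaden the query and retry.
        query = f"{question} (broad search)"
    # Abstain: surface "no confident answer" instead of guessing.
    return {"answer": None, "grounded": False}
```

In a full system each branch would also emit logs, metrics, and traces, which is what makes the abstention rate and fallback frequency observable and tunable.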
Mid-level Data Scientist specializing in Generative AI, MLOps, and cloud data platforms
“GenAI/ML engineer (CitiusTech) who has deployed production RAG systems for compliance/operations document Q&A, using Pinecone + FastAPI microservices on Kubernetes with strong monitoring and guardrails. Also built a GenAI-powered incident triage/routing solution in collaboration with non-technical stakeholders, achieving 35% faster response times and 40% fewer misclassified tickets, and has hands-on orchestration experience with Airflow and AutoSys.”
Mid-level Software & Robotics Engineer specializing in AGVs, perception, and motion planning
“Robotics software engineer with real customer deployment impact at Dematic, improving AGV front-guided steering, localization sensor fusion, and control-loop performance while integrating with Beckhoff PLC safety systems. Also built a multi-robot ROS milling cell in graduate work, combining URDF/Gazebo simulation, MoveIt/OMPL planning, ROS performance profiling, and CNN-based defect detection to drive coordinated robotic milling.”
Mid-level Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“Built production LLM systems including a real-time customer feedback analysis and workflow automation platform using RAG and multi-agent orchestration with confidence-based human escalation, addressing privacy and legacy integration challenges. Also automated ML operations with Airflow/Kubernetes (e.g., daily churn model retraining) cutting retraining time to under 30 minutes, and demonstrates a rigorous testing/monitoring approach plus strong non-technical stakeholder collaboration.”
“ML/GenAI engineer with recent CVS Health experience building a production RAG system over unstructured financial/research documents using LangChain, FAISS, and Pinecone, plus LoRA/PEFT fine-tuning of GPT/LLaMA for domain-aware summarization. Demonstrates strong applied MLOps and data engineering skills (Airflow/Prefect, Docker/Kubernetes, CI/CD, MLflow) and measurable impact (sub-second retrieval, ~40% better context retrieval, ~25% entity matching improvement).”
Mid-level Data Scientist & AI Engineer specializing in RAG, agentic AI, and production ML
“AI/data engineer who built a production LLM-powered schema drift detection system (LangChain/LangGraph) to catch semantic data changes before they break downstream analytics/ML. Deployed on AWS with Docker/S3 and implemented an LLM-as-a-judge evaluation framework to improve trust, reduce hallucinations, and control false positives/alert fatigue. Collaborated with non-technical risk/business analytics stakeholders at EY by delivering human-readable drift explanations that improved confidence in financial analytics dashboards.”
Intern Software Engineer specializing in backend, cloud data platforms, and microservices
“Full-stack engineer who shipped a group scheduling SaaS feature with live availability updates using Next.js App Router + TypeScript, owning production reliability after launch (auth debugging, monitoring, polling/backoff tuning). Has hands-on experience with Postgres schema/index design and query optimization (EXPLAIN ANALYZE) and building durable orchestrated backend workflows with retries and idempotency.”
Senior Full-Stack & GenAI Engineer specializing in healthcare and financial services
“Built and deployed a production LLM-powered customer support assistant using a RAG backend in Python, focused on deflecting repetitive Tier-1 tickets and reducing resolution time. Demonstrates strong production engineering instincts around reliability (confidence scoring + human fallback), scalability/cost optimization (multi-stage pipelines), and workflow orchestration/observability (LangChain, custom DAGs, structured logging, step metrics).”
Junior MLOps Engineer specializing in LLMs and cloud infrastructure
“Built a production multimodal LLM system (Gemini on GCP) to automate behavioral coding of family-involved science experiment videos, including preprocessing for inconsistent lighting/audio and LangGraph-orchestrated parallel workflows. Also developed rubric-based AI grading workflows and partnered closely with non-technical education stakeholders through explainability-focused walkthroughs and manual-vs-AI evaluation alignment.”
Junior Machine Learning Engineer specializing in LLM evaluation and GenAI pipelines
“LLM/agent engineer who built a production LangGraph multi-agent orchestrator connecting GitHub and APM/observability signals with a chain-of-verification loop for root-cause analysis. Emphasizes pragmatic architecture (start simple with state summaries), performance tuning (async LLM calls, Docker), and rigorous evaluation (LLM-as-judge, adversarial testing, hallucination/instruction adherence metrics, tool-call tracing) while iterating with non-technical stakeholders via A/B testing.”