Pre-screened and vetted.
Junior Machine Learning Engineer specializing in computer vision for medical imaging
“Applied ML/LLM practitioner working in healthcare-facing products, using RAG and LoRA fine-tuning on medical data and implementing production monitoring (confidence scoring) for clinician oversight. Has hands-on experience debugging agentic/LLM pipelines (including OCR preprocessing fixes) and regularly delivers technical demos to doctors and investors and at conferences, contributing to adoption and even helping close a funding round through end-to-end pipeline walkthroughs.”
Junior AI/ML Engineer specializing in LLMs, RAG, and multimodal agents
Mid-level Machine Learning Engineer specializing in MLOps and applied AI
Mid-level GenAI & Analytics Engineer specializing in LLM and cloud cost/finance analytics
Mid-level AI/ML Engineer specializing in NLP, MLOps, and compliance-focused ML systems
Junior AI Research Engineer specializing in NLP, speech and generative AI
Intern AI/Data Science Engineer specializing in LLM agents, data engineering, and predictive analytics
Mid-level Machine Learning Engineer specializing in LLMs, RAG, and document intelligence
Mid-level AI/ML Engineer specializing in NLP, MLOps, and financial risk & fraud analytics
Mid-level AI Engineer specializing in Generative AI and LLM/RAG systems
Junior Full-Stack/Cloud Engineer specializing in AI and data-driven applications
Mid-level AI/ML Product & Solutions Specialist specializing in GenAI and MLOps
Mid-level Machine Learning Engineer specializing in LLMs, multimodal AI, and backend systems
Mid-level Machine Learning Engineer specializing in NLP, LLMs, and multimodal modeling
“Built and productionized a telecom-focused RAG assistant by LoRA fine-tuning LLaMA-2 and integrating LangChain+FAISS behind a FastAPI service, with dashboards and a human feedback UI for engineers. Demonstrated measurable impact (≈40% faster document lookup, +8–10% retrieval precision) and strong MLOps rigor via Airflow orchestration, CI/CD, and monitoring for drift and failures.”
Senior Machine Learning Engineer specializing in optimization, LLMs, and on-device AI
“Engineer with hands-on experience debugging and hardening a fixed-point implementation for an internal PoC, quickly diagnosing overflow/underflow issues that caused intermittent failures across thousands of runs and delivering a code fix. Comfortable presenting technical solutions with slides layered by depth of detail and running follow-up deep dives for interested stakeholders, though has limited direct customer/sales partnership experience.”
Director-level AI & Data Science leader specializing in GenAI, LLMs, and MLOps
“ML/NLP engineer currently working in NYC on a system that connects complex unstructured data sources to deliver personalized insights, using embeddings + vector DB retrieval and a RAG architecture (LangChain, Pinecone/OpenSearch). Strong focus on production constraints—especially low-latency retrieval—using FAISS/ANN, PCA, index partitioning, and Redis caching, plus PEFT fine-tuning (LoRA/QLoRA) and KPI/SLA-driven promotion to production.”
Mid-level GenAI Engineer specializing in production RAG and LLM fine-tuning
“LLM engineer who built a production seller-support RAG system at eBay using hybrid retrieval (BM25 + Pinecone vectors) with Cohere reranking, LangGraph orchestration, and citation-grounded answers. Strong focus on reliability: semantic/structure-aware chunking, automated Ragas-based evaluation with nightly regressions, and production observability (LangSmith) plus drift monitoring (Arize). Also implemented a multi-agent fraud pipeline with AutoGen using JSON-schema contracts and explicit termination conditions.”
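The hybrid retrieval pattern this profile describes (lexical BM25 fused with vector similarity) can be sketched in miniature. This is a toy illustration, not the candidate's actual system: the corpus, the bag-of-words "embedding", and the score-fusion weight `alpha` are all stand-ins for the real BM25/Pinecone/reranker stack.

```python
import math
from collections import Counter

# Toy corpus standing in for seller-support documents (hypothetical data).
DOCS = [
    "how to issue a refund to a buyer",
    "shipping label creation and tracking",
    "refund policy for damaged items",
    "updating seller account payment details",
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Plain BM25 over whitespace tokens: the lexical half of hybrid retrieval."""
    tokenized = [d.split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(s)
    return scores

def embed(text):
    """Stand-in embedding: a bag-of-words vector (a real system calls an encoder)."""
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, alpha=0.5):
    """Fuse max-normalized lexical and vector scores; rank docs best-first."""
    lex = bm25_scores(query, docs)
    vec = [cosine(embed(query), embed(d)) for d in docs]
    def norm(xs):
        hi = max(xs) or 1.0
        return [x / hi for x in xs]
    lex, vec = norm(lex), norm(vec)
    fused = [alpha * l + (1 - alpha) * v for l, v in zip(lex, vec)]
    return sorted(zip(fused, docs), reverse=True)

ranked = hybrid_search("refund for damaged item", DOCS)
print(ranked[0][1])  # the refund-policy document wins on both signals
```

In production the fusion step is typically followed by a cross-encoder reranker (Cohere, in the profile above) over the fused top-k.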
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps on AWS
“AI engineer who built a production RAG-based internal analyst tool at BlackRock, fine-tuning an LLM on proprietary financial data and adding four layers of guardrails (input/retrieval/generation/output) to improve grounding and reduce hallucinations. Implemented a LangChain-based multi-agent orchestration (7 major agents) deployed on AWS ECS, with reliability measured via internal human evaluation, LLM-as-judge, and RLHF/drift monitoring.”
Intern AI/ML Engineer specializing in LLM applications, RAG, and model evaluation
“Backend/ML engineer who built production LLM-enabled systems at PRGX, including an interpretable contract opportunity scoring engine (Bradley-Terry pairwise ranking) that reached 0.82 weighted Spearman agreement with SME auditors and was integrated into the audit workflow. Also built a Duke student advisor chatbot and hardened it for real-world reliability/security with schema-driven tool calling, normalization, and off-domain defenses; led staged production rollouts with shadow testing and achieved 0.90 F1 on a new extraction field before shipping.”
Intern AI/ML Engineer specializing in robotics and computer vision
“Worked on Sophia the humanoid robot, building production animation pipelines and enhancing human-robot interaction via perception and behavior orchestration. Experienced in stabilizing noisy perception-driven state transitions and designing smooth, user-centered behavioral flows, collaborating closely with artists, animators, and experience designers to translate creative intent into measurable system behavior.”
Mid-level AI/ML Engineer specializing in NLP, MLOps, and scalable data pipelines
“Built and shipped a production LLM-powered personalized client engagement assistant in the financial domain, balancing real-time recommendations with strict privacy/compliance requirements. Demonstrates strong MLOps/LLMOps depth (Airflow + MLflow, containerized microservices, drift monitoring) and a privacy-by-design approach validated in collaboration with risk and compliance teams.”
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
“AI/ML engineer with HP experience building and productionizing an LLM-powered document intelligence platform (LangChain + Pinecone) to deliver semantic search and contextual Q&A across millions of enterprise support documents. Demonstrates strong MLOps and scaling expertise (Airflow, Kubernetes autoscaling, Triton GPU inference, monitoring with Prometheus/W&B) plus a structured approach to evaluation (A/B tests, shadow deployments, failover) and effective collaboration with non-technical stakeholders.”
Mid-level Machine Learning Engineer specializing in LLMs and NLP classification systems
“Internship experience building a production RAG+LLM pipeline to map messy card transaction descriptions to merchant brands, including a custom modified-ROUGE evaluation approach for weak/variant ground truth. Improved scalability and cost by moving from a managed LLM endpoint (e.g., Bedrock) to self-hosted vLLM, and orchestrated massive embedding backfills (5,000+ files, 10B+ rows) using an Airflow-triggered SQS + ECS worker architecture with robust retry/DLQ handling.”
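The retry/DLQ worker pattern mentioned above can be shown in a few lines. This is a minimal in-memory sketch of SQS-style redrive semantics, not the candidate's Airflow/SQS/ECS code: `MAX_RECEIVES`, the queue, and the `embed_batch` handler are all hypothetical stand-ins.

```python
from collections import deque

MAX_RECEIVES = 3  # after this many failed deliveries a message is dead-lettered

def run_worker(messages, handler):
    """Drain a queue with retry semantics: a failing message is re-queued
    until it has been received MAX_RECEIVES times, then routed to a
    dead-letter queue (DLQ) for offline inspection."""
    queue = deque({"body": m, "receives": 0} for m in messages)
    done, dlq = [], []
    while queue:
        msg = queue.popleft()
        msg["receives"] += 1
        try:
            handler(msg["body"])
            done.append(msg["body"])
        except Exception:
            if msg["receives"] >= MAX_RECEIVES:
                dlq.append(msg["body"])  # give up: park in the DLQ
            else:
                queue.append(msg)        # redeliver on a later pass
    return done, dlq

# Hypothetical handler: batches tagged "bad" always fail embedding.
def embed_batch(body):
    if "bad" in body:
        raise ValueError("embedding failed")

done, dlq = run_worker(["batch-1", "batch-2-bad", "batch-3"], embed_batch)
print(done, dlq)  # batch-2-bad lands in the DLQ after 3 attempts
```

With real SQS the same behavior comes from a redrive policy (`maxReceiveCount`) on the source queue rather than application code; the sketch just makes the control flow explicit.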
Mid-level Machine Learning Engineer specializing in forecasting, NLP, and GenAI
“GenAI/ML engineer with production experience building multilingual LLM systems (English/Spanish) and RAG-based clinical documentation summarization at Walgreens, combining prompt engineering, structured output validation, and rigorous evaluation (ROUGE + pharmacist review). Also orchestrated end-to-end ML pipelines for demand forecasting using Apache Airflow, PySpark, and MLflow with scheduled retraining and production monitoring.”