Pre-screened and vetted.
Mid-level Data Scientist specializing in LLMs and NLP for financial analytics
Senior Data Scientist specializing in machine learning and cloud analytics
Mid-level AI/ML Engineer specializing in MLOps, NLP/CV, and fraud detection
Senior AI/ML Engineer specializing in LLMs, NLP, and production MLOps
Mid-level Machine Learning Engineer specializing in MLOps and healthcare analytics
Mid-level Data Scientist specializing in ML, NLP, and cloud deployment
Mid-level Data Scientist / ML Engineer specializing in NLP, GenAI, and cloud ML deployment
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Built an LLM-powered academic research assistant for a professor (LangChain + OpenAI + arXiv) focused on synthesizing papers quickly, with emphasis on reliability (ReAct prompting, citation verification) and cost control (caching). Has production MLOps/orchestration experience at Cisco and HCL Tech using Kubernetes, plus MLflow and GitHub Actions for lifecycle management and CI/CD.”
Mid-level AI/ML Engineer specializing in NLP, LLMs, and RAG for finance and healthcare
“Built an AI lending assistant (RAG + DeBERTa) used by credit analysts to retrieve policies and past loan decisions, tackling real production issues like hallucinations and inconsistent document quality while meeting sub-second latency targets. Deployed a modular, Dockerized AWS architecture (ECS/EMR + load balancer) with load testing, caching/precomputed embeddings, and CloudWatch monitoring, and used Airflow to automate scheduled data/embedding/vector DB refresh pipelines with retries and alerts.”
Mid-level Data Scientist specializing in Generative AI and multimodal systems
“Recent J&J intern who built a conversational RAG agent and led a shift from a monolithic model to a modular RAG workflow, cutting response time from several days to under a second by tackling data fragmentation, context retention, and embedding/latency optimization. Also worked on a large (7B-parameter) multimodal VQA pipeline for healthcare research and stays current via NeurIPS/ICLR and open-source contributions.”
Mid-level Machine Learning Engineer specializing in deep learning and generative AI
“AI/ML engineer who has deployed transformer-based NLP systems to production via Python REST APIs and Kubernetes on AWS/Azure, with a strong focus on latency optimization (p95), reliability, and scalable orchestration. Demonstrates pragmatic decision-making on model tradeoffs and strong stakeholder collaboration, improving adoption by making outputs more actionable with summaries, extracted fields, and confidence indicators.”
Senior Data Scientist/ML Engineer specializing in scalable ML and LLM systems
“Built and deployed an end-to-end product that brings a research-paper approach into production for large-scale time-series clustering, with attention to partitioning, latency, and scalability. Also designed a Python-based backend validation service (comparing outputs to database ground truths) and handled production reliability issues by reproducing dataset-specific crashes and hardening corner-case behavior with client-friendly errors.”
Principal AI/ML Leader specializing in Generative AI, MLOps, and NLP
“Founding member of Tausight, building AI systems to detect and protect PHI for healthcare organizations; helped take the company through post–Series A funding and exited after ~6 years. Drove a strategic collaboration with Intel’s OpenVINO team, becoming the first to deploy OpenVINO in a real production system and improving model performance by ~30% on customer Intel-CPU machines.”
Mid-level AI/ML Engineer specializing in Generative AI and production ML systems
“Built and deployed a production SecureAIChatBot (RAG-based) for secure internal information retrieval, using embeddings/vector search, GPT models, monitoring, and safety filters. Focused on real-world production challenges like latency and output consistency, applying caching, retrieval scoping, smaller models, and controlled prompting, and used LangChain to orchestrate the end-to-end workflow.”
Mid-level Data Scientist specializing in Generative AI, NLP, and MLOps
“Built and deployed an LLM-powered claims-document summarization system (insurance domain) that cut agent review time from 4–5 minutes to under 2 minutes and saved 1,200+ hours per quarter. Hands-on across orchestration and production infrastructure (Airflow retraining DAGs, Kubernetes, SageMaker endpoints, FastAPI) and recent RAG workflows using n8n + Pinecone, with a strong focus on reliability, cost, and explainability for non-technical stakeholders.”
Mid-level Data Scientist specializing in Generative AI and NLP for financial risk
“Built and shipped production generative AI/RAG assistants in regulated financial contexts (S&P Global), automating compliance-oriented Q&A over earnings reports/filings with grounded answers and citations. Experienced across the full stack: AWS-based ingestion (PySpark/Glue), vector retrieval with LangChain agents, GPT-4/Claude model selection, and production reliability (monitoring, caching, retries), plus rigorous evaluation and regression testing.”
Mid-level AI/ML Engineer specializing in NLP, RAG systems, and real-time risk modeling
“AI/ML Engineer with 4+ years of experience (Capital One, Odin Technologies) and a master’s in Data Analytics (4.0 GPA) who has deployed LLM/RAG systems to production for compliance/risk and document review. Strong in orchestration and MLOps (Airflow, Kubernetes, MLflow, GitHub Actions) and in tackling real-world LLM constraints like latency, context limits, and data privacy, with measurable impact (20%+ manual review reduction; 33% faster release cycles).”
Junior Machine Learning Engineer specializing in Generative AI and analytics automation
“AI/LLM engineer who built a production intelligent support system using RAG over a vectorized documentation library, addressing real-world issues like lost-in-the-middle context failures and doc freshness via automated GitHub-driven re-embedding pipelines. Emphasizes rigorous agent evaluation (component/E2E/ops) and prefers lightweight, decoupled workflow automation using message brokers (Redis/RabbitMQ) over heavyweight orchestration frameworks.”
Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Healthcare-focused LLM engineer who deployed a production triage and clinical knowledge retrieval assistant using RAG and LangGraph-orchestrated multi-agent workflows. Emphasizes clinical safety and compliance with robust hallucination controls, HIPAA/PHI protections (tokenization, encryption, audit logging, zero-retention), and human-in-the-loop escalation; reports a 75% latency reduction in a healthcare agent system.”
Intern Data Scientist specializing in ML engineering and LLM agentic workflows
“Built an agentic, multi-step LLM system that generates full-stack code for API integrations using LangChain orchestration, Pinecone/SentenceBERT RAG, and a human-in-the-loop feedback loop for iterative code refinement. Also collaborated with non-technical content writers and PMs during a Contentstack internship to deliver a Slack-based AI workflow that generates and brand-checks articles with one-click approvals.”
Mid-level QA Engineer specializing in AI/ML model validation and data quality
“ML practitioner with a QA background who has built end-to-end ML pipelines for a health risk prediction use case (lifestyle + demographics), emphasizing robustness through strict data validation, leakage prevention, and cross-validation. Collaborated with a dietician to sanity-check predictions and refine feature interpretation for real-world practicality. Has not yet deployed LLM/AI systems to production and has no hands-on experience with orchestration frameworks, but is willing to learn.”