Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Built an LLM-powered academic research assistant for a professor (LangChain + OpenAI + arXiv) focused on synthesizing papers quickly, with emphasis on reliability (ReAct prompting, citation verification) and cost control (caching). Has production MLOps/orchestration experience at Cisco and HCL Tech using Kubernetes, plus MLflow and GitHub Actions for lifecycle management and CI/CD.”
Mid-level AI/ML Engineer specializing in LLMs, GenAI, and NLP
“AI/ML Engineer who built a production RAG-based LLM system for insurance policy documents, turning thousands of messy PDFs into a searchable index using LangChain, Azure AI Search vector indexes, hybrid retrieval, and FastAPI. Strong focus on evaluation (MRR/precision@k/recall@k, RAGAS) and performance optimization (vLLM), with prior clinical NLP experience using BERT-based NER validated on ground-truth datasets.”
Mid-level Data Scientist specializing in Generative AI and multimodal systems
“Recent J&J intern who built a conversational RAG agent and led a shift from a monolithic model to a modular RAG workflow, cutting response time from several days to under a second by tackling data fragmentation, context retention, and embedding/latency optimization. Also worked on a large (7B-parameter) multimodal VQA pipeline for healthcare research and stays current via NeurIPS/ICLR and open-source contributions.”
Mid-level Conversational AI Developer specializing in enterprise chatbots and RAG
“ML/AI practitioner with hands-on experience deploying models to production and optimizing for low-latency inference using pruning/quantization, with deployments on AWS SageMaker and Azure ML. Has orchestrated end-to-end ML pipelines with Airflow and Kubeflow (ingestion through evaluation) and emphasizes reproducibility via containerization and version-controlled artifacts, while effectively partnering with non-technical stakeholders using dashboards and business-aligned metrics.”
Mid-level Machine Learning Engineer specializing in deep learning and generative AI
“AI/ML engineer who has deployed transformer-based NLP systems to production via Python REST APIs and Kubernetes on AWS/Azure, with a strong focus on latency optimization (p95), reliability, and scalable orchestration. Demonstrates pragmatic model tradeoff decision-making and strong stakeholder collaboration—improving adoption by making outputs more actionable with summaries, extracted fields, and confidence indicators.”
Mid-level AI/ML Engineer specializing in NLP, LLMs, and RAG for finance and healthcare
“Built an AI lending assistant (RAG + DeBERTa) used by credit analysts to retrieve policies and past loan decisions, tackling real production issues like hallucinations, document quality, and sub-second latency. Deployed a modular, Dockerized AWS architecture (ECS/EMR + load balancer) with load testing, caching/precomputed embeddings, and CloudWatch monitoring, and used Airflow to automate scheduled data/embedding/vector DB refresh pipelines with retries and alerts.”
Senior AI/ML Engineer specializing in healthcare AI and MLOps
“Healthcare AI engineer with hands-on ownership of production ML and LLM systems at McKesson, spanning clinical risk prediction and RAG-based documentation tools. Stands out for combining deep clinical-data experience, HIPAA-aware deployment practices, and measurable impact through reduced readmissions, clinician workflow gains, and 20% to 30% faster ML delivery for engineering teams.”
Senior AI/ML & Full-Stack Engineer specializing in GenAI, RAG, and MLOps platforms
“Backend/data platform engineer who owned end-to-end production services for a fleet analytics/GenAI platform, spanning FastAPI microservices on Kubernetes and AWS (EKS + Lambda) event-driven workloads. Strong in reliability/observability (OpenTelemetry, circuit breakers, idempotency), data pipelines (Glue/Airflow/Snowflake), and measurable performance/cost wins (SQL 10s to <800ms P95; ~30% compute cost reduction).”
Mid-level Machine Learning Engineer specializing in Generative AI and LLM applications
“GenAI engineer who has deployed production LLM/RAG chatbots for internal document search, focusing on reliability (hallucination reduction via prompt guardrails + retrieval filtering) and performance (latency improvements via caching). Experienced with LangChain/LangGraph orchestration for multi-step agent workflows and iterates using monitoring/logs and benchmark-driven evaluation while partnering closely with product and business teams.”
Mid-level Data Scientist specializing in ML, NLP, and Generative AI
“GenAI/ML engineer with production experience at Cognizant and Ally Financial, building end-to-end LLM/RAG systems and ML pipelines. Delivered a domain chatbot trained from 90k tickets and 45k docs, improving intent accuracy (65%→83%), scaling to 800+ concurrent users with 99.2% uptime and sub-150ms latency, and driving +14% customer satisfaction. Strong in Azure ML + DevOps CI/CD, Dockerized deployments, and explainable/PII-safe modeling using SHAP/LIME to satisfy stakeholder trust and GDPR needs.”
Junior Electrical & Computer Engineering student specializing in robotics, embedded systems, and ML
“DXArts PhD researcher and recent UW capstone contributor building autonomous robotics systems with ROS2 (SLAM Toolbox, Nav2) and Gazebo simulation. Currently focused on integrating a 9-DOF SparkFun IMU with motor controls on Raspberry Pi, and developing OpenCV ArUco-marker tracking for an automated BlueROV that can locate and retrieve underwater targets in collaboration with mechanical engineering.”
Mid-level Generative AI Engineer specializing in LLMs and RAG systems
“Built and shipped a production RAG-based enterprise knowledge assistant to replace slow/inaccurate search across millions of documents, using LangChain orchestration with GPT-4/LLaMA and vector databases. Strong focus on production constraints—latency, hallucination control, and cost—using hybrid retrieval, guardrails, LLM-as-judge validation, and model routing, and has experience translating non-technical stakeholder pain points into measurable outcomes.”
Mid-level AI Engineer specializing in GenAI, LLM integration, and RAG pipelines
“Built and led deployment of an autonomous, self-correcting multi-agent knowledge retrieval and validation system at HCA Healthcare to reduce heavy manual research/validation in clinical/compliance documentation. Deeply focused on production reliability and cost—used LangGraph StateGraph orchestration plus ONNX/CUDA/quantization to cut GPU costs by 25%, and partnered with the Compliance VP using real-time contradiction-rate dashboards to hit a 40% automation goal without compromising compliance.”
Mid-level Data Scientist specializing in Generative AI and NLP for financial risk
“Built and shipped production generative AI/RAG assistants in regulated financial contexts (S&P Global), automating compliance-oriented Q&A over earnings reports/filings with grounded answers and citations. Experienced across the full stack—AWS-based ingestion (PySpark/Glue), vector retrieval + LangChain agents, GPT-4/Claude model selection, and production reliability (monitoring, caching, retries) plus rigorous evaluation and regression testing.”
Mid-level AI/ML Engineer specializing in deep learning, MLOps, and LLM applications
“Built and deployed production LLM assistants for internal Q&A and customer-feedback summarization, emphasizing reliability (RAG, prompt tuning, validation/whitelisting) and privacy safeguards. Improved adoption by adding explainable outputs and a user feedback mechanism, and has hands-on orchestration experience with Airflow and Azure Logic Apps.”
Mid-level Data Scientist specializing in Generative AI, NLP, and MLOps
“Built and deployed an LLM-powered claims-document summarization system (insurance domain) that cut agent review time from 4–5 minutes to under 2 minutes and saved 1,200+ hours per quarter. Hands-on across orchestration and production infrastructure (Airflow retraining DAGs, Kubernetes, SageMaker endpoints, FastAPI) and recent RAG workflows using n8n + Pinecone, with a strong focus on reliability, cost, and explainability for non-technical stakeholders.”
Mid-level Software Engineer specializing in FinTech and ML backend systems
“Backend-leaning full-stack engineer who has shipped real-time, customer-facing dashboards and ticketing/payment features at Freshworks and Global Payments. Strong in Python API design (Django/Flask/FastAPI) and React/TypeScript UIs, with hands-on experience scaling PostgreSQL for high transaction volumes and operating services on AWS, including incident response and HIPAA-aligned security controls.”
Mid-level AI Engineer specializing in LLMs, MLOps, and healthcare NLP
“Built a production, real-time clinical documentation system at HCA that converts doctor–patient conversations into structured clinical summaries using speech-to-text, LLM summarization, and RAG. Demonstrated measurable gains from medical-domain fine-tuning (clinical concept recall +18%, ROUGE-L 0.62 to 0.74) while meeting HIPAA constraints via PHI anonymization and encryption, and deployed via Docker/FastAPI with CI/CD and monitoring.”
Mid-level AI/ML Engineer specializing in GenAI, NLP, and financial systems
“GenAI/ML engineer with hands-on experience building production financial intelligence and document summarization systems at Citibank. Stands out for combining LLM fine-tuning, hybrid RAG, multi-agent workflows, and strong MLOps/observability practices to deliver measurable business impact, including 60% faster analyst retrieval, 31% higher precision, and 99%+ uptime.”
“Built a production multi-agent orchestration platform to automate healthcare claims and HR workflows, combining LangChain/CrewAI/AutoGPT with RAG (FAISS/Pinecone) and fine-tuned open-source LLMs (LLaMA/Mistral/Falcon) in private Azure ML environments to meet HIPAA requirements. Emphasizes rigorous agent evaluation/observability (trajectory eval, adversarial testing, LLM-as-judge, drift monitoring) and reports measurable outcomes including 35% faster claims processing and 40% fewer chatbot errors.”
Senior AI/ML Engineer specializing in Generative AI, LLMs, and MLOps
“Telecom (Verizon) AI/ML practitioner who built a production multimodal system that ingests messy customer issue reports (calls, chats, emails, screenshots, videos) and turns them into confidence-scored incident summaries with reproducible steps and evidence links. Also built KPI/alarm-to-ticket correlation to rank likely root-cause domains (RAN/Core/Transport), cutting triage from hours to minutes and improving MTTR.”
Mid-level AI/ML Engineer specializing in Generative AI and production ML systems
“At CVS Health, productionized a RAG-based LLM solution in a regulated healthcare setting, emphasizing reliable data pipelines, LoRA fine-tuning, monitoring, safety guardrails, and A/B testing. Has hands-on experience troubleshooting real-time RAG failures (e.g., chunking/embedding issues) and regularly leads developer-focused demos/workshops while translating technical architecture into business value for stakeholders.”
Mid-level Backend & Applied ML Engineer specializing in LLM systems and scalable APIs
“Backend engineer who significantly evolved an internal analytics/reporting platform (Python API + Postgres) powering self-service dashboards for product/business teams, focusing on reliability under heavy concurrent load and fast query performance. Demonstrates strong production engineering practices across API design (FastAPI), observability, incremental rollouts with feature flags, and data security using JWT/RBAC plus Postgres row-level security.”
Mid-level AI/ML Engineer specializing in GenAI and financial risk & compliance analytics
“Built and deployed a production LLM-powered financial risk and compliance platform to reduce manual trade exception handling and speed up insights from regulatory documents. Implemented a LangChain multi-agent workflow with structured/unstructured data integration (Redshift + vector DB) and emphasized hallucination reduction for regulatory safety using Amazon Bedrock. Strong MLOps/orchestration background across Kubernetes, Airflow, Jenkins, and monitoring/testing with MLflow, Evidently AI, and PyTest.”