Pre-screened and vetted.
Mid-level Generative AI Engineer specializing in LLM apps, RAG, and cloud deployment
Senior AI Architect specializing in Generative AI and LLM systems
Mid-level GenAI/ML Engineer specializing in agentic AI and RAG systems
“Backend/platform engineer who has owned a Python/FastAPI results API and deployed it on Kubernetes with Helm and GitHub Actions-driven CI/CD. Demonstrates a strong production-operations mindset across performance tuning, monitoring, safe rollouts/rollbacks, and phased migrations, plus hands-on Kafka streaming experience focused on ordering and idempotency.”
Mid-level GenAI Engineer specializing in AI agents and RAG systems
“Built and deployed a production LLM-based RAG agent platform adopted by multiple business teams (Marketing, GTM, Recruiting, Customer Support) to automate knowledge search, Q&A, and content generation. Emphasizes production-grade reliability (grounding/validation/guardrails), rigorous evaluation/monitoring, and cost-aware scaling via model tiering, prompt/retrieval optimization, and caching, orchestrated with LangChain/LangGraph.”
Mid-level Data Scientist & Generative AI Engineer specializing in LLMs and RAG
“ML/NLP practitioner who built a retrieval-augmented generation (RAG) system for large financial and operational document sets using Sentence-Transformers (all-mpnet-base-v2) and a vector DB (e.g., Pinecone), with a strong focus on retrieval evaluation and chunking strategy optimization. Experienced in entity resolution (rules + embedding similarity with type-specific thresholds) and in productionizing scalable Python data workflows using Airflow/Dagster and Spark.”
Mid-level Machine Learning Engineer specializing in Generative AI and LLM applications
“GenAI engineer who has deployed production LLM/RAG chatbots for internal document search, focusing on reliability (hallucination reduction via prompt guardrails + retrieval filtering) and performance (latency improvements via caching). Experienced with LangChain/LangGraph orchestration for multi-step agent workflows; iterates using monitoring/logs and benchmark-driven evaluation while partnering closely with product and business teams.”
Mid-level Generative AI Engineer specializing in LLMs and RAG systems
“Built and shipped a production RAG-based enterprise knowledge assistant to replace slow/inaccurate search across millions of documents, using LangChain orchestration with GPT-4/LLaMA and vector databases. Strong focus on production constraints—latency, hallucination control, and cost—using hybrid retrieval, guardrails, LLM-as-judge validation, and model routing, and has experience translating non-technical stakeholder pain points into measurable outcomes.”
Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Healthcare-focused LLM engineer who deployed a production triage and clinical knowledge retrieval assistant using RAG and LangGraph-orchestrated multi-agent workflows. Emphasizes clinical safety and compliance with robust hallucination controls, HIPAA/PHI protections (tokenization, encryption, audit logging, zero-retention), and human-in-the-loop escalation; reports a 75% latency reduction in a healthcare agent system.”
“Built a production multi-agent orchestration platform to automate healthcare claims and HR workflows, combining LangChain/CrewAI/AutoGPT with RAG (FAISS/Pinecone) and fine-tuned open-source LLMs (LLaMA/Mistral/Falcon) in private Azure ML environments to meet HIPAA requirements. Emphasizes rigorous agent evaluation/observability (trajectory eval, adversarial testing, LLM-as-judge, drift monitoring) and reports measurable outcomes including 35% faster claims processing and 40% fewer chatbot errors.”
Mid-level Data Scientist specializing in Generative AI and LLM production systems
“Built and deployed a production LLM-powered workflow assistant that automated internal marketing/production business tasks (document summarization, repeated Q&A, status updates). Demonstrates end-to-end applied LLM engineering: modular RAG architecture, hallucination/latency mitigation, automated evals to prevent prompt regressions, and Azure-based orchestration (Functions/Logic Apps) with monitoring and controlled rollouts.”
“Designed and deployed a production LLM agent platform at the National Institutes of Health to reduce time spent searching fragmented internal documentation, combining RAG grounding with multi-step tool-calling workflows and integration into legacy services via inference APIs. Emphasizes production-grade reliability through automated evaluation on real queries, guardrails/safe-failure behaviors, and ongoing A/B testing and monitoring, and has experience translating non-technical stakeholder goals into measurable success metrics.”
Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps
Mid-level Full-Stack Developer specializing in cloud backend systems and GenAI
Mid-level Generative AI/ML Engineer specializing in LLMs, RAG, and agentic AI
Mid-level Generative AI Engineer specializing in RAG, multi-agent LLM systems, and LLMOps
Mid-level Generative AI Engineer specializing in banking and healthcare AI
Mid-level AI/ML Engineer specializing in LLMs, RAG, and full-stack development