Pre-screened and vetted.
Junior AI Engineer specializing in LLM pipelines, RAG, and computer vision
“Built and deployed an on-prem, HIPAA-compliant LLM pipeline for oncology-focused clinical note generation and decision support, emphasizing grounded differential diagnosis and explainable reasoning via RAG to reduce hallucinations. Also created a LangGraph-based multi-agent academic paper search system integrating Tavily, arXiv, and Semantic Scholar with an orchestrator that routes tasks to specialized sub-agents.”
Mid-level Full-Stack Engineer specializing in AI and FinTech platforms
“Full-stack engineer building real-time internal banking operations dashboards (Java/Spring Boot microservices + React/TypeScript) with Kafka-based streaming and post-launch performance optimizations. Also shipped a production internal AI support assistant using RAG (Confluence/PDF/support docs ingestion, embeddings + vector DB retrieval) with guardrails, evaluation loops, and observability to reduce hallucinations and prevent regressions.”
Mid-level Generative AI Engineer specializing in LLM fine-tuning, RAG, and agentic systems
“Built and deployed a production multi-agent RAG system at JPMorgan Chase to automate regulated credit analysis and compliance clause discovery across large internal policy/document libraries. Implemented LangGraph-based supervisor orchestration with structured state management (Azure OpenAI) to support long-running, resumable workflows, plus hybrid retrieval + re-ranking and guardrails for reliability. Strong at evaluation/observability (trace logging, LLM-as-judge, HITL) and at communicating results to non-technical stakeholders via Power BI embeds and Streamlit prototypes.”
Mid-level Generative AI Engineer specializing in enterprise LLM and healthcare AI solutions
“Built and owned an end-to-end LLM-powered fraud investigation assistant that automated case summaries and risk analysis, cutting analyst investigation/documentation time by 40%. Stands out for translating RAG concepts into a production-grade internal platform with strong evaluation, monitoring, and reusable Python service architecture that improved both analyst trust and engineering velocity.”
Senior AI/ML Engineer specializing in LLMs, NLP, and enterprise conversational AI
“Built and owned a production conversational AI platform for a healthcare contact center, including RAG-based agent assist, hybrid retrieval, safety guardrails, and production monitoring. Stands out for combining LLM product delivery with strong operational rigor, driving a reported 25–30% improvement in handling time in a sensitive healthcare environment.”
Mid-level Software Engineer specializing in backend, cloud infrastructure, and AI systems
“Built and launched a production self-healing MLOps agent that autonomously diagnosed and fixed model training failures on Kubernetes GPU infrastructure. Combines deep AI infrastructure knowledge with full-stack product ownership, and has delivered measurable impact including 35% less infrastructure waste, nearly 50% less troubleshooting time, and 60% lower LLM API costs.”
Mid-level AI/ML Engineer specializing in LLM agents and RAG systems
“LLM/agentic systems builder at Verizon who deployed a LangGraph-orchestrated multi-agent ticket-automation platform with RAG (FAISS) to replace brittle rule-based bots. Improved routing correctness by ~30–40%, hit ~300ms latency targets via model routing, and reduced ops workload by ~60% through tight iteration with non-technical stakeholders and strong testing/observability practices.”
Senior AI Engineer specializing in LLMs, agentic systems, and MLOps
“Built and shipped PromptGuard, a production middleware proxy that secures GenAI RAG/agent systems against prompt injection and unsafe tool use using risk scoring, graded policy actions, and least-privilege tool gating. Also replaced LangChain abstractions with a custom state-machine runner for a production voice agent to reduce latency and improve traceability, and delivered a clinic call assistant by converting front-desk/doctor requirements into scenario-based guardrails and measurable evals.”
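The graded-policy pattern this blurb describes (risk scoring, tiered actions, least-privilege tool gating) can be sketched in a few lines. This is an illustrative sketch, not PromptGuard's actual code; the patterns, thresholds, and tool names are hypothetical, and a real scorer would use a trained classifier rather than regexes.

```python
# Hedged sketch of prompt-injection middleware: score a request,
# apply a graded policy action, and gate tools by least privilege.
# All patterns, thresholds, and tool names are hypothetical.
import re

# Stand-in signals; a production scorer would use a trained classifier.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"disable (the )?safety",
]

def risk_score(prompt: str) -> float:
    """Crude pattern-based risk score in [0, 1]."""
    hits = sum(bool(re.search(p, prompt, re.I)) for p in INJECTION_PATTERNS)
    return min(1.0, hits / len(INJECTION_PATTERNS))

def policy_action(score: float) -> str:
    """Graded actions: allow, sanitize, require human review, or block."""
    if score < 0.2:
        return "allow"
    if score < 0.5:
        return "sanitize"
    if score < 0.8:
        return "review"
    return "block"

def allowed_tools(score: float, requested: set[str]) -> set[str]:
    """Least-privilege gating: higher risk shrinks the available tool set."""
    safe = {"search", "calculator"}
    privileged = {"send_email", "execute_code"}
    if score < 0.2:
        return requested & (safe | privileged)
    return requested & safe  # drop privileged tools for risky requests
```

The point of the graded design is that most traffic passes untouched while only high-risk requests pay the cost of review or blocking.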
Mid-level Machine Learning Engineer specializing in computer vision and LLM pipelines
“ML/LLM engineer who built production systems to speed up artist content-creation workflows, including a fine-tuned image captioning model paired with a RAG layer over image embeddings/captions to improve consistency across changing domains. Experienced orchestrating multi-tool agents with LangChain/LangGraph (planning + critic/reflection) and setting up practical monitoring (caption rejection rate) plus evaluation sets for tool-calling accuracy, output quality, and latency.”
Mid-level Software Engineer specializing in cloud platforms, data engineering, and distributed systems
“Full-stack engineer who built and owned an AI-assisted job-matching dashboard in Next.js App Router/TypeScript, keeping LLM logic server-side and improving performance via deduplication, caching/revalidation, and streaming (35% fewer duplicate LLM calls; 40% faster first render). Also has strong data/backend chops: designed Postgres models and optimized queries at million-record scale (1.8s to 120ms) and built durable AWS multi-region telemetry workflows with idempotency, retries, and monitoring.”
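The deduplication-plus-caching approach this blurb credits with 35% fewer duplicate LLM calls can be illustrated with a small wrapper. This is a minimal sketch under stated assumptions; the class and its TTL policy are hypothetical, not the candidate's implementation.

```python
# Hedged sketch of server-side LLM-call deduplication: identical recent
# prompts reuse one cached result instead of issuing duplicate calls.
# The DedupCache name and TTL policy are illustrative assumptions.
import hashlib
import time

class DedupCache:
    def __init__(self, call_fn, ttl_seconds: float = 60.0):
        self.call_fn = call_fn          # the underlying (expensive) LLM call
        self.ttl = ttl_seconds
        self._cache: dict[str, tuple[float, str]] = {}
        self.calls = 0                  # count of real upstream calls made

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()

    def complete(self, prompt: str) -> str:
        key = self._key(prompt)
        now = time.monotonic()
        hit = self._cache.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]               # cache hit: no duplicate call
        result = self.call_fn(prompt)
        self.calls += 1
        self._cache[key] = (now, result)
        return result
```

Keeping this wrapper server-side (as the blurb notes) also prevents prompts and keys from leaking to the client.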
Senior Software Engineer specializing in AI/ML, computer vision, and cloud-native systems
“Independently built a production-grade, containerized enterprise agentic AI platform (stateful orchestration + RAG) focused on real-world reliability: guardrails, citation-based outputs, reranking, query rewriting, and evaluation harnesses to reduce hallucinations. Hands-on with OpenAI SDK, CrewAI, and LangGraph, and has delivered AI solutions for non-technical NGO stakeholders via demos and practical POCs.”
Mid-level GenAI Engineer specializing in RAG, LLMs, and enterprise AI
“Built and shipped production LLM agents that automate document processing and decision workflows, with a strong focus on reliability, guardrails, and measurable business impact. Stands out for combining RAG, tool calling, evals/monitoring, and ERP integration to deliver 30–35% manual effort reduction and higher throughput without additional headcount.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and predictive analytics
“GenAI/LLM engineer who architected and deployed a production RAG ‘research assistant’ for JPMorgan Chase’s regulatory compliance team, focused on safety-critical behavior (mandatory citations, refusal when evidence is missing). Deep hands-on experience with LlamaIndex, Pinecone, Hugging Face embeddings, LangGraph agent workflows, and metric-driven evaluation (golden sets, TruLens), including a reported 28% relevancy lift via cross-encoder re-ranking.”
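The cross-encoder re-ranking step behind the reported relevancy lift follows a common two-stage pattern: retrieve a broad candidate set cheaply, then re-order it with a finer-grained scorer. In this sketch a simple token-overlap score stands in for a real cross-encoder model; the function names are illustrative assumptions.

```python
# Hedged two-stage retrieval sketch: overlap_score is a stand-in for a
# real cross-encoder (which would jointly encode query + passage).
def overlap_score(query: str, passage: str) -> float:
    """Fraction of query tokens that appear in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Re-order first-stage candidates by the finer-grained score."""
    ranked = sorted(candidates, key=lambda c: overlap_score(query, c),
                    reverse=True)
    return ranked[:top_k]
```

Because the second-stage scorer only sees the small candidate set, it can afford to be much slower per pair than the first-stage retriever.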
Entry-level Software Engineer specializing in embedded systems, full-stack, and AI/ML
“AI-focused engineer who treats models as tightly controlled collaborators rather than autonomous replacements. Built and led a LangGraph-based multi-agent research system with separate stages for decomposition, retrieval, synthesis, and validation, emphasizing modularity, debuggability, and robust failure handling.”
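The staged architecture this blurb describes (decomposition → retrieval → synthesis → validation) can be sketched with each stage as a plain function, which is what makes failures easy to localize. This is an illustrative sketch only; the stage implementations and corpus shape are hypothetical stand-ins, not the candidate's system.

```python
# Hedged sketch of a staged research pipeline; each stage is a plain
# function so errors surface at a known boundary. All names hypothetical.
def decompose(question: str) -> list[str]:
    # Stand-in: split a compound question into sub-questions.
    return [q.strip() for q in question.split(" and ")]

def retrieve(sub_question: str, corpus: dict[str, str]) -> str:
    # Stand-in retrieval: exact-key lookup; a real stage would query a
    # vector store.
    return corpus.get(sub_question, "")

def synthesize(findings: list[str]) -> str:
    return " ".join(f for f in findings if f)

def validate(answer: str) -> str:
    # Robust failure handling: refuse rather than fabricate when empty.
    if not answer:
        raise ValueError("no grounded evidence found; refusing to answer")
    return answer

def run_pipeline(question: str, corpus: dict[str, str]) -> str:
    subs = decompose(question)
    findings = [retrieve(s, corpus) for s in subs]
    return validate(synthesize(findings))
```

The validation stage embodies the "tightly controlled collaborator" stance: an empty synthesis raises instead of letting the model improvise an answer.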
Senior Data Scientist / Generative AI Engineer specializing in fraud, risk, and MLOps
“Built and deployed a production LLM/RAG fraud investigation system to replace manual investigator workflows, combining transaction data, historical cases, and policy documents with agent-style steps and LoRA fine-tuning. Demonstrates strong reliability engineering (grounding, citations, abstention paths), performance optimization (retrieval/indexing/caching), and end-to-end MLOps orchestration using Azure ML Pipelines/MLflow plus Kubernetes/Argo with canary and rollback deployments.”
Senior AI & Machine Learning Engineer specializing in GenAI, Agentic AI, and RAG
“Built a production agentic AI system to automate data science work using a layered architecture (executive-summary handling, tool-based execution, and on-the-fly code generation). Demonstrates strong end-to-end agent development practices including RAG with vector databases, prompt engineering, and multi-method evaluation (LLM-as-judge/human/code-based), plus Airflow-based orchestration for ML data pipelines and close collaboration with business end users.”
Mid-level AI/ML Engineer specializing in GenAI, RAG, and enterprise data platforms
“Built and shipped a production LLM-powered RAG assistant for enterprise internal document search (PDFs, knowledge bases, structured data), addressing real-world issues like noisy documents, hallucinations, and latency with grounded prompting, retrieval-confidence fallbacks, and performance optimizations. Also partnered with compliance and business teams at JPMorgan Chase to deliver a solution aligned with regulatory constraints, supported by monitoring, feedback loops, and systematic evaluation.”
Mid-level Machine Learning Engineer specializing in LLMs and RAG for healthcare
“AI Engineer (Medtronic) who deployed a production RAG-based clinical assistant grounded in curated biomedical literature (no patient-identifiable data). Deep hands-on experience orchestrating and hardening LLM workflows with LangChain/LangGraph, including stateful agentic flows, rigorous testing, and evaluation; reports a 72% accuracy improvement through retrieval enhancements (query rewriting, multi-query expansion, MMR reranking).”
Senior AI/ML Engineer specializing in Generative AI, LLMs, and production ML systems
“ML/AI engineer with hands-on ownership of both classical ML and GenAI systems in production. Built an end-to-end churn prediction service on AWS and shipped RAG-based document search/summarization features, with clear experience in monitoring, hallucination reduction, cost/latency optimization, and creating shared Python/LLM infrastructure used across teams.”
Director-level AI Architect/Manager specializing in GenAI, MLOps, and enterprise automation
“GenAI/ML engineering leader (player-coach) who built and deployed an image-to-text production system for topology/resource diagrams, combining YOLO-based issue detection with an LLM to generate support-ready reports at scale. Heavy AWS stack (SageMaker, Step Functions, Lambda, CloudWatch, FastAPI, Kubernetes/Docker) with KPI-driven optimization (MTTR, P50), including ~21 custom labels and reported 30–50% faster issue identification while processing thousands of images in production.”
Staff Software Engineer specializing in Healthcare Platforms and Observability
Intern Software Engineer specializing in full-stack development and cloud/AI automation