Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in GenAI, RAG pipelines, and cloud MLOps
“Built and deployed a production LLM + vector search clinical decision support system at UnitedHealth Group, retrieving medical evidence and patient context in real time for prior authorization and risk scoring. Strong in end-to-end RAG architecture (Hugging Face embeddings, Pinecone/FAISS, SageMaker, Redis) plus orchestration (Airflow/Kubeflow) and rigorous evaluation/monitoring, with demonstrated ability to align solutions with clinical operations stakeholders.”
“LLM engineer who has deployed production RAG systems for regulated document QA (PDFs/knowledge bases), emphasizing grounded answers with citations, RBAC, monitoring, and continuous feedback. Demonstrates deep practical expertise in retrieval quality (semantic chunking, hybrid BM25+embeddings, re-ranking), reliability (guardrails, deterministic workflows), and measurable evaluation (golden sets, log replay, A/B tests) while partnering closely with compliance/operations stakeholders.”
Mid-level Machine Learning Engineer specializing in forecasting, NLP, and GenAI
“GenAI/ML engineer with production experience building multilingual LLM systems (English/Spanish) and RAG-based clinical documentation summarization at Walgreens, combining prompt engineering, structured output validation, and rigorous evaluation (ROUGE + pharmacist review). Also orchestrated end-to-end ML pipelines for demand forecasting using Apache Airflow, PySpark, and MLflow with scheduled retraining and production monitoring.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps on AWS
“LLM engineer who built a production document intelligence/RAG pipeline to extract structured data from thousands of unstructured PDFs, cutting manual review time by 60%. Experienced with LangChain and Airflow orchestration plus rigorous evaluation (labeled datasets, prompt testing, HITL review, monitoring) to improve accuracy and reduce hallucinations while partnering closely with non-technical operations stakeholders.”
Mid-level Machine Learning Engineer specializing in data science and cloud systems
“ML engineer who independently pitched and built a recommendation engine at Danske Bank in a legacy fintech environment, creating compliant data pipelines and deployment infrastructure from scratch and delivering a 62% engagement lift with 70%+ advisor adoption. Also worked at AWS on classification and GenAI-powered reporting systems, with strengths spanning production ML, platform setup, monitoring, and research-to-production optimization.”
Mid-level AI/ML Engineer specializing in FinTech risk and fraud systems
“Senior AI/ML engineer focused on production LLM systems, combining RAG, fine-tuning, distributed training, and AI safety to ship scalable real-time moderation and conversational AI platforms. Stands out for pairing deep AWS/Kubernetes MLOps expertise with measurable impact: 40% lower latency/cost, 30-50% fewer hallucinations, and major reliability gains through observability and automation.”
Mid-level AI/ML Engineer specializing in Generative AI and financial services
“ML/AI engineer with hands-on experience shipping regulated financial AI systems at JPMC and Capgemini, spanning credit risk, fraud detection, and generative AI assistants. Stands out for combining modern LLM/RAG architectures with strong MLOps, real-time infrastructure, and explainability/compliance practices, while delivering measurable business impact in latency, accuracy, cost, and risk reduction.”
Mid-level AI/ML Engineer specializing in GenAI, RAG, and healthcare ML
“Built an end-to-end GenAI/RAG platform for financial compliance and research at BlackRock, focused on safe, auditable answers in a highly regulated environment. Combines strong LLM engineering depth with production platform skills, delivering clear business impact: reducing research/compliance turnaround from hours to seconds, improving retrieval relevance by 22%, and cutting inference costs by 75%.”
Mid-level Machine Learning Engineer specializing in MLOps, NLP, and production ML systems
“Backend/founding-engineer-style builder who designed and evolved a near-real-time customer churn prediction platform (FastAPI + AWS SageMaker/Lambda + Redis + MLflow) to enable real-time retention actions, reporting ~18% churn reduction. Demonstrates strong production engineering in secure API design, incremental migrations with data integrity safeguards, and robustness improvements in async pipelines (idempotency, DLQs, retry visibility).”
Mid-level AI/ML Engineer specializing in MLOps, NLP/LLMs, and computer vision
“Built and shipped a production LLM/RAG risk-case summarization and triage system used by fraud/compliance analysts, with strong grounding controls (evidence-cited outputs and refusal on low confidence). Demonstrates end-to-end ownership across retrieval quality, Airflow-orchestrated indexing pipelines, and compliance-grade privacy (PII redaction, RBAC, encrypted redacted logging, and auditable prompt/model versioning) plus a tight feedback loop with non-technical domain experts.”
“Built and deployed a production LLM-powered RAG assistant for semiconductor manufacturing failure analysis, reducing engineer triage effort by grounding outputs in retrieved evidence and gating responses with SPC + ML signals (LSTM anomaly scores, XGBoost probabilities). Experienced with LangChain/LangGraph to ship reliable, observable multi-step agents with branching/fallback logic, and evaluates impact using both technical metrics and business KPIs like mean time to triage and downtime reduction.”
Mid-level Generative AI Engineer specializing in LLM agents and RAG systems
“Built and deployed a production LLM/RAG knowledge assistant integrating internal docs, wikis, and ticket histories to reduce tribal-knowledge dependency and repetitive questions. Emphasizes reliability via grounding + a validation layer, and achieved major latency gains (>50%) through vector index optimization, caching, quantization, and selective re-validation. Comfortable orchestrating end-to-end LLM/data workflows with Airflow, Prefect, and Dagster, including monitoring and alerting.”
Senior Data Scientist / ML Engineer specializing in cloud ML pipelines and GenAI
“ML/NLP practitioner with experience building a transformer-failure prediction system that combines sensor signals with unstructured maintenance comments using LLM-based extraction and similarity validation. Strong emphasis on production readiness—data leakage controls, SQL-driven data quality tiers, and rigorous bias/fairness validation (including contract/spec evaluation across diverse company profiles).”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps
“Red Hat ML/LLM engineer who designed and deployed a production LLM-powered customer support automation system using RAG, improving latency by 30% via PEFT and vector search optimization. Built security and governance into retrieval (access-level filtering, encrypted Pinecone/ChromaDB) and delivered SHAP-based explainability via a dashboard for non-technical stakeholders. Experienced orchestrating distributed ML/RAG pipelines across AWS SageMaker and OpenShift with Airflow/Prefect, plus multi-agent workflows using CrewAI and LangGraph.”
Mid-level DevOps Engineer specializing in cloud infrastructure, CI/CD, and DevSecOps
“Platform-focused engineer experienced in productionizing ML/LLM systems: containerized a local prototype, implemented CI/CD, deployed to Kubernetes with scaling controls, and added monitoring/logging. Comfortable diagnosing real-time issues in LLM/agent workflows using logs/metrics and incident stabilization tactics, and supports sales calls by clearly explaining production scalability to unblock customer decisions.”
Staff Platform Engineer specializing in multi-cloud platforms and internal developer portals
“Infrastructure reliability/capacity-focused engineer with hands-on IBM Power/AIX (LPAR/DLPAR, HMC, VIOS) performance troubleshooting and modern cloud-native delivery experience. Built production CI/CD and Terraform-managed AWS/EKS environments, and has led real incident recoveries spanning Kubernetes autoscaling and AWS quota constraints with concrete RCA and prevention improvements.”
Mid-level AI/ML Engineer specializing in GenAI, LLMs, RAG, and MLOps
“Built and deployed a production LLM-powered RAG document intelligence/Q&A system for healthcare prior authorization, reducing manual medical document review time and improving decision efficiency. Strong in end-to-end LLM application engineering (LangChain/LangGraph), retrieval quality improvements (hybrid search, embedding tuning, chunking strategies), and rigorous evaluation/monitoring for reliability.”
Senior Full-Stack Software Engineer specializing in microservices and cloud-native systems
“Backend/infra engineer with experience across Nestle, J.P. Morgan, and Capgemini, combining ML systems work (YOLOv8/PyTorch object detection with TFLite edge deployment) with production-grade cloud/Kubernetes operations. Has delivered measurable impact via AWS migrations (25% cost reduction, 99.9% availability), microservice modernization (35% faster processing), and low-latency Kafka streaming for financial dashboards (<100ms) using DLQs and idempotent consumers.”
Junior AI/ML Engineer specializing in LLM systems and retrieval-augmented generation
“Built and deployed a production LLM-powered market intelligence and decision-support platform for noisy, real-time financial data, using a high-throughput embedding + vector DB RAG architecture to reduce hallucinations while keeping latency and cost low. Operated it at scale with GPU-backed inference (continuous batching/quantization), FastAPI on Kubernetes, and Airflow-orchestrated ingestion/embedding/retraining workflows, with strong schema-based reliability and monitoring.”
Mid-level AI/ML Engineer specializing in Generative AI, LLMOps, and MLOps
“Built and deployed an AWS-based LLM/RAG ticket triage and knowledge retrieval system (Pinecone/FAISS + Step Functions + MLflow) that cut support resolution time by 20%. Demonstrates strong production focus on hallucination reduction, PII security, and low-latency orchestration, with measurable evaluation improvements (e.g., ~25% grounding accuracy gain via re-ranking) and proven collaboration with support operations stakeholders.”
Executive Product & Technology Leader specializing in FinTech SaaS, core banking, and payments
“Product/technology executive who has repeatedly built and scaled engineering organizations from early-stage startups to large enterprises (including Temenos with ~2,000 developers). Led major platform modernizations (monolith-to-microservices) to enable SaaS, resilience, and faster delivery, and launched an innovation hub to build proprietary neural-network biometric algorithms for a payments product enabling ‘wave of the hand’ transactions.”
Mid-level AI/ML Engineer specializing in FinTech and Generative AI
“AI/ML engineer with hands-on ownership of enterprise LLM deployments at Freshworks, including a large-scale RAG chatbot serving 15,000+ users across six departments. Stands out for combining deep production engineering skills—AWS microservices, Kubernetes, observability, retrieval quality, and faithfulness evaluation—with strong cross-functional stakeholder leadership and prior large-scale fraud data pipeline experience at Socure.”
Mid-level AI/ML Engineer specializing in generative AI, NLP, and MLOps
“ML/AI engineer with hands-on ownership of production GenAI and computer vision systems, spanning experimentation, deployment, monitoring, and iterative optimization. Stands out for shipping an enterprise RAG platform that cut manual review by 50% and a defect detection pipeline that reduced report generation from 15 minutes to under 1 second while maintaining high uptime and strong operational discipline.”