Pre-screened and vetted.
Senior Machine Learning Engineer specializing in recommender systems, search, and NLP/GenAI
Executive Technology Leader specializing in GenAI, cloud infrastructure transformation, and enterprise modernization
Staff Machine Learning Engineer specializing in LLMs and Generative AI
Mid-level AI/ML Engineer specializing in LLM training, RAG, and scalable inference
Senior Machine Learning Engineer & Solution Architect specializing in cloud AI systems
“Backend/ML platform engineer with Google experience leading Python microservices for an AI-driven recommendation/retrieval system, including PyTorch inference and a retrieval-augmented generation workflow. Strong in production Kubernetes + GitOps (ArgoCD), real-time Kafka/Spark pipelines, and phased on-prem/legacy to AWS/GCP cloud migrations with reliability-focused rollout and rollback practices.”
Senior AI/ML Engineer specializing in LLM applications, RAG systems, and MLOps
Machine Learning Engineer Intern specializing in LLM agents and multimodal reasoning
“LLM/agent engineer who built a production code-generation agent at Corvic AI that lets non-technical users query CSV/tabular data in natural language by generating and executing Python. Focused on making LLM systems reliable and scalable via schema-aware validation, sandboxed execution-feedback retries, prompt caching/embeddings, async execution, and high-throughput data processing with Polars; also partnered with Adobe product/marketing to ship brand-aligned AI content generation for email and push notifications.”
Mid-level Machine Learning Engineer specializing in NLP, MLOps, and Generative AI
“Built and deployed a production LLM conversational AI system at OpenAI supporting chat, summarization, and semantic search at 1M+ requests/day, driving a 40% latency reduction and a 25% accuracy improvement through Pinecone optimization and tighter RAG with re-ranking. Also has Amazon experience improving recommendation systems by translating ML metrics into business terms to boost CTR and conversions, with strong MLOps/orchestration depth (Airflow, MLflow, SageMaker, Kubeflow).”
Mid-level AI/ML Engineer specializing in LLM optimization and real-time fraud/risk modeling
“ML engineer with 5 years at Stripe building and productionizing real-time fraud detection at massive scale (3M+ transactions/day; $5B+ annual payment volume). Delivered measurable impact (22% accuracy lift, 18% loss reduction, +3–5% authorization rates) and has strong MLOps/orchestration experience (Docker, Kubernetes, Airflow, MLflow, CI/CD, monitoring/rollback) plus a structured approach to LLM agent/RAG evaluation.”
Senior Applied Machine Learning Engineer specializing in FinTech & E-commerce
Senior Software Engineer specializing in Python AI/ML integration and experimentation pipelines
Senior AI/ML Engineer specializing in recommender systems, GenAI, and applied ML
Senior Machine Learning Engineer specializing in NLP and Generative AI
Mid-level AI/ML Engineer specializing in Generative AI, LLMs, and RAG systems
Senior Machine Learning Engineer specializing in on-device AI and large-scale deep learning systems
Senior AI/ML Engineering Manager specializing in NLP, computer vision, and MLOps
Junior Machine Learning Engineer specializing in fraud detection and healthcare ML
Senior AI/ML Engineer specializing in Generative AI, RAG, and MLOps for FinTech
Senior Full-Stack AI Engineer specializing in LLM and speech-to-text products
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and scalable inference
“ML/LLM engineer who built and shipped an LLM-powered internal knowledge assistant at Meta, focusing on production-grade RAG to reduce hallucinations and improve trust. Deep experience with scaling and serving (FSDP/DeepSpeed/LoRA, Triton, Kubernetes autoscaling) and reliability practices (Airflow retraining, MLflow versioning, monitoring with rollback), including sub-100ms latency and ~35% GPU memory reduction.”