Pre-screened and vetted.
Senior Machine Learning Engineer specializing in LLM systems and generative AI
Mid-level AI & Machine Learning Engineer specializing in computer vision and MLOps
Mid-level AI/ML Engineer specializing in production ML, NLP, and computer vision
Mid-level AI/ML Engineer specializing in NLP, Computer Vision, and Generative AI
Senior Infrastructure Platform Architect specializing in hybrid cloud and Kubernetes automation
Senior Machine Learning Engineer specializing in computer vision and healthcare AI
Staff Machine Learning Engineer specializing in MLOps, cloud AI, and generative AI
Principal Machine Learning Architect specializing in AI platforms and data science
Senior Data Engineer specializing in cloud lakehouse platforms and healthcare data
Senior Machine Learning Engineer specializing in MLOps and GenAI platforms
Senior Data Engineer specializing in cloud data platforms and big data pipelines
Senior Software Engineer specializing in cloud backend systems and LLM-powered agents
“Engineer on Amazon's Fire TV Devices team who built and shipped a production LLM-powered lab triage and validation system that grounds recommendations in internal runbooks and known-issue data and surfaces evidence-based actions via dashboards and Slack. Emphasizes safety and measurability with structured JSON outputs, replay-based evaluation on historical incidents, and production metrics (e.g., disagreement rate and time-to-first-action), plus cost/latency optimizations such as caching, batching, and rule-based fast paths.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with experience at Moderna and Block owning high-volume (≈10 TB/day) production pipelines on AWS, using Kafka/S3/Glue/dbt/Snowflake with strong data-quality and observability practices (schema validation, anomaly detection, CloudWatch monitoring). Also built external financial-API ingestion with Airflow retries, throttling/token rotation, and schema versioning, and helped stand up an early-stage biomedical data platform with CI/CD and hands-on incident debugging.”
Senior Data Engineer specializing in cloud ETL and real-time streaming pipelines
“Data engineer with eBay experience owning end-to-end pipelines for real-time order and user behavior analytics at 10M+ records/day. Strong in PySpark/SQL transformations, Airflow reliability patterns, and production observability (CloudWatch), with measurable outcomes including improved data quality and 30–40% query performance gains. Also built Python data APIs for analytics/ML consumers with versioning and backward compatibility.”
Senior Data Scientist / ML Engineer specializing in GenAI, LLMs, and NLP
“ML/NLP engineer focused on production GenAI and data-linking systems: built a large-scale RAG pipeline over millions of support docs using LangChain/Pinecone and added a LangGraph-based validation layer that cut hallucinations by ~40%. Also built scalable PySpark entity resolution (95%+ accuracy) and fine-tuned Sentence-BERT embeddings with contrastive learning for a ~30% relevance lift, with strong CI/CD and observability practices (OpenTelemetry, Prometheus/Grafana).”
Staff/Lead Data Scientist specializing in Generative AI, NLP/LLMs, and MLOps
“Lead Data Scientist (10+ years) with recent work in healthcare data: built production pipelines that unify EHR, genomics, and clinical notes using NLP (spaCy/BERT/BioBERT) and scalable Spark-based processing. Also led development of domain-specific LLM/NLP systems for chatbots and semantic search, deploying models via FastAPI/Flask and improving retrieval with FAISS-backed, fine-tuned clinical embeddings and RAG-style workflows.”
Mid-level Generative AI Engineer specializing in RAG, agentic copilots, and regulated AI
“Engineer who built and productionized an Azure-based Enterprise AI Copilot for financial/compliance teams, focused on grounded, auditable answers with citations to reduce hallucinations in regulated workflows. Experienced in designing multi-step agent orchestration and improving reliability through targeted iterations (e.g., fixing chunking/parsing to materially improve citation accuracy), plus building defensive pipelines for messy ERP and operational finance data.”