Pre-screened and vetted in California.
Senior Laboratory Technician specializing in clinical diagnostics and quality compliance
“Forward-deployed, full-stack/platform engineer who owns production features end-to-end across frontend, backend, data, and infrastructure (AWS serverless, Terraform, React). Has modernized critical fintech/payment systems (zero-downtime monolith-to-microservices migration with Kafka event sourcing) and productionized AI-native support workflows (LLM + RAG on Pinecone) with measurable improvements in latency, incident rates, CSAT, and support efficiency.”
Mid-level AI/ML Engineer specializing in LLMs, RAG pipelines, and cloud MLOps
“Built and deployed a production LLM/RAG system at CVS to automate clinical document review, addressing PHI compliance, retrieval accuracy, and latency; achieved a 35–40% reduction in review effort through chunking and FP16/INT8 optimization. Also has experience translating AI outputs into actionable insights for non-technical stakeholders (sports analysts).”
Mid-level Data Engineer specializing in cloud data platforms and AI agents
“Data/Backend engineer who has owned end-to-end merchant analytics systems on AWS: orchestrated multi-source ingestion (FISERV/Shopify/Clover) with Step Functions/Lambda, enforced strong data quality gates, and served curated datasets via Redshift and a FastAPI layer. Also built an early-stage Merchant Insights AI agent that converts natural language questions into SQL using OpenAI models, with full CI/CD and observability.”
Mid-level ML & Data Engineer specializing in GenAI, graph modeling, and fraud/risk analytics
“Built a production AI fraud/risk scoring platform at BlueArc that ingests web business/product/site data, generates text+image embeddings, and connects entities in a graph to detect reuse patterns and links to known bad actors. Optimized for scale with incremental graph re-scoring and delivered investigator-friendly explainability by surfacing the exact signals/relationships behind each score; orchestrated workflows with Airflow and GCP event-driven components (Pub/Sub, Dataflow, Cloud Run) and has recent LLM workflow orchestration experience (retrieval, prompting, scoring).”
Mid-level Data Engineer specializing in cloud big data and streaming pipelines
“Data engineer focused on large-scale financial data platforms, with hands-on ownership of an AWS + Databricks + Snowflake pipeline processing ~2TB/day. Strong in data quality (Great Expectations), automated schema-drift handling, and production reliability (99.9% uptime), plus measurable performance/cost wins (pipeline runtime 4h→1.2h, ~25% cost reduction). Also built an async Python crawling/ingestion framework with anti-bot mitigation, retries, and Airflow-driven backfills.”
Mid-level Data Engineer specializing in multi-cloud real-time data pipelines
“Data engineer with healthcare/clinical trial domain experience who owned a 100TB+/month AWS pipeline end-to-end (Glue/S3/Redshift/Airflow) and drove measurable outcomes (20% lower latency, 99.9% reliability, 40% less manual reporting). Also built production data services and API-based ingestion on GCP (Cloud Run/Functions/BigQuery) with strong validation, versioning, and safe migration practices, and launched an early-stage RAG solution (LangChain + GPT-4) for researchers.”
Senior Data Engineer specializing in cloud data platforms, ETL pipelines, and analytics
Mid-level Data Analyst specializing in machine learning and analytics
Mid-level Software Engineer and Data Engineer specializing in AI products and real-time analytics
Mid-level Software Engineer specializing in backend microservices and data infrastructure
Mid-level AI Engineer specializing in agentic AI, LLM systems, and healthcare AI
“Healthcare-focused ML/AI engineer who has built production voice agents and clinical question-answering systems end-to-end, from experimentation through deployment, observability, and iteration. Particularly strong in making LLM systems reliable in real workflows via RAG, fine-tuning, guardrails, evaluation pipelines, and shared Python tooling; cites ~20% clinical QA accuracy gains and ~40% faster physician decision turnaround.”
Mid-level Generative AI & ML Engineer specializing in production LLM and RAG systems
“AI/ML engineer who shipped a production blood-test report understanding and personalized supplement recommendation product, using a LangGraph multi-agent pipeline on AWS serverless with OCR via Bedrock and RAG over vetted clinical research. Also built end-to-end recommender system pipelines at ASANTe using Airflow (ingestion, embeddings/features, training, registry, batch scoring/monitoring) with KPI reporting to Tableau, with a strong focus on safety, evaluation, and measurable reliability.”
Mid-level Data Engineer specializing in FinTech data platforms
“Backend-focused engineer with experience at Ramp, Easebuzz, and George Mason University, spanning data pipelines, workflow automation, and production reliability. Stands out for quantifiable performance gains, strong debugging instincts in distributed job systems, and translating ambiguous finance operations processes into measurable automation outcomes.”
Intern AI/Software Engineer specializing in backend systems, cloud infrastructure, and GenAI
Junior Data Engineer specializing in cloud data pipelines and warehousing
Mid-level Forward Deployed Engineer specializing in LLM agents and RAG/CAG systems
Mid-level Software Engineer specializing in full-stack web and data engineering
Junior Software Engineer specializing in secure backend systems and DevOps automation
Senior Software Engineer specializing in distributed systems and cloud data pipelines
Mid-level Data Engineer specializing in cloud ELT pipelines and analytics engineering
“Data engineer who has owned end-to-end ELT pipelines on Airflow + AWS (S3/Glue/Lambda) with Snowflake/Redshift, processing millions of records per day and tens of GBs via PySpark. Built strong data quality and reliability practices (40% quality improvement, 99%+ uptime), and also designed a resilient web-scraping system with anti-bot defenses and schema-change versioning plus REST APIs for serving curated data.”