Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in fraud detection, credit risk, and NLP
“Built and deployed a production LLM-powered university support chatbot on Azure using a RAG pipeline, focusing on reducing hallucinations, improving latency, and handling ambiguous queries via confidence checks and clarification prompts. Also has hands-on orchestration experience (Airflow/Azure Data Factory), including hardening a demand-forecasting ingestion workflow with sensors, retries, and automated alerts, and uses a metrics-driven testing/monitoring approach for reliable AI agents.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
Mid-level Data Engineer specializing in cloud data pipelines and streaming analytics
Mid-level Data Engineer specializing in AI/ML, streaming, and lakehouse architectures
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Mid-level Data Scientist specializing in GenAI, RAG, and forecasting
“ML/NLP engineer focused on large-scale data linking for e-commerce-style catalogs and customer records, combining transformer embeddings (BERT/Sentence-BERT), NER, and FAISS-based vector search. Has delivered measurable lifts (e.g., +30% matching accuracy, Precision@10 62%→84%) and built production-grade, scalable pipelines in Airflow/PySpark with strong data quality and schema-drift handling.”
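For context, the embedding-based record matching this profile describes can be sketched roughly as follows. This toy version uses brute-force NumPy cosine similarity in place of Sentence-BERT embeddings and a FAISS index; the function name, vectors, and catalog are illustrative only, not the candidate's actual code.

```python
import numpy as np

def top_k_matches(query_vec, catalog_vecs, k=2):
    """Return indices of the k most similar catalog vectors by cosine similarity.

    In a production linking pipeline the vectors would come from a model such
    as Sentence-BERT and the search would be served by an ANN index like FAISS;
    brute-force NumPy is used here only to illustrate the idea.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    sims = c @ q                   # cosine similarity against every record
    return np.argsort(-sims)[:k]   # highest-similarity indices first

# Tiny illustrative "catalog" of three record embeddings.
catalog = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
matches = top_k_matches(query, catalog)
```

Metrics like the Precision@10 lift quoted above are computed by checking how many of the top-k returned records are true matches for each query.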
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time ~40% while reducing config drift. Also supported a legacy on-prem data warehouse/backend migration to GCP using phased migration and parallel validation to meet strict reliability/SLA needs.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics engineering
“Built and deployed a production LLM-powered demand and churn forecasting system for an e-commerce client, combining open-source LLMs (LLaMA/Mistral) and Sentence-BERT embeddings to generate business-friendly explanations of forecast drivers. Strong focus on data quality and model trust (validation, baselines, segmented monitoring) and production reliability via Airflow-orchestrated pipelines with readiness checks, retries, and ongoing drift monitoring and A/B testing.”
Mid-level Data Scientist specializing in ML, data engineering, and real-time analytics
Junior Data Engineer specializing in cloud ETL/ELT and lakehouse platforms
Mid-level Software Engineer specializing in full-stack web and data engineering
“Backend/ML engineer who has built both enterprise data pipelines and real-time AI products: modular Python (Flask/FastAPI) services integrating automation scripts and low-latency ML inference (MediaPipe, PyTorch) plus OpenAI-powered feedback. Demonstrated measurable performance wins (~30% faster HR workflows; ~40% faster AWS pipelines across 100+ Oscar Health feeds) and strong multi-tenant/data-isolation patterns (schema-based isolation, RBAC, microservices).”
Mid-level Business Analyst specializing in data analytics and BI
“Healthcare analytics professional with hands-on experience turning messy claims, eligibility, and utilization data into validated BI-ready models using SQL and Python. Combines strong data engineering and KPI design skills with stakeholder-facing delivery, including Power BI prototyping, retention metric operationalization, and analyses that supported care management interventions and cost-control decisions.”
Mid-level Data Scientist & AI Engineer specializing in NLP, LLMs, and predictive analytics
“AI Engineer with production experience building an LLM-powered conversational scheduling assistant (rules-based + OpenAI GPT agents) and improving responsiveness by ~40% through architecture optimization. Strong in orchestration (Airflow), containerized deployments, and data quality (Great Expectations/PySpark), with prior work automating population health reporting pipelines (Azure Data Factory → Snowflake) and delivering insights via Tableau to non-technical stakeholders.”
Senior AI/ML Engineer specializing in Generative AI and healthcare analytics
“ML/AI engineer with strong healthcare insurance domain depth who has owned fraud detection and LLM claims products end-to-end in production. Stands out for combining modern MLOps and RAG architecture with measurable business impact, including millions in fraud savings, 40% faster analysis, and reusable platform tooling that accelerated multiple teams.”
“At Liberty Mutual, built a production underwriting decision assistant combining LLM reasoning with quantitative models and strong auditability. Implemented a claims-based response verification pipeline that cut hallucinations from 18% to 3% and materially improved user trust/validation scores. Experienced orchestrating ML/LLM workflows end-to-end with Airflow, Kubeflow Pipelines, and Jenkins, including SLA-focused pipeline hardening.”
Senior Full-Stack Engineer specializing in web platforms, cloud infrastructure, and data systems
“Full-stack/product-leaning engineer who owned an end-to-end AI Tutor feature (Claude-powered) shipped simultaneously to iOS/Android/web via Expo, with Cloudflare Workers backend and PostHog analytics. Built the company’s GitHub-based CI/CD to coordinate app store releases with backend blue/green deployments. Also has significant data engineering experience (including ~8TB/day workloads) using dbt/Fivetran plus sharding and hashing-based diffing for correctness.”
Mid-level AI/ML Engineer specializing in data engineering, LLM/RAG pipelines, and recommender systems
“Research assistant at St. Louis University who built and deployed a production document-intelligence RAG system (Python/TensorFlow, vector DB, FastAPI) on AWS, focusing on grounding to reduce hallucinations and latency optimization via caching/async/batching. Also developed a personalized recommendation system for the Frenzy social platform and partnered closely with product/UX to define metrics and iterate on hybrid recommenders and cold-start handling.”
“Built and deployed a production AI customer support chatbot at Unique Design Inc. using FastAPI, AWS, Docker, and retrieval-based grounding on internal documents. Stands out for hands-on ownership across discovery, deployment, incident debugging, and post-launch iteration, with a strong focus on making LLM systems reliable and safe in real business workflows.”
Mid-level Data Engineer specializing in cloud data pipelines and Snowflake
“Data engineer who has owned production pipelines end-to-end, ingesting 50–100 GB/day from APIs/S3 and near-real-time Kafka into Snowflake with strong data quality gates (Great Expectations/dbt) and Airflow-based reliability (SLAs, alerting, dashboards). Also built a Snowflake-backed REST data API with caching/pagination and versioned endpoints, and designed a compliant, scalable web-scraping system with anti-bot handling and safe backfills.”
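The “data quality gates” idea referenced here (Great Expectations/dbt tests blocking bad loads before they reach Snowflake) can be illustrated with a bare-bones Python check; the rule set, row shape, and threshold below are hypothetical stand-ins for a declarative expectation suite.

```python
def quality_gate(rows, required_cols, max_null_fraction=0.1):
    """Fail a pipeline step if required columns are missing or too sparse.

    Tools like Great Expectations express such rules declaratively and report
    rich diagnostics; this hand-rolled version only shows the gating behavior.
    """
    if not rows:
        return False, "empty batch"
    for col in required_cols:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if nulls / len(rows) > max_null_fraction:
            return False, f"column {col!r} exceeds null threshold"
    return True, "ok"

# Illustrative batch: one of two rows has a null amount.
batch = [{"order_id": 1, "amount": 9.5}, {"order_id": 2, "amount": None}]
passed, reason = quality_gate(batch, ["order_id", "amount"], max_null_fraction=0.5)
```

In an orchestrated pipeline, a failed gate would typically raise and trigger the Airflow alerting/SLA machinery the summary mentions rather than return a flag.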
Mid-level Data Engineer specializing in cloud data platforms and real-time pipelines
“Data engineer who has owned production pipelines end-to-end—from Kafka/Airflow ingestion through SQL/Python validation and dbt transformations into Redshift/BI. Also built and operated a large-scale distributed web scraping platform (50–100 sites daily, ~5–10M records/day) with Kubernetes, Kafka queues, robust retries/DLQ, anti-bot measures, and backfill-safe raw HTML storage.”
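The retry-plus-dead-letter-queue pattern this summary credits is a standard reliability technique in queue-driven scraping and ingestion. A minimal, queue-agnostic sketch follows; the handler, message shapes, and in-memory lists are hypothetical (a Kafka deployment would use a separate DLQ topic and backoff between attempts).

```python
def process_with_dlq(messages, handler, max_retries=3):
    """Attempt each message up to max_retries times; failures go to a DLQ.

    Lists stand in for the work queue and dead-letter topic so the control
    flow is easy to see; real pipelines add backoff and structured logging.
    """
    dead_letter_queue = []
    succeeded = []
    for msg in messages:
        for attempt in range(1, max_retries + 1):
            try:
                succeeded.append(handler(msg))
                break
            except Exception:
                if attempt == max_retries:
                    dead_letter_queue.append(msg)  # park for inspection/backfill
    return succeeded, dead_letter_queue

# Illustrative handler: fails permanently on malformed records.
def parse_record(msg):
    if "id" not in msg:
        raise ValueError("malformed record")
    return msg["id"]

ok, dlq = process_with_dlq([{"id": 1}, {"bad": True}, {"id": 2}], parse_record)
```

Parked DLQ messages are what make the “backfill-safe” replays mentioned above possible: once the parser is fixed, the dead-lettered batch is re-run through the same handler.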
Mid-level Data Engineer specializing in FinTech data platforms
“Backend-focused engineer with experience at Ramp, Easebuzz, and George Mason University, spanning data pipelines, workflow automation, and production reliability. Stands out for quantifiable performance gains, strong debugging instincts in distributed job systems, and translating ambiguous finance operations processes into measurable automation outcomes.”
Mid-level Data Engineer specializing in cloud data platforms and ETL automation
“Data engineer who has owned high-volume production pipelines end-to-end (200–300 GB/day) on AWS, implementing strong data quality/observability and achieving 99.9% reliability while cutting data issues ~33%. Also built a large-scale external data collection system ingesting millions of records/day with anti-bot/rate-limit handling and backfill tooling, and shipped a versioned REST service exposing curated Snowflake data to downstream teams.”