Pre-screened and vetted.
Mid-level Data Analyst/Data Engineer specializing in machine learning and NLP
Senior Data Analyst specializing in healthcare, insurance, and financial analytics
Senior Data Analyst specializing in BI, data engineering, and predictive analytics
Mid-level Data Engineer specializing in cloud ETL, streaming, and data warehousing
Mid-level Data Engineer specializing in AWS, Snowflake, Databricks, and PySpark
Mid-level AI/ML Engineer specializing in fraud detection, credit risk, and NLP
“Built and deployed a production LLM-powered university support chatbot on Azure using a RAG pipeline, focusing on reducing hallucinations, improving latency, and handling ambiguous queries via confidence checks and clarification prompts. Also brings hands-on orchestration experience (Airflow/Azure Data Factory), including hardening a demand-forecasting ingestion workflow with sensors, retries, and automated alerts, and applies a metrics-driven testing and monitoring approach to keep AI agents reliable.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
Mid-level Data Engineer specializing in cloud data pipelines and streaming analytics
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Mid-level Data Scientist specializing in GenAI, RAG, and forecasting
“ML/NLP engineer focused on large-scale data linking for e-commerce-style catalogs and customer records, combining transformer embeddings (BERT/Sentence-BERT), NER, and FAISS-based vector search. Has delivered measurable lifts (e.g., +30% matching accuracy, Precision@10 62%→84%) and built production-grade, scalable pipelines in Airflow/PySpark with strong data quality and schema-drift handling.”
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time by ~40% while reducing config drift. Also supported a legacy on-prem data warehouse/backend migration to GCP using phased migration and parallel validation to meet strict reliability/SLA needs.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics engineering
“Built and deployed a production LLM-powered demand and churn forecasting system for an e-commerce client, combining open-source LLMs (LLaMA/Mistral) and Sentence-BERT embeddings to generate business-friendly explanations of forecast drivers. Strong focus on data quality and model trust (validation, baselines, segmented monitoring) and production reliability via Airflow-orchestrated pipelines with readiness checks, retries, and ongoing drift monitoring and A/B testing.”
Mid-level Data Scientist specializing in ML, data engineering, and real-time analytics
Junior Data Engineer specializing in cloud ETL/ELT and lakehouse platforms
Mid-level Software Engineer specializing in full-stack web and data engineering
“Backend/ML engineer who has built both enterprise data pipelines and real-time AI products: modular Python (Flask/FastAPI) services integrating automation scripts and low-latency ML inference (MediaPipe, PyTorch) plus OpenAI-powered feedback. Demonstrated measurable performance wins (~30% faster HR workflows; ~40% faster AWS pipelines across 100+ Oscar Health feeds) and strong multi-tenant/data-isolation patterns (schema-based isolation, RBAC, microservices).”
Mid-level Data Scientist & AI Engineer specializing in NLP, LLMs, and predictive analytics
“AI Engineer with production experience building an LLM-powered conversational scheduling assistant (rules-based + OpenAI GPT agents) and improving responsiveness by ~40% through architecture optimization. Strong in orchestration (Airflow), containerized deployments, and data quality (Great Expectations/PySpark), with prior work automating population health reporting pipelines (Azure Data Factory → Snowflake) and delivering insights via Tableau to non-technical stakeholders.”
Senior Full-Stack Engineer specializing in web platforms, cloud infrastructure, and data systems
“Full-stack/product-leaning engineer who owned an end-to-end AI Tutor feature (Claude-powered) shipped simultaneously to iOS/Android/web via Expo, with Cloudflare Workers backend and PostHog analytics. Built the company’s GitHub-based CI/CD to coordinate app store releases with backend blue/green deployments. Also has significant data engineering experience (including ~8TB/day workloads) using dbt/Fivetran plus sharding and hashing-based diffing for correctness.”
“At Liberty Mutual, built a production underwriting decision assistant combining LLM reasoning with quantitative models and strong auditability. Implemented a claims-based response verification pipeline that cut hallucinations from 18% to 3% and materially improved user trust/validation scores. Experienced orchestrating ML/LLM workflows end-to-end with Airflow, Kubeflow Pipelines, and Jenkins, including SLA-focused pipeline hardening.”
Mid-level AI/ML Engineer specializing in data engineering, LLM/RAG pipelines, and recommender systems
“Research assistant at St. Louis University who built and deployed a production document-intelligence RAG system (Python/TensorFlow, vector DB, FastAPI) on AWS, focusing on grounding to reduce hallucinations and latency optimization via caching/async/batching. Also developed a personalized recommendation system for the Frenzy social platform and partnered closely with product/UX to define metrics and iterate on hybrid recommenders and cold-start handling.”
Mid-level Backend Software Engineer specializing in Python APIs and data engineering
Junior Data Engineer specializing in cloud data pipelines and warehousing
Senior Data Engineer specializing in cloud lakehouse and AI/ML pipelines