Pre-screened and vetted.
Mid-level Software Engineer specializing in backend and full-stack systems
Mid-level Data Engineer specializing in real-time streaming and ML feature pipelines
Principal Machine Learning Architect specializing in AI platforms and data science
Executive VP of Engineering specializing in FinTech platforms, cloud modernization, and AI/ML
Senior Data Engineer specializing in cloud lakehouse platforms and healthcare data
Senior Data Analyst specializing in healthcare and financial analytics
Mid-level Analytics Engineer specializing in dbt, SQL transformation, and Snowflake
Senior Data Scientist specializing in GenAI, LLMs, and analytics engineering
Senior Data Engineer specializing in cloud data platforms and big data pipelines
Executive Engineering Leader specializing in cloud, DevSecOps, and large-scale platform modernization
“Co-founded a Data Loss Prevention (DLP) startup and raised $6M in seed funding by showcasing a controlled, laptop-based technology demo. Post-funding, drove MVP planning and execution, sequencing operations and assembling a team to build an appliance MVP using an iterative build/evaluate/visualize approach.”
Mid-level Software Engineer specializing in ML platforms and cloud-native backend systems
“Software engineer with experience at Google and the City and County of San Francisco building production AI systems, including a RAG-based internal support chatbot and ML-driven ticket priority tagging. Has scaled data/ML platforms with Airflow on GCP (1M+ records/day, 99.9% SLA) and deployed multi-component systems with Docker and Kubernetes (GKE), using modern LLM tooling (LangChain/CrewAI, Claude/OpenAI, Pinecone/ChromaDB, Bedrock/Ollama).”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with experience at Moderna and Block owning high-volume (≈10TB/day) production pipelines on AWS, using Kafka/S3/Glue/dbt/Snowflake with strong data quality and observability practices (schema validation, anomaly detection, CloudWatch monitoring). Also built external financial API ingestion with Airflow retries, throttling/token rotation, and schema versioning, and helped stand up an early-stage biomedical data platform with CI/CD and incident debugging.”
Senior Data Engineer specializing in cloud ETL and real-time streaming pipelines
“Data engineer with eBay experience owning end-to-end pipelines for real-time order and user behavior analytics at 10M+ records/day. Strong in PySpark/SQL transformations, Airflow reliability patterns, and production observability (CloudWatch), with measurable outcomes including improved data quality and 30–40% query performance gains. Also built Python data APIs for analytics/ML consumers with versioning and backward compatibility.”
Senior Data Scientist/ML Engineer specializing in GenAI, LLMs, and NLP
“ML/NLP engineer focused on production GenAI and data linking systems: built a large-scale RAG pipeline over millions of support docs using LangChain/Pinecone and added a LangGraph-based validation layer to cut hallucinations ~40%. Also built scalable PySpark entity resolution (95%+ accuracy) and fine-tuned Sentence-BERT embeddings with contrastive learning for ~30% relevance lift, with strong CI/CD and observability practices (OpenTelemetry, Prometheus/Grafana).”
Mid-level Data Scientist/ML Engineer specializing in GenAI agents and MLOps
“AI/LLM engineer at Capital One who deployed a production RAG-powered fraud analysis and document intelligence platform using LangChain, OpenAI, Pinecone, Kafka, and AWS. Focused on reliability in real-time investigations via hybrid retrieval, schema-validated outputs, and LLM verification loops, reporting review-time reduction from hours to minutes and ~99% fraud detection precision.”
Senior Data Engineer specializing in cloud lakehouse and real-time streaming pipelines
“Senior data engineer with experience in both healthcare (CVS Health) and financial services (Bank of America), building large-scale Azure lakehouse pipelines (30+ EHR sources, ~5TB) and real-time streaming services (Event Hubs/Kafka) for patient vitals. Strong focus on reliability and data quality (Great Expectations, monitoring/alerting, automated schema-drift handling), with measurable outcomes including a 50% runtime reduction and 99%+ uptime for regulatory reporting pipelines.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with Intuit experience owning end-to-end, high-volume financial data pipelines (API/S3 ingestion, Airflow orchestration, Spark/PySpark + SQL transforms, Snowflake marts). Strong focus on reliability and data quality—achieved 99.8% SLA and cut discrepancies by 35% using Great Expectations, reconciliation, schema versioning, and automated backfills; also built near real-time Kafka/API data services with CI/CD and observability.”
Senior Data Engineer specializing in FinTech analytics and ML data platforms
“ML/AI engineer with Goldman Sachs experience building production fraud detection and RAG-based trading insights systems end-to-end. Stands out for combining real-time ML infrastructure, GenAI retrieval systems, and compliance-aware design, with measurable impact including a ~25% reduction in false positives and improved analyst productivity.”
Software Engineering Intern specializing in data science and machine learning
“Backend engineer with hands-on experience building Flask REST APIs (auth, CRUD, S3 media uploads) and driving measurable Postgres/SQLAlchemy performance gains (p95 reduced to 200–400ms by eliminating N+1s and switching to keyset pagination). Implemented multi-tenant isolation with strict tenant scoping plus Postgres RLS, and built an OpenAI-powered quiz generation pipeline using queued workers, structured JSON outputs, and Celery/Redis optimizations to stabilize high-throughput workloads.”
Mid-level Data & Business Analyst specializing in analytics engineering and BI
“Data/analytics professional with experience across manufacturing and enterprise environments (Wisconsin School of Business project with CNH Industrial; roles/projects at Ascensia Technologies, S&C, and Adobe). Has hands-on work combining warranty/lifecycle tables with technician free-text notes using TF-IDF + tree models (XGBoost/Random Forest), and deep experience in entity resolution/reconciliation across mismatched financial systems using Python/SQL and fuzzy matching, with production-grade pipeline practices in Azure Data Factory/Databricks.”