Pre-screened and vetted.
Junior Robotics & AI/ML Engineer specializing in autonomous systems and computer vision
Mid-level Software Engineer specializing in backend and full-stack systems
Mid-level Data Engineer specializing in real-time streaming and ML feature pipelines
Executive VP of Engineering specializing in FinTech platforms, cloud modernization, and AI/ML
Senior Data Engineer specializing in cloud lakehouse platforms and healthcare data
Mid-level Analytics Engineer specializing in dbt, SQL transformation, and Snowflake
Senior Data Engineer specializing in cloud data platforms and big data pipelines
Executive Engineering Leader specializing in cloud, DevSecOps, and large-scale platform modernization
“Co-founded a Data Loss Prevention (DLP) startup and raised $6M in seed funding on the strength of a controlled, laptop-based technology demo. Post-funding, led MVP planning and execution, sequencing operations and assembling a team to build an appliance MVP through an iterative build/evaluate/visualize approach.”
Mid-level Software Engineer specializing in ML platforms and cloud-native backend systems
“Software engineer with experience at Google and the City and County of San Francisco building production AI systems, including a RAG-based internal support chatbot and ML-driven ticket priority tagging. Has scaled data/ML platforms with Airflow on GCP (1M+ records/day, 99.9% SLA) and deployed multi-component systems with Docker and Kubernetes (GKE), using modern LLM tooling (LangChain/CrewAI, Claude/OpenAI, Pinecone/ChromaDB, Bedrock/Ollama).”
Mid-level AI/ML Engineer specializing in Generative AI and MLOps
“GenAI/LLM engineer and architect who built and deployed a production generative AI financial forecasting and scenario analysis platform at McKinsey, leveraging Claude (Anthropic), LangChain, Airflow, MLflow, and AWS SageMaker. Demonstrates strong LLMOps/MLOps rigor (monitoring, drift detection, automated retraining) and deep experience implementing global privacy controls (GDPR, differential privacy, audit trails) while partnering closely with finance executives and legal/IT stakeholders.”
Senior Data Scientist / ML Engineer specializing in GenAI, LLMs, and NLP
“ML/NLP engineer focused on production GenAI and data linking systems: built a large-scale RAG pipeline over millions of support docs using LangChain/Pinecone and added a LangGraph-based validation layer to cut hallucinations ~40%. Also built scalable PySpark entity resolution (95%+ accuracy) and fine-tuned Sentence-BERT embeddings with contrastive learning for ~30% relevance lift, with strong CI/CD and observability practices (OpenTelemetry, Prometheus/Grafana).”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with experience at Moderna and Block owning high-volume (≈10TB/day) production pipelines on AWS, using Kafka/S3/Glue/dbt/Snowflake with strong data quality and observability practices (schema validation, anomaly detection, CloudWatch monitoring). Also built external financial API ingestion with Airflow retries, throttling/token rotation, and schema versioning, and helped stand up an early-stage biomedical data platform with CI/CD and incident debugging.”
Senior Data Engineer specializing in cloud ETL and real-time streaming pipelines
“Data engineer with eBay experience owning end-to-end pipelines for real-time order and user behavior analytics at 10M+ records/day. Strong in PySpark/SQL transformations, Airflow reliability patterns, and production observability (CloudWatch), with measurable outcomes including improved data quality and 30–40% query performance gains. Also built Python data APIs for analytics/ML consumers with versioning and backward compatibility.”
Mid-level Data Scientist/ML Engineer specializing in GenAI agents and MLOps
“AI/LLM engineer at Capital One who deployed a production RAG-powered fraud analysis and document intelligence platform using LangChain, OpenAI, Pinecone, Kafka, and AWS. Focused on reliability in real-time investigations via hybrid retrieval, schema-validated outputs, and LLM verification loops, reporting review-time reduction from hours to minutes and ~99% fraud detection precision.”
Senior Data Engineer specializing in cloud lakehouse and real-time streaming pipelines
“Senior data engineer with experience in both healthcare (CVS Health) and financial services (Bank of America), building large-scale Azure lakehouse pipelines (30+ EHR sources, ~5TB) and real-time streaming services (Event Hubs/Kafka) for patient vitals. Strong focus on reliability and data quality (Great Expectations, monitoring/alerting, schema drift automation), with measurable outcomes like 50% runtime reduction and 99%+ uptime for regulatory reporting pipelines.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with Intuit experience owning end-to-end, high-volume financial data pipelines (API/S3 ingestion, Airflow orchestration, Spark/PySpark + SQL transforms, Snowflake marts). Strong focus on reliability and data quality—achieved 99.8% SLA and cut discrepancies by 35% using Great Expectations, reconciliation, schema versioning, and automated backfills; also built near real-time Kafka/API data services with CI/CD and observability.”
Intern Software Engineer specializing in data science and machine learning
“Backend engineer with hands-on experience building Flask REST APIs (auth, CRUD, S3 media uploads) and driving measurable Postgres/SQLAlchemy performance gains (p95 reduced to 200–400ms by eliminating N+1s and switching to keyset pagination). Implemented multi-tenant isolation with strict tenant scoping plus Postgres RLS, and built an OpenAI-powered quiz generation pipeline using queued workers, structured JSON outputs, and Celery/Redis optimizations to stabilize high-throughput workloads.”
Mid-level Java Full-Stack Developer specializing in cloud-native microservices
“QA/validation-focused engineer with experience at Meta testing an ML+LLM content classification/summarization system, including diagnosing production-vs-test behavior gaps. Built automated E2E validation and drift monitoring (PSI, KL divergence, embedding cosine similarity), run daily or several times per day and gated via CI. Also implemented Jenkins-orchestrated Selenium/API test suites in Docker at Capgemini and partnered with a business analyst to convert business rules into automated AI-driven validation checks.”
Mid-level Data & Business Analyst specializing in analytics engineering and BI
“Data/analytics professional with experience across manufacturing and enterprise environments (Wisconsin School of Business project with CNH Industrial; roles/projects at Ascensia Technologies, S&C, and Adobe). Hands-on experience combining warranty/lifecycle tables with technician free-text notes using TF-IDF + tree models (XGBoost/Random Forest), plus deep experience in entity resolution/reconciliation across mismatched financial systems using Python/SQL and fuzzy matching, with production-grade pipeline practices in Azure Data Factory/Databricks.”
Senior Site Reliability Engineer specializing in Azure cloud reliability and data analytics
“AppSec-focused customer advisor with hands-on experience integrating SAST/DAST/SCA into production CI/CD (Azure DevOps) and designing secure agent/scanning deployments in AWS (least-privilege IAM, private subnets, VPC endpoints). Demonstrates strong incident troubleshooting using logs/metrics/traces to diagnose load-related failures (timeouts/retry storms) and drive durable fixes, while tailoring risk/tradeoff communication across engineering, security, and leadership stakeholders.”