Mid-level AI/ML Engineer specializing in LLMs, RAG, and enterprise AI
Mid-Level Python Developer specializing in Django, data pipelines, and automation
Senior Software Engineer specializing in data engineering, BI analytics, and AI/ML
Senior Data Engineer specializing in cloud data platforms and large-scale ETL
“Data engineer focused on large-scale ETL/ELT pipelines across cloud stacks (GCP and AWS), including Spark-based transformations and orchestration with Airflow. Has experience loading up to ~2TB per BigQuery target table and designing atomic loads to multiple downstream systems (Elasticsearch + Kafka), with Kubernetes deployment and Jenkins CI/CD.”
Senior Data Engineer specializing in cloud data platforms and real-time streaming for financial services
“Data engineer with experience at Bloomberg, UBS, and Bank of America building high-volume financial data platforms and services. Owned an end-to-end pipeline processing ~150–200M records/day (Kafka/Cassandra/S3 → Spark/PySpark → Snowflake) with strong data quality controls and Airflow reliability practices, reporting ~99% reliability and major performance gains. Also built large-scale external API ingestion with compliance-minded rate limiting, schema versioning, and quarantine/validation layers.”
Director-level Data Science Manager specializing in ML forecasting, experimentation, and MLOps
“Data/ML engineer with experience at American Express and Amazon, owning an end-to-end rewards redemption/liability ML pipeline (~200GB) with rigorous regulatory/audit validation and quarterly executive reporting. Also built web-scraped product datasets with anti-bot protections at a startup and helped modernize an authn/authz service using AWS, plus led early-stage migration work from an internal warehouse to GCP with CI/CD and cloud observability.”
Senior Full-Stack/Data Engineer specializing in cloud data pipelines for legal and financial platforms
“Data/analytics engineer who built and operated a DocuSign-based real-time analytics platform end-to-end, processing 20–50k webhook events/day with ~99.5% reliability. Strong in idempotent event processing, schema-evolution-safe ingestion (raw JSON + dynamic parsing), and serving data via versioned, low-latency REST APIs with solid CI/CD and observability.”
Senior Backend/Platform Engineer specializing in Python and AWS
“Backend/data engineer with hands-on production experience across Python/FastAPI services and AWS (Lambda, API Gateway, SQS, ECS) delivered via Terraform and GitHub Actions. Built Glue-to-Redshift ETL pipelines with Step Functions retry/catch patterns, schema evolution safeguards, and data quality checks; also modernized a legacy SAS monthly reporting system into Python microservices with rigorous side-by-side parity validation. Demonstrated strong SQL tuning skills, with a reported query-runtime improvement from 5 minutes to 15 seconds.”
Mid-level Business Analyst specializing in BI, reporting automation, and process improvement
“Analytics professional with experience at McKinsey & Company and Dell Technologies, focused on turning messy operational and business data into trusted dashboards and decision tools. Combines SQL, Power BI, and Python to resolve data quality issues, define metrics such as retention, and deliver measurable impact, including a roughly 30% reduction in manual reporting time.”
Machine Learning Engineer and Software Developer with experience across fintech, e-commerce, and gaming
“ML/AI engineer with hands-on ownership of production systems spanning classical ML fraud detection and GenAI agent workflows. At Fidelity, they built an end-to-end fraud platform that improved review queue Precision@K by 15–20% while reducing false positives 10–15%, and they also shipped RAG-based agent systems that cut manual workflow effort by 30–40%.”
Mid-level Data Engineer specializing in real-time pipelines across FinTech and Healthcare
“Data engineer at Plaid who built greenfield, end-to-end real-time transaction pipelines and FastAPI data services for fraud detection and analytics, handling millions of events per day. Strong focus on reliability and data integrity via Great Expectations validation, Airflow-based monitoring/SLAs, quarantine/staging patterns, and robust external data ingestion with schema versioning and backfills (reported 50% fewer anomalies and ~40% fewer failures).”
Mid-Level AI Engineer specializing in data pipelines and scalable ML systems
“Data engineer/backend developer with experience owning end-to-end, high-volume data pipelines for ML/analytics using Python, Airflow, SQL, and PySpark, reporting ~30% error reduction through improved reliability and data quality checks. Has also built Django-based REST APIs with caching/pagination and strong versioning practices, and operated external data collection/web scraping pipelines with anti-bot measures, monitoring, retries, and idempotent backfills.”
Mid-level Software Engineer specializing in AI/ML and full-stack systems
“Full-stack engineer at Bank of America who built and iterated a real-time transaction monitoring/fraud detection system processing 50K+ daily transactions, reducing latency by 25%, improving dashboard performance by 30%, and cutting manual investigation time by 40%, while meeting PCI DSS via OAuth2 and RBAC. Also built a scalable ETL pipeline for messy financial data with strong reliability/observability (ELK, retries, DLQ), boosting data integrity from 87% to 99% and sustaining 99.8% uptime.”
Senior Full-Stack Software Engineer specializing in FinTech payments and fraud systems
“Backend/data engineer with production experience building credit/fraud enrichment services and checkout pipelines on AWS (EKS + Lambda) using FastAPI, Kafka, Postgres, and Redis, with a strong focus on reliability patterns (timeouts/retries/circuit breakers) and observability. Has also built AWS Glue/PySpark ETL into S3/Redshift with schema evolution and data quality controls, and modernized legacy credit decisioning into Java/Node microservices with parallel-run parity validation and feature-flag rollouts.”
Senior DevOps & Site Reliability Engineer specializing in cloud reliability and observability
“Built and deployed a production AI/ML SRE copilot that uses RAG over real-time Splunk signals plus deployment/runbook data to generate grounded incident summaries and next steps, cutting time-to-contact by 30%. Treats the knowledge corpus like a production dataset (quality gates, semantic chunking, metadata enrichment) and runs golden-dataset automated evals to ensure reliability, while partnering closely with ops/support leaders through discovery sessions and metric-driven demos.”
Executive Engineering Leader specializing in cloud security platforms and enterprise applications
“Owner-operator of an IT services practice exploring a transition into startup founding. Has spent ~15 years researching the VC/studio/accelerator landscape and evaluates new ideas through forecasting and stakeholder conversations, with a current focus on improving prospecting and qualified lead generation.”
Junior AI Engineer specializing in healthcare analytics and compliance
“Primary engineer at Customer Insights AI who built an end-to-end Python pipeline for 340B drug pricing compliance, using ML to detect suspicious pharmaceutical claims and benefit diversion. Stands out for combining healthcare compliance domain knowledge with production reliability practices, and for turning ambiguous analyst-driven review processes into automated workflows that cut manual review by 70%.”
Mid-level Full-Stack Python Engineer specializing in cloud-native payments and data pipelines
Mid-level AI Engineer specializing in LLM agents, evaluation pipelines, and microservices
Executive Technology Leader specializing in AI, Data Platforms, and FinTech