Pre-screened and vetted.
Mid-level Data Engineer and Business Analyst specializing in cloud ETL and analytics
“Data analyst with cross-industry experience spanning insurance analytics at L&T Infotech and experimental imaging analytics at Mylyser. Stands out for building scalable SQL/PySpark data pipelines, standardizing business-critical metrics like claims lifecycle and policy retention, and delivering measurable impact such as 50%+ faster query performance and a 15% reduction in claims settlement time.”
Mid-level Data Analyst specializing in FinTech and e-commerce analytics
“Analytics professional with experience spanning financial services and e-commerce, including NAV reconciliation/reporting at State Street and revenue optimization work on checkout and retention. Stands out for combining strong SQL/Python data engineering with business-facing analytics, delivering measurable gains like a 3–5% conversion improvement and an 8–10% payment success uplift.”
Mid-level Data Scientist / AI-ML Engineer specializing in RAG, MLOps, and real-time analytics
“Software/ML engineer who built a production automated job-finding and cold-email personalization system for Fortune 500 outreach, using JobSpy for dynamic scraping, LangChain orchestration, and LLM+vector DB semantic search with grounding/relevance metrics and guardrails. Also delivered a predictive investment analytics platform for financial advisors, communicating results via Tableau dashboards and portfolio KPIs like Sharpe ratio and drawdowns.”
Senior Data Scientist specializing in ML, NLP, and production AI systems
“Machine learning/NLP engineer with deep Azure stack experience (Data Factory, Databricks/Spark, Delta Lake, Azure OpenAI, Azure AI Search) who built end-to-end production systems for semantic clustering, entity resolution, and hybrid search. Demonstrated measurable gains from embedding fine-tuning (~15% retrieval precision, ~10–12% nDCG@10) and designed scalable, quality-checked pipelines with MLOps best practices.”
Mid-level Data Engineer specializing in cloud big data and streaming pipelines
“Data engineer focused on large-scale financial data platforms, with hands-on ownership of an AWS + Databricks + Snowflake pipeline processing ~2TB/day. Strong in data quality (Great Expectations), schema drift automation, and production reliability (99.9%), plus measurable performance/cost wins (pipeline runtime 4h→1.2h; ~25% cost reduction). Also built an async Python crawling/ingestion framework with anti-bot mitigation, retries, and Airflow-driven backfills.”
Junior Data Scientist specializing in agentic AI and RAG pipelines
“LLM/agentic systems builder who shipped production workflows at Angel Flight West and Eureka AI, combining LangGraph + RAG (Postgres/pgvector) with strong observability (LangSmith/Langfuse). Delivered large operational gains (address lookup cut from 10 minutes to 60 seconds; accuracy to 92%) and has a track record of quickly stabilizing customer-critical pipelines (Pydantic-enforced JSON for ETL) while partnering with sales/ops to drive adoption.”
Mid-level Data Scientist specializing in healthcare ML and GenAI
“Healthcare data/NLP practitioner with experience at UnitedHealthcare building production ML systems that connect unstructured call center transcripts and medical notes to structured claims data. Has delivered measurable impact (25% classification accuracy lift; ~30% relevance improvement) using classical NLP, embeddings (Sentence-BERT + FAISS), and AWS SageMaker deployments with robust validation and drift monitoring.”
Mid-level Data Scientist/Data Analyst specializing in ML, BI dashboards, and ETL pipelines
“Data/ML practitioner with experience at Humana and Hexaware, focused on turning messy, semi-structured datasets into production-ready pipelines. Built an age-prediction model from book ratings using heavy feature engineering and multiple regression models, and has hands-on entity resolution (deterministic + fuzzy matching) plus embeddings/vector DB approaches for linking and search relevance.”
Mid-level Data Engineer specializing in cloud data pipelines and machine learning
“Experience spans AWS-hosted Python/Flask web apps built in college and enterprise data work at General Motors, including PostgreSQL query optimization across millions of records and multi-tenant-style data isolation using group-based, column-level permission grants. Also built an AWS-hosted meat price prediction dashboard using Dash/Plotly and ran large nightly data pipelines orchestrated with Apache Airflow.”
Mid-level Data Engineer specializing in multi-cloud real-time data pipelines
“Data engineer with healthcare/clinical trial domain experience who owned a 100TB+/month AWS pipeline end-to-end (Glue/S3/Redshift/Airflow) and drove measurable outcomes (20% lower latency, 99.9% reliability, 40% less manual reporting). Also built production data services and API-based ingestion on GCP (Cloud Run/Functions/BigQuery) with strong validation, versioning, and safe migration practices, and launched an early-stage RAG solution (LangChain + GPT-4) for researchers.”
Mid-level Data Engineer specializing in Azure, Spark, and scalable ETL/ELT pipelines
“Data engineer with banking FP&A experience who led an end-to-end migration of 10+ TB from Teradata to Azure (ADF + Data Lake + Databricks/PySpark + Synapse). Emphasizes reliability (multi-stage validation, monitoring/alerts) and performance (Spark tuning, incremental loads, autoscaling), reporting ~99.5% pipeline reliability while supporting downstream consumers with stable schemas and clear change management.”
Senior Data Engineer specializing in cloud data platforms and real-time analytics
“Data/analytics engineer focused on finance and e-commerce integrations, building end-to-end pipelines and services across Odoo, QuickBooks, Snowflake, and Tableau. Replaced a costly third-party Walmart connector with a serverless AWS Lambda pipeline deployed via Terraform/GitHub and monitored with CloudWatch/Datadog, and shipped a bi-directional Odoo↔QuickBooks invoice sync with distributed locking plus Slack-based finance approvals.”
Mid-level Data Engineer specializing in cloud ETL and streaming data pipelines
“Data engineer in healthcare/clinical data platforms (HarmonCare) who built and operated an end-to-end lakehouse pipeline ingesting HL7/FHIR at ~2–3M records/day on AWS (Glue/Lambda/S3/Spark) and serving trusted datasets in Snowflake. Implemented strong validation/reconciliation gates and a data quality framework that reduced discrepancies ~40%, plus CI/CD (GitHub Actions/Terraform) and monitoring (Airflow/CloudWatch).”
Director-level Engineering Leader specializing in AI Platforms for Enterprise B2B SaaS
“Technical leader/player-coach who architected and shipped an end-to-end computer vision pricing system for a major North American auto seller, using Go + Ray + AWS SageMaker in a low-latency distributed inference architecture. Strong in production governance (logs/tracing/guardrails/AppSec), reliability incident ownership (DNS limits affecting 20% of traffic), and measurable delivery acceleration (deployment cycle 16→4 days; delivery time 5→2 days) through process optimization and AI-assisted enablement.”
Senior Finance & Investment Analyst specializing in TMT, valuation, and quantitative strategy
“VC deal-sourcing professional with Sutton Capital experience who combines quantitative tooling, alternative data, and thesis-driven founder outreach to surface under-the-radar startups early. Distinctive for building Python-based sourcing and tracking engines, converting cold outreach into qualified deal flow, and pairing relationship-building with rigorous unit economics analysis for partner-level investment decisions.”
Junior Business & Data Analyst specializing in FinTech and banking analytics
“Analytics professional with Travelex experience spanning SQL ETL, Python-based machine learning workflows, and Power BI dashboarding in risk, fraud, and AML contexts. Stands out for replacing a $150K+ third-party compliance tool with internal dashboards and for materially improving operational efficiency through alert tuning, cutting alert volume by 50% and false positives by 60%.”
Mid-level Data Analyst specializing in analytics, budgeting, and sports data systems
“Baseball advisor/recruiter with a player-development lens shaped by personal injury experience, combining TrackMan-driven analytics with deep coach and program relationships. Has helped athletes navigate high-stakes draft, rehab, and college decisions, including identifying under-scouted talent like John Klein and supporting his path to the Twins' 40-man roster.”
Mid-level Business Analyst specializing in healthcare data and reporting
“Worked on a CVS Health project transforming large healthcare claims data from databases and APIs into clean reporting tables and Power BI dashboards. Brings hands-on experience in SQL, Python automation, data validation, and stakeholder-driven metric definition for analytics workflows.”
Mid-level Data Scientist & AI/ML Engineer specializing in GenAI, NLP, and predictive modeling
Mid-level Data Scientist specializing in AI/ML for healthcare analytics
Mid-level Data Engineer specializing in multi-cloud data pipelines and real-time analytics
Mid-level Data Scientist specializing in ML, NLP, and cloud data platforms
Mid-level Data Engineer specializing in cloud data pipelines, analytics, and AI/ML