Pre-screened and vetted.
Senior Data Engineer specializing in Azure Lakehouse and LLM/ML data platforms
Mid-level Data Engineer specializing in cloud data platforms and big data pipelines
Mid-level Data Engineer specializing in cloud ETL, streaming, and data warehousing
Mid-level Data Engineer specializing in cloud ETL, big data, and analytics
Mid-level Data Scientist & AI/ML Engineer specializing in GenAI and LLM-driven enterprise systems
Senior Data Engineer specializing in real-time pipelines, cloud data platforms, and healthcare analytics
Senior AI/ML Engineer specializing in LLMs, NLP, and production MLOps
Senior Data Engineer specializing in AWS cloud data platforms and streaming analytics
Mid-level Data Engineer specializing in lakehouse architectures and cloud ELT
Mid-level Data Engineer specializing in cloud data platforms for Healthcare and Financial Services
Mid-level Data Engineer specializing in cloud lakehouse and real-time streaming
Mid-level Data Engineer specializing in AWS data lakes for healthcare and financial services
Senior Data Engineer specializing in cloud data platforms and ML pipelines
“Data engineer focused on AWS-based enterprise data platforms, owning end-to-end pipelines from multi-source batch/stream ingestion (Glue/Kinesis/StreamSets/Airflow) through PySpark transformations into curated datasets for Redshift/Snowflake. Emphasizes production reliability with strong monitoring/observability and data quality gates, and reports ~30% performance gains along with improved SLAs and reduced latency after optimization.”
Mid-level Data Engineer specializing in AWS lakehouse platforms and scalable ETL/ELT
“Data engineer focused on reliable, production-grade pipelines and data services: has owned end-to-end ingestion-to-serving workflows processing millions of records/day, using Airflow, Python/SQL, and PySpark. Demonstrates strong operational rigor (monitoring, retries, idempotency, backfills) and measurable outcomes (98% stability, ~30% faster processing), plus experience exposing curated warehouse data via versioned REST APIs.”
Mid-level Data Engineer specializing in cloud data platforms and lakehouse architectures
“Data engineer in a banking context who has owned end-to-end Azure lakehouse pipelines ingesting financial/vendor data from APIs, Azure SQL, and flat files into Databricks/Delta (bronze-silver-gold). Emphasizes production reliability via schema-drift validation, data quality controls, monitoring/alerting, retries/checkpointing, and Spark/Delta performance tuning, with outputs served to BI/reporting teams (e.g., Tableau).”
Mid-level Data Engineer specializing in cloud data pipelines for healthcare and financial services
“Data engineer with ~4 years of experience (Cigna) building and operating Azure Data Factory pipelines for healthcare claims/member/provider data at 2–3M records/day. Emphasizes reliability and downstream safety via schema/data-quality validation, quarantine workflows, idempotent processing, and backfills; also cut runtime ~20% through SQL optimization and served curated datasets through versioned views and well-documented, analyst-friendly interfaces.”
Mid-level Data Engineer specializing in cloud-native healthcare and enterprise data platforms
“Data Engineer (TCS) who owned an end-to-end CRM analytics pipeline for Bayer’s eSalesWeb integration, ingesting from Salesforce APIs/databases/S3 and serving analytics-ready datasets via PostgreSQL/S3 for Tableau. Drove measurable outcomes: ~60% reduction in manual data-quality effort, ~30% lower latency through SQL optimization, and ~35% improved stability via monitoring, retries, and idempotent processing.”
Senior AI/ML Engineer specializing in healthcare AI and MLOps
“Healthcare AI engineer with hands-on ownership of production ML and LLM systems at McKesson, spanning clinical risk prediction and RAG-based documentation tools. Stands out for combining deep clinical-data experience, HIPAA-aware deployment practices, and measurable impact through reduced readmissions, clinician workflow gains, and 20–30% faster ML delivery for engineering teams.”
Mid-level Data Engineer specializing in multi-cloud data platforms for healthcare and finance
“Data engineer with Cigna experience building and operating an end-to-end AWS-based healthcare claims pipeline processing ~2TB/day, using Glue/Kafka/PySpark/SQL into Redshift. Strong focus on data quality and reliability (schema validation, monitoring/alerting, retries/checkpointing/backfills), reporting improved accuracy (~99%) and reduced latency, plus experience serving real-time Kafka/Spark data to downstream analytics with documented data contracts.”
Mid-level Data Engineer specializing in cloud ETL and real-time streaming
“Data engineer focused on AWS + Spark/Databricks pipelines, including an end-to-end nightly loan-data ingestion flow (~2.2M records) from Postgres/S3 through Glue and Databricks into a DWH with layered validation and alerting. Also built real-time streaming with Kafka + Spark Structured Streaming and a master’s project streaming Reddit data for sentiment analysis under ambiguous requirements and tight budget constraints.”