Pre-screened and vetted.
Mid-level Data Engineer specializing in healthcare analytics and AI pipelines
Senior DevOps & Site Reliability Engineer specializing in multi-cloud CI/CD and IaC
Senior Data Engineer specializing in cloud data platforms and real-time streaming pipelines
Senior Full-Stack Developer specializing in cloud-native microservices (AWS)
Senior Data Engineer specializing in cloud data platforms and real-time streaming
“Data engineer focused on building reliable, production-grade data systems end to end: batch and real-time pipelines (Airflow/Kafka/Spark) with strong data quality, monitoring/alerting, and incident response. Integrates external API/web data with retries, throttling, and schema-change handling, and serves curated datasets to analytics (Power BI) and backend consumers with performance optimizations such as Redis caching.”
Mid-Level Software Development Engineer specializing in distributed systems and event-driven architectures
“Built and maintained an internal JavaScript/React real-time event monitoring UI used by multiple Goldman Sachs teams (e.g., Private Wealth Management and Bulk Trading Systems). Focused on scaling performance under hundreds of events per second using profiling, memoization, batching, and debouncing, and paired that work with strong internal documentation and disciplined incident diagnosis via synthetic load testing and logs/metrics.”
Senior Data Engineer specializing in multi-cloud data platforms and generative AI
Senior Cloud/DevOps Engineer specializing in Azure, Kubernetes, and Infrastructure as Code
“Azure cloud platform engineer with a strong enterprise Linux operations background who designs multi-region HA/DR on Azure (and AWS) using Azure Site Recovery, Traffic Manager, AKS autoscaling, and geo-replicated Azure SQL. Built secure Azure DevOps CI/CD pipelines deploying .NET/Python microservices to AKS/VMs and provisioned full environments via Terraform modules with remote state, drift checks, and staged rollouts; has not directly owned IBM Power/AIX at scale.”
Senior Data Engineer specializing in Palantir Foundry and Snowflake for regulated industries
“Data engineer focused on high-volume transaction pipelines (2M+ per day) using Snowflake/Snowpipe, Spark/PySpark, Kafka, and Airflow, with a strong emphasis on schema/data-quality enforcement and reliability improvements. Also built a greenfield compliance-focused RAG solution with CloudWatch monitoring and ingestion validation that prevents malformed OCR documents from degrading search quality.”
Senior Cloud Solutions Architect specializing in AWS and regulated healthcare environments
“Cloud/platform engineer with hands-on ownership of AWS EKS Kubernetes platforms built and upgraded via Terraform, including AWS networking/security, EBS/EFS/S3 storage integration, and reliability validation through CloudWatch plus Prometheus/Grafana. Also has on-prem VMware/vSphere administration experience and day-to-day hybrid on-prem-to-AWS operations (VPN/Direct Connect), with examples of resolving pod instability from an application memory leak and fixing a production connectivity drop via routing/firewall troubleshooting.”
Senior Data Engineer specializing in cloud data platforms and real-time streaming
“Data engineer in healthcare (HCA) who owned end-to-end Azure-based pipelines at very large scale (50M+ daily claims/patient records). Strong focus on reliability: schema-drift fail-fast validation, quarantine layers, and Python/SQL data quality checks that reduced issues ~25%, plus performance tuning in Databricks/PySpark and versioned serving in Synapse for downstream consumers.”
Mid-level Data Analyst and Data Engineer specializing in healthcare and financial analytics
“Analytics professional with healthcare and operations experience who turns messy enterprise data from platforms like Teradata, GCP, SQL Server, and Snowflake into trusted reporting layers and reproducible analysis workflows. Combines SQL, Python, PySpark, Power BI, and Tableau to improve reporting accuracy and performance, including a 30% faster dashboard refresh and 20-25% accuracy gains in healthcare reporting.”
Mid-level Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“Built a production AI-driven contract/document extraction system combining OCR, normalization, and LLM schema-guided extraction, orchestrated with PySpark and Azure Data Factory and loaded into PostgreSQL for analytics. Emphasizes reliability at scale, using strict JSON schemas, confidence scoring, targeted retries, and multi-layer validation to control hallucinations while processing thousands of PDFs per hour, and partners closely with non-technical business teams to refine fields and deliver usable dashboards.”
Mid-level Data Analyst specializing in healthcare and finance analytics
“Built an end-to-end Alexa smart-home IoT application controlling a Wi-Fi bulb, including ESP32 firmware (MQTT) and an AWS serverless backend (IoT Core/Device Shadow, Lambda, DynamoDB) with a REST API. Demonstrates strong real-time scalability patterns (streaming ingestion, stateless processing, partition-key design) and full-stack delivery with Spring Boot + React (JWT auth, CORS, data-heavy dashboards).”
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and real-time fraud detection
“GenAI/ML engineer who has shipped production agentic systems in highly regulated, high-throughput environments, including an AWS Bedrock-based fraud/compliance workflow at U.S. Bank with PII redaction and hallucination detection that cut investigation time by 50%+. Also built and evaluated RAG and recommendation systems at Target, using RAGAS-driven testing, hybrid retrieval with re-ranking, and SHAP explainability dashboards to align model behavior with merchandising business KPIs.”
Mid-level Data & GenAI Engineer specializing in lakehouse, streaming, and RAG platforms
“Built a production internal LLM-powered knowledge assistant using a RAG architecture (Python, LLM APIs, cloud services) that answers employee questions with sourced, grounded responses from internal documents. Demonstrates strong practical depth in retrieval tuning (chunking/metadata filters), orchestration with LangChain, and production reliability practices (latency optimization, automated embedding refresh, evaluation metrics, logging/monitoring) while partnering closely with non-technical operations teams.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics platforms
“Data engineer with healthcare and enterprise experience (Molina Healthcare, Dell Technologies) building and operating high-volume batch + streaming pipelines across AWS and Azure. Strong focus on data quality (schema validation, fail-fast checks), reliability (monitoring/alerts, retries), and performance tuning (Spark/partitioning), with measurable runtime reduction and improved downstream trust.”
Mid-level Data Engineer specializing in cloud data pipelines and financial services warehousing
“Data engineer (Charles Schwab) who took ownership of an unstable, ambiguous nightly financial data pipeline and rebuilt it into a reliable, incremental AWS Glue/Airflow/Redshift system feeding Power BI. Created a custom Python data-quality framework with hard-stop gating and schema drift detection, improving data integrity to 99.9%, cutting runtime ~20%, and reducing incidents/tickets (35% fewer schema-related dashboard incidents; 30% fewer investigations).”
Mid-level Data Engineer specializing in cloud lakehouse and streaming platforms
“Data engineer focused on building production-grade pipelines on AWS (Kafka/Kinesis/Glue/S3) through to curated serving layers in Snowflake and Delta Lake. Emphasizes automated data quality validation (PySpark + CI/CD), modular dbt transformations for analytics (customer spending, risk metrics), and operational reliability with CloudWatch and DLQs; data consumed by BI tools and ML pipelines for fraud detection and risk analytics.”
Mid-level Data Engineer specializing in multi-cloud real-time and batch data pipelines
“Data engineer with healthcare domain experience who owned 100M+ record pipelines end to end (Kafka/Kinesis/ADF → PySpark/dbt validation → Spark SQL transforms → Snowflake/Power BI serving). Built production-grade reliability practices (Airflow orchestration, CloudWatch/Grafana monitoring, pytest plus contract/regression tests, idempotent ingestion/backfills) and delivered measurable improvements: 35% lower latency and 40% better query performance.”
Senior Data Scientist and AI/ML Engineer specializing in GenAI and cloud ML
“ML/AI engineer with hands-on experience owning systems from experimentation through deployment and monitoring, including a Bank of Montreal project that improved timely interventions by 12%. Also brings GenAI/RAG experience with evaluation and safety guardrails, plus clinical NLP pipeline work extracting medication data from notes for patient risk prediction.”
Senior Data Analyst specializing in cloud data platforms, experimentation, and predictive analytics
“Healthcare data/ML practitioner with experience at UnitedHealth Group building production ETL and streaming pipelines (Python, BigQuery, Kafka) that unify EHR, IoT device, and lab data for patient risk prediction. Also implemented embedding-based semantic search/linking for noisy clinical notes via domain adaptation and rigorous validation with clinical stakeholders; previously built churn prediction at DirecTV using XGBoost.”