Pre-screened and vetted.
Mid-level Software Engineer specializing in AI-enabled backend and full-stack web systems
“Backend/AI workflow engineer with experience at AirKitchenz, Uber, and Vivma Software, building production systems on AWS (Lambda, DynamoDB, Step Functions). Has a track record of major performance wins (DynamoDB latency cut from 2s to <150ms; a Postgres query from 2s to ~180ms) and of shipping LLM-powered onboarding and ticket-routing workflows with strong guardrails (schema validation, confidence thresholds, human-in-the-loop escalation).”
Mid-level Data Engineer specializing in cloud data platforms, Spark, and streaming pipelines
“Data/MLOps engineer (Cognizant background) who owned an AWS/Airflow/Snowflake healthcare transactions pipeline processing ~8–10M records/day and cut pipeline/data-quality incidents by ~33%. Also built and deployed a production FastAPI model-inference service on Kubernetes (Docker, HPA) with strong observability (Prometheus/Grafana), versioned endpoints, and resilient backfill/idempotent external data ingestion patterns.”
Mid-level Data Engineer specializing in cloud data pipelines and streaming
“Data engineer with experience at Wells Fargo and Accenture owning end-to-end production pipelines processing hundreds of millions of transactional/risk records daily. Strong focus on data quality and reliability (reconciliation checks, schema drift detection, CloudWatch alerting) plus Spark performance tuning and idempotent backfills using Delta Lake/merge logic across AWS (S3/EMR/Databricks/Redshift) and Azure (ADF/Azure DevOps/Azure Monitor).”
Mid-level Data Engineer specializing in AWS/Azure pipelines and streaming analytics
“Data engineer with experience across healthcare and geospatial risk systems, owning end-to-end pipelines from ingestion through serving on AWS/Azure stacks. Built HIPAA-compliant data quality gates and CDC for millions of daily claims, and also delivered a real-time wildfire risk platform with 20-minute refresh cycles and a 60% data accuracy lift. Strong in streaming (Kafka), Spark performance tuning, and production-grade orchestration/CI/CD (Airflow, Docker, Jenkins, GitHub Actions, Terraform).”
Senior Data Engineer specializing in cloud data platforms and automated data quality
“Data engineer at CenterPoint Energy who built and operated multiple production-grade GCP data systems: a daily Snowflake→BigQuery replication framework (150+ tables) with Monte Carlo/Atlan-driven observability and schema-drift protection, plus a FastAPI metrics service for pipeline health. Demonstrated measurable impact (40% faster dashboard queries, 70% less manual refresh work, zero data loss) and strong operational rigor (scaling Cloud Run jobs, SAP SLT reconciliation, quarantine patterns, CI/CD via GitHub Actions + Terraform).”
Mid-level Data Analyst specializing in healthcare and financial analytics
“Analytics-focused candidate with hands-on experience turning messy CRM, e-commerce, payments, and support data into trusted reporting datasets using SQL and Python. Has owned end-to-end churn and retention analytics, including RFM-based segmentation, dashboard delivery, and metric standardization across sales, marketing, and finance.”
Mid-level Data Engineer specializing in cloud ETL/ELT and healthcare analytics
“Healthcare-focused data engineer/ML practitioner with experience at Lightbeam Health Solutions and Humana building production entity-resolution and semantic similarity pipelines across EMR, lab, and claims data. Uses NLP/ML (spaCy, scikit-learn, BioBERT/LightGBM) plus Snowflake/Airflow and vector search (Pinecone) to improve linkage accuracy (reported 90%) and semantic match quality (reported +12–15%), while reducing manual cleanup by 40%+.”
Director-level Engineering & Technology Leader specializing in digital transformation and enterprise platforms
“Engineering leader providing technical guidance to a small team developing a biomedical startup focused on earlier disease detection, including in remote/underserved areas. The venture is at the ideation stage with initial research completed, and is moving toward prototyping while exploring initial investment/support.”
Senior Data & Backend Engineer specializing in cloud data pipelines and LLM/RAG systems
“Data engineer with end-to-end ownership of large-scale retail and clinical data ingestion/processing on AWS, including real-time streaming and batch pipelines. Delivered measurable outcomes: 20M daily transactions processed, latency cut from 4 hours to 5 minutes, ~70% fewer failures, and 120+ pipelines running at 99.8% reliability with full audit compliance.”
Mid-level Data Scientist & AI Engineer specializing in RAG, agentic AI, and production ML
“AI/data engineer who built a production LLM-powered schema drift detection system (LangChain/LangGraph) to catch semantic data changes before they break downstream analytics/ML. Deployed on AWS with Docker/S3 and implemented an LLM-as-a-judge evaluation framework to improve trust, reduce hallucinations, and control false positives/alert fatigue. Collaborated with non-technical risk/business analytics stakeholders at EY by delivering human-readable drift explanations that improved confidence in financial analytics dashboards.”
Mid-level Data Engineer specializing in cloud data platforms and governed analytics
“Data engineer with Optum experience building end-to-end healthcare data pipelines for HL7/FHIR, processing millions of records daily across Kafka streaming and Databricks/Spark batch. Strong focus on data quality (schema enforcement/validations), reliability (Airflow monitoring/alerts), and analytics-ready serving in Snowflake powering Power BI/Tableau, with CI/CD via Git and Jenkins.”
Mid-level Cloud Data Engineer specializing in Azure/AWS pipelines and medallion architecture
“Data engineer focused on reliability and data quality, owning end-to-end pipelines processing ~100k–300k records/day. Implemented robust validation and monitoring that cut reporting issues by ~30%, and built stable external data collection with anti-bot measures, backfills, and schema-change detection while maintaining backward-compatible internal data services.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
Mid-level AI/ML Engineer specializing in GenAI agents, RAG pipelines, and MLOps
“AI/ML engineer who built a production RAG-based internal document intelligence assistant (LangChain + Pinecone) to let employees query enterprise reports in natural language. Demonstrated hands-on pipeline orchestration with Apache Airflow and tackled real production issues like retrieval grounding and latency using tuning, caching, and token optimization, while partnering closely with non-technical business stakeholders through iterative demos.”
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer/backend data services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimizations and use of Delta Lake schema evolution plus CI/CD (GitHub Actions/Jenkins) and operational reliability patterns like CloudWatch monitoring and dead-letter queues.”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and external data collection systems on AWS (S3/Lambda/SQS/Glue/EMR) using PySpark/SQL, serving curated datasets to Snowflake/Redshift for finance and fraud teams. Has operated a large-scale crawler ingesting millions of records/day with anti-bot tactics, schema versioning/quarantine, and CloudWatch/Datadog monitoring, and also shipped a versioned REST API with caching and query optimization.”
Mid-level Data Engineer specializing in cloud ETL/ELT and big data pipelines
“Data engineer focused on production-grade pipelines and data services: ingests millions of records/day into S3, performs SQL/Python quality validation and PySpark/SQL transformations, and serves curated datasets via Athena/Redshift. Has experience hardening external data collection with retries/rate-limit handling and shipping versioned internal data APIs with backward compatibility, monitoring, and CI/CD in early-stage environments.”
Mid-level ML Data Engineer specializing in MLOps and scalable healthcare data pipelines
“Data/ML platform engineer with healthcare (Cigna) experience owning an end-to-end pipeline spanning Airflow + Debezium CDC ingestion, PySpark/SQL transformations, rigorous data quality gates, and feature-store/API serving for ML training and inference. Worked at 10+ TB scale and cites a ~30% latency reduction plus stronger reliability via idempotent design, monitoring, and backfill-safe reprocessing; also built pragmatic early-stage data pipelines at Frankenbuild Ventures.”
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and MLOps
“Data engineer at AT&T focused on large-scale telecom (5G/IoT) data platforms, owning end-to-end pipelines from Kafka/Azure ingestion through Databricks/Delta Lake transformations to serving analytics and ML. Has operated at very high volumes (~50+ TB/day) and delivered measurable performance gains (25–30% faster processing) plus improved reliability via Airflow monitoring, robust data quality checks, and resilient external data collection patterns (rate limiting, retries, dynamic schemas).”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer currently at American Airlines who built and owned end-to-end flight operations and booking data pipelines (batch + real-time) using Azure Data Factory, Kafka, Spark/Databricks, Synapse, and Snowflake—processing hundreds of GBs/day. Strong focus on reliability and data quality (idempotency, checkpointing, retries, validation/alerts) and delivered near-real-time analytics powering Power BI dashboards; previously helped stand up an early-stage data platform at Sysco on AWS (Glue/S3/Redshift) with Airflow and Jenkins CI/CD.”
Mid-level AI/ML Engineer specializing in Generative AI and NLP
“Built an end-to-end GenAI underwriting copilot at TD Bank for complex financial documents, combining RoBERTa-based risk classification with Azure OpenAI RAG to deliver grounded, citation-based insights. Drove a 40–50% reduction in manual underwriting review time and created reusable FastAPI ML services that cut integration effort for other teams by 30–40%.”
Mid-level Data Analyst specializing in banking analytics and machine learning
Mid-level AI/ML Engineer specializing in NLP, recommender systems, and Generative AI
Senior Data Engineer specializing in cloud data platforms and LLM/RAG solutions