Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud data pipelines and machine learning
“Experience spans AWS-hosted Python/Flask web applications built in college and enterprise data work at General Motors, including PostgreSQL query optimization across millions of records and multi-tenant-style data isolation via group-based, column-level permission grants. Also built an AWS-hosted meat-price prediction dashboard with Dash/Plotly and operated large nightly data pipelines orchestrated with Apache Airflow.”
Mid-level AI Engineer specializing in agentic LLM systems and RAG platforms
“Built and shipped Serrano AI, a multi-tenant SaaS conversational AI platform that automates Odoo ERP workflows and lets ops/finance/supply-chain teams query ERP data in natural language. Implemented a multi-agent architecture (LangChain/LangGraph/CrewAI) with hybrid RAG over ERP schemas, deployed on Heroku/Vercel with production observability, cutting reporting time by ~80% while addressing hallucinations, latency, and schema complexity.”
Mid-level Data Engineer specializing in multi-cloud real-time data pipelines
“Data engineer with healthcare/clinical trial domain experience who owned a 100TB+/month AWS pipeline end-to-end (Glue/S3/Redshift/Airflow) and drove measurable outcomes (20% lower latency, 99.9% reliability, 40% less manual reporting). Also built production data services and API-based ingestion on GCP (Cloud Run/Functions/BigQuery) with strong validation, versioning, and safe migration practices, and launched an early-stage RAG solution (LangChain + GPT-4) for researchers.”
Mid-level Data Engineer specializing in Azure, Spark, and scalable ETL/ELT pipelines
“Data engineer with banking FP&A experience who led an end-to-end migration of 10+ TB from Teradata to Azure (ADF + Data Lake + Databricks/PySpark + Synapse). Emphasizes reliability (multi-stage validation, monitoring/alerts) and performance (Spark tuning, incremental loads, autoscaling), reporting ~99.5% pipeline reliability while supporting downstream consumers with stable schemas and clear change management.”
Mid-level Data Engineer specializing in cloud ETL and streaming data pipelines
“Data engineer in healthcare/clinical data platforms (HarmonCare) who built and operated an end-to-end lakehouse pipeline ingesting HL7/FHIR at ~2–3M records/day on AWS (Glue/Lambda/S3/Spark) and serving trusted datasets in Snowflake. Implemented strong validation/reconciliation gates and a data quality framework that reduced discrepancies ~40%, plus CI/CD (GitHub Actions/Terraform) and monitoring (Airflow/CloudWatch).”
Mid-level Prompt Engineer specializing in Generative AI and RAG systems
Mid-level Data Scientist specializing in AI/ML for healthcare analytics
Mid-level AI/ML Engineer specializing in Generative AI, RAG pipelines, and NLP
Mid-level Data Scientist specializing in ML, NLP, and cloud data platforms
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
Mid-level Business Analyst specializing in healthcare and finance analytics
Mid-level Software Engineer specializing in backend systems, cloud, and DevOps
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and Snowflake/Databricks
Mid-level Software Engineer specializing in Python, cloud, and ML applications
Mid-level Data Engineer specializing in cloud data pipelines and modern warehousing
Mid-level Data Engineer specializing in real-time streaming and cloud data platforms
Mid-level QA Engineer specializing in test automation and API testing
Mid-level AI/ML Engineer specializing in healthcare analytics and fraud detection
Mid-level Software Engineer specializing in cloud data pipelines and microservices
Mid-level Software Engineer specializing in Python backend, React, and cloud AI
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
Mid-level AI/ML Engineer specializing in Generative AI, NLP, and LLM-powered healthcare systems
Mid-level Data Engineer specializing in cloud ETL and real-time streaming pipelines