Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud ETL and financial data platforms
“Data engineer with experience at Capital One and HSBC building and operating GCP-based data platforms. Led an end-to-end Oracle-to-BigQuery migration processing ~200–300GB/day using Dataflow/Beam, Airflow, Dataproc/PySpark, and Looker, achieving ~99.5% pipeline success and ~30% fewer data quality issues. Strong in production reliability, schema drift handling for external APIs, and BigQuery performance/serving patterns (materialized views, authorized views, versioned datasets).”
Mid-level Data Analyst specializing in healthcare and financial analytics
“Analytics-focused candidate with hands-on experience turning messy CRM, e-commerce, payments, and support data into trusted reporting datasets using SQL and Python. Has owned end-to-end churn and retention analytics, including RFM-based segmentation, dashboard delivery, and metric standardization across sales, marketing, and finance.”
“Built and deployed a production RAG-based internal knowledge assistant that let analysts query company documents in natural language, using LangChain/LangGraph with Pinecone and a FastAPI service for integration. Emphasizes reliability in production through hallucination mitigation (retrieval tuning + prompt guardrails) and measurable evaluation/monitoring (accuracy, latency, task completion, hallucination rate), iterating based on user feedback.”
Mid-level AI Engineer specializing in Generative AI and healthcare search
“AI and platform engineer with 5 years of experience who built a production knowledge assistant for Verizon end-to-end, from architecture through deployment, monitoring, and incident hardening. Stands out for combining modern LLM/RAG systems with enterprise-grade rigor, including validation layers, observability, versioning safeguards, and measurable impact on technician productivity and retrieval quality.”
Mid-level Data Analytics & ML Engineer specializing in NLP, LLMs, and cloud data platforms
“At KPMG, built and productionized a secure RAG-based LLM assistant that lets business and risk stakeholders query data warehouses in natural language, reducing dependence on data engineers for ad-hoc analysis. Demonstrates strong production rigor (Airflow orchestration, CI/CD, containerization), retrieval/embedding tuning (rechunking, semantic abstraction for structured data), and reliability controls (confidence thresholds, refusal behavior, monitoring and canary evals).”
Senior Software Engineer specializing in cloud automation and distributed systems
“Developer with experience across Drupal and Java/Spring Boot applications, using React/jQuery for UI and API-driven features. Has resolved production issues such as login failures traced to reverse-proxy timeouts, and data pipeline inaccuracies fixed via corrected database queries, with a focus on performance and careful verification before changes.”
Staff Software Engineer / Technical Architect specializing in cloud data platforms and GenAI agents
“Core builder on the small team behind Promethium’s ‘Mantra’ next-gen agentic text-to-SQL engine, using vector DB + LangGraph tooling and SQL validation/evaluation to improve query accuracy. Experienced in diagnosing production LLM workflow failures via LangSmith traces and in running hands-on developer workshops and pre-sales POCs with live debugging and real customer data.”
Mid-level Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“Built production LLM systems including a real-time customer feedback analysis and workflow automation platform using RAG and multi-agent orchestration with confidence-based human escalation, addressing privacy and legacy integration challenges. Also automated ML operations with Airflow/Kubernetes (e.g., daily churn model retraining) cutting retraining time to under 30 minutes, and demonstrates a rigorous testing/monitoring approach plus strong non-technical stakeholder collaboration.”
“ML/GenAI engineer with recent CVS Health experience building a production RAG system over unstructured financial/research documents using LangChain, FAISS, and Pinecone, plus LoRA/PEFT fine-tuning of GPT/LLaMA for domain-aware summarization. Demonstrates strong applied MLOps and data engineering skills (Airflow/Prefect, Docker/Kubernetes, CI/CD, MLflow) and measurable impact (sub-second retrieval, ~40% better context retrieval, ~25% entity matching improvement).”
Mid-level Data Engineer specializing in cloud ETL/ELT and healthcare analytics
“Healthcare-focused data engineer/ML practitioner with experience at Lightbeam Health Solutions and Humana building production entity-resolution and semantic similarity pipelines across EMR, lab, and claims data. Uses NLP/ML (spaCy, scikit-learn, BioBERT/LightGBM) plus Snowflake/Airflow and vector search (Pinecone) to improve linkage accuracy (reported 90%) and semantic match quality (reported +12–15%), while reducing manual cleanup by 40%+.”
Engineering leader specializing in FinTech ML/AI platforms
“Engineering Manager/player-coach leading Data Infrastructure, ML/DS, and AI Engineering pods who recently shipped multiple production agentic GenAI features. Built privacy-preserving LLM workflows (PII redaction via Microsoft Presidio) and drove an AI expense-approval agent from ambiguous ask to GA, cutting approval time from ~2.5 days to <4 hours with >85% accuracy. Also owned a major LLM cost overrun incident and implemented cost observability plus circuit breakers to prevent runaway agent loops.”
Senior Data Analyst specializing in data pipelines, web scraping, and legal data enrichment
“Data engineer focused on reliable, scalable analytics pipelines and external data collection. Has owned end-to-end pipelines processing 5–10M records/day, serving Snowflake data marts to Power BI/Tableau, and reports ~99% reliability through strong validation/monitoring. Also shipped versioned REST APIs for curated data with query optimization and caching.”
Junior Software Engineer specializing in cloud, full-stack development, and Generative AI
“Built and shipped a production Chrome extension (Promptly) that lets users select text on any webpage and transform it in place (rewrite/shorten/translate) using on-device AI plus external LLMs. Implemented a custom lightweight orchestration layer for prompt chaining, context flow, and output validation, and tackled tricky browser Selection API issues to preserve formatting while keeping the UX simple and fast.”
Mid-level Data Scientist / ML Engineer specializing in secure GenAI and financial compliance
“Built a production ‘sentinel insight engine’ to tame information overload from millions of product reviews and support transcripts, combining Azure OpenAI (GPT-3.5) zero-shot classification with a fine-tuned T5 summarizer to generate weekly actionable product insights. Demonstrated strong MLOps/production engineering by adding drift monitoring with embedding-based detection, integrating REST with legacy SOAP/queue-based CRM via FastAPI middleware, and scaling reliably on Kubernetes with HPA.”
Mid-level Data Scientist & AI Engineer specializing in RAG, agentic AI, and production ML
“AI/data engineer who built a production LLM-powered schema drift detection system (LangChain/LangGraph) to catch semantic data changes before they break downstream analytics/ML. Deployed on AWS with Docker/S3 and implemented an LLM-as-a-judge evaluation framework to improve trust, reduce hallucinations, and control false positives/alert fatigue. Collaborated with non-technical risk/business analytics stakeholders at EY by delivering human-readable drift explanations that improved confidence in financial analytics dashboards.”
Senior Data & Backend Engineer specializing in cloud data pipelines and LLM/RAG systems
“Data engineer with end-to-end ownership of large-scale retail and clinical data ingestion/processing on AWS, including real-time streaming and batch pipelines. Delivered measurable outcomes: 20M daily transactions processed, latency cut from 4 hours to 5 minutes, ~70% fewer failures, and 120+ pipelines running at 99.8% reliability with full audit compliance.”
Mid-Level Data Engineer specializing in cloud data platforms and governed analytics
“Data engineer with Optum experience building end-to-end healthcare data pipelines for HL7/FHIR, processing millions of records daily across Kafka streaming and Databricks/Spark batch. Strong focus on data quality (schema enforcement/validations), reliability (Airflow monitoring/alerts), and analytics-ready serving in Snowflake powering Power BI/Tableau, with CI/CD via Git and Jenkins.”
Mid-level Cloud Data Engineer specializing in Azure/AWS pipelines and medallion architecture
“Data engineer focused on reliability and data quality, owning end-to-end pipelines processing ~100k–300k records/day. Implemented robust validation and monitoring that cut reporting issues by ~30%, and built stable external data collection with anti-bot measures, backfills, and schema-change detection while maintaining backward-compatible internal data services.”
Mid-level AI/ML Software Engineer specializing in cloud-native MLOps and FinTech
“Software engineer with JPMorgan Chase experience delivering end-to-end fintech features (Next.js/React/Node/Postgres on AWS) and measurable performance gains. Built and productionized an AI-native credit decisioning workflow combining LLMs, vector retrieval, and a rules engine with strong governance (bias checks, auditability, human-in-loop), improving precision and cutting underwriting turnaround time by 40%.”
Mid-level Full-Stack Python Developer specializing in cloud, data engineering, and AI/ML
“Full stack Python developer who actively integrates AI coding assistants into day-to-day engineering work, including code generation, debugging, testing, and documentation. Has also coordinated multi-agent workflows across backend, frontend, testing, and code review, showing an applied, productivity-focused approach to AI-enabled software delivery.”
Senior Software Engineer specializing in AI-driven marketing and data platforms
“Backend/data engineer who builds production FastAPI microservices and AWS serverless/Glue pipelines for SMS analytics and marketing segmentation. Led a legacy batch modernization into modular services (FastAPI + Glue/Athena + ClickHouse) using shadow-mode parity checks, feature flags, and incremental rollout. Demonstrated measurable performance wins (12s to sub-second SQL; ~40% CPU reduction) and strong incident ownership with proactive schema-drift prevention.”
Mid-Level Full-Stack Software Engineer specializing in healthcare, cloud, and data platforms
“Backend/platform engineer who owned a real-time customer analytics microservice stack in Python/FastAPI with Kafka streaming into PostgreSQL, including schema enforcement (Avro) and high-throughput optimizations. Strong Kubernetes + GitOps practitioner (EKS/GKE, Helm, Argo CD) who has handled CI/CD reliability issues with automated pre-deploy checks and rollbacks, and supported major migrations (on-prem to AWS; VM to EKS) with blue-green cutover planning.”
Mid-level AI/ML Engineer specializing in Generative AI and NLP
“AI/LLM engineer with production experience building secure, scalable compliance-focused generative AI systems (GPT-3/4, BERT) including RAG over internal regulatory document bases. Has delivered end-to-end pipelines on AWS with PySpark/Airflow/Kubernetes/FastAPI, emphasizing privacy controls, monitoring, and iterative evaluation (A/B testing). Also partnered closely with bank compliance officers using prototypes to refine NLP summarization/classification and reduce document review time.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”