Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and real-time fraud detection
“GenAI/ML engineer who has shipped production agentic systems in highly regulated and high-throughput environments, including an AWS Bedrock-based fraud/compliance workflow at U.S. Bank with PII redaction and hallucination detection that cut investigation time by 50%+. Also built and evaluated RAG and recommendation systems at Target, using RAGAS-driven testing, hybrid retrieval with re-ranking, and SHAP explainability dashboards to align model behavior with merchandising business KPIs.”
Mid-level Data & GenAI Engineer specializing in lakehouse, streaming, and RAG platforms
“Built a production internal LLM-powered knowledge assistant using a RAG architecture (Python, LLM APIs, cloud services) that answers employee questions with sourced, grounded responses from internal documents. Demonstrates strong practical depth in retrieval tuning (chunking/metadata filters), orchestration with LangChain, and production reliability practices (latency optimization, automated embedding refresh, evaluation metrics, logging/monitoring) while partnering closely with non-technical operations teams.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics platforms
“Data engineer with healthcare and enterprise experience (Molina Healthcare, Dell Technologies) building and operating high-volume batch + streaming pipelines across AWS and Azure. Strong focus on data quality (schema validation, fail-fast checks), reliability (monitoring/alerts, retries), and performance tuning (Spark/partitioning), with measurable runtime reduction and improved downstream trust.”
Mid-level Data Engineer specializing in cloud lakehouse and streaming platforms
“Data engineer focused on building production-grade pipelines on AWS (Kafka/Kinesis/Glue/S3) through to curated serving layers in Snowflake and Delta Lake. Emphasizes automated data quality validation (PySpark + CI/CD), modular dbt transformations for analytics (customer spending, risk metrics), and operational reliability with CloudWatch and DLQs, with outputs consumed by BI tools and ML pipelines for fraud detection and risk analytics.”
Mid-level Data Engineer specializing in multi-cloud real-time and batch data pipelines
“Data engineer with healthcare domain experience who owned 100M+ record pipelines end-to-end (Kafka/Kinesis/ADF → PySpark/dbt validation → Spark SQL transforms → Snowflake/Power BI serving). Built production-grade reliability practices (Airflow orchestration, CloudWatch/Grafana monitoring, pytest + contract/regression tests, idempotent ingestion/backfills) and delivered measurable improvements: 35% lower latency and 40% better query performance.”
Senior Data Analyst specializing in healthcare and financial analytics
“Healthcare analytics candidate with hands-on experience turning messy claims data in Redshift and S3 into validated reporting tables, plus automating KPI workflows in Python. They’ve owned end-to-end operational analytics projects, including a claims delay analysis that improved processing efficiency by about 20%, and have experience driving stakeholder adoption of standardized metrics across dashboards.”
Junior Data Analyst specializing in financial and operational analytics
“Analytics professional with experience at KPMG turning messy operational and financial data from SQL Server and AWS S3 into clean reporting datasets and automated Python workflows. They combine SQL, Python, Power BI, and experimentation methods to deliver stakeholder-aligned KPI dashboards and marketing performance insights with a strong focus on data integrity and reproducibility.”
Senior Python Developer specializing in data engineering, MLOps, and cloud platforms
“Backend/data engineer with production experience building secure Django/DRF APIs (JWT RS256 + rotating refresh tokens), background processing with Celery, and strong reliability practices (timeouts, retries/backoff, structured logging, audit trails). Has delivered AWS solutions spanning Lambda + ECS with IaC and CI/CD, and built Glue/PySpark ETL pipelines with schema evolution and data-quality quarantine patterns; also modernized a legacy SAS pipeline to Python/PySpark with parallel-run parity validation and phased rollout.”
Junior Data Scientist specializing in fraud analytics and cloud data platforms
“Built and deployed production LLM-powered document summarization/classification systems using embeddings, vector databases (RAG-style retrieval), and automated evaluation (BERTScore/ROUGE), with a focus on monitoring and scalable cloud pipelines. Also partnered with a fraud analytics team to deliver a transaction anomaly detection solution, translating model outputs into Power BI dashboards and actionable KPIs while iterating on thresholds and alerts based on stakeholder feedback.”
Senior Data Analyst specializing in cloud data platforms, experimentation, and predictive analytics
“Healthcare data/ML practitioner with experience at UnitedHealth Group building production ETL and streaming pipelines (Python, BigQuery, Kafka) that unify EHR, IoT device, and lab data for patient risk prediction. Also implemented embedding-based semantic search/linking for noisy clinical notes via domain adaptation and rigorous validation with clinical stakeholders; previously built churn prediction at DirecTV using XGBoost.”
Mid-level Data Engineer specializing in cloud data pipelines and streaming
“Data engineer with experience at Wells Fargo and Accenture owning end-to-end production pipelines processing hundreds of millions of transactional/risk records daily. Strong focus on data quality and reliability (reconciliation checks, schema drift detection, CloudWatch alerting) plus Spark performance tuning and idempotent backfills using Delta Lake/merge logic across AWS (S3/EMR/Databricks/Redshift) and Azure (ADF/Azure DevOps/Azure Monitor).”
Mid-level Data Engineer specializing in AWS/Azure pipelines and streaming analytics
“Data engineer with experience across healthcare and geospatial risk systems, owning end-to-end pipelines from ingestion through serving on AWS/Azure stacks. Built HIPAA-compliant data quality gates and CDC for millions of daily claims, and also delivered a real-time wildfire risk platform with 20-minute refresh cycles and a 60% data accuracy lift. Strong in streaming (Kafka), Spark performance tuning, and production-grade orchestration/CI/CD (Airflow, Docker, Jenkins, GitHub Actions, Terraform).”
Senior Full-Stack Software Developer specializing in IoT and cloud systems
“Frontend-focused engineer who built a full movie recommendation system from concept to production, comparing classic collaborative filtering with LLM-based recommendation approaches on AWS. Emphasizes scalable architecture, strict TypeScript data contracts, and high-quality Next.js/React UI patterns (defensive states, scoped state management, performance optimization) with disciplined QA and feature-flagged rollouts.”
Mid-level Data Analyst specializing in healthcare and financial analytics
“Analytics-focused candidate with hands-on experience turning messy CRM, e-commerce, payments, and support data into trusted reporting datasets using SQL and Python. They have owned end-to-end churn and retention analytics work, including RFM-based segmentation, dashboard delivery, and metric standardization across sales, marketing, and finance.”
“Built and deployed a production RAG-based internal knowledge assistant that let analysts query company documents in natural language, using LangChain/LangGraph with Pinecone and a FastAPI service for integration. Emphasizes reliability in production through hallucination mitigation (retrieval tuning + prompt guardrails) and measurable evaluation/monitoring (accuracy, latency, task completion, hallucination rate), iterating based on user feedback.”
“ML/GenAI engineer with recent CVS Health experience building a production RAG system over unstructured financial/research documents using LangChain, FAISS, and Pinecone, plus LoRA/PEFT fine-tuning of GPT/LLaMA for domain-aware summarization. Demonstrates strong applied MLOps and data engineering skills (Airflow/Prefect, Docker/Kubernetes, CI/CD, MLflow) and measurable impact (sub-second retrieval, ~40% better context retrieval, ~25% entity matching improvement).”
Mid-level Data Scientist specializing in Generative AI, MLOps, and cloud data platforms
“GenAI/ML engineer (CitiusTech) who has deployed production RAG systems for compliance/operations document Q&A, using Pinecone + FastAPI microservices on Kubernetes with strong monitoring and guardrails. Also built a GenAI-powered incident triage/routing solution in collaboration with non-technical stakeholders, achieving 35% faster response times and 40% fewer misclassified tickets, and has hands-on orchestration experience with Airflow and AutoSys.”
Mid-level Data Engineer specializing in cloud data platforms and governed analytics
“Data engineer with Optum experience building end-to-end healthcare data pipelines for HL7/FHIR, processing millions of records daily across Kafka streaming and Databricks/Spark batch. Strong focus on data quality (schema enforcement/validations), reliability (Airflow monitoring/alerts), and analytics-ready serving in Snowflake powering Power BI/Tableau, with CI/CD via Git and Jenkins.”
Junior Software Engineer specializing in backend systems and full-stack development
“Full-stack developer who uses AI thoughtfully as a productivity multiplier rather than a substitute for engineering judgment. Built a stock search platform with React, Node.js, and MongoDB, and has experimented with multi-agent workflows across frontend, backend, debugging, and documentation while keeping rigorous human review over logic, testing, and maintainability.”
Mid-level AI/ML Engineer specializing in fraud detection and healthcare predictive analytics
“Built and deployed a production LLM-powered calorie-counting chatbot that turns plain-English meal descriptions into normalized food entities, quantities, and calorie estimates using a hybrid transformer + rule-engine pipeline. Emphasizes reliability with schema/constraint guardrails, confidence-based routing (including embedding similarity search fallbacks), and strong observability/metrics (hallucination rate, calibration, latency, cost). Partnered closely with nutritionists to encode domain standards into mappings and validation logic.”
Mid-level Full-Stack Software Engineer specializing in healthcare, cloud, and data platforms
“Backend/platform engineer who owned a real-time customer analytics microservice stack in Python/FastAPI with Kafka streaming into PostgreSQL, including schema enforcement (Avro) and high-throughput optimizations. Strong Kubernetes + GitOps practitioner (EKS/GKE, Helm, Argo CD) who has handled CI/CD reliability issues with automated pre-deploy checks and rollbacks, and supported major migrations (on-prem to AWS; VM to EKS) with blue-green cutover planning.”
Mid-level Data & AI Engineer specializing in healthcare data pipelines and MLOps
“Built and deployed a production LLM-powered clinical note summarization system used by care managers to speed review of 5–20 page unstructured medical records. Implemented safety-focused validation (prompt constraints, rule-based and section-level checks, human-in-the-loop) to reduce hallucinations while maintaining low latency and meeting privacy/regulatory constraints, integrating via APIs into existing clinical tools.”
Mid-level AI/ML Engineer specializing in MLOps and LLM-powered applications
“AI/ML engineer with production experience building a RAG-based internal analytics assistant (Databricks + ADF ingestion, Pinecone vector store, LangChain orchestration) deployed via Docker on AWS SageMaker with CI/CD and MLflow. Strong focus on real-world constraints: latency/cost optimization (LoRA fine-tuning, ~60% compute reduction), hallucination control with citation grounding, and enterprise security/governance. Previously at Intuit, delivered an interpretable churn prediction system (PySpark/Databricks, Airflow/Azure ML) that improved retention targeting ~12%.”
Mid-level AI/ML Engineer specializing in GenAI agents, RAG pipelines, and MLOps
“AI/ML engineer who built a production RAG-based internal document intelligence assistant (LangChain + Pinecone) to let employees query enterprise reports in natural language. Demonstrated hands-on pipeline orchestration with Apache Airflow and tackled real production issues like retrieval grounding and latency using tuning, caching, and token optimization, while partnering closely with non-technical business stakeholders through iterative demos.”