Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in Generative AI, LLMs, and NLP
“AI/ML engineer with forensic analytics and healthcare claims experience (Optum), building production LLM/RAG systems to surface context-driven fraud patterns from unstructured claim notes and explain risk to investigators. Strong in large-scale retrieval performance tuning, legacy API integration with reliability patterns (SQS, circuit breakers), and MLOps orchestration on Airflow/Kubernetes with rigorous testing, monitoring, and stakeholder-friendly interpretability.”
Mid-level Full-Stack Software Developer specializing in cloud-native microservices
“Full-stack engineer with enterprise experience at Metasystems Inc. and Qualcomm building high-traffic, security-sensitive systems—owned a secure transaction processing module end-to-end using Java/Spring Boot, Python/Django, and React. Strong AWS production operations (EKS/ECS/Lambda/RDS/DynamoDB) with IaC (Terraform/CloudFormation), observability, and reliability patterns; also delivered resilient ETL/integration pipelines with idempotency/retries/backfills and achieved a 50% deployment-time reduction through CI/CD and modular refactoring.”
Mid-level AI/ML Engineer specializing in Generative AI and data engineering
“IBM engineer who built and deployed a production RAG-based LLM assistant using LangChain/FAISS with a fine-tuned LLaMA model, served via FastAPI microservices on Kubernetes, achieving 99%+ uptime. Demonstrates strong practical expertise in reducing hallucinations (semantic chunking + metadata-driven retrieval) and managing latency, plus mature MLOps practices (Airflow/dbt pipelines, MLflow tracking, monitoring, A/B and shadow deployments) and effective collaboration with non-technical stakeholders.”
Mid-level DevOps/Cloud Engineer specializing in AWS & Azure infrastructure and CI/CD automation
“Infrastructure engineer with hands-on ownership of a scaled IBM Power/AIX estate (AIX 7.x, VIOS, HMC; 2 frames/20+ LPARs) supporting critical middleware and database workloads, including live DLPAR changes and VIOS/SAN outage recovery. Also brings modern DevOps/IaC experience building GitHub Actions pipelines for Docker/Kubernetes deployments and provisioning AWS environments with Terraform (EKS/RDS/VPC/IAM) using modular, review-driven workflows.”
Mid-level Data Engineer specializing in cloud data platforms and scalable ETL pipelines
“Data engineer (~4 years) with full-stack delivery experience (Next.js App Router/TypeScript + React) building a real-time operations monitoring dashboard backed by Kafka and orchestrated data pipelines. Strong production focus: Airflow + CloudWatch monitoring, automated Python/SQL validation (99.5% accuracy), and CI/CD with Jenkins/Docker; has delivered measurable improvements in latency, pipeline reliability, and query performance (Postgres/Redshift).”
Mid-level Data Engineer specializing in cloud ETL/ELT and lakehouse architecture
“Data engineer focused on sales/marketing analytics pipelines, owning ingestion from CRMs/ad platforms through warehouse serving and dashboards at ~hundreds of thousands of records/day. Built reliability-focused systems including dbt/SQL/Python data quality gates with alerting, a resilient web-scraping pipeline (retries/backoff, anti-bot tactics, schema-change detection, backfills), and a versioned internal REST API with caching and strong developer usability.”
Mid-level Data Engineer specializing in real-time streaming and cloud data platforms
“Data engineer with Wells Fargo experience owning an end-to-end lakehouse ETL pipeline on Databricks/Azure Data Factory, processing ~480GB daily and implementing robust data quality/reconciliation across 40+ tables to reach ~99.3% reliability. Strong in performance optimization (cut runtime 5.5h→3.8h), CI/CD and monitoring, and resilient external/API ingestion with retries, schema validation, and backfills.”
Mid-level Data Analyst specializing in business intelligence and cloud data platforms
“Healthcare analytics professional with TCS/Humana experience turning messy claims and eligibility data into reliable reporting assets using SQL and Python. Combines strong data engineering and analytics execution with stakeholder management, including automating monthly claims reporting to cut turnaround from half a day to under 5 minutes and driving a provider outreach effort that reduced claim rejection rates by about 20%.”
Senior AI/ML Engineer specializing in Generative AI, NLP, and regulated industries
“Built end-to-end ML and GenAI systems at Northern Trust, including a production RAG-based document intelligence platform for financial reports and contracts. Stands out for combining strong MLOps execution with practical product judgment—improving forecast accuracy by 22%, document review accuracy by 38%, and cutting deployment time by 45% while keeping latency and reliability production-ready.”
Mid-level AI/ML Engineer specializing in NLP and Generative AI
“Built and deployed a production LLM-powered RAG assistant for healthcare teams (care managers/support) to answer questions from clinical and policy documentation, emphasizing trustworthiness via improved retrieval, reranking, and strict grounding prompts to reduce hallucinations. Also has hands-on orchestration experience with Apache Airflow for end-to-end ETL/ML workflows and applies rigorous testing/metrics (hallucination rate, tool-call accuracy, latency, cost) to ensure reliable AI agent behavior.”
Mid-level Data Scientist specializing in predictive modeling, NLP/LLMs, and RAG search systems
“Built production LLM/RAG platforms for financial services to enable natural-language Q&A over large policy/compliance document sets stored in Snowflake and SharePoint. Strong in MLOps and orchestration (Airflow, ADF, Step Functions, MLflow) and in solving real production issues like stale embeddings and degraded model performance, including an incremental Snowflake Streams sync that cut processing time from hours to minutes.”
Senior AI/ML Engineer and Data Scientist specializing in Generative AI and MLOps
“ML/NLP practitioner focused on financial-services document intelligence and compliance workflows—built an end-to-end pipeline to classify documents and extract financial entities from loan applications, emails, and statements stored in S3/internal databases. Strong in entity resolution/record linkage and in productionizing pipelines with GitHub Actions CI/CD, testing, data validation, and Docker, plus semantic search using OpenAI embeddings and a vector database.”
Mid-level AI/ML Engineer specializing in computer vision, NLP/LLMs, and MLOps
“ML/AI engineer with defense and commercial analytics experience: deployed a real-time aerial object detection system at Dynetics (YOLOv5 + TorchServe in Docker on AWS EC2) with drift-triggered retraining and 99.5% uptime, tackling ambiguous targets and weather degradation. Previously at Fractal Analytics, built and explained a churn prediction model for marketing stakeholders using SHAP and delivered it via a Flask API into dashboards, driving a reported 22% attrition reduction.”
Mid-level Data Analyst specializing in cloud ETL, BI, and machine learning
“Data/ML practitioner with experience at UnitedHealth Group building a fraud claims detection solution combining structured claims data and unstructured notes, validated with compliance stakeholders to improve actionable accuracy. Also applied embeddings, vector databases, and fine-tuned language models in a Bank of America capstone to detect threats/anomalies in financial documents, with production-minded Python ETL workflows using Airflow.”
Mid-level Full-Stack Engineer specializing in cloud data platforms and LLM-powered apps
“Full-stack engineer with healthcare and finance experience who has owned end-to-end production systems across Azure and AWS. Built a real-time clinical dashboard at Centene (React + FastAPI + Azure Event Hubs) that cut data latency from ~12 minutes to under 1 minute and was associated with a 30% reduction in intervention delays. Also delivered MVPs in high-ambiguity environments at Accenture during monolith-to-microservices migration, improving uptime and maintainability with measurable results.”
Mid-level Solutions Architect/Engineer specializing in AI and data integrations
“Solutions Engineer specializing in taking LLM copilots from demo to production, with a strong emphasis on enterprise security (RBAC/OAuth), grounded RAG behavior (cite-or-refuse), and operational readiness (eval loops, logging, runbooks). Experienced in real-time diagnosis of agentic/LLM workflow failures and in partnering with Sales/CS to run integration-first POCs that clear security and reliability concerns and accelerate rollout.”
Mid-Level Software Engineer specializing in data engineering and cloud platforms
“Backend-leaning full-stack engineer who has shipped production-critical data/reporting features at Walmart and built an end-to-end workflow automation product (FastAPI + React/TypeScript + PostgreSQL) deployed on AWS. Strong in performance/reliability engineering (parallel ETL, batch DB operations, indexing via EXPLAIN ANALYZE), secure API design (JWT/RBAC), and pragmatic incident-driven scaling (separating workers from API layer).”
Mid-level Data Engineer specializing in cloud lakehouse/warehouse pipelines
“Data engineer with HCA Healthcare experience building and operating end-to-end AWS-based pipelines for clinical and operational reporting (50–100 GB/day), serving curated data into Redshift/Snowflake for Power BI/Tableau. Emphasizes production reliability (Airflow SLAs/retries/alerting, logging/observability) and strong data quality controls (reconciliations, schema/null/duplicate checks), and has shipped versioned REST APIs to expose warehouse data to downstream systems.”
Senior Data Engineer specializing in cloud data platforms and real-time analytics
“Data engineer (Credit One) who built and owned real-time financial transaction and credit risk/fraud data systems end-to-end on AWS + Snowflake. Delivered high-scale pipelines (150k events/hour; ~2TB/week), raised data accuracy to 99%, and cut Snowflake costs 42% while adding strong observability, schema-drift handling, and production-grade APIs/documentation.”
Mid-level Data Engineer specializing in cloud lakehouse platforms and ETL/ELT
“Accenture data engineer who greenfielded a supply-chain lakehouse platform, building an end-to-end medallion/Delta pipeline ingesting ~1.4TB/day from 17+ ERP/WMS/TMS/shipment sources. Delivered Gold datasets to Redshift/Synapse/Databricks SQL powering Power BI/Tableau with a 99.5% SLA, while cutting runtime 30% and cloud costs 16% through Spark/Delta optimizations and robust data quality controls.”
Mid-level Full-Stack Engineer specializing in cloud microservices and AI-powered platforms
“Full-stack engineer with hands-on experience building real-time operational products across banking, insurance, and startup e-commerce environments. Has owned features end-to-end—from React/TypeScript dashboards and Redux performance tuning to Spring Boot, Kafka, AWS Lambda, and production monitoring—and has also shipped 0→1 capabilities where business impact was immediate, such as reducing overselling through inventory visibility.”
Senior AI/ML Engineer specializing in supply chain and healthcare systems
“Built and deployed AcademiQ Ai, a production LLM-based teaching assistant using GPT/BERT with RAG (LangChain + Pinecone) to handle large student notes and generate adaptive explanations/quizzes. Demonstrated measurable retrieval-quality gains (18% precision improvement, 22% less irrelevant context) by tuning similarity thresholds and chunking based on user satisfaction signals. Also orchestrated terabyte-scale, real-time demand forecasting pipelines using Airflow and Kubeflow on GCP with strong monitoring, shadow deployment, and feedback-loop practices.”
Mid-level Full-Stack Python Developer specializing in LLM/GenAI for Banking & Healthcare
Senior Data Engineer specializing in cloud data platforms and BI reporting