Pre-screened and vetted.
Mid-level Machine Learning Engineer specializing in MLOps and production ML systems
Executive technology leader specializing in backend platforms, cloud, and gaming/FinTech systems
Mid-level AI/ML Engineer specializing in LLMs, RAG, and cloud MLOps
“Backend engineer with insurance/claims domain experience who modernized legacy claims processing systems to support AI-assisted claim review. Emphasizes production-ready API design in Python/FastAPI (schemas, async, caching, graceful degradation), strong observability with Prometheus, and layered security including JWT auth plus database row-level security (Supabase/Postgres).”
“ML/LLM engineer with production experience building a RAG-based LLM support assistant (FastAPI, Redis, Kafka) with multi-layer validation and human-in-the-loop feedback loops to improve accuracy over time. Has orchestration and MLOps depth using Airflow and Kubeflow on Kubernetes (autoscaling, alerting, monitoring) and delivered measurable ops impact (40% ticket efficiency improvement) by partnering closely with customer support teams.”
“ML engineer/data scientist who deployed a production credit risk + insurance claims triage platform at Hartford Financial, combining XGBoost default prediction with BERT-based document classification. Demonstrated strong MLOps by cutting inference latency to sub-500ms and building drift monitoring plus automated retraining/deployment pipelines (MLflow, CloudWatch, GitHub Actions, SageMaker) with human-in-the-loop review and SHAP-based explainability for underwriting adoption.”
Mid-level AI/ML Engineer specializing in NLP, MLOps, and predictive analytics
“AI/ML Engineer at Fifth Third Bank who has shipped production fraud detection and risk analysis systems combining ML models with LLM-powered insights/explanations, including real-time monitoring, drift detection, and automated retraining under regulatory explainability constraints. Also built a hybrid-retrieval internal knowledge-base QA system (+20% top-5 relevance) and delivered a customer support chatbot that reduced first response time by 30% through strong stakeholder collaboration.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and real-time ML pipelines
“Built a production, real-time insurance claims document-understanding and fraud-detection pipeline using TensorFlow + fine-tuned BERT, deployed on AWS (SageMaker/Lambda/API Gateway) with automated retraining via MLflow and Jenkins. Addressed noisy documents and latency constraints using data augmentation and model distillation (3x faster inference), cutting manual review in claims operations by ~50% and reducing fraudulent payouts.”
Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms
“AI/ML engineer who led production deployment of a multimodal (text/video/image) RAG system on GCP using Gemini 2.5 + Vertex AI Vector Search, scaling to 10M+ documents with sub-second latency and +40% retrieval accuracy. Strong MLOps/orchestration background (Kubernetes, CI/CD, Airflow, MLflow) with proven impact on reliability (75% fewer incidents) and deployment speed (92% faster), plus experience delivering explainable ML (XGBoost + SHAP + Tableau) to non-technical retail stakeholders.”
Mid-level Data Analyst specializing in healthcare and financial analytics
“Healthcare analytics candidate with hands-on experience turning messy claims and CRM data into validated reporting tables, automating monthly reporting in Python/Airflow, and operationalizing churn metrics in SQL and Tableau. They appear especially strong in stakeholder-aligned metric design and delivered a reported ~10% churn reduction through cohort analysis, segmentation, and at-risk member targeting.”
Mid-level Software Engineer specializing in AI, backend systems, and data platforms
“Built and shipped production AI features for Aiden, including a natural-language agent and a Knowledge Hub ingestion/retrieval system. Stands out for hands-on debugging of real LLM production issues across providers like OpenAI and AWS Bedrock, improving reliability and achieving 90% response/retrieval consistency through direct LiteLLM integration, validation, monitoring, and async system design.”
Mid-level AI Software Engineer specializing in backend systems and FinTech AI
“Data engineering/software development candidate who built a stock market pipeline and uses that project to demonstrate strong architectural thinking across Kafka, Spark, and Airflow. They stand out for a pragmatic approach to AI: using tools like Copilot, ChatGPT, LangChain, and AutoGen to accelerate development while maintaining human oversight, testing, and system-level decision making.”
Mid-level AI & Machine Learning Engineer specializing in Generative AI and MLOps
“Built a production GPT-4/LangChain/Pinecone RAG ‘AI Copilot’ at Northern Trust to automate financial report generation and analyst Q&A over internal structured (SQL warehouse) and unstructured policy data. Focused on real-world production challenges—grounding and latency—achieving major speed gains (seconds to milliseconds) via MiniLM embedding optimization and Redis caching, and implemented rigorous testing/evaluation with MLflow-backed metrics while aligning compliance and finance stakeholders for deployment.”
Senior Full-Stack/Platform Engineer specializing in FinTech systems and AI-native workflows
“Forward-deployed, full-stack/platform engineer who owns production features end-to-end across frontend, backend, data, and infrastructure (AWS serverless, Terraform, React). Has modernized critical fintech/payment systems (zero-downtime monolith-to-microservices with Kafka event sourcing) and productionized AI-native support workflows (LLM + RAG on Pinecone) with measurable gains in latency, incidents, CSAT, and support efficiency.”
Mid-level Applied AI/ML Engineer specializing in agentic systems and LLM automation
“Built a production LLM-powered workflow at Frontier to extract structured signals from messy, high-volume documents and route work to the right teams, replacing a multi-day, error-prone manual process. Emphasizes production reliability with schema/consistency validation, re-prompting and deterministic fallbacks, plus async pipeline optimizations for predictable latency. Experienced with multi-agent orchestration (LangGraph, AutoGen, CrewAI) and AWS workflow tooling (Step Functions, SQS, Lambda), and safely automated ~70% of the workflow via stakeholder-driven thresholds and human review.”
Mid-level Machine Learning Engineer specializing in deep learning and generative AI
“ML/NLP engineer with hands-on experience building production systems for unstructured insurance claims and customer data linking. Delivered measurable impact at scale (millions of documents), combining transformer-based NLP, vector search (FAISS/Pinecone), and human-in-the-loop validation, and has strong production workflow/observability practices (Airflow, AWS Batch, Grafana/Prometheus).”
Mid-level Full-Stack Developer specializing in cloud data engineering and analytics
“Software developer with hands-on experience owning customer-facing work end-to-end (requirements, implementation, testing, and feedback-driven iteration) using Python and React.js. Also remodeled an internal legacy page/tool to improve performance and accuracy, and has exposure to microservices, RabbitMQ, and ETL-based systems work.”
Intern Data Analyst specializing in data pipelines and LLM/RAG applications
“Built and deployed LLM-powered analytics and reporting systems, including a RAG-based assistant over Snowflake that let business users ask questions in plain English instead of writing SQL. Experienced orchestrating LLM agents (LangChain) and serverless reporting pipelines (AWS Lambda/S3/RDS), with a strong focus on grounded outputs, monitoring/evaluation, and data quality—used daily by non-technical finance and operations teams at Cigna.”
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices and FinTech
“At Delta Airlines, built and shipped a production LLM-powered semantic search/troubleshooting assistant over maintenance logs and operational documentation using OpenAI embeddings and a vector database. Implemented hybrid ranking, query enrichment, and structured filters to improve relevance by ~35% while optimizing latency via caching and vector tuning. Also designed a scalable Kafka + AWS (Lambda/SQS) ingestion pipeline with strong reliability/observability and an eval loop using real engineer queries and human review.”
Entry-level AI/ML Engineer specializing in AWS MLOps and computer vision
“Built and shipped a production RAG question-answering system using LangChain/OpenAI, Docker, and FastAPI, then reduced hallucinations through disciplined retrieval tuning and constrained prompting. Also implemented a custom evaluation framework (QA-pair dataset) to measure faithfulness/relevance and deployed containerized ML microservices on AWS ECS/Fargate with ALB and rolling, zero-downtime updates.”
Mid-level Implementation Engineer specializing in enterprise integrations and IAM/PAM
“Data/ML engineer with end-to-end ownership of donor-data deployments for a university foundation, delivering major performance and data-quality gains (500K+ records; processing time cut from 24h to 6h; duplicate rate reduced from 5% to 1%). Has put an LLM-assisted enrichment workflow into production with retrieval-grounded business rules, versioned outputs for traceability, and strong operational rigor around validation, logging, and CI/CD.”
Mid-level Data Scientist specializing in machine learning, NLP, and healthcare AI
“Senior data scientist with hands-on ownership of production ML and GenAI systems across enterprise churn, clinical Q&A, and real-time fraud detection. Stands out for combining strong MLOps discipline with measurable business impact, including $2M+ retained revenue, 10K TPS low-latency fraud infrastructure, and a clinician-reviewed RAG system that improved retrieval accuracy by ~38%.”
Mid-level Software Engineer specializing in AI/ML backend systems
“AI/data engineer at ZS Associates focused on production-grade agentic systems, FastAPI microservices, and cloud-native ETL/RAG pipelines at significant scale. They’ve built multi-agent validation and diagnostic workflows inspired by their Copilot/KUBEPILOT AI work, supporting 500K+ records per day while improving ML inference performance by ~30% and cutting manual troubleshooting by 60%.”
Mid-level Software Engineer specializing in backend systems, cloud, and AI pipelines
“Built and owned an end-to-end AI-driven content enrichment pipeline for a news workflow, using n8n, LLM agents, and external APIs to automate ingestion, deduplication, categorization, and approval routing. Stands out for production-minded AI systems work: they improved reliability with schema validation, retries, idempotency, and monitoring, while automating 90% of processing and cutting duplication errors by 95%+.”