Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in healthcare ML and generative AI
“AI/LLM engineer at Humana who built and deployed a HIPAA-aware RAG system for clinical record retrieval, cutting clinical search time and improving retrieval efficiency by 30%. Experienced with Spark-scale data preprocessing, QLoRA fine-tuning, LangChain orchestration, and MLflow+SageMaker integration, with a strong testing/evaluation discipline (A/B tests, human eval) to hit 95%+ accuracy and production latency targets.”
“AI/ML engineer with banking domain experience (M&T Bank) who built a production credit-risk prediction and reporting platform combining ML models (XGBoost/TensorFlow) with a RAG pipeline (LangChain + GPT-4) over compliance documents. Delivered measurable impact (≈20% better risk detection/precision, 50% less manual reporting) and productionized workflows on Vertex AI/Kubeflow with CI/CD and monitoring; also implemented embedding-based semantic search using FAISS/Pinecone.”
Mid-level AI/ML Engineer specializing in fraud detection and NLP
“Built production AI/RAG-style systems for message Q&A and insurance claims workflows, combining data ingestion, indexing/retrieval, and LLM integration with fallback modes. Has hands-on orchestration experience (Airflow, Prefect, LangChain) and cites large operational gains (claims processing reduced to ~45 seconds; manual review -50%; false alerts -30%) through automated, monitored pipelines and close collaboration with non-technical stakeholders.”
Mid-level AI/ML Engineer specializing in GenAI and cloud MLOps
“Applied LLMs to high-stakes domains (wildfire risk for emergency teams and loan approval via a fine-tuned IBM Granite model), with a strong focus on reliability—using RAG-based cross-validation to reduce hallucinations and continuous ingestion pipelines (MODIS satellite imagery via AWS Lambda) to keep data current. Experienced in production orchestration and MLOps-style workflows using Airflow, AWS Step Functions, and SageMaker Pipelines, and collaborates closely with analysts on KPI-driven evaluation.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and scalable model deployment
“Built and deployed a production autonomous AI data analyst agent (LangChain + GPT + Streamlit on AWS) that turns natural-language questions into validated SQL, visualizations, and insights, cutting manual analysis time by ~50%. Emphasizes reliability and MLOps: schema-aware validation/guardrails to prevent hallucinations, scalable large-data processing, and Azure DevOps CI/CD + MLflow for automated deployment and experiment tracking.”
Mid-level Software Engineer specializing in backend, cloud, and AI systems
“Built and owned an end-to-end AI-driven content enrichment pipeline for a news workflow, using n8n, LLM agents, and external APIs to automate ingestion, deduplication, categorization, and approval routing. Stands out for production-minded AI systems work: improved reliability with schema validation, retries, idempotency, and monitoring while automating 90% of processing and cutting duplication errors by 95%+.”
Junior Data Scientist specializing in AI/ML and product analytics
“Applied ML/data scientist who has owned backend-heavy AI systems end-to-end, including a market-signal platform on FastAPI/AWS and rapid MVP delivery in medical computer vision. Particularly interesting for teams needing someone who can combine model development, backend APIs, production debugging, and pragmatic low-latency architecture decisions.”
Mid-Level AI Backend Engineer specializing in Python, LLM/RAG, and healthcare/insurance platforms
“AI Backend Engineer in MetLife’s claims technology group who built and deployed a production LLM-based decision support system that helps claim adjusters quickly find relevant policy rules from long PDFs and historical notes. Designed it as multiple production-grade services with retrieval-first guardrails, continuous validation, and Airflow-orchestrated pipelines for ingestion, embeddings, and vector index updates to keep the system reliable as policies and data evolve.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps
“Built a production RAG-based healthcare chatbot to retrieve patient medical documents spread across multiple platforms, reducing manual and error-prone searching. Implemented semantic search with custom embeddings (Hugging Face) and Pinecone, deployed via FastAPI/Docker on AWS SageMaker with MLflow tracking, and optimized fine-tuning cost using LoRA while orchestrating retraining pipelines in Airflow.”
Mid-level AI/ML Engineer specializing in GenAI, MLOps, and anomaly detection
“LLM/MLOps engineer who has shipped a production RAG-based technical documentation assistant (FastAPI) cutting manual review by 45%, with deep hands-on retrieval optimization in Pinecone/LangChain (HNSW, hybrid + multi-query search, caching). Also brings healthcare domain experience—building Airflow-orchestrated EHR pipelines and delivering FDA-auditability-friendly predictive maintenance solutions using SHAP/LIME explainability surfaced in Power BI.”
Mid-level AI/ML Engineer specializing in NLP, RAG, and MLOps for FinTech
“ML/LLM engineer with production experience building a compliant RAG-based virtual assistant at Intuit, optimizing embeddings and FAISS retrieval (including PCA-based dimensionality reduction) for low-latency, privacy-controlled search and deploying via AWS SageMaker containers. Also built scalable Airflow+MLflow pipelines using Docker and KubernetesExecutor, cutting training cycles by 37%, and partnered with civil engineers/project managers at Aegis Infra to deliver predictive maintenance for construction equipment.”
Mid-level AI Engineer & Data Scientist specializing in LLMs, RAG, and multimodal systems
“LLM/GenAI engineer who built a production AI-powered credit risk policy summarization and compliance alerting platform at HCL Tech, focused on factual accuracy and auditability for a financial client. Implemented a multi-retriever LangChain RAG architecture with citations-only prompting, fallback agents, and human-in-the-loop legal review—cutting manual review time by 35% and scaling to 12 teams.”
Mid-level Machine Learning Engineer specializing in NLP, recommender systems, and MLOps
“ML/LLM engineer with production experience at General Motors building Transformer-based search and recommendation personalization for a high-traffic vehicle platform. Delivered significant KPI gains (17% conversion lift, 14% bounce-rate reduction) and optimized real-time inference via ONNX Runtime and INT8 quantization while implementing robust MLOps (Airflow/MLflow, monitoring, drift-triggered retraining) and stakeholder-facing explainability/dashboards.”
Junior Machine Learning Engineer specializing in NLP, computer vision, and MLOps
“ML/LLM engineer with Meta experience building production AI systems for near real-time user-report classification and summarization under strict latency (<250ms), safety, cost, and privacy constraints. Has hands-on MLOps/orchestration experience (Airflow, Spark, MLflow, Kubernetes, Docker, GitHub Actions) plus observability (Prometheus/Grafana) and applies rigorous evaluation, staged rollouts, and A/B testing to keep agent workflows reliable in production.”
Junior Full-Stack Software Engineer specializing in React and FinTech
“Full-stack engineer with banking-domain experience (Cognizant/Kotak) building and optimizing high-usage transaction/account APIs on Spring Boot/Node/PostgreSQL in AWS/Docker, including peak-load performance fixes. Also built an end-to-end retail demand-forecasting feature during a master’s program, spanning data pipelines, ensemble models, dashboards, and operational guardrails like validation and fallbacks.”
Junior Machine Learning Engineer specializing in NLP and multimodal transformers
“Built and deployed LLM-powered agentic chatbot and text-to-SQL systems using LangGraph/LangChain (and Bedrock), structuring workflows as DAGs with planning/replanning and validation to improve tool-calling reliability and reduce hallucinations. Operates production feedback loops with online/offline metrics, drift detection, and LangSmith-based evaluation pipelines, and regularly partners with business stakeholders and clinicians using slide decks and visual charts.”
Mid-level AI/ML Engineer specializing in predictive modeling and cloud ML pipelines
“LLM engineer/data engineer who has deployed production RAG systems for internal-document Q&A, building end-to-end ingestion, embedding, vector search, and FastAPI serving while actively reducing hallucinations and latency through rigorous retrieval tuning and caching. Also experienced in orchestrating cloud data pipelines (Airflow, AWS Glue, Azure Data Factory) and partnering with non-technical business teams to deliver AI solutions like automated document review.”
Mid-level AI/ML Engineer specializing in production ML, RAG systems, and MLOps
“Built and shipped a widely adopted, production-grade RAG internal search assistant that unified scattered engineering knowledge, deployed as a FastAPI service on Kubernetes with FAISS + LangChain. Demonstrates deep practical expertise in retrieval tuning (chunking, hybrid search, re-ranking) and in making LLM workflows reliable in production via guardrails, monitoring, and evaluation, plus strong cross-functional delivery with non-technical operations teams.”
Mid-level Software Engineer specializing in AI/ML for FinTech and Healthcare
“Built and deployed an end-to-end fintech product, FinSight, for bank statement analysis and financial Q&A using a production-style RAG architecture. Stands out for combining FastAPI, OpenAI embeddings, FAISS, hybrid SQL/vector retrieval, and practical reliability work like chunking optimization, validation, and low-latency performance tuning.”
Junior AI & Data Engineer specializing in LLM systems and analytics platforms
“Backend/ML engineer who built a job-search automation SaaS using a modular Selenium ETL pipeline, rigorous testing/observability, and a cost-optimized two-pass LLM ranking approach. Led data extraction from messy multi-city PDF records at 95% integrity and managed modular production rollouts for a 20+ engineer team, with a strong security focus (deny-by-default, row-level access control) in an AI-assisted grading platform.”
Mid-level AI/ML Engineer specializing in MLOps and cloud-deployed ML systems
“ML/AI engineer who built and productionized an NLP system at PurevisitX, orchestrating end-to-end ML workflows with Airflow (S3 ingestion through auto-retraining) and optimizing for drift and low-latency inference. Also partnered with Citibank risk teams on a fraud detection model, translating results via dashboards and iterating thresholds based on stakeholder feedback.”
Mid-level Machine Learning Engineer specializing in LLMs, NLP, and MLOps
“Built a production LLM-RAG system at McKesson to let internal healthcare operations teams query large volumes of unstructured operational documents via natural language with source-backed answers, designed with HIPAA/FHIR compliance in mind. Demonstrated strong production engineering across hallucination mitigation, retrieval quality tuning, and latency/scalability optimization, using LangChain/LangGraph and Airflow plus rigorous evaluation/monitoring practices.”
Mid-level Machine Learning Engineer specializing in real-time pipelines and NLP/GenAI
“ML/MLOps practitioner from Discover Financial who built and deployed a real-time AI fraud detection platform (LSTM + VAE) on AWS SageMaker with Docker/FastAPI and Jenkins-driven CI/CD. Demonstrated measurable impact (30% accuracy lift, 25% fewer false alerts) and deep expertise in class-imbalance mitigation, drift monitoring, and orchestration (Airflow/Kubeflow), plus strong stakeholder adoption via Power BI dashboards for fraud/compliance teams.”
Mid-level GenAI/ML Engineer specializing in LLM systems and RAG chatbots
“Built and shipped a production agentic LLM analytics platform that lets non-SQL business users query relational databases in plain English via a RAG + LangChain/LangGraph workflow and FastAPI service. Emphasizes safety and reliability with guardrails (validation/access control), testing/evaluation frameworks, and performance optimization (caching, monitoring, Dockerized scalable deployment), reducing dependency on data teams and speeding analytics turnaround.”