
Vetted Model Monitoring Professionals

Pre-screened and vetted.

VV

Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps

OH, USA · 4y exp
Impacter AI · University of Dayton

Built an LLM-powered academic research assistant for a professor (LangChain + OpenAI + arXiv) focused on synthesizing papers quickly, with emphasis on reliability (ReAct prompting, citation verification) and cost control (caching). Has production MLOps/orchestration experience at Cisco and HCL Tech using Kubernetes, plus MLflow and GitHub Actions for lifecycle management and CI/CD.

GS

Mid-level Data Scientist & Generative AI Engineer specializing in LLMs and RAG

Auburn Hills, MI · 4y exp
Stellantis · University of Cincinnati

ML/NLP practitioner who built a retrieval-augmented generation (RAG) system for large financial and operational document sets using Sentence-Transformers (all-mpnet-base-v2) and a vector DB (e.g., Pinecone), with a strong focus on retrieval evaluation and chunking strategy optimization. Experienced in entity resolution (rules + embedding similarity with type-specific thresholds) and in productionizing scalable Python data workflows using Airflow/Dagster and Spark.

SR

Sharanya Rao

Screened

Mid-level AI/ML Engineer specializing in NLP, LLMs, and RAG for finance and healthcare

Remote, USA · 3y exp
Ally Financial · University of Maryland, Baltimore County

Built an AI lending assistant (RAG + DeBERTa) used by credit analysts to retrieve policies and past loan decisions, tackling real production issues like hallucinations, document quality, and sub-second latency. Deployed a modular, Dockerized AWS architecture (ECS/EMR + load balancer) with load testing, caching/precomputed embeddings, and CloudWatch monitoring, and used Airflow to automate scheduled data/embedding/vector DB refresh pipelines with retries and alerts.

AM

Mid-level Data Scientist specializing in Generative AI and multimodal systems

Irving, TX · 5y exp
University of Massachusetts Dartmouth · University of Massachusetts Dartmouth

Recent J&J intern who built a conversational RAG agent and led a shift from a monolithic model to a modular RAG workflow, cutting response time from several days to under a second by tackling data fragmentation, context retention, and embedding/latency optimization. Also worked on a large (7B-parameter) multimodal VQA pipeline for healthcare research and stays current via NeurIPS/ICLR and open-source contributions.

RS

Junior AI/ML Engineer specializing in RAG systems and cloud-native MLOps

Austin, TX · 2y exp
Upstart · Texas A&M University-Corpus Christi

Built and shipped a production LLM-powered RAG system at Upstart enabling natural-language search across 50k+ scattered internal technical docs. Delivered sub-300ms p95 latency for ~50 active users with strong hallucination safeguards (retrieval-first, thresholds, citations) plus robust testing/monitoring and cost controls (prompt caching cutting API spend ~20%).

AF

Alfred Fox

Screened

Senior AI/ML & Full-Stack Engineer specializing in GenAI, RAG, and MLOps platforms

Glendale, AZ · 15y exp
RTA Fleet · Arizona State University

Backend/data platform engineer who owned end-to-end production services for a fleet analytics/GenAI platform, spanning FastAPI microservices on Kubernetes and event-driven AWS workloads (EKS + Lambda). Strong in reliability/observability (OpenTelemetry, circuit breakers, idempotency), data pipelines (Glue/Airflow/Snowflake), and measurable performance/cost wins (SQL latency cut from 10s to <800ms P95; ~30% compute cost reduction).

SP

Surya Pavan

Screened

Mid-level Machine Learning Engineer specializing in Generative AI and LLM applications

Baltimore, MD · 5y exp
Acer · California State University, Northridge

GenAI engineer who has deployed production LLM/RAG chatbots for internal document search, focusing on reliability (hallucination reduction via prompt guardrails + retrieval filtering) and performance (latency improvements via caching). Experienced with LangChain/LangGraph orchestration for multi-step agent workflows and iterates using monitoring/logs and benchmark-driven evaluation while partnering closely with product and business teams.

SG

Entry-Level AI/ML Engineer specializing in LLM apps, RAG pipelines, and production ML systems

1y exp
iFrog Marketing Solutions · UC San Diego

AI/LLM practitioner at iFrog Marketing Solutions who drove a RAG chatbot from prototype to production in a legacy, AI-resistant environment by validating customer needs and building a business case. Implemented production-grade LLM practices (CI/CD eval gating, rollbacks, prompt/context engineering) and led internal workshops to bring non-AI-native developers up to speed while partnering with sales on tailored demos to drive adoption.

SS

Mid-Level Software Engineer specializing in Cloud, DevOps, and MLOps

Boston, MA · 3y exp
Northeastern University · Northeastern University

Built and productionized a recommendation system from notebook prototype into a low-latency, scalable Cloud Run service using Docker, FastAPI, Terraform, CI/CD (GitHub Actions), and MLOps tooling (Vertex AI, MLflow). Experienced diagnosing real-time workflow issues using structured logging/ELK and GCP metrics, including resolving intermittent 504s by fixing unbounded SQL and adding caching. Also partners with sales/customer teams (Wasabi) to deliver tailored demos, troubleshoot, and drive onboarding/adoption.

MS

Muaaz Syed

Screened

Mid-level AI/ML Engineer specializing in NLP and conversational AI

Richardson, TX · 4y exp
CVS Health · University of Texas at Dallas

ML/NLP engineer focused on real-time IT ops analytics, building a predictive maintenance/anomaly detection platform end-to-end (multi-source ETL, streaming, modeling, and production deployment on GCP/Vertex AI). Uses deep learning (LSTMs, autoencoders/VAEs) plus embeddings (SentenceBERT) and vector search to improve incident correlation and search, citing ~40% reduction in duplicate alert noise.

AB

Alekya Battu

Screened

Mid-level Data Scientist specializing in ML, NLP, and MLOps

USA · 5y exp
Wells Fargo · Wilmington University

Data scientist with ~5 years’ experience building production ML/NLP systems in finance (Wells Fargo) and deep learning for sensor analytics in connected devices (Medtronic). Has delivered end-to-end platforms combining time-series forecasting with transformer-based NLP, including automated drift monitoring/retraining (MLflow + Airflow) and standardized Docker/CI/CD deployments; achieved a reported 22% precision improvement after domain fine-tuning.

SK

Mid-level Data Scientist specializing in real-time fraud detection and MLOps

San Francisco, CA · 5y exp
Charles Schwab · CUNY Graduate Center

ML/NLP engineer with experience at Charles Schwab building an NLP + graph (Neo4j) entity-resolution system to unify fragmented user/device/transaction data and improve downstream model quality and analyst querying. Has applied embeddings (SentenceTransformers + FAISS) with domain fine-tuning to boost hard-case matching recall by ~12% while maintaining precision, and has a track record of hardening scalable Python/Spark pipelines and productionizing fraud models via A/B tests and shadow-mode monitoring.

HK

Hiya Kothari

Screened

Intern Full-Stack Software Engineer specializing in AI/ML and cloud

San Francisco, CA · 3y exp
Sparx Labs · UC Irvine

Built a Python-based geospatial machine learning backend for PFAS contamination risk mapping, including reproducible feature pipelines, ensemble modeling, and a FastAPI layer for visualization/analysis. Emphasizes data integrity and robustness (CRS/coverage checks, fail-fast validation) and has led safe backend refactors using feature flags, idempotent backfills, and Postgres RLS for secure, queryable results delivery.

RM

Principal AI/ML Leader specializing in Generative AI, MLOps, and NLP

CA, USA · 11y exp
iBase-t · Northeastern University

Founding member of Tausight, building AI systems to detect and protect PHI for healthcare organizations; helped take the company through post–Series A funding and exited after ~6 years. Drove a strategic collaboration with Intel’s OpenVINO team—becoming the first to deploy it in a real production system and improving model performance by ~30% on customer Intel-CPU machines.

TT

Mid-level AI/ML Engineer specializing in MLOps and LLM applications

New York, NY · 4y exp
BNY Mellon · University at Albany

BNY Mellon engineer who has built and operated production AI systems end-to-end: a LangChain/Pinecone RAG platform scaled via FastAPI + Kubernetes to 1000 RPM with 99.9% uptime, supported by monitoring and data-drift detection. Also deep in data/infra orchestration (Airflow, Dagster, Terraform on AWS/EMR/EC2), processing 500GB+ daily and delivering measurable reliability and performance gains, plus strong compliance-facing model explainability using SHAP and Tableau.

SV

Sathvik Vanja

Screened

Mid-level AI Engineer specializing in GenAI, LLM integration, and RAG pipelines

Overland Park, KS · 3y exp
HCA Healthcare · VNR Vignana Jyothi Institute of Engineering and Technology

Built and led deployment of an autonomous, self-correcting multi-agent knowledge retrieval and validation system at HCA Healthcare to reduce heavy manual research/validation in clinical/compliance documentation. Deeply focused on production reliability and cost—used LangGraph StateGraph orchestration plus ONNX/CUDA/quantization to cut GPU costs by 25%, and partnered with the Compliance VP using real-time contradiction-rate dashboards to hit a 40% automation goal without compromising compliance.

VA

Mid-level Data Scientist specializing in Generative AI and NLP for financial risk

Glassboro, NJ · 4y exp
S&P Global · Rowan University

Built and shipped production generative AI/RAG assistants in regulated financial contexts (S&P Global), automating compliance-oriented Q&A over earnings reports/filings with grounded answers and citations. Experienced across the full stack—AWS-based ingestion (PySpark/Glue), vector retrieval + LangChain agents, GPT-4/Claude model selection, and production reliability (monitoring, caching, retries) plus rigorous evaluation and regression testing.

AY

Mid-level AI/ML Engineer specializing in deep learning, MLOps, and LLM applications

NY, USA · 4y exp
DataRobot · St. Francis College

Built and deployed production LLM assistants for internal Q&A and customer-feedback summarization, emphasizing reliability (RAG, prompt tuning, validation/whitelisting) and privacy safeguards. Improved adoption by adding explainable outputs and a user feedback mechanism, and has hands-on orchestration experience with Airflow and Azure Logic Apps.

CT

Mid-level AI/ML Engineer specializing in LLM systems, RAG, and MLOps

5y exp
HCA Healthcare · University of South Florida

Built a production, real-time clinical documentation system at HCA that converts doctor–patient conversations into structured clinical summaries using speech-to-text, LLM summarization, and RAG. Demonstrated measurable gains from medical-domain fine-tuning (clinical concept recall +18%, ROUGE-L 0.62 to 0.74) while meeting HIPAA constraints via PHI anonymization and encryption, and deployed via Docker/FastAPI with CI/CD and monitoring.

VG

Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps

Glassboro, NJ · 5y exp
HCLTech · Rowan University

Healthcare-focused LLM engineer who deployed a production triage and clinical knowledge retrieval assistant using RAG and LangGraph-orchestrated multi-agent workflows. Emphasizes clinical safety and compliance with robust hallucination controls, HIPAA/PHI protections (tokenization, encryption, audit logging, zero-retention), and human-in-the-loop escalation; reports a 75% latency reduction in a healthcare agent system.

MY

Mid-level AI/ML Engineer specializing in Generative AI and RAG systems

6y exp
Elevance Health · MLR Institute of Technology

Built a production multi-agent orchestration platform to automate healthcare claims and HR workflows, combining LangChain/CrewAI/AutoGPT with RAG (FAISS/Pinecone) and fine-tuned open-source LLMs (LLaMA/Mistral/Falcon) in private Azure ML environments to meet HIPAA requirements. Emphasizes rigorous agent evaluation/observability (trajectory eval, adversarial testing, LLM-as-judge, drift monitoring) and reports measurable outcomes including 35% faster claims processing and 40% fewer chatbot errors.

VS

Senior AI/ML Engineer specializing in Generative AI, LLMs, and MLOps

Tampa, FL · 9y exp
Verizon · Jawaharlal Nehru Technological University

Telecom (Verizon) AI/ML practitioner who built a production multimodal system that ingests messy customer issue reports (calls, chats, emails, screenshots, videos) and turns them into confidence-scored incident summaries with reproducible steps and evidence links. Also built KPI/alarm-to-ticket correlation to rank likely root-cause domains (RAN/Core/Transport), cutting triage from hours to minutes and improving MTTR.

TP

Tejaswini P

Screened

Mid-level Machine Learning Engineer specializing in NLP, LLMs, and MLOps

Austin, TX · 3y exp
State Street · University of Central Missouri

Built and deployed an LLM-powered financial/regulatory document analysis platform at State Street, combining fine-tuned transformer models with a RAG pipeline over internal knowledge bases. Owned the productionization stack (FastAPI, Docker, SageMaker, Terraform, CI/CD) plus monitoring for drift/latency/hallucinations, delivering ~40% faster analyst review and improved reliability through chunking/embeddings and grounding.

BS

Mid-level Software Engineer specializing in full-stack and cloud-native microservices

Dallas, TX · 4y exp
Northern Trust · University of Texas at Arlington

Backend engineer who built a Python/Flask system for high-volume healthcare claims processing, using PostgreSQL as the source of truth and RabbitMQ workers for scalable async processing. Experienced in SQLAlchemy/Postgres performance tuning, multi-tenant data isolation (including Postgres RLS), and integrating and versioning ML model services (scikit-learn/PyTorch/Hugging Face) with controlled rollouts. Drove measurable performance gains by batching background jobs and adding Redis caching (~40% lower workload; response times cut from ~10s to 2–3s).

