Pre-screened and vetted.
Junior Machine Learning Engineer specializing in LLM agents, knowledge graphs, and multimodal AI
Mid-level Data Scientist / Software Engineer specializing in AI automation and cloud microservices
Junior Machine Learning Engineer specializing in deep learning and healthcare AI
Intern Data Scientist / ML Engineer specializing in predictive modeling and data pipelines
Mid-level AI/ML Engineer specializing in cloud AI, MLOps, and NLP
Mid-level AI/ML Engineer specializing in MLOps, streaming data, and NLP/CV
Mid-level Applied AI Engineer specializing in LLM agents and RAG systems
Mid-level Full-Stack AI Engineer specializing in agentic RAG and LLM fine-tuning
Mid-level AI/ML Engineer specializing in LLMs, MLOps, and FinTech analytics
Mid-level AI/ML Engineer specializing in fraud detection and enterprise ML systems
Mid-level AI & Backend Engineer specializing in RAG systems and scalable APIs
“Built and deployed a production LLM-powered document Q&A system using a strict RAG pipeline (LangChain-style orchestration + FAISS) to help users query large internal document sets. Demonstrates strong reliability focus through hallucination mitigation, curated offline evaluation with grounding checks, and production monitoring (latency/fallback rates) plus stakeholder alignment via demos and business metrics.”
Mid-level Data Scientist specializing in GenAI, RAG, and predictive modeling
“Backend engineer who built and evolved Python/FastAPI services (including AWS-deployed ML prediction APIs) for real-time profitability and risk insights at TenXengage. Emphasizes pragmatic architecture, strong validation/observability, and secure access controls (RBAC + row-level filtering), and has led safe migrations via parallel runs and incremental rollouts; reports ~20% forecasting accuracy improvement.”
Senior AI/ML Engineer & Data Scientist specializing in LLMs, RAG, and MLOps
“ML/NLP practitioner who has delivered production systems in regulated domains, including a healthcare compliance pipeline using RAG (GPT-4/Claude) plus TF-IDF retrieval that increased document review throughput 4.5x. Also has hands-on experience improving fraud detection data quality via entity resolution (Levenshtein, Dedupe.py) validated with A/B testing, and building scalable, monitored workflows with Airflow, CI/CD, and AWS SageMaker.”
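The entity-resolution approach mentioned above can be sketched as follows. This is a minimal illustration of Levenshtein-based duplicate detection, not the candidate's actual implementation; the threshold, normalization steps, and function names are assumptions.

```python
# Illustrative sketch: Levenshtein edit distance plus a normalized-ratio
# threshold for flagging likely-duplicate entity names. All names and the
# 0.2 cutoff are assumptions for demonstration only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def likely_same_entity(name_a: str, name_b: str, max_ratio: float = 0.2) -> bool:
    """Flag two records as duplicates when normalized edit distance is small."""
    a, b = name_a.lower().strip(), name_b.lower().strip()
    if not a or not b:
        return False
    return levenshtein(a, b) / max(len(a), len(b)) <= max_ratio
```

In practice a library such as dedupe layers blocking and learned weights on top of pairwise comparisons like this, which is presumably what the A/B-tested pipeline validated.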
Mid-level Data Scientist specializing in Generative AI and Healthcare Analytics
“Built a LangGraph-based, tool-routing LLM chatbot to deliver fast, trustworthy stock-investment insights (including tariff-impact analysis) and deployed it to production on Snowflake after initially developing in Azure with AI Search and the Microsoft Agent Framework. Improved routing robustness by moving from LLM-based decisions to a deterministic router backed by schema-relationship graphs and YAML metadata, and ran the project iteratively with non-technical stakeholders over an 8-month engagement.”
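The deterministic-router idea above can be sketched in a few lines: rather than asking an LLM which tool to call, keyword rules derived from schema metadata (shown here as a plain dict standing in for the YAML files) map each question to exactly one handler. Route names, keywords, and the scoring rule are illustrative assumptions, not the candidate's actual design.

```python
# Hedged sketch of a deterministic, metadata-driven router. The ROUTES dict
# stands in for metadata that would normally be loaded from YAML files.

ROUTES = {
    "tariff_impact": {"keywords": {"tariff", "duty", "import tax"}},
    "stock_quote":   {"keywords": {"price", "quote", "ticker"}},
    "fundamentals":  {"keywords": {"revenue", "earnings", "eps"}},
}

def route(question: str, default: str = "fundamentals") -> str:
    """Pick the route whose keyword set matches the question most strongly."""
    q = question.lower()
    scores = {name: sum(kw in q for kw in meta["keywords"])
              for name, meta in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```

Because the mapping is a pure function of the question text and the metadata, every routing decision is reproducible and testable, which is the robustness gain over an LLM-based router.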
Entry-level AI Engineer specializing in LLMs, RAG, and MLOps
“Built and shipped a production Python-based agentic RAG document retrieval system over 80K records using FastAPI, OCR, vector search, and AWS infrastructure, with a strong emphasis on reliability, testing, and observability. Stands out for treating AI failures like production incidents—turning hallucinations, retrieval misses, and OCR issues into regression tests—and for quantifiably reducing document lookup time from about 12 minutes to under 90 seconds.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and Generative AI
“Built and deployed a production LLM-powered text-to-SQL/document intelligence chatbot on AWS that lets non-technical business users query complex enterprise databases in plain English. Demonstrates deep practical expertise in schema-aware prompting, embeddings-based schema retrieval, SQL safety/validation guardrails, and rigorous offline/online evaluation with human-in-the-loop approvals for risky queries.”
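A SQL safety guardrail of the kind described above can be sketched as a validator that rejects generated SQL unless it is a single read-only SELECT. The specific rules below are illustrative assumptions; a production system would pair checks like these with the human-in-the-loop approvals the summary mentions.

```python
import re

# Illustrative sketch: reject any generated SQL that is not a single
# read-only SELECT (or WITH ... SELECT) statement.

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create|grant|revoke)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    """Return True only for a single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # more than one statement
        return False
    if not re.match(r"(?is)^\s*(select|with)\b", stripped):
        return False
    return not FORBIDDEN.search(stripped)
```

Keyword denylists like this are deliberately conservative (a column literally named "update" would be rejected), which is usually the right trade-off when the SQL author is an LLM.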
Junior Data Analyst specializing in business analytics and machine learning
“Analytics-focused candidate with hands-on project experience in SQL data preparation and Python-based churn modeling. Demonstrates a practical approach to turning messy multi-source data into reporting tables, validating data quality rigorously, and translating churn insights into targeted retention strategies.”