Pre-screened and vetted.
Junior Machine Learning Engineer specializing in deep learning and healthcare AI
Junior AI/ML Engineer specializing in RAG and multi-agent LLM systems
Mid-level AI/ML Engineer specializing in MLOps, streaming data, and NLP/CV
Mid-level AI/ML Engineer specializing in GenAI, RAG, and multi-agent LLM systems
Mid-level Full-Stack AI Engineer specializing in agentic RAG and LLM fine-tuning
Mid-level AI/ML Engineer specializing in fraud detection and enterprise ML systems
Mid-level Data Scientist specializing in GenAI, RAG, and predictive modeling
“Backend engineer who built and evolved Python/FastAPI services (including AWS-deployed ML prediction APIs) for real-time profitability and risk insights at TenXengage. Emphasizes pragmatic architecture, strong validation and observability, and secure access controls (RBAC plus row-level filtering); has led safe migrations via parallel runs and incremental rollouts, and reports a ~20% improvement in forecasting accuracy.”
Junior AI/ML Software Engineer specializing in LLM agents and RAG systems
“AI/backend engineer at Canon who helped build and operate an internal production LLM platform that acts as a secure middle layer between users and models, defending against jailbreaks and prompt injection while enabling RAG, memory, and grounded responses over company data. Experienced with LangChain/LangGraph orchestration, vector-DB retrieval, and reliability practices (testing, monitoring, adversarial prompting) to run high-throughput, low-latency AI workflows in production.”
Junior Software Engineer specializing in full-stack, DevOps, and GenAI
“Robotics software engineer with hands-on hardware integration who built an AI-enabled smart dog door using a Raspberry Pi, camera-based recognition (DeepFace adapted for dogs), and stepper motor control (TB6600/NEMA 17). Experienced in ROS/ROS 2 across perception-to-controls, rigorous bag-driven debugging of SLAM/navigation issues, and deploying robot software with simulation-in-the-loop testing plus Docker/Kubernetes CI/CD.”
Mid-level Data Scientist specializing in Generative AI and Healthcare Analytics
“Built a LangGraph-based, tool-routing LLM chatbot that delivers fast, trustworthy stock-investment insights (including tariff impact), deploying it to production on Snowflake after initially developing it in Azure with AI Search and the Microsoft Agent Framework. Improved routing robustness by replacing LLM-based routing decisions with a deterministic router backed by schema-relationship graphs and YAML metadata, and ran the project iteratively with non-technical stakeholders over an 8-month engagement.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and Generative AI
“Built and deployed a production LLM-powered text-to-SQL/document intelligence chatbot on AWS that lets non-technical business users query complex enterprise databases in plain English. Demonstrates deep practical expertise in schema-aware prompting, embeddings-based schema retrieval, SQL safety/validation guardrails, and rigorous offline/online evaluation with human-in-the-loop approvals for risky queries.”
Mid-level AI Engineer and Data Scientist specializing in LLM agents and RAG systems
“Built a production-grade LLM evaluation and regression system that stress-tests models across hundreds of iterations, combining LLM-as-judge, semantic similarity, statistical metrics, and rule-based checks, with results delivered via stakeholder-friendly HTML reports and dashboards. Experienced orchestrating multi-agent RAG workflows with LangChain/LangGraph and event-driven GenAI pipelines in n8n that integrate OCR, speech-to-text, and external APIs, with a strong emphasis on reliability, observability, and explainable failures.”
Mid-level Data Scientist specializing in Generative AI and LLMOps
“Built a production-grade, semi-automated document recognition and classification system for large volumes of scanned PDFs, starting from little to no labeled data and handling highly variable scan quality. Deployed on AWS using SageMaker and Docker and orchestrated on EKS with a microservices design that scales CPU-heavy OCR separately from GPU inference, with strong reliability controls (validation, fallbacks, retries, readiness probes).”
Mid-level AI Engineer specializing in Generative AI and LLM systems
“Built and deployed a production-grade, multi-agent Text-to-SQL assistant that lets non-technical stakeholders query large enterprise databases in natural language. Uses Pinecone-based schema retrieval + LLM reasoning (Gemini/Claude/GPT) with a dedicated validation agent (schema/syntax checks and safe dry runs) to reduce hallucinations and improve reliability, while optimizing latency and cost via async execution and embedding caching.”
Mid-level Full-Stack Engineer specializing in Java/Spring, React, and AWS cloud platforms
“Full-stack/product-leaning engineer in logistics and high-traffic portals who ships production AI features: built an AI-assisted shipment-status Q&A system using Pinecone + GPT-4 and a high-volume Python ingestion pipeline (500K+ records/day), delivering 35% fewer support tickets and cutting resolution time from 11 to 4 minutes. Also led a legacy Angular-to-React/TypeScript rebuild that raised Lighthouse performance from 60 to 90, and has hands-on AWS EKS operations experience, including resolving a scaling incident during a 3x traffic surge.”
Mid-level AI Engineer specializing in RAG, conversational AI, and agentic systems
“Built and deployed a production RAG-based clinical decision support assistant at MedLib, focused on fast, trustworthy answers from large medical documents. Demonstrates deep practical experience improving retrieval accuracy (semantic chunking + metadata-aware search), controlling hallucinations with grounded generation and thresholds, and adding clinician-requested citations using chunk metadata, with evaluation driven by healthcare professional review.”
Junior AI Data Engineer specializing in Azure Databricks lakehouse and GenAI RAG systems
“Backend/applied AI engineer from Cloud Rack Systems who built production GenAI/RAG and data platforms on Azure/Databricks at enterprise scale (2.5M records/day). Known for making LLM systems behave like deterministic services via strict retrieval contracts, citation-based validation, and strong observability—shipping a knowledge assistant used daily by 50+ users while driving hallucinations near zero and materially improving latency and cost.”
Director-level growth marketer specializing in international SaaS, e-commerce, and AI-native acquisition
“Founder of distribb.io, an 8-month-old AI SEO software company already at $10k MRR. Brings 15 years of startup ecosystem exposure and a pragmatic, market-driven approach to building businesses by targeting proven demand rather than chasing novelty.”
Mid-level DevOps & Platform Engineer specializing in AI/ML infrastructure
“Backend/AI engineer who built production-grade intelligence systems in high-stakes domains including tax/legal document analysis and brain tumor MRI workflows. Stands out for combining LLM/RAG product delivery with strong engineering rigor around retrieval evaluation, grounding, validation, observability, and safe fallbacks—turning impressive demos into systems users could actually trust.”
“Built a production ad-spend optimization system that combined deterministic audit logic with LLM-generated explanations, surfacing severe inefficiencies including 70-90% wasted spend in some Google Ads accounts. Stands out for pairing measurable business impact with pragmatic AI safety and usability decisions, including approval-gated execution and structured, human-readable recommendations.”
Senior AI Engineer specializing in LLMs, RAG, and production ML systems
“Built GynAI, an end-to-end maternal clinical decision support platform for OB/GYN practices and hospitals in North America, combining predictive ML with RAG-based LLM explainability. Emphasizes real production ownership across experimentation, deployment, monitoring, and iteration, with reported impact including fewer delayed interventions in high-risk pregnancies and a 15-20% reduction in false positives.”
Mid-level AI Engineer specializing in Python, LLMs, and production ML systems
“Production-focused ML/AI engineer with hands-on ownership across classical ML and GenAI systems, from CV/NLP services to enterprise RAG. Stands out for combining research-to-production execution with measurable business impact: 40% processing-efficiency gains, 35% fewer support tickets, 5x latency improvement, and 3x throughput gains while maintaining safety and quality.”