Junior Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“AI/ML engineer who has shipped production systems across computer vision and conversational agents: built a YOLOv8-based wheel fitment pipeline at a Techstars-backed automotive startup, focusing on sub-second latency, monitoring, and robust fallback mechanisms that drove 2–3x page view growth and +5–6k users. Also built a voice-based interview platform orchestrating Deepgram + GPT-4o mini + OpenAI TTS with FSM-driven reliability, and has hands-on RAG experience (LangChain, hybrid retrieval, cross-encoder reranking, custom pseudo-query generation).”
Junior AI/ML Software Engineer specializing in Generative AI and scalable data pipelines
“Built and operated large-scale biodiversity/ecological research platforms, integrating 50+ heterogeneous global datasets into a unified BIEN 3 schema on PostgreSQL/PostGIS and improving data consistency by 35%. Strong production engineering background (Linux monitoring, CI/CD performance gates, Docker on AWS/Azure) plus applied AI work building a Python RAG system (0.90 precision) and halving latency with Elasticsearch.”
Intern Full-Stack Engineer specializing in AI-powered products
“Software engineer (internship experience) who built and owned an AWS serverless multi-user “challenge” feature end-to-end (UI + REST APIs + DynamoDB + deployment), cutting latency by 30%, debugging time by 50%, and join drop-offs by ~30%. Also productionized a multilingual RAG-based QA system with vector retrieval and guardrails, improving accuracy to ~85% and driving ~20% DAU growth.”
Entry-Level GenAI/LLM Engineer specializing in agentic systems and RAG
“LLM/AI agent engineer with consulting/contract experience (Kanhaiya Consulting LLC) who deployed a production AI agent to automate BIM list workflows end-to-end—from database understanding and data cleaning to automated visualizations/dashboards. Worked around restricted real-time data access by generating synthetic data and improving outputs via supervised fine-tuning, and uses AWS-based LLMOps observability (Opic/OPEC) plus hybrid retrieval (vector+BM25 with reranking) to optimize relevance, latency, and cost.”
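The hybrid retrieval pattern this profile names (vector + BM25 with reranking) can be sketched in pure Python. The corpus, whitespace tokenization, the token-overlap stand-in for embedding similarity, and the reciprocal-rank-fusion constant below are all illustrative assumptions, not the candidate's actual implementation:

```python
import math
from collections import Counter

# Toy corpus; in the profile's setting these would be BIM workflow documents.
DOCS = [
    "clean the bim model element list before export",
    "generate synthetic data for restricted environments",
    "dashboard visualizations from cleaned bim data",
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Minimal BM25 over whitespace tokens (sketch, not a tuned scorer)."""
    tokenized = [d.split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            f = tf[term]
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def vector_scores(query, docs):
    """Stand-in for dense embedding similarity: token-overlap cosine."""
    q = Counter(query.split())
    out = []
    for d in docs:
        c = Counter(d.split())
        dot = sum(q[t] * c[t] for t in q)
        norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in c.values()))
        out.append(dot / norm if norm else 0.0)
    return out

def hybrid_rank(query, docs, k=60):
    """Fuse the two rankings with reciprocal rank fusion before any reranker."""
    def ranks(scores):
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        return {doc_i: r for r, doc_i in enumerate(order)}
    r1 = ranks(bm25_scores(query, docs))
    r2 = ranks(vector_scores(query, docs))
    fused = {i: 1 / (k + r1[i]) + 1 / (k + r2[i]) for i in range(len(docs))}
    return sorted(range(len(docs)), key=lambda i: -fused[i])
```

In production the overlap cosine would be replaced by real embeddings and the fused top-k passed to a cross-encoder reranker; the fusion step itself stays this simple.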
Mid-level AI Engineer specializing in Generative AI, LLM fine-tuning, and RAG systems
“Built and deployed production LLM applications including a natural-language-to-read-only-SQL system focused on ambiguity handling and query safety (schema whitelisting, intent validation, confidence checks, deterministic execution). Experienced with modular LangChain-based agent orchestration and RAG document QA over large PDFs, with a metrics-driven testing/evaluation approach and cross-functional delivery with marketing on an AI content recommendation/search tool.”
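The query-safety controls listed here (schema whitelisting plus read-only enforcement) reduce to a guard placed in front of execution. The table whitelist and keyword list below are hypothetical stand-ins; a real deployment would use a proper SQL parser rather than regex, which is used here only to keep the sketch self-contained:

```python
import re

ALLOWED_TABLES = {"orders", "customers"}  # hypothetical schema whitelist
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|attach|pragma)\b", re.I)

def validate_sql(sql: str) -> bool:
    """Reject anything that is not a single read-only SELECT over whitelisted tables."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                          # no stacked statements
        return False
    if not stmt.lower().startswith("select"):
        return False
    if FORBIDDEN.search(stmt):               # no write/DDL keywords anywhere
        return False
    tables = re.findall(r"\b(?:from|join)\s+([A-Za-z_][A-Za-z0-9_]*)", stmt, re.I)
    return bool(tables) and all(t.lower() in ALLOWED_TABLES for t in tables)
```

Queries that pass the gate would then run against a read-only connection, so the validator is defense in depth rather than the only barrier.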
“Built a production AI-powered university marking system that automates question generation and grading from PDF course materials using a RAG pipeline (S3 + Pinecone) orchestrated with LangChain/LangGraph and deployed on AWS ECS via Docker/ECR and GitHub Actions CI/CD. Addressed a key real-world LLM challenge—grading consistency—by implementing rubric-based scoring, retrieval re-ranking, and standardized context summarization, validated against human instructors.”
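Rubric-based scoring of the kind this profile describes can be made deterministic by clamping whatever points a grader model awards to the rubric's per-criterion maxima. The rubric below is an invented example, not the system's actual marking scheme:

```python
# Hypothetical rubric: criterion name -> maximum points available.
RUBRIC = {"correctness": 5, "completeness": 3, "clarity": 2}

def rubric_score(awarded: dict) -> float:
    """Sum per-criterion points, clamping each to [0, max] so an LLM grader
    can never award more than the rubric allows for any criterion."""
    total = 0.0
    for criterion, max_points in RUBRIC.items():
        points = awarded.get(criterion, 0.0)
        total += min(max(points, 0.0), max_points)
    return total

def percent(awarded: dict) -> float:
    """Normalize to a percentage of the rubric's total available points."""
    return 100.0 * rubric_score(awarded) / sum(RUBRIC.values())
```

Clamping plus a fixed rubric is what makes repeated gradings comparable across students, which is the consistency problem the blurb calls out.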
Mid-level AI/ML Engineer specializing in GenAI, NLP, and production MLOps
“AI/LLM engineer who built and deployed a production healthcare RAG chatbot (“DoctorBot”) with strict medical safety guardrails, an 85% confidence-gated verification layer, and latency optimizations that cut responses from ~8s to ~2–3s. Also worked on finflow.ai to generate finance/banking test cases from BRDs, collaborating closely with non-technical domain stakeholders, and has hands-on orchestration experience with LangChain/LangGraph and agentic evaluation/monitoring practices.”
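A confidence-gated verification layer like the one described amounts to thresholding a verifier score before releasing an answer. The 0.85 gate comes from the profile; the `Draft` type and the fallback wording are assumptions for the sketch:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # the 85% gate mentioned in the profile

@dataclass
class Draft:
    answer: str
    confidence: float  # verifier score in [0, 1], e.g. from a grounding check

def gated_response(draft: Draft) -> str:
    """Release the drafted answer only when the verifier is confident enough;
    otherwise return a safe escalation message (hypothetical wording)."""
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return draft.answer
    return "I'm not confident enough to answer this; please consult a clinician."
```

In a medical setting the fallback branch is the safety-critical path: refusing below the gate is what keeps low-confidence generations from reaching patients.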
Senior ML/AI Engineer specializing in LLMs, RAG, and healthcare AI
“Built a production-grade clinical and insurance document AI system in a HIPAA/PHI-regulated environment, taking it from experimentation through Azure deployment, monitoring, and iterative improvement. Stands out for translating RAG/LLM research into reliable microservices with strong safety controls, drift monitoring, and human-in-the-loop workflows that cut manual review time by 60–70%.”
Mid-level Generative AI Developer specializing in Python and LLM applications
“Currently working on Kavia AI, an end-to-end AI coding platform that lets users generate enterprise applications from prompts and existing codebases via SCM integrations. The candidate has hands-on experience across the GenAI stack—prompt engineering, LangGraph-based multi-agent orchestration, RAG, knowledge graphs, FastAPI, and AWS monitoring—with a focus on making software creation accessible to non-technical users.”
Senior Product Manager specializing in mobile apps, API platforms, and AI-native developer tools
Mid-level Generative AI Engineer specializing in LLMs and RAG for enterprise and FinTech
Junior Machine Learning Engineer specializing in computer vision and LLM/VLM systems
Junior Applied AI Engineer specializing in conversational and voice agent platforms
Mid-level Backend/Android Engineer specializing in Kotlin and applied ML
Mid-level Machine Learning Engineer specializing in LLMs, RAG, and agentic automation
Mid-level AI/ML Engineer specializing in GenAI, RAG platforms, and ML pipelines
Senior AI Engineer & Data Scientist specializing in LLM agents and forecasting
Executive Technical Founder & Full-Stack Engineer specializing in FinTech platforms
Mid-Level Software Engineer specializing in full-stack, APIs, and embedded/IoT systems
Junior AI Engineer specializing in LLM agents, RAG, and computer vision
Mid-level AI Engineer specializing in Generative AI and LLM agent systems
Junior Data Scientist specializing in production ML, LLM systems, and cloud analytics
Mid-Level Machine Learning Engineer specializing in NLP and Generative AI