Pre-screened and vetted.
Mid-level Software Developer specializing in AI-powered full-stack web applications
Junior Machine Learning Engineer specializing in LLM agents, knowledge graphs, and multimodal AI
Mid-level AI/ML Engineer specializing in LLMs, RAG, and agentic AI systems
Mid-level Software Engineer specializing in full-stack and AI-enabled platforms
Mid-level Machine Learning Engineer specializing in LLMs and RAG systems
Junior Machine Learning Engineer specializing in deep learning and healthcare AI
Data Scientist / ML Engineer Intern specializing in predictive modeling and data pipelines
Senior Robotics Software Engineer specializing in C++/Python and ROS2 navigation
Mid-level AI/ML Engineer specializing in cloud AI, MLOps, and NLP
Mid-level AI/ML Engineer specializing in MLOps, streaming data, and NLP/CV
Mid-level AI/ML Engineer specializing in GenAI, RAG, and multi-agent LLM systems
Mid-level Data Analyst / Business Analyst specializing in healthcare and operations analytics
Mid-level Software Engineer specializing in AI, backend systems, and full-stack development
Mid-level AI/ML Engineer specializing in LLMs, MLOps, and FinTech analytics
Junior AI Integration Engineer specializing in LLM agents and RAG on cloud platforms
“Built and deployed LLM-powered features for a startup's organizational management application, focusing on real-world deployment constraints such as latency and cost. Implemented RAG with FAISS and improved retrieval quality by switching embedding models (OpenAI/Hugging Face) and fine-tuning embeddings on medical corpora for a medical-report UI feature. Orchestrates multi-node LLM workflows with LangChain and LangGraph and evaluates systems on metrics such as latency, cost per request, and error taxonomy.”
Junior Data & AI Engineer specializing in cloud AI and analytics
“Built production AI backend systems in healthcare and e-commerce, including a healthcare agent that automated clinical workflows like medication refills, immunizations, and scheduling using FHIR APIs and cloud-native infrastructure. Strong in end-to-end backend ownership, LLM orchestration, and adding guardrails/validation for high-stakes and customer-facing AI workflows.”
Entry-level AI Engineer specializing in LLMs, RAG, and MLOps
“Built and shipped a production Python-based agentic RAG document retrieval system over 80K records using FastAPI, OCR, vector search, and AWS infrastructure, with a strong emphasis on reliability, testing, and observability. Stands out for treating AI failures like production incidents, turning hallucinations, retrieval misses, and OCR issues into regression tests, and for measurably reducing document lookup time from roughly 12 minutes to under 90 seconds.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and Generative AI
“Built and deployed a production LLM-powered text-to-SQL/document intelligence chatbot on AWS that lets non-technical business users query complex enterprise databases in plain English. Demonstrates deep practical expertise in schema-aware prompting, embeddings-based schema retrieval, SQL safety/validation guardrails, and rigorous offline/online evaluation with human-in-the-loop approvals for risky queries.”
Entry-level Software Engineer specializing in AI APIs and RAG systems
“Junior AI/LLM engineer who built a production-oriented RAG onboarding and knowledge assistant that ingests GitHub repos and internal sources (e.g., Confluence/Jira) using ChromaDB, with reliability features such as retrieval fallbacks, retries, caching, and monitoring. Currently implementing a LangGraph-based multi-agent workflow with intent routing and Pydantic/Magentic-validated structured outputs, plus offline evals in CI/CD and online metrics (Grafana/Prometheus) to improve predictability and reliability.”
Mid-level Data Scientist specializing in Generative AI and LLMOps
“Built a production-grade, semi-automated document recognition and classification system for large volumes of scanned PDFs, starting from little/no labeled data and handling highly variable scan quality. Deployed on AWS using SageMaker + Docker and orchestrated on EKS with a microservices design that scales CPU-heavy OCR separately from GPU inference, with strong reliability controls (validation, fallbacks, retries, readiness probes).”