Pre-screened and vetted.
Mid-level Product Manager specializing in customer experience, analytics platforms, and GenAI
Senior AI/ML Engineer specializing in Generative AI and LLM applications
Mid-level AI/ML Engineer specializing in LLM RAG pipelines and cloud MLOps
Mid-level AI/ML Engineer specializing in GenAI agents and production ML systems
Mid-level AI/ML Engineer specializing in LLM, RAG, and multimodal systems
Senior AI/ML Engineer specializing in healthcare LLMs and conversational AI
Senior AI/ML Engineer specializing in healthcare and fintech AI systems
Senior Full-Stack Python Developer specializing in cloud, data platforms, and GenAI
Mid-level Machine Learning Engineer specializing in fraud detection and recommendations
Mid-level Agentic AI & ML Engineer specializing in LLM agents and RAG systems
Senior Software Engineer specializing in cloud, data platforms, and LLM/RAG applications
Mid-level AI/ML Engineer specializing in cloud MLOps and GenAI for fraud detection
Mid-level Data Scientist specializing in FinTech and product analytics
Senior AI Engineer specializing in healthcare and FinTech AI systems
Junior Machine Learning Engineer specializing in LLMs and data pipelines
Research Extern at Google DeepMind and former AWS Software Development Engineer Intern with a strong focus on practical, trustworthy AI engineering. Led a 3-person team building a multi-agent RAG system for personalized news headline generation, using a fine-tuned Flan-T5 model, parallel critic agents, FAISS retrieval, and style embeddings.
Mid-level Full-Stack Developer specializing in cloud-native backend services and real-time data platforms
Backend/data engineering candidate with Netflix experience designing and migrating analytics platforms from batch to real-time streaming (Kafka/Flink) across AWS and GCP. Delivered measurable improvements (40% lower data latency, 99.9% accuracy) through phased rollouts, automated data validation (Great Expectations), and strong observability (Prometheus/Grafana), and proactively hardened pipelines with idempotent processing to prevent duplicate handling of Kafka messages.