Pre-screened and vetted.
Mid-level Software Engineer specializing in cloud-native microservices and distributed systems
Senior Machine Learning Engineer specializing in GenAI and LLM-powered systems
Mid-level Machine Learning Engineer specializing in NLP, time-series forecasting, and edge AI
Mid-level AI/ML Engineer specializing in NLP, LLMs, and fraud/AML analytics
Mid-level Machine Learning Engineer specializing in healthcare risk prediction and GenAI
Mid-level Machine Learning Engineer specializing in forecasting, NLP, and MLOps
Mid-level Machine Learning Engineer specializing in LLMs, RAG, and MLOps
Mid-level AI/ML Engineer specializing in NLP, fraud detection, and LLM applications
Mid-level AI/ML Engineer specializing in risk modeling, NLP, and Generative AI
Mid-level Machine Learning Engineer specializing in Generative AI, NLP, and recommender systems
Executive CIO and AI Transformation Leader specializing in cloud, cybersecurity, and enterprise automation
Mid-level Machine Learning Engineer specializing in MLOps and LLM/RAG systems
Senior AI/ML Engineer specializing in MLOps and Generative AI (LLMs/RAG)
Mid-level Applied AI Engineer specializing in Generative AI and RAG systems
Data Science/ML Engineering Intern specializing in Generative AI and ML platforms
“AI Engineering Intern at The Etherloop building the backend for a healthcare lifestyle recommendation app, including a multi-agent RAG system that combines curated SME data with web search to generate personalized supplement recommendations from user lifestyle details and blood biomarkers. Evaluates outputs against 500+ SME ground-truth profiles using ranking metrics, with a focus on HIPAA-aligned deployment, privacy/security, and guardrails to reduce hallucinations and unsafe outputs.”
Mid-level AI/ML Engineer specializing in GenAI, computer vision, and MLOps
“AI engineer with experience taking a GPT-4-powered GenAI career coach toward production on Azure AI Foundry, re-architecting the backend with hybrid (vector + keyword) search and RAG optimizations to cut latency by 50%. Also brings client-facing TCS experience building healthcare ETL pipelines and delivering error-free monthly reports, plus current work as an AI research fellow analyzing agentic-system reasoning traces and guardrail drift.”
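Hybrid (vector + keyword) search of the kind this profile describes is often implemented by fusing the two ranked result lists; a minimal sketch using reciprocal rank fusion (the document IDs, rankings, and the `k` constant below are illustrative, not taken from the profile):

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists (e.g. from a vector index and a
    BM25 keyword index) into one ranking via reciprocal-rank scoring."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results):
            # Documents ranked highly in either list accumulate more score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings from the two retrieval paths:
vector_hits = ["doc_a", "doc_b", "doc_c"]
keyword_hits = ["doc_b", "doc_d", "doc_a"]
fused = reciprocal_rank_fusion([vector_hits, keyword_hits])
print(fused)  # doc_b ranks first: strong in both lists
```

The fixed `k` damps the influence of any single list's top hit, which is why RRF needs no score normalization across retrievers.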
Mid-level Machine Learning Engineer specializing in NLP and scalable MLOps
“Data/ML engineer in financial services (Northern Trust) who built a production RAG-based LLM system to connect structured transaction/portfolio data with unstructured market and internal documents for risk teams. Strong in end-to-end pipelines (AWS Glue/Airflow/PySpark), entity resolution, and taking models from prototype to reliable daily production with performance tuning (LoRA + TensorRT) and monitoring.”
Mid-level Machine Learning Engineer specializing in cloud-native GenAI and RAG systems
“Built and productionized an internal GenAI chatbot that makes company policy/SOP knowledge instantly searchable, using a secure RAG architecture on AWS (Bedrock/Titan embeddings/OpenSearch Serverless, Textract/Lambda/S3 ingestion, Claude 3 Sonnet). Demonstrates strong MLOps/orchestration experience (Airflow, Step Functions with Lambda/Glue/SageMaker) and a rigorous reliability approach (RAGAS metrics, A/B testing, citation validation, monitoring), including collaboration with compliance stakeholders via review dashboards.”
Mid-level AI/ML Engineer specializing in predictive modeling, data pipelines, and RAG systems
“Built and productionized an LLM-powered internal knowledge search system in a regulated environment, using embeddings/vector DB retrieval with strict grounding and confidence gating to reduce hallucinations. Reported ~45% accuracy improvement over keyword search and implemented end-to-end orchestration, monitoring, CI/CD, and incremental re-indexing to manage latency and data freshness while driving adoption with business stakeholders.”
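The strict-grounding-with-confidence-gating pattern this profile mentions commonly amounts to a retrieval-score threshold plus a refusal path; a minimal sketch where the threshold value, helper names, and toy vectors are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def answer_with_gating(query_vec, doc_vecs, docs, min_score=0.75):
    """Answer only when the best-matching passage clears a confidence
    threshold; otherwise refuse rather than risk a hallucination."""
    scored = [(cosine(query_vec, v), d) for v, d in zip(doc_vecs, docs)]
    best_score, best_doc = max(scored)
    if best_score < min_score:
        return None  # gate: no sufficiently grounded context was found
    return best_doc  # in a real system: passed to the LLM with a citation

# Toy 2-D "embeddings" for two indexed passages:
docs = ["Refund policy: 30 days.", "Office hours: 9-5."]
doc_vecs = [[1.0, 0.0], [0.0, 1.0]]
print(answer_with_gating([0.9, 0.1], doc_vecs, docs))  # grounded hit
print(answer_with_gating([0.5, 0.5], doc_vecs, docs))  # below gate: None
```

Returning an explicit refusal instead of a low-confidence answer is what trades a little recall for the hallucination reduction the profile reports.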
Junior Machine Learning Engineer specializing in semantic search and retrieval systems
“Built and shipped a production RAG system (“TROJAN KNOWLEDGE”) for answering questions over technical PDFs, using a 3-stage retrieval stack (BM25 + FAISS + cross-encoder) to lift F1 from 71% to 84%. Drove major performance gains with a 3-level cache (memory/Redis/disk) cutting latency from ~200ms to ~10ms, and added Prometheus/Grafana monitoring plus LangChain-based fallback logic to handle OpenAI rate limits under load.”
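The memory/Redis/disk cache layering described above is typically a read-through hierarchy that promotes hits toward the faster tiers; a minimal sketch where a plain dict stands in for the Redis client (class, key, and path names are illustrative):

```python
import json
import os
import tempfile

class TieredCache:
    """Read-through cache: check the fastest tier first, fall back
    downward, and promote hits back up so repeats stay cheap."""

    def __init__(self, disk_dir):
        self.memory = {}          # tier 1: in-process dict
        self.redis_like = {}      # tier 2: dict standing in for Redis
        self.disk_dir = disk_dir  # tier 3: JSON files on disk

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{key}.json")

    def get(self, key):
        if key in self.memory:
            return self.memory[key]
        if key in self.redis_like:
            value = self.redis_like[key]
            self.memory[key] = value          # promote to tier 1
            return value
        path = self._disk_path(key)
        if os.path.exists(path):
            with open(path) as f:
                value = json.load(f)
            self.redis_like[key] = value      # promote upward
            self.memory[key] = value
            return value
        return None                           # full miss

    def put(self, key, value):
        # Write through all three tiers.
        self.memory[key] = value
        self.redis_like[key] = value
        with open(self._disk_path(key), "w") as f:
            json.dump(value, f)

cache = TieredCache(tempfile.mkdtemp())
cache.put("q1", {"answer": "84% F1"})
cache.memory.clear()        # simulate a process restart
print(cache.get("q1"))      # repopulated from a lower tier
```

The large latency spread the profile cites comes from this shape: the common case is a tier-1 dict lookup, and only cold keys pay the network or disk cost.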