Intern Generative AI Engineer specializing in RAG and multi-agent systems
“Built and deployed a production RAG-based multi-agent chatbot during an internship to help consultants answer client questions and guide users through new IT systems with step-by-step instructions. Demonstrates hands-on experience with LangGraph/LangChain/Google ADK, unstructured document parsing and chunking for RAG, and a reliability-first approach to agent workflows (metrics, fallbacks, human-in-the-loop, guardrails).”
Executive Technology Leader (CIO/CTO/CDO) specializing in AI, cloud, and data strategy
“Founder with an end-user-ready product who has self-funded and taken angel investment but has not yet launched marketing in earnest. Motivated by having identified a market gap with a differentiated product, and now seeking value-add investors who can provide both capital and go-to-market expertise and connections.”
Intern-level Software Engineer specializing in GenAI, RAG, and backend systems
“AI/LLM engineer focused on shipping production-grade agents that automate support, sales intake, and ERP-connected workflows. Stands out for combining strong orchestration and guardrails with measurable business outcomes, including 45% faster support handling, ~$1.2M annual savings, 18% higher customer satisfaction, and 99.5%+ reliability in production.”
Senior Data Engineer specializing in FinTech analytics and ML data platforms
“ML/AI engineer with Goldman Sachs experience building production fraud detection and RAG-based trading insights systems end-to-end. Stands out for combining real-time ML infrastructure, GenAI retrieval systems, and compliance-aware design, with measurable impact including nearly 25% false-positive reduction and improved analyst productivity.”
“Built and owned end-to-end production systems for a healthcare platform, including a predictive task recommendation feature (React + FastAPI + ML on AWS ECS) that cut backlog 20% and saved coordinators ~10 hours/week. Also productionized an AI-native RAG system (vector DB + LLM) delivering 40% faster query resolution, and led phased modernization of a monolithic FastAPI service into async microservices using feature flags and canary releases.”
Mid-level Data Engineer specializing in cloud-native analytics and enterprise integrations
“Built and productionized an LLM-powered clinical assistant at a healthcare startup, re-architecting a prototype into a robust RAG system on AWS with guardrails, citations, monitoring, and automated tests for clinical reliability. Works closely with clinicians to convert workflow feedback into evaluation criteria and iterative system improvements, and has hands-on experience debugging agentic systems in real time (including during live client demos).”
Mid-level AI/ML Engineer specializing in NLP, LLMs, and MLOps for healthcare and finance
“Built a production LLM-powered RAG agent for healthcare/insurance operations that retrieves and summarizes patient medical documents with grounded citations, scaling to ~4.5M records. Addressed medical shorthand and terminology by fine-tuning ~120 lightweight DistilBERT models by specialty and validating entities against SNOMED/RxNorm, while using SHAP/LIME and human-in-the-loop review to make decisions explainable to stakeholders.”
Intern Data Scientist specializing in generative AI and forecasting
“ML/NLP practitioner working across healthcare and business/finance use cases: currently fine-tuning a domain-specific Llama 3.1 model for safe reasoning over EHRs/clinical notes using RAG + RL/DPO and RAGAS-based evaluation. Has built UMLS-driven entity normalization pipelines with quantified quality gains and developed embedding/vector-DB systems (FAISS) for semantic matching and forecasting/recommendation applications at Aurora AI and Banxico.”
Intern Software Engineer specializing in ML/NLP and LLM applications
“Full-stack AI/LLM engineer who has deployed a production LLM backend (Mistral 14B) on GKE to auto-transform datasets and generate runnable ML training pipelines, addressing hallucinations, schema mismatch, latency, and burst scaling with caching/prompt compression and HPA. Also has internship experience (Splunk, BlackOffer) delivering data automation and 10+ Power BI dashboards for non-technical stakeholders with measurable efficiency gains.”
Mid-level Full-Stack Engineer specializing in AI and FinTech platforms
“Full-stack engineer building real-time internal banking operations dashboards (Java/Spring Boot microservices + React/TypeScript) with Kafka-based streaming and post-launch performance optimizations. Also shipped a production internal AI support assistant using RAG (Confluence/PDF/support docs ingestion, embeddings + vector DB retrieval) with guardrails, evaluation loops, and observability to reduce hallucinations and prevent regressions.”
Intern Full-Stack Software Engineer specializing in AI/LLM platforms and data systems
“Backend/LLM engineer with experience productionizing RAG systems (legal-case natural language querying) and optimizing for latency/cost, including a reported ~40% reduction via Redis caching and batching. Built monitoring and real-time debugging workflows (FastAPI, structured logging, correlation IDs, sandbox repro) and regularly delivered technical demos/workshops. Also partners with BD/sales to translate LLM capabilities into business value, including ESG-metric extraction from corporate filings.”
Senior AI/ML Engineer specializing in Generative AI and agentic multi-agent systems
“Built and shipped a production LLM-powered multi-agent RAG system to automate complex internal support workflows, integrating tool execution (SQL/APIs) with validation guardrails to reduce hallucinations. Optimized for real-world latency and cost via model routing, caching, and async parallel tool calls, and enforced reliability with CI-gated golden test sets derived from anonymized production queries.”
Mid-level Generative AI Engineer specializing in LLM fine-tuning, RAG, and agentic systems
“Built and deployed a production multi-agent RAG system at JPMorgan Chase to automate regulated credit analysis and compliance clause discovery across large internal policy/document libraries. Implemented LangGraph-based supervisor orchestration with structured state management (Azure OpenAI) to support long-running, resumable workflows, plus hybrid retrieval + re-ranking and guardrails for reliability. Strong at evaluation/observability (trace logging, LLM-judge, HITL) and at communicating results to non-technical stakeholders via Power BI embeds and Streamlit prototypes.”
Senior Software Engineer specializing in backend infrastructure, cloud automation, and reliability
“End-to-end deployment owner for Oracle document delivery/print services in a hospital-like production environment, focused on reliability/performance at scale (thousands of systems). Also describes implementing event-driven RAG/agentic LLM workflows with attention to embeddings/index consistency, latency, and measurable improvements in response relevance and operational efficiency.”
Senior AI/ML Engineer specializing in LLMs, NLP, and enterprise conversational AI
“Built and owned a production conversational AI platform for a healthcare contact center, including RAG-based agent assist, hybrid retrieval, safety guardrails, and production monitoring. Stands out for combining LLM product delivery with strong operational rigor, driving a reported 25-30% improvement in handling time in a sensitive healthcare environment.”
Mid-level Software Engineer specializing in backend, cloud infrastructure, and AI systems
“Built and launched a production self-healing MLOps agent that autonomously diagnosed and fixed model training failures on Kubernetes GPU infrastructure. Combines deep AI infrastructure knowledge with full-stack product ownership, and has delivered measurable impact including 35% less infrastructure waste, nearly 50% less troubleshooting time, and 60% lower LLM API costs.”
Mid-level Software Engineer specializing in Java microservices and GenAI automation
“Software engineer (4+ years) with hands-on production GenAI experience: built an AI incident triage assistant that summarizes production logs for on-call engineers and iterated it using real incident metrics (time-to-signal, triage duration). Also shipped a RAG-based customer support knowledge assistant using embeddings + vector retrieval with strong guardrails (relevance thresholds/abstain, sanitization, auditing) and a formal eval loop (500-query gold set) that drove measurable retrieval improvements.”
Mid-level AI/Analytics Product & Data Professional specializing in LLM and dashboard automation
“Built and shipped open-source LLM/RAG systems, including a generative AI assistant grounded on ~30,000 scraped university web pages, improving response accuracy ~30% by moving from TF-IDF-only retrieval to a hybrid sentence-transformer approach with fallback controls. Also partnered with non-technical leadership at Securi.ai to deliver real-time predictive analytics dashboards (Elasticsearch + Jira/ServiceNow) that reduced project overhead by 18%.”
Mid-level Full-Stack Engineer specializing in scalable APIs, cloud infrastructure, and GenAI apps
“Backend/platform engineer with experience across edtech, logistics, and AWS internal systems—owned a production course recommender end-to-end (model serving + APIs + caching/observability), delivering +30% CTR and -20% latency. Has scaled real-time delivery visibility/rerouting on Kubernetes/EKS to sub-200ms P95 during demand spikes and built billion-events/day telemetry pipelines on AWS (Kinesis Firehose, Lambda, S3, Redshift) with schema evolution, dedupe, and replay support.”
Mid-level Software Engineer specializing in cloud platforms, data engineering, and distributed systems
“Full-stack engineer who built and owned an AI-assisted job-matching dashboard in Next.js App Router/TypeScript, keeping LLM logic server-side and improving performance via deduplication, caching/revalidation, and streaming (35% fewer duplicate LLM calls; 40% faster first render). Also has strong data/backend chops: designed Postgres models and optimized queries at million-record scale (1.8s to 120ms) and built durable AWS multi-region telemetry workflows with idempotency, retries, and monitoring.”
Mid-level GenAI Engineer specializing in RAG, LLMs, and enterprise AI
“Built and shipped production LLM agents that automate document processing and decision workflows, with a strong focus on reliability, guardrails, and measurable business impact. Stands out for combining RAG, tool calling, evals/monitoring, and ERP integration to deliver 30-35% manual effort reduction and higher throughput without additional headcount.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and predictive analytics
“GenAI/LLM engineer who architected and deployed a production RAG 'research assistant' for JPMorgan Chase's regulatory compliance team, focused on safety-critical behavior (mandatory citations, refusal when evidence is missing). Deep hands-on experience with LlamaIndex, Pinecone, Hugging Face embeddings, LangGraph agent workflows, and metric-driven evaluation (golden sets, TruLens), including a reported 28% relevancy lift via cross-encoder re-ranking.”
Mid-level Machine Learning Engineer specializing in LLMs and AI products
“Applied ML/LLM engineer currently building AppleCare's production chat recommender, owning the full lifecycle from transcript cleaning and fine-tuning through distributed deployment, monitoring, and iterative improvement. Their work delivered a >10% copy-count improvement, a 5% lower modification rate, a 60% cost reduction, and $1.1M in profitability in 2025. They also created a reasoning-data generation approach that enabled both a reasoning model and a judge model, cutting eval time by over 99%.”
Senior AI & Machine Learning Engineer specializing in GenAI, Agentic AI, and RAG
“Built a production agentic AI system to automate data science work using a layered architecture (executive-summary handling, tool-based execution, and on-the-fly code generation). Demonstrates strong end-to-end agent development practices including RAG with vector databases, prompt engineering, and multi-method evaluation (LLM-as-judge/human/code-based), plus Airflow-based orchestration for ML data pipelines and close collaboration with business end users.”