Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in Generative AI and RAG assistants
Mid-level Full-Stack Software Developer specializing in cloud microservices and healthcare interoperability
Senior Full-Stack Software Engineer specializing in Python, React, and LLM-powered applications
Mid-level Generative AI & ML Engineer specializing in production LLM and RAG systems
“AI/ML engineer who shipped a production product for blood-test report understanding and personalized supplement recommendations, built on a LangGraph multi-agent pipeline running on AWS serverless with OCR via Bedrock and RAG over vetted clinical research. Also built end-to-end recommender-system pipelines at ASANTe using Airflow (ingestion, embeddings/features, training, registry, batch scoring/monitoring) with KPI reporting to Tableau, and maintains a strong focus on safety, evaluation, and measurable reliability.”
Executive Software Architect specializing in Cloud, Security, and SaaS
“Cloud consulting veteran building DragnCloud, an agentic cloud infrastructure builder aimed at teams outgrowing Heroku/Supabase who need complex setups (data pipelines, custom LLM hosting, multi-tier apps) without hiring consultants. Has SaaS background and has served as a Fractional CTO for funded companies; currently focused on landing the first 20 customers and expanding automation into higher-value workflows.”
Mid-level Generative AI Engineer specializing in LLM agents and RAG applications
“GenAI builder and technical lead with ~2 years of hands-on production experience, including GENIE (a GenAI sandbox for ~44,000 Massachusetts public-sector employees) and A-IEP, a multilingual platform helping parents understand complex IEP documents (cut processing time from ~15 minutes to ~2 minutes; used by 1,000+ parents). Strong in RAG/agentic architectures, AWS serverless + Step Functions orchestration, and rigorous evaluation/guardrails for reliable real-world deployments.”
Junior AI Software Engineer specializing in LLM agents, RAG, and healthcare NLP
“Backend engineer who built an agentic LLM system for private equity/finance that answers questions over enterprise contracts and documents using a vector-db RAG pipeline. Differentiator is a trust-focused citation framework (with highlighted source text) to reduce hallucinations in high-stakes workflows, plus strong DevOps experience deploying microservices on Kubernetes with Helm/GitOps and building Kafka real-time pipelines.”
Mid-level AI/ML Engineer and Data Scientist specializing in LLMs and MLOps
“Data science/AI intern at University at Buffalo Business Services who built and deployed production systems spanning classic ML and LLM assistants. Delivered real-time competitor intelligence for a Cornell-partnered, $1B beverage launch by scraping/cleaning 5,000+ SKUs and deploying models via API, then built a domain-aware LLM assistant to modernize Excel-based workflows with strong grounding, privacy controls, and sub-5s latency.”
Senior Machine Learning Researcher/Engineer specializing in temporal modeling and production ML systems
“Backend engineer who built and evolved a startup data-processing backend (Express.js/MySQL) handling millions of user data points, with a microservices pipeline integrating multiple social media APIs. Emphasizes reliability and security through comprehensive testing, robust error/retry handling for sequential pagination constraints, and tight IAM/JWT/OAuth-based access controls.”
Mid-level Software Engineer specializing in AI/ML and cloud data platforms
“ML engineer with hands-on experience taking a Gaussian Process Regression-based intelligent survey timing system from build to real-world deployment, including a 3-week RCT with 120 participants and measurable gains (a 15% improvement in response rate and a 23% improvement in data quality). Also served as a key technical resource at CData for customer-facing demos and debugging hundreds of production issues, bridging engineering with Sales and Customer Success.”
Intern AI Engineer specializing in LLMs, NLP, and conversational search
“Student building a production trip-planning LLM agent (LangChain + Streamlit) that routes user queries across multiple tools (maps/places/Wikipedia). Implemented zero-shot multi-label intent detection with priority rules to handle multi-intent requests, and collaborates with a startup product manager to shape tone, features, and user experience.”
Director-level AI Engineer specializing in computer vision and LLM/RAG platforms
“Hands-on LLM/RAG engineer with production experience improving retrieval quality and stability by addressing messy data, vector DB inaccuracy, and top-K issues—ultimately redesigning to hybrid search with tuned keyword/semantic weighting and MCP-based data supplementation. Also brings strong AKS/Kubernetes deployment experience, optimizing CI/CD speed via lightweight local Docker validation and decomposing pods to avoid full rebuilds, plus a metrics-driven approach to agent/workflow testing and traceability.”
Mid-level Software/Data Engineer specializing in LLM apps, RAG pipelines, and cloud microservices
“Backend/data engineer who built an enterprise LLM assistant (AI Genie) at Broadband Insights using a LangChain + GPT-4 + Pinecone RAG pipeline to automate broadband analytics reporting. Developed Python/Dagster ETL processing 10M+ records/day and improved data freshness by 60%, with production-grade scalability patterns (async workers, containerized microservices, Kubernetes) and strong multi-tenant isolation practices.”
Mid-level ML Engineer specializing in LLMs, Generative AI, and MLOps
“AI/ML engineer with production experience building an enterprise network-fault prediction assistant that combines anomaly detection (Isolation Forest + LSTM) with an LLM layer for incident diagnosis and recommended resolutions. Hands-on with orchestration (Airflow, Prefect, Dagster) to run ETL/ELT and automated training/fine-tuning workflows, and has delivered AI solutions in partnership with non-technical stakeholders (e.g., retail customer-support ticket categorization and response suggestions).”
Mid-Level Cloud-Native Software Engineer specializing in microservices, DevOps, and AI integration
“Backend-focused Python engineer who owned high-traffic internal services end-to-end (FastAPI/Django) including REST/GraphQL APIs, PostgreSQL optimization, async task processing via SQS, and full CI/CD. Strong Kubernetes-on-EKS and GitOps (ArgoCD + Helm) experience, plus Kafka real-time streaming work and phased cloud-to-on-prem migration support.”
Mid-level Full-Stack Engineer specializing in AI-powered internal tools
“Backend/platform engineer with strong ownership of production systems, including a full Azure migration from a VM-based monolith to a containerized, event-driven microservices architecture. Combines cloud infrastructure work, LLM/RAG optimization, and pragmatic stakeholder management, with measurable wins including 90% infra cost reduction, faster deployments, and significantly improved latency and token efficiency.”
Mid-level AI Engineer specializing in LLM apps, RAG pipelines, and multi-agent systems
“AI Engineer at Humanitarian AI who has built and productionized both a LangGraph-based multi-agent workflow system and a RAG pipeline (OpenAI embeddings + vector DB) with rigorous evaluation/guardrails. Reports strong measurable impact (60% faster workflow delivery, 40% fewer incidents, 70% reduced research time) and has prior enterprise modernization experience at Infosys migrating ETL to microservices with zero production incidents.”
Mid-level AI/ML Engineer & Data Scientist specializing in NLP and Generative AI
“Built and deployed an agentic RAG platform at Centene Health to support healthcare claims and complaints workflows (Q&A for claims agents, executive complaint summarization, and compliance triage/classification). Experienced in LangChain/LangGraph orchestration, production deployment on AWS with FastAPI/Docker/Kubernetes, and implementing HIPAA-compliant guardrails to reduce hallucinations and ensure explainable outputs.”
Mid-level Generative AI Engineer specializing in LLMs, RAG, and agentic systems
“Built a production ‘Mini RAG Assistant’ for internal document Q&A, focusing on grounded answers (anti-hallucination), retrieval quality, and latency/cost optimization. Uses LangChain/LangGraph for orchestration and applies a metrics-driven evaluation loop (including reranking and semantic chunking improvements) while collaborating closely with product stakeholders.”
Junior Full-Stack Software Engineer specializing in React/Node, cloud, and LLM-powered automation
“Master’s program project lead who built and deployed a real-time sound recognition system (Flask + React Native + ML) that was adopted by 200+ university students. Demonstrates strong production engineering and cross-layer debugging—solving latency, unreliable uploads, and observability gaps using microservice separation, chunked/idempotent transfers, and packet-capture-driven network diagnosis—plus AWS/on-prem and IoT edge-to-cloud integration experience.”
Mid-level GenAI Engineer specializing in RAG, LLM agents, and enterprise automation
“Accenture engineer who built and shipped a production RAG-based automation/chatbot for SAP incident triage and troubleshooting, embedding thousands of runbooks/logs/tickets into a semantic search pipeline and integrating it into Teams/Slack. Reported major productivity gains (30–60% time reduction), >90% validated answer accuracy, and sub-2-second responses, with strong orchestration (Airflow/Prefect/LangGraph) and reliability practices (guardrails, testing, monitoring).”
Mid-Level Software Development Engineer specializing in GenAI automation and cloud systems
“Backend Python engineer who architected an event-driven order integration engine connecting EDI vendors to ERP/WMS/3PL systems, including a canonical order model and adapter framework to eliminate per-customer hardcoding. Has hands-on Kubernetes production experience (microservices, Celery workers, CronJobs, HPAs) and implemented GitOps-based CI/CD using GitHub Actions, Docker, and ArgoCD, including moving deployments from on-prem to Azure.”
Senior Data Scientist specializing in LLM applications, RAG systems, and production ML
“Senior Data Scientist in consulting who has built production RAG systems for insurance/annuity document search at large scale (100K+ PDF pages), emphasizing grounded answers, guardrails, and low-latency retrieval. Experienced in end-to-end MLOps for LLM apps—monitoring, evaluation sets, drift handling, and safe rollouts—and in orchestrating complex pipelines with Prefect/Airflow and deploying services on Kubernetes.”