Pre-screened and vetted.
Mid-level AI Engineer specializing in LLMs, agentic AI, and machine learning platforms
“New grad focused on AI systems and agent-based development, with hands-on experience using LLMs as a coding partner and building RAG-based document processing workflows. Stands out for practical experimentation with semantic chunking, retrieval optimization, and multi-agent architectures, including redesigning a RAG workflow by adding a reasoning agent to improve response accuracy and reliability.”
Mid-level Conversational AI Engineer specializing in enterprise chatbots and workflow automation
“Built a production LLM/RAG document extraction and game/quiz content workflow using LLaMA 2, LangChain/LangGraph, and FAISS, achieving ~94% accuracy and reducing turnaround from hours to minutes. Demonstrates strong applied MLOps/orchestration (CI/CD, MLflow, Databricks/PySpark), robust handling of noisy/variable document layouts (layout chunking + OCR fallbacks), and practical reliability practices (human-in-the-loop routing, drift monitoring, A/B testing).”
Mid-level AI Engineer specializing in NLP and production ML systems
“AI/LLM engineer who has shipped production RAG chatbots using LangChain/OpenAI with FAISS and FastAPI, focusing on real-world constraints like context windows, concurrency, and latency (reported ~40% latency reduction and <2s average response). Experienced orchestrating AI pipelines with Celery and fault-tolerant long-running workflows with Temporal, and has applied NLP model tradeoff testing (Word2Vec vs BERT) to drive measurable accuracy gains.”
Mid-level Data & Machine Learning Engineer specializing in anomaly detection and forecasting
“Built and productionized an agentic RAG assistant using Ollama + LangChain + MCP + ChromaDB to speed up and standardize access to operational knowledge from tickets and runbooks. Focused on real-world reliability: mitigated timeouts/latency with retries and concurrency limits, improved retrieval via chunking/embedding iteration, and reduced hallucinations through citation-grounding and confidence-based abstention. Also partnered with non-technical ops staff to deliver anomaly detection/monitoring by translating operational needs into model signals, thresholds, and alerting logic.”
Mid-level GenAI Engineer specializing in LLM agents and production AI workflows
“Designed and deployed end-to-end LLM-powered AI agent systems to automate knowledge-intensive workflows across marketing/GTM, recruiting, and support. Brings production reliability rigor (evaluation pipelines, monitoring, testing, A/B experiments) plus orchestration expertise (Airflow, Prefect, custom Python) and a track record of translating non-technical stakeholder goals into working AI solutions (e.g., personalized customer engagement agent at Lara Design).”
Mid-level Software Engineer specializing in Generative AI automation and secure platforms
“Backend/security-focused engineer from VeroTX who built an IdP service (Spring Boot + MongoDB on GCP) for an AI workflow platform and drove major latency improvements via caching and query/index optimization. Also shipped an AI loan-processing agent using LangChain/LangGraph, owning the document ingestion + vector database layer and designing a reliable multi-step workflow with retries, monitoring, and human-in-the-loop safeguards.”
Mid-level Software Engineer specializing in AI RAG systems and full-stack cloud applications
“AI/LLM engineer who shipped a production RAG-based knowledge assistant at SparkPlug serving 10,000+ daily users, streaming GPT-4 answers with inline citations over WebSockets. Demonstrated measurable impact (support resolution time cut 18→12 minutes; retrieval precision +~20%) and strong production rigor across ingestion, monitoring/alerting, evaluation, and messy ERP-style data integration with validation, RBAC, and idempotent operations.”
Junior AI Engineer specializing in Generative AI, RAG, and NLP
“AI/LLM engineer who has shipped a production RAG platform at Ticker Inc. on GCP (Qdrant + Postgres) delivering sub-second retrieval over 550k+ items, with measurable gains in latency and answer quality (HNSW optimization, MMR re-ranking). Also built an asynchronous LangChain/LangGraph multi-agent research system (10x faster cycles) and partnered with Indiana University doctors on synthetic patient records and ML error analysis using clinician-friendly F1/loss dashboards.”
Mid-level AI Engineer specializing in causal inference and LLM research
“LLM engineer who has deployed a production system combining LLMs with causal inference (DoWhy) to enable counterfactual “what-if” analysis for experimental research, including a robust variable-mapping/validation layer to reduce hallucinations. Also partnered with non-technical operations leadership at Irriion Technologies to deliver an AI-assisted onboarding workflow that cut onboarding time by 50% and reduced manual errors by ~40%.”
Mid-level Applied ML Engineer specializing in LLM evaluation and multimodal agent systems
“Full-stack engineer working at the intersection of product and infrastructure, building developer-facing interfaces for AI voice agents in XR/immersive environments plus telemetry-heavy analytics dashboards. Experienced in Postgres telemetry data modeling and performance tuning, and in designing durable multi-step LLM pipelines with idempotency, retries, and strong observability; has operated in fast-moving startup-like teams (Biocom, HandshakeAI).”
Mid-level AI/ML Engineer specializing in Generative AI and LLM-powered NLP
“LLM/AI engineer who built a production automated document-understanding pipeline on Azure using a grounded RAG layer, designed to reduce manual review time for unstructured financial documents. Demonstrates strong real-world scaling and reliability practices (Service Bus queueing, Kubernetes autoscaling, observability, retries/circuit breakers) plus rigorous evaluation (shadow testing, traffic replay, multilingual edge-case suites) and stakeholder-friendly, evidence-based explainability.”
Mid-level Full-Stack Engineer specializing in modern web applications
“Built and launched a production AI chat assistant inside a data processing platform, focused on helping users understand large table outputs and job results faster. Brings strong end-to-end product engineering across React/TypeScript frontend, backend APIs, and LLM integration, with a clear emphasis on reliability, safe behavior, and iterative quality improvements after launch.”
Entry-level Software Engineer specializing in systems, data, and full-stack development
“Built a production-style hackathon prototype for analyzing healthcare facility data and identifying medical deserts via natural-language queries. Stands out for a pragmatic applied-AI approach: separating retrieval from LLM reasoning, using structured JSON outputs, and designing fallbacks and data-quality checks to keep recommendations grounded and reliable.”
Mid-level Backend & Blockchain Engineer specializing in Cosmos SDK and EVM
“Built and productionized an LLM+RAG lending assistant on AWS to help loan officers quickly answer questions from credit policies and prior decisions, tackling hallucinations with retrieval-only responses and a no-context fallback. Also automated end-to-end ETL and model retraining/deployment using Apache Airflow, and has experience translating clinical stakeholder needs (doctors/care managers) into ML features, metrics, and dashboards.”
Mid-level IT & Cloud Security Specialist specializing in GRC, SOC workflows, and agentic AI automation
“Builder/creator who ships practical AI automations and content workflows: created a no-backend website that uses ChatGPT to generate AI agents and manual workflows, and built an inbound/outbound receptionist using n8n and Retell AI (later migrated to Retell workflows). Also produces an AI-written and AI-produced podcast with 55+ hosts, and uses tools like Descript and Sora with make.com for batch content creation and scheduling.”
Mid-level AI Engineer specializing in AI agents, RAG pipelines, and LLM evaluation
“Built and shipped production LLM systems at Founderbay, including a low-latency voice agent and a graph-based multi-agent research assistant. Strong focus on reliability in real workflows—hybrid SERP + full-site scraping RAG, grounding guardrails, validation checkpoints, and transcript-driven evaluation—plus performance tuning with async FastAPI, Redis caching, and containerization. Also partnered with a non-technical ops lead to automate post-call follow-ups via call summarization, field extraction, and tool-triggered actions.”
Mid-level Data Scientist specializing in ML, LLM pipelines, and MLOps
“Built and deployed a production LLM-driven document understanding pipeline using LangChain/LangGraph, focusing on reliability via step-by-step prompting, validation checks, and monitoring. Also partnered with non-technical marketing stakeholders at Heartland Community Network to deliver an XGBoost targeting model surfaced in Power BI, improving campaign conversion by 12%.”
Intern Data Scientist specializing in machine learning and NLP
“Analytics-focused early-career candidate with internship experience owning reporting and system performance analysis projects end to end. They combine SQL data preparation, Python automation, and dashboard delivery with measurable impact, including roughly 50% less manual reporting and about 20% better forecast accuracy.”
Mid-level AI/ML Engineer specializing in LLM systems and MLOps
“Built and deployed an AI tutoring assistant end-to-end at Nexora School, spanning discovery with school districts, multi-agent LangGraph/RAG architecture, AWS Bedrock migration, and post-launch stabilization. Stands out for combining hands-on LLM systems engineering with strong educator-facing trust building, FERPA-driven architecture decisions, and disciplined production practices around evals, logging, and messy document ingestion.”
Senior Full-Stack Engineer specializing in web, mobile, and AI products
“Solo developer who built and operated an AI debate product end-to-end, from architecture and deployment through observability and post-launch stabilization. They show strong practical LLM production experience, using Vercel AI SDK, OpenAI, Langfuse, Mem0, and custom RAG, while improving latency to under 4 seconds, driving failures to near zero, and cutting LLM usage by 20%.”
Intern AI/ML Engineer specializing in LLMs, RAG, and agentic automation
“Built and deployed production NLP/LLM systems including a multilingual (5-language) health misinformation detection pipeline with latency optimization (batching/quantization/caching) and explainability (gradient-based attention visualizations). Experienced orchestrating end-to-end AI workflows with Airflow and Prefect, and partnering with customer support ops to deliver an AI agent for ticket summarization and priority classification with clear, measurable acceptance criteria.”
Junior Data Engineer specializing in LLM agents and RAG pipelines
“Built and deployed “ApartmentFinder AI,” a multi-agent system using Google ADK, Gemini, and Google Maps MCP to automate apartment shortlisting and commute-time analysis, cutting a 45–70 minute user workflow down to ~30 seconds. Also has strong delivery/process chops from serving as an SDLC Release Coordinator, managing 52+ releases and reducing SDLC issues by 84%.”
Mid-level Data Scientist specializing in NLP, recommender systems, and ML deployment
“At Provenbase, built and shipped a production LLM-powered semantic search and candidate matching platform (RAG with GPT-4/Gemini, multi-agent orchestration, Elasticsearch vector search) to scale sourcing across 10M+ candidate records and 1000+ data sources. Drove sub-second performance, cut LLM spend 30% with routing/caching, and improved recruiting outcomes (+45% sourcing accuracy; +38% visibility of underrepresented talent) through bias-aware ranking and tight collaboration with recruiting stakeholders.”
Junior Machine Learning Engineer specializing in predictive modeling and GenAI RAG systems
“LLM engineer who built and deployed an emotionally intelligent AAC communication system using an emotion-aware RAG pipeline (Empathetic Dialogues + GoEmotions) and a PEFT-adapted model. Experienced with LangChain/LangGraph and custom Python orchestration, focusing on reliability (guards, schema validation, fallbacks), latency optimization, and rigorous evaluation (automatic metrics + human-in-the-loop), with a reported 18% user satisfaction improvement.”