Pre-screened and vetted.
Mid-level AI Researcher specializing in LLMs, developer tools, and human-centered AI
“Research-focused AI engineer who built an agentic pipeline to automatically extract Sphinx-based API documentation/changelogs and generate synthetic tasks for a dynamic LLM code benchmark targeting real-world API evolution and deprecations. Experienced with multi-agent orchestration (AutoGen, LangChain, CrewAI) and rigorous evaluation methods, and has prior multi-agent work from a Microsoft Research internship.”
Junior Machine Learning Engineer specializing in LLMs and applied data science
“Built and shipped multiple production AI systems, including Auto DocGen (LLM-generated OpenAPI docs kept in sync via AST diffs, schema-constrained generation, and CI/CD on Render) and a multimodal sign-language recognition pipeline at USC orchestrated with FastAPI, MediaPipe, and PyTorch. Also partnered with Esri’s non-technical community team to fine-tune an LLaMA-based spam classifier with a review UI, cutting moderation time by 70%.”
Senior Data Scientist / ML Engineer specializing in cloud ML pipelines and GenAI
“ML/NLP practitioner with experience building a transformer-failure prediction system that combines sensor signals with unstructured maintenance comments using LLM-based extraction and similarity validation. Strong emphasis on production readiness—data leakage controls, SQL-driven data quality tiers, and rigorous bias/fairness validation (including contract/spec evaluation across diverse company profiles).”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps
“Red Hat ML/LLM engineer who designed and deployed a production LLM-powered customer support automation system using RAG, improving latency by 30% via PEFT and vector search optimization. Built security and governance into retrieval (access-level filtering, encrypted Pinecone/ChromaDB) and delivered SHAP-based explainability via a dashboard for non-technical stakeholders. Experienced orchestrating distributed ML/RAG pipelines across AWS SageMaker and OpenShift with Airflow/Prefect, plus multi-agent workflows using CrewAI and LangGraph.”
Mid-level AI/ML Engineer specializing in robotics perception and AR/VR systems
“AI engineer with robotics perception experience at Forterra, building and deploying moving-object/obstacle detection models into real-time robot pipelines. Addressed training crashes/latency via sub-batch training and optimizer tuning, and improved debugging using ROS/ROS2 tooling with 3D voxel visualization and color-coded validation.”
Intern Software Engineer specializing in backend systems, cloud infrastructure, and ML/LLM tooling
“Infrastructure-leaning engineer who has built real-time ML systems end-to-end: a Jetson-deployed adaptive Whisper ASR service (Flask + WebSockets, React/TS UI) and a high-throughput Postgres schema for live transcription. Also delivered customer-facing AI billing/OCR improvements for a dental startup (Dentite), boosting OCR performance by 38%, and has experience instrumenting open-source ML deployment stacks to add infrastructure visibility.”
Mid-level AI/ML Engineer specializing in GenAI, LLMs, RAG, and MLOps
“Built and deployed a production LLM-powered RAG document intelligence/Q&A system for healthcare prior authorization, reducing manual medical document review time and improving decision efficiency. Strong in end-to-end LLM application engineering (LangChain/LangGraph), retrieval quality improvements (hybrid search, embedding tuning, chunking strategies), and rigorous evaluation/monitoring for reliability.”
Junior Data Scientist and ML Researcher specializing in Transformers, multimodal AI, and autonomy
“Autonomous robotics student who built an end-to-end ROS2 semantic goal navigation system as a solo course project, integrating CLIP-based vision-language understanding with SLAM Toolbox and Nav2 to execute natural-language commands in Gazebo/RViz. Also implemented and tuned an RRT planner from scratch in Python and uses Docker plus GitHub workflows for reproducible, tested robotics codebases.”
Intern Machine Learning Engineer specializing in LLMs, MLOps, and NLP
“Built and deployed a production LLM-driven Dungeons & Dragons game where the model acts as a dungeon master, adding a structured combat system and a macro-state tree to ensure campaigns converge to a clear ending. Fine-tuned Gemini 2.5 Flash on Vertex AI and deployed on GCP with Kubernetes, using RAG over D&D rules/spells plus multi-agent orchestration (intent-based routing between narrative and combat agents) to reduce hallucinations and improve reliability.”
Mid-level Software Engineer specializing in NLP and search systems
“Built an AI journaling app at HackCU 2025 featuring a speaking AI avatar with long-term memory via RAG (ChromaDB) and low-latency microservices coordinated through Kafka, including deployment under AMD/non-CUDA constraints using a quantized Llama 8B model. Also has Goldman Sachs experience deploying a Trade UI on Kubernetes with CI/CD rollback automation, plus a healthcare AI internship at CU Anschutz collaborating closely with physicians on diagnostic reasoning and dataset annotation.”
Senior AI Research Engineer specializing in LLM agents and predictive maintenance
“At Delta Electronics, partnered with automotive firmware teams to productionize an LLM-based coding assistant for identifying safety standard violations and generating bug-fix guidance. Built an agentic workflow with stepwise context extraction, similarity search, and a separate judge model for scoring reasoning/retrieval, and drove internal adoption through pain-point discovery and tailored technical demos using real firmware code.”
Mid-level Software Engineer specializing in distributed systems and cloud-native backends
“AI/LLM engineer with production experience at Charles Schwab building a RAG-based assistant to help 5,000+ reps answer complex financial policy questions. Implemented a multi-layer anti-hallucination approach (GNN-driven ontology/graph retrieval + citation-only answers) and compliance-focused guardrails (Azure AI Content Safety) in partnership with audit/compliance stakeholders.”
Director-level Product & AI Platform Leader specializing in Enterprise SaaS and IT governance
“UC Berkeley CS–trained hands-on engineering leader with executive experience spanning fundraising and board/customer communication. Led architecture and roadmap for AI-driven fintech platforms (including portfolio data, market signals, document processing, and Bitcoin trading), scaling global orgs (~100 people) and driving modular API-based designs that improved reliability, onboarding speed, and customer retention.”
Junior Software Engineer specializing in ML, distributed systems, and LLM applications
“Zonda intern who built an AI-driven semantic search solution over ~280M housing/builder records. Iterated from local LLMs via llama.cpp quantization to a vector-embedding retrieval system, then boosted semantic accuracy with a custom spaCy NER layer and re-ranking, optimizing for latency through precomputation. Collaborated with economics-focused stakeholders to reduce manual document/paperwork time by enabling natural-language search over internal data.”
Junior Machine Learning Engineer specializing in MLOps and statistical modeling
“Integration engineer at ES Foundry who led deployment of ELsentinel, a production electroluminescence (EL) image-based solar cell quality monitoring system using a Swin Transformer classifier (>0.8 F1 across 15+ classes) plus a live real-time prediction dashboard. Strong in solving messy labeling/data-quality problems with process-team collaboration and shipping ML systems despite limited compute/infrastructure.”
Junior Machine Learning Engineer specializing in speech and multimodal AI
“New grad who has shipped a production vision-language recommendation feature for a pet camera/mobile app, including building a tagged video dataset with human annotators and optimizing inference by FPS downsampling under device compute limits. Also built a multimodal MLLM benchmark using an LLM-as-judge (GPT-5-thinking) with a feedback loop, validated against human scoring, and measured post-feedback quality gains (12% average score improvement).”
Mid-level Machine Learning Engineer specializing in financial AI, NLP, and MLOps
“AI/ML engineer with experience at Accenture and Morgan Stanley, building production LLM systems (GPT-3 summarization) and finance-focused ML models (credit risk and trading anomaly detection). Combines MLOps depth (Docker/Kubernetes, AWS SageMaker/Glue/Lambda, MLflow, A/B testing, drift monitoring) with practical domain adaptation techniques like few-shot prompting and RAG/knowledge-base integration.”
Mid-level Data Scientist specializing in predictive and generative AI
“AI/ML engineer with production LLM experience in regulated financial services (J.P. Morgan Chase), building a customer response engine to automate first-contact resolution while addressing privacy, bias, compliance, and scale. Strong MLOps/orchestration background (Airflow, Docker/Kubernetes, AWS Step Functions, Azure ML/SageMaker) plus proven ability to integrate with legacy systems and drive stakeholder adoption through dashboards, auditability, and training.”
Mid-level AI/ML Engineer specializing in healthcare NLP and MLOps
“Healthcare/clinical ML practitioner who built and productionized ClinicalBERT-based pipelines to extract and standardize oncology EHR data, improving downstream model F1 from 0.81 to 0.92 while controlling training cost via LoRA/QLoRA. Experienced orchestrating real-time AWS ETL/ML workflows (Glue, Lambda, SageMaker) and partnering with clinicians using SHAP-based interpretability, contributing to an 18% reduction in readmissions and full clinician adoption.”
Principal Software Engineer specializing in AI/ML and cloud-native backend systems
“McKinsey data/ML practitioner who led production deployment of an entity resolution + semantic search platform for unstructured finance and healthcare data, integrating with legacy systems under HIPAA constraints. Deep hands-on stack across transformers (spaCy/HF BERT), embeddings + FAISS, and production MLOps/workflow tooling (Airflow, Docker, CI/CD, Prometheus/Grafana), with reported gains of +30% decision speed and +25% search relevance.”
Mid-level Machine Learning Engineer specializing in MLOps, NLP, and Computer Vision
“ML/AI engineer with production experience across retail and healthcare: built a real-time computer-vision shelf monitoring system at Walmart and optimized edge inference latency by ~30% using TensorRT/ONNX and pruning. Also partnered with CVS Health clinical/pharmacy teams to deliver a medication-adherence predictive model, using Streamlit explainability dashboards and achieving an 18% adherence improvement.”
Intern AI Engineer specializing in LLM agents, RAG, and applied biostatistics
“Siemens AI engineer who shipped production multi-agent LLM systems across cybersecurity and sustainability, including a vulnerability automation agent that cut manual work by 70%. Deep in orchestration (LangGraph supervisor-worker state machines), reliability engineering (async fault tolerance, retries, spike handling), and rigorous evaluation (offline benchmarks, LLM-as-a-Judge improving label agreement by 28.9%) with measurable production guardrails.”
Mid-level AI/ML Engineer specializing in LLMs, RAG pipelines, and MLOps
“AI/ML engineer who has shipped production AI systems end-to-end, including an automated multi-channel (Gmail/WhatsApp/voice) candidate interviewing workflow and an enterprise RAG knowledge search platform. Demonstrates strong production rigor (monitoring, A/B tests, guardrails, schema validation, shadow testing) with quantified impact: ~60–70% reduction in interview evaluation time and ~20–30% relevance gains in RAG retrieval.”
Mid-level Software Engineer specializing in Unreal Engine UI architecture and performance
“Unreal Engine UI engineer focused on scalable, production-ready UI architecture (C++/Slate/UMG/CommonUI) with strong designer enablement via decoupled, interface-driven patterns and MVVM. Demonstrated measurable performance wins: replaced 200+ per-frame Blueprint bindings to cut UI prepass/paint from 4.2ms to 0.5ms and reduced VRAM by ~120MB using texture streaming proxies.”