Mid-level Machine Learning Engineer specializing in LLM alignment and applied reinforcement learning
“AI/LLM engineer who has shipped production systems end-to-end, including a note-taking product (Notey) combining audio/image capture, ASR, summarization, and a semantic chat agent over past notes. Also has applied ML experience in healthcare, collaborating directly with doctors to validate an EEG seizure-detection pipeline, and uses Kubernetes to optimize GPU usage for LLM training.”
Director-level AI Engineer specializing in computer vision and LLM/RAG platforms
“Hands-on LLM/RAG engineer with production experience improving retrieval quality and stability by addressing messy data, vector DB inaccuracy, and top-K issues—ultimately redesigning to hybrid search with tuned keyword/semantic weighting and MCP-based data supplementation. Also brings strong AKS/Kubernetes deployment experience, optimizing CI/CD speed via lightweight local Docker validation and decomposing pods to avoid full rebuilds, plus a metrics-driven approach to agent/workflow testing and traceability.”
Mid-level Full-Stack Software Engineer specializing in Java/Spring, React, and AWS
“Backend/data engineer with production experience across event-driven Python ingestion services on AWS (EventBridge/SQS/MongoDB), serverless APIs (Lambda/API Gateway), and analytics ETL (Glue → Redshift). Has modernized legacy reporting into Node.js/React systems and demonstrated measurable SQL performance wins (minutes to seconds) plus strong incident ownership with validation, DLQs, and alerting.”
Mid-level Software/Data Engineer specializing in LLM apps, RAG pipelines, and cloud microservices
“Backend/data engineer who built an enterprise LLM assistant (AI Genie) at Broadband Insights using a LangChain + GPT-4 + Pinecone RAG pipeline to automate broadband analytics reporting. Developed Python/Dagster ETL processing 10M+ records/day and improved data freshness by 60%, with production-grade scalability patterns (async workers, containerized microservices, Kubernetes) and strong multi-tenant isolation practices.”
Mid-level ML Engineer specializing in LLMs, Generative AI, and MLOps
“AI/ML engineer with production experience building an enterprise network-fault prediction assistant that combines anomaly detection (Isolation Forest + LSTM) with an LLM layer for incident diagnosis and recommended resolutions. Hands-on with orchestration (Airflow, Prefect, Dagster) for ETL/ELT and automated training/fine-tuning workflows, and has delivered AI solutions in partnership with non-technical stakeholders (e.g., retail customer-support ticket categorization and response suggestions).”
Principal DevOps Architect specializing in cloud platform engineering and SRE
“End-to-end engineer focused on AI-native enterprise systems, including a production generative knowledge platform using RAG + semantic search over internal documentation (React, Python/Flask, GPU-hosted NLP models, Pinecone) with strong CI/CD and observability. Reports concrete outcomes including 40% faster knowledge access and ~75% employee adoption, and has led incremental cloud-native modernization using feature flags, parallel runs, canary releases, and regression testing.”
Mid-Level Cloud-Native Software Engineer specializing in microservices, DevOps, and AI integration
“Backend-focused Python engineer who owned high-traffic internal services end-to-end (FastAPI/Django) including REST/GraphQL APIs, PostgreSQL optimization, async task processing via SQS, and full CI/CD. Strong Kubernetes-on-EKS and GitOps (ArgoCD + Helm) experience, plus Kafka real-time streaming work and phased cloud-to-on-prem migration support.”
Junior Full-Stack AI Developer specializing in LLMs and RAG applications
“Product-minded software engineer who owned a Shopify POS app end-to-end at Swym, shipping an MVP and then scaling iteration speed with E2E automation and CI/CD—resulting in a Shopify Badge, Top-5 App Store ranking, and +40% new user acquisition. Also built an ESG insights tool using React/TypeScript + FastAPI with Snowflake and a RAG pipeline, plus microservices patterns (async jobs, queues, DLQs, autoscaling) and internal Metabase/SQL analytics dashboards.”
Senior AI/ML Engineer specializing in LLMs, RAG, and VR/XR multimodal systems
“PhD researcher (University of Utah) who built a production RAG-powered Virtual Reality Research Assistant to answer lab research questions with concrete citations. Implemented an end-to-end LangChain pipeline using PyPDFLoader, chunking strategies, OpenAI embeddings, and ChromaDB, with emphasis on grounding to reduce hallucinations and ensure research-grade accuracy. Collaborated closely with a non-technical PhD advisor to scope requirements, manage cost constraints, and demo iterative progress.”
Mid-level Full-Stack Engineer specializing in data automation, cloud & AI
“JavaScript engineer who serves as de facto maintainer of an internal, open-source-style React/Node.js shared library used by multiple teams, owning API stability, semantic versioning, CI/testing, logging, and documentation. Demonstrates strong cross-team debugging and change-management skills (schema-driven refactors, feature flags, validation layers) to ship new features without breaking existing workflows, plus a profiling/benchmarking-driven approach to performance.”
Senior Full-Stack & AI Engineer specializing in scalable web platforms and LLM automation
“Built a production agentic AI assistant in Python using Playwright plus Google Gemini’s vision capabilities to automatically document and execute UI workflows step-by-step, reducing developer time spent on trivial documentation and knowledge transfer. Also built an Apache Airflow ETL pipeline and has experience evaluating AI agents with human-in-the-loop methods, and successfully communicated a vision-model-based CMS analytics PoC to non-technical university stakeholders, proposing it to Academic Technology with a cost-savings rationale.”
Mid-level AI/ML Engineer specializing in Generative AI and LLM systems
“Senior AI/ML engineer with hands-on experience building production LLM systems in healthcare, including RAG-based clinical question answering and end-to-end MLOps on Vertex AI and Kubernetes. Combines strong platform engineering with applied GenAI work, citing a 35% improvement in factual accuracy and a 30% boost in internal team productivity through modular Python services and CI/CD.”
Senior Machine Learning Engineer specializing in NLP, LLMs, and AI systems
“AI/ML engineer with hands-on experience building a healthcare-focused generative AI application end-to-end, from architecture and data design through deployment, monitoring, and iterative improvement. Particularly strong in multi-agent LLM systems, fine-tuning, and safety guardrails, with measurable impact including a 20% accuracy lift (to 91%) and a 10% latency improvement in a nutrition recommendation pipeline.”
Mid-level ML Software Engineer specializing in real-time AI and backend systems
“AI engineer focused on production-grade LLM systems rather than prompt-only solutions, with hands-on experience building citation-grounded RAG products and multi-agent workflows. Most notably built a financial document intelligence system for SEC filings and contracts that achieved ~92% recall@5, cut latency below 2 seconds, reduced hallucinations, and turned analyst research from hours into seconds.”
Junior Software Engineer specializing in backend systems and AI data pipelines
“Backend engineer with fintech/AI startup experience who built an Azure serverless, event-driven pipeline for large-scale crypto sentiment analysis and semantic search (OCR/NLP to vector search) and integrated LLM + blockchain data for predictive insights. Demonstrated measurable impact (25% lower retrieval latency, 10% fewer data errors, 15% higher engagement) and has led safe microservices migrations with strong security and reliability practices.”
Mid-level AI Engineer specializing in LLM apps, RAG pipelines, and multi-agent systems
“AI Engineer at Humanitarian AI who has built and productionized both a LangGraph-based multi-agent workflow system and a RAG pipeline (OpenAI embeddings + vector DB) with rigorous evaluation/guardrails. Reports strong measurable impact (60% faster workflow delivery, 40% fewer incidents, 70% reduced research time) and has prior enterprise modernization experience at Infosys migrating ETL to microservices with zero production incidents.”
Intern Software Engineer specializing in backend systems and Generative AI
“Built and deployed a scalable, production-ready LLM knowledge assistant using a RAG architecture (LangChain + vector store/FAISS) to replace keyword search for internal documents. Demonstrates hands-on expertise in hallucination reduction and retrieval quality improvements through semantic chunking, similarity tuning, prompt design, and human-in-the-loop validation, plus strong stakeholder communication via demos and visual explanations.”
Mid-level Generative AI Engineer specializing in LLMs, RAG, and agentic systems
“Built a production "Mini RAG Assistant" for internal document Q&A, focusing on grounded answers (anti-hallucination), retrieval quality, and latency/cost optimization. Uses LangChain/LangGraph for orchestration and applies a metrics-driven evaluation loop (including reranking and semantic chunking improvements) while collaborating closely with product stakeholders.”
Junior Full-Stack Software Engineer specializing in React/Node, cloud, and LLM-powered automation
“Master’s program project lead who built and deployed a real-time sound recognition system (Flask + React Native + ML) that was adopted by 200+ university students. Demonstrates strong production engineering and cross-layer debugging—solving latency, unreliable uploads, and observability gaps using microservice separation, chunked/idempotent transfers, and packet-capture-driven network diagnosis—plus AWS/on-prem and IoT edge-to-cloud integration experience.”
Mid-level GenAI Engineer specializing in RAG, LLM agents, and enterprise automation
“Accenture engineer who built and shipped a production RAG-based automation/chatbot for SAP incident triage and troubleshooting, embedding thousands of runbooks/logs/tickets into a semantic search pipeline and integrating it into Teams/Slack. Reported major productivity gains (30–60% time reduction), >90% validated answer accuracy, and sub-2-second responses, with strong orchestration (Airflow/Prefect/LangGraph) and reliability practices (guardrails, testing, monitoring).”
Senior Data Scientist specializing in LLM applications, RAG systems, and production ML
“Senior Data Scientist in consulting who has built production RAG systems for insurance/annuity document search at large scale (100K+ PDF pages), emphasizing grounded answers, guardrails, and low-latency retrieval. Experienced in end-to-end MLOps for LLM apps—monitoring, evaluation sets, drift handling, and safe rollouts—and in orchestrating complex pipelines with Prefect/Airflow and deploying services on Kubernetes.”
Mid-level Software Engineer specializing in Java/Spring Boot microservices
“Full-stack AI engineer who built Skillmatch AI, an LLM/RAG-based job matching platform using FastAPI microservices, Airflow-orchestrated async pipelines, and Pinecone vector search (sub-second retrieval across 50k+ vectors) deployed on GCP with autoscaling. Also partnered directly with a cancer researcher to automate SEER + PubMed-driven report generation via an AI pipeline, emphasizing rapid prototyping and outcome-focused communication.”
Mid-level GenAI/Data Engineer specializing in LLMs, RAG systems, and fraud detection
“ML/NLP engineer with banking domain experience who built a GenAI-powered fraud detection and risk intelligence system at Origin Bank, combining RAG (LangChain + FAISS), fine-tuned BERT NER, and GPT-4/Sentence-BERT embeddings. Delivered measurable impact (25% higher fraud detection accuracy, 40% less manual review) and emphasizes production-grade pipelines on AWS SageMaker/Airflow with strong data validation and scalable PySpark processing.”