Pre-screened and vetted.
Mid-Level AI/ML Software Engineer specializing in agentic LLM systems
“Built and deployed a production LLM-powered multi-agent compliance copilot (life sciences/finance) using LangChain/LangGraph + RAG over vector databases, delivered via async FastAPI on Kubernetes. Emphasizes audit-ready, deterministic outputs with schema constraints and citations, plus rigorous evaluation/monitoring; reports 60%+ reduction in manual research time and successful production adoption.”
Mid-level Full-Stack Developer specializing in AI-powered cloud-native applications
“Full-stack engineer who has owned customer-facing AI recommendation and analytics dashboards end-to-end (backend APIs/data processing through React UI, deployment, and monitoring). Demonstrates strong systems thinking around scaling microservices—using observability, caching, async workflows, and resilience patterns—and also built an internal ops dashboard that became the default tool for on-call incident reviews.”
Mid-level AI Engineer specializing in GenAI, NLP, and MLOps
“LLM/agentic-systems engineer with PayPal experience hardening an LLM-powered fraud support assistant from prototype to production, focusing on low-latency distributed architecture, rigorous evaluation/testing, and security/compliance. Comfortable in customer-facing and GTM contexts—runs technical demos/workshops, builds tailored pilots, and aligns sales/CS with engineering to close deals and drive adoption.”
Junior Machine Learning Engineer specializing in MLOps and LLM/RAG systems
“LLM/agentic workflow builder focused on productionizing document-processing systems. Redesigned pipelines with LangGraph + RAG, schema-aware validation, and eval/monitoring loops; known for fast incident diagnosis (restored accuracy from ~70% to >95% the same day). Partners closely with sales and stakeholders to deliver tailored demos and drive adoption (reported +40%).”
Intern Software Engineer specializing in backend systems and cloud infrastructure
“Backend-focused intern who owned real-time livestream features: live comment moderation using AWS Comprehend (sentiment/toxicity/PII) with safe fallbacks, plus AI-generated positive commentary via AWS Bedrock (Claude 3 Haiku). Emphasizes reliability/low-latency design, IAM troubleshooting, and disciplined GitOps-style CI workflows for reproducible deployments.”
Junior Software Developer specializing in AI/ML and data engineering
“Built and owned an end-to-end AV operations automation and dashboarding platform for USC event operations, used daily to coordinate hundreds of live events. Delivered a React/TypeScript full-stack system integrating Smartsheet APIs with strong reliability practices (typed contracts, validation/fallbacks, safe rollouts) and experience with queue-based microservice patterns (idempotency, retries, DLQs, monitoring).”
Mid-Level Software Engineer specializing in Cloud, GenAI, and Federal systems
“Cloud-focused engineer experienced in deploying and stabilizing complex production systems that span APIs, infrastructure, and automated workflows, with a strong observability and safe-release mindset (feature flags/canaries/rollbacks). Has hands-on, customer-facing incident leadership experience, including executing a DR regional failover during an AWS us-east-1 outage to maintain service and reportedly save a client ~$10M.”
“LLM/agent workflow engineer with healthcare experience (CVS Health) who built and deployed a production call-insights platform using Azure OpenAI + LangChain/LangGraph, including sentiment and compliance checks. Demonstrates deep HIPAA/PHI handling (tenant-contained processing, redaction, RBAC/encryption/audit logging) and production rigor (testing, eval sets, validation/retries, autoscaling) to scale to thousands of transcripts.”
“Built and deployed a production Retrieval-Augmented Generation (RAG) platform in a healthcare setting to automate clinical documentation review and summarization, targeting near-real-time, explainable outputs. Emphasizes grounded generation to reduce hallucinations, latency optimizations (chunking/embedding reuse), and PHI-safe workflows with access controls, plus strong orchestration experience using Apache Airflow.”
Mid-level AI/ML Engineer specializing in Generative AI, NLP, and Computer Vision
“Built an LLM-powered learning assistant (EduQuizPro/EduCrest Pro) that uses RAG over URLs and PDFs to generate quizzes, notes, and explanations for students and professors. Emphasizes production robustness—implemented dependency fallbacks (FAISS/Sentence Transformers/Gradio), a CLI-safe mode, and NumPy-based indexing—along with a custom orchestration layer to keep multi-step AI workflows reliable.”
Engineering Leader specializing in Digital Health, AI, and Cloud Platforms
“Senior Engineering Manager at Roche leading two Scrum teams building internally shared (“inner-sourced”) tools and libraries for a healthcare enterprise. Has led security/compliance-first architecture decisions (e.g., Python AI modules running inside a Java container) and front-end modularization (Angular monorepo to module federation), with a strong focus on developer experience via automated Swagger/OpenAPI documentation and robust testing/versioning practices.”
Junior AI Engineer specializing in agentic workflows and ML platforms
“Building a production LLM/agent system for a leading US dental provider that extracts rules from payer handbooks/portals and EDI 271 responses to validate and improve patient cost estimates. Combines a GCP stack (BigQuery, GKE, Cloud Run, Pub/Sub, Vertex AI) with strong agent reliability practices (observability, validator agents, grounding, PII/hallucination guardrails, confidence scoring), and has guided non-technical customer stakeholders through projects including an enterprise ServiceNow↔Aha sync and AI-powered enterprise search/summarization.”
Junior Full-Stack Software Engineer specializing in cloud-native microservices
“Backend engineer with hands-on IoT and AI product work: built a decoupled Raspberry Pi + AWS IoT Core weather monitoring backend and a Dockerized FastAPI LLM service on AWS ECS using OpenAI/HuggingFace with an emerging RAG layer. Also delivered measurable performance gains at DAZN by redesigning event-driven/serverless ingestion (SNS, S3→Lambda→DynamoDB), cutting latency by ~30% and boosting throughput by ~25% while automating ~90% of manual sync work.”
Junior AI/ML Systems Engineer specializing in LLM infrastructure and distributed training
“Built and shipped a production NMT system translating medical documentation for a rare/low-resource language, tackling data scarcity with retrieval-driven pattern matching plus dictionary/grammar- and LLM-based augmentation and validating quality with a linguistic expert. Also develops agentic LLM workflows with LangChain/LangGraph (including a deep-research style system) and has experience aligning medical AI deployments with clinician-defined risk metrics and human-in-the-loop decision making.”
Mid-level Data Scientist / AI-ML Engineer specializing in Generative AI and LLM applications
“Built a production GenAI-powered analytics assistant to reduce reliance on data analysts by enabling natural-language Q&A over Databricks/Power BI dashboards, backed by vector search (Pinecone/Milvus) and a Neo4j knowledge graph, including multimodal support via OpenAI Vision. Demonstrates strong real-world LLM reliability engineering with strict RAG, LangGraph multi-step verification, and Guardrails/custom validators, plus broad orchestration and production monitoring experience (Airflow, ADF, Step Functions, Kubernetes, Prometheus/CloudWatch).”
Director-level AI & Data Science leader specializing in GenAI, LLMs, and MLOps
“ML/NLP engineer currently working in NYC on a system that connects complex unstructured data sources to deliver personalized insights, using embeddings + vector DB retrieval and a RAG architecture (LangChain, Pinecone/OpenSearch). Strong focus on production constraints—especially low-latency retrieval—using FAISS/ANN, PCA, index partitioning, and Redis caching, plus PEFT fine-tuning (LoRA/QLoRA) and KPI/SLA-driven promotion to production.”
Staff Full-Stack Engineer specializing in AI platforms and infrastructure automation
“Backend/full-stack engineer building complex internal platforms and customer-facing demos at the intersection of infrastructure and product. Shipped a no-code Product Lifecycle Manager for manufacturing (3 manufacturers, 1000+ evolving tests) using AWS S3/SQS ingestion and extensible Postgres (EAV+JSONB) with end-to-end traceability. Also built a FastAPI-based company data intelligence platform with Okta-secured RBAC and an LLM/MCP layer for ChatGPT-like analytics over enterprise data sources.”
Mid-level AI/ML Engineer specializing in deep learning, NLP/LLMs, and MLOps
“Built and shipped a real-time oncology risk prediction system used by doctors during patient visits, trained on clinical data in AWS SageMaker and deployed via FastAPI with sub-second responses. Emphasizes clinician-trust features (SHAP explainability, validation checks) and HIPAA-compliant controls (encryption, RBAC, audit logging), plus Kubernetes-based production operations with autoscaling, monitoring, and drift/retraining workflows; collaborated closely with oncologists at Flatiron Health.”
Mid-level Full-Stack Java Developer specializing in cloud-native microservices
“Software engineer with strong compliance-domain experience who built a customer-facing compliance and reporting dashboard using React/TypeScript with Spring Boot microservices. Demonstrates mature production engineering practices—contract-first APIs, event-driven architecture (Kafka/RabbitMQ), caching (Redis), and robust CI/CD + observability (Prometheus/Grafana/ELK)—and also created a Python-based audit automation tool adopted into the standard release process.”
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and Conversational AI
“Built a production RAG-based GenAI copilot backend at Aetna using Python/FastAPI, GPT-4, LangChain, and Azure AI Search, deployed on AKS with Prometheus/Grafana observability. Owned the system end-to-end (ingestion through deployment) and improved peak-time reliability by addressing vector search and embedding bottlenecks with Redis caching, index optimization, and async processing, plus added anti-hallucination guardrails via retrieval confidence thresholds.”
Mid-Level Software Engineer specializing in Cloud Platform & Automation
“Software engineer at Wrap who built production AWS Lambda services for large-scale Parquet dataset generation (50k+ records) and a synthetic traffic/lead generation system using Python, Playwright, and Jenkins. Also built and deployed a full-stack hobby product (MyAnimeListRanker) that ingests MyAnimeList user data and uses an Elo-based ranking workflow, with operational guardrails like rate limiting and monitoring via Vercel/logs.”
Mid-level Cloud DevOps Engineer specializing in AWS/IBM Cloud automation and Kubernetes
“Cloud infrastructure/SRE-style engineer with experience at TCS and ServiceNow focused on IBM Cloud and Linux/RHEL operations, security hardening, and automation in Python. Has led end-to-end production incident response (certificate expiry) and implemented preventive alerting adopted by 20+ teams, plus built Jenkins CI/CD with Vault-based secrets and Terraform-based AWS provisioning.”
Executive CTO & AI Architect specializing in regulated SaaS (InsurTech/Healthcare/FinTech)
“Insurance-tech CTO and repeat founder with 10+ years in insurance startups; was employee #4/CTO at Polly (formerly DealerPolicy) and helped scale it from a PowerPoint to 250 employees while raising $180M+. Currently building and selling AgentCanvas.ai—an extensible AI accelerator platform for large insurance agencies—after coding the product end-to-end and now running demos/POCs with prospective buyers.”
Senior Software Engineer specializing in identity, cloud-native microservices, and reactive web apps
“Product-focused full-stack engineer with Walmart and Dell experience who built and shipped a real-time engagement dashboard end-to-end (Kafka Streams, Spring Boot, React/TypeScript/D3) used daily by business teams, moving them from next-day reports to real-time decisioning. Strong in performance/reliability (Redis caching cut latency ~40%, 90%+ test coverage, Prometheus/CloudWatch monitoring) and production operations on AWS/EKS including handling a cascading failure from a memory leak with zero-downtime rollback and redeploy.”