Pre-screened and vetted.
Mid-level Software Engineer specializing in backend, cloud-native microservices, and LLM apps
“LLM/agentic systems practitioner who repeatedly takes customer-facing LLM prototypes into production by operationalizing prompts, hardening RAG pipelines, and adding monitoring/guardrails. Has hands-on experience debugging intermittent production failures under high traffic (vector store timeouts/empty retrieval) and implementing fail-safe behavior plus alerting. Also partners closely with sales in pilots/POCs, customizing demos with customer data and running side-by-side comparisons to drive adoption.”
Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Healthcare-focused LLM engineer who deployed a production triage and clinical knowledge retrieval assistant using RAG and LangGraph-orchestrated multi-agent workflows. Emphasizes clinical safety and compliance with robust hallucination controls, HIPAA/PHI protections (tokenization, encryption, audit logging, zero-retention), and human-in-the-loop escalation; reports a 75% latency reduction in a healthcare agent system.”
Junior Machine Learning Engineer specializing in Generative AI and analytics automation
“AI/LLM engineer who built a production intelligent support system using RAG over a vectorized documentation library, addressing real-world issues like lost-in-the-middle context failures and doc freshness via automated GitHub-driven re-embedding pipelines. Emphasizes rigorous agent evaluation (component/E2E/ops) and prefers lightweight, decoupled workflow automation using message brokers (Redis/RabbitMQ) over heavyweight orchestration frameworks.”
Intern Data Scientist specializing in ML engineering and LLM agentic workflows
“Built an agentic, multi-step LLM system that generates full-stack code for API integrations using LangChain orchestration, Pinecone/SentenceBERT RAG, and a human-in-the-loop feedback loop for iterative code refinement. Also collaborated with non-technical content writers and PMs during a Contentstack internship to deliver a Slack-based AI workflow that generates and brand-checks articles with one-click approvals.”
“Built a production multi-agent orchestration platform to automate healthcare claims and HR workflows, combining LangChain/CrewAI/AutoGPT with RAG (FAISS/Pinecone) and fine-tuned open-source LLMs (LLaMA/Mistral/Falcon) in private Azure ML environments to meet HIPAA requirements. Emphasizes rigorous agent evaluation/observability (trajectory eval, adversarial testing, LLM-as-judge, drift monitoring) and reports measurable outcomes including 35% faster claims processing and 40% fewer chatbot errors.”
Senior AI/ML Engineer specializing in Generative AI, LLMs, and MLOps
“Telecom (Verizon) AI/ML practitioner who built a production multimodal system that ingests messy customer issue reports (calls, chats, emails, screenshots, videos) and turns them into confidence-scored incident summaries with reproducible steps and evidence links. Also built KPI/alarm-to-ticket correlation to rank likely root-cause domains (RAN/Core/Transport), cutting triage from hours to minutes and improving MTTR.”
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices
“Full-stack engineer who owned end-to-end delivery of a customer-facing financial services web platform and built internal tooling for engineering teams. Strong in microservices and event-driven systems (Kafka/RabbitMQ), distributed transaction management (saga), and production performance/observability—achieving ~40% backend response-time improvement through database and query optimization.”
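The saga-based distributed transaction management mentioned above can be sketched in miniature: run each local step, and if any step fails, execute the compensations for the steps already completed, in reverse order. This is an illustrative Python sketch, not the candidate's implementation; the order-flow step names are hypothetical.

```python
def run_saga(steps):
    """Run saga steps in order; on failure, execute compensations
    for already-completed steps in reverse order, then re-raise.

    steps: list of (action, compensation) pairs, each a no-arg callable.
    """
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        raise

# Hypothetical order flow: reserve inventory, then charge the card.
log = []

def charge():
    raise RuntimeError("payment declined")  # simulate a mid-saga failure

try:
    run_saga([
        (lambda: log.append("reserved"), lambda: log.append("released")),
        (charge, lambda: log.append("refunded")),
    ])
except RuntimeError:
    pass  # the inventory reservation was compensated ("released")
```

Because the failing step's own compensation was never registered, only the completed reservation is rolled back; real systems would typically persist `done` so compensations survive a process crash.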
Mid-level Full-Stack Software Engineer specializing in AI/ML and cloud-native systems

“At BondiTech, built and deployed customer-facing backend improvements for enterprise dashboards handling 1M+ records, redesigning a .NET/Entity Framework API with server-side pagination/filtering and feature-flagged rollout to cut latency from ~15s to ~2s. Experienced integrating customer systems into existing APIs, including stabilizing a legacy CRM sync by normalizing inconsistent IDs, handling strict rate limits with batching, and adding DLQs plus reconciliation reporting.”
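The pagination rework described above was done in .NET/Entity Framework; the core idea behind the latency win, keyset (seek) pagination plus server-side filtering instead of shipping 1M+ records to the client, can be shown language-agnostically. A minimal Python sketch with hypothetical record and cursor names:

```python
def paginate(records, after_id=None, limit=50, predicate=None):
    """Keyset (seek) pagination: filter server-side, then return one page
    of rows strictly after `after_id`, plus a cursor for the next page.
    In SQL this would be `WHERE id > :after_id ORDER BY id LIMIT :limit`,
    which avoids the deep-OFFSET scans that make naive paging slow.
    """
    rows = [r for r in records if predicate is None or predicate(r)]
    rows.sort(key=lambda r: r["id"])
    if after_id is not None:
        rows = [r for r in rows if r["id"] > after_id]
    page = rows[:limit]
    # A full page may have more rows behind it; a short page is the last one.
    next_cursor = page[-1]["id"] if len(page) == limit else None
    return page, next_cursor

data = [{"id": i, "amount": i * 10} for i in range(1, 8)]
page1, cur1 = paginate(data, limit=3)           # ids 1-3, cursor 3
page2, cur2 = paginate(data, after_id=cur1, limit=3)  # ids 4-6, cursor 6
page3, cur3 = paginate(data, after_id=cur2, limit=3)  # id 7, no cursor
```

Returning an opaque cursor rather than a page number also plays well with the feature-flagged rollout the blurb mentions: old and new clients can coexist behind the flag.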
Junior Data Scientist/Data Engineer specializing in ML pipelines and analytics
“Machine Learning Intern at Docsumo who delivered a customer-facing fraud-detection solution end-to-end: rebuilt the pipeline, deployed a Random Forest model, and shipped a Python/Flask microservice on AWS SageMaker. Drove measurable production impact (precision +30%, processing time cut in half, manual review -60%, customer satisfaction +15%) and demonstrated strong customer integration and live-incident response skills.”
Mid-level Software Engineer specializing in AWS cloud infrastructure and data platforms
“Backend/infra-focused software engineer who built an autonomous Python API-orchestration agent using asyncio with strong reliability and observability (trace IDs, structured logs, retries/timeouts) and containerized dev workflow. Experienced deploying Python services to Kubernetes with Helm and running GitOps CI/CD via ArgoCD, plus leading an AWS IAM-to-Identity Center migration using CloudTrail-driven least-privilege role design. Also built and debugged a Kafka/SnapLogic bidirectional pipeline syncing Redshift and HBase, resolving missing-record issues via Kibana-driven investigation.”
Mid-level AI/ML Engineer specializing in Generative AI, LLMs, and NLP
“AI/ML engineer with forensic analytics and healthcare claims experience (Optum), building production LLM/RAG systems to surface context-driven fraud patterns from unstructured claim notes and explain risk to investigators. Strong in large-scale retrieval performance tuning, legacy API integration with reliability patterns (SQS, circuit breakers), and MLOps orchestration on Airflow/Kubernetes with rigorous testing, monitoring, and stakeholder-friendly interpretability.”
Mid-level Full-Stack Software Developer specializing in cloud-native microservices
“Full-stack engineer with enterprise experience at Metasystems Inc. (and Qualcomm) building high-traffic, security-sensitive systems—owned a secure transaction processing module end-to-end using Java/Spring Boot, Python/Django, and React. Strong AWS production operations (EKS/ECS/Lambda/RDS/DynamoDB) with IaC (Terraform/CloudFormation), observability, and reliability patterns; also delivered resilient ETL/integration pipelines with idempotency/retries/backfills and achieved a 50% deployment-time reduction through CI/CD and modular refactoring.”
Mid-level AI/ML Engineer specializing in Generative AI and data engineering
“IBM engineer who built and deployed a production RAG-based LLM assistant using LangChain/FAISS with a fine-tuned LLaMA model, served via FastAPI microservices on Kubernetes, achieving 99%+ uptime. Demonstrates strong practical expertise in reducing hallucinations (semantic chunking + metadata-driven retrieval) and managing latency, plus mature MLOps practices (Airflow/dbt pipelines, MLflow tracking, monitoring, A/B and shadow deployments) and effective collaboration with non-technical stakeholders.”
Intern Machine Learning Engineer specializing in forecasting, NLP, and RAG systems
“Intern who built and deployed a production LLM-powered contract analysis system for finance teams: Azure Document Intelligence for text/table extraction plus Gemini prompting to surface key terms and risks via an async API and simple UI. Emphasizes reliability in production with fallbacks, guardrails against hallucinations, and operational concerns like latency/cost/versioning, delivering summaries in under 30 seconds instead of hours.”
Intern Data Scientist specializing in AI, analytics, and cloud data engineering
“Built a production multimodal LLM-based vendor risk assessment platform that ingests SOC reports and other documents, uses a strict RAG pipeline with grounded evidence (page/paragraph citations), and dramatically reduces analyst review time. Experienced with LangGraph/LangChain/AutoGen for stateful, fault-tolerant agent workflows, and emphasizes reliability (schema validation, guardrails) plus low-latency delivery (~1–2s) through hybrid retrieval, reranking, caching, and model tiering.”
Mid-level AI/ML Engineer specializing in GenAI and financial risk & compliance analytics
“Built and deployed a production LLM-powered financial risk and compliance platform to reduce manual trade exception handling and speed up insights from regulatory documents. Implemented a LangChain multi-agent workflow with structured/unstructured data integration (Redshift + vector DB) and emphasized hallucination reduction for regulatory safety using Amazon Bedrock. Strong MLOps/orchestration background across Kubernetes, Airflow, Jenkins, and monitoring/testing with MLflow, Evidently AI, and PyTest.”
Mid-level AI/ML Engineer specializing in Generative AI and production ML systems
“At CVS Health, productionized a RAG-based LLM solution in a regulated healthcare setting, emphasizing reliable data pipelines, LoRA fine-tuning, monitoring, safety guardrails, and A/B testing. Has hands-on experience troubleshooting real-time RAG failures (e.g., chunking/embedding issues) and regularly leads developer-focused demos/workshops while translating technical architecture into business value for stakeholders.”
Mid-level AI/ML & Full-Stack Engineer specializing in LLM agents and medical RAG systems
“Full-stack engineer at an early-stage startup building an agentic AI application for enterprise systems, combining customer-facing Next.js/React UI work (30% faster load times) with backend/workflow orchestration using FastAPI + n8n, Redis, and RabbitMQ. Previously at Deloitte USI, built BDD Selenium/Java automation and managed 200+ defects end-to-end using JIRA/JAMA to support on-time production releases.”
Mid-level Full-Stack Engineer specializing in enterprise AI systems
“Built and productionized an AI NL-to-SQL capability inside legacy accounts receivable software (React + Spring Boot + Postgres/pgvector RAG), adding semantic caching and a SELECT-only validation layer to satisfy infosec. Achieved measurable impact (turnaround cut from 3 days to seconds, 60% token cost reduction, 50% latency reduction) with strong adoption (40 analysts, 50+ queries/week), documented and monitored via Confluence, logging, and user feedback loops.”
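A SELECT-only validation layer like the one described above is a common guardrail for NL-to-SQL systems: before any model-generated query reaches the database, reject anything that is not a single read-only statement. A minimal sketch, assuming the keyword list and function name are illustrative rather than the candidate's actual code (production systems would typically pair this with a read-only database role):

```python
import re

# Data-modifying / DDL keywords that must never appear in a generated query.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|create|copy)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Allow only a single read-only SELECT (or CTE) statement."""
    stmt = sql.strip().rstrip(";").strip()
    if ";" in stmt:  # reject stacked statements like "SELECT 1; DELETE ..."
        return False
    if not stmt.lower().startswith(("select", "with")):
        return False
    if FORBIDDEN.search(stmt):  # catches data-modifying CTEs too
        return False
    return True
```

Word boundaries (`\b`) keep column names like `updated_at` or `created_at` from tripping the filter, while `WITH ... SELECT` CTEs pass only if they contain no forbidden verbs.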
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices
“Full-stack engineer with experience at Capital One and Prime Softech owning production systems end-to-end: secure authentication (Java/Spring Security + React/Redux) through AWS ECS deployments with Terraform and CI/CD. Strong reliability/observability focus (Prometheus/Grafana/ELK/CloudWatch) with quantified improvements (15% reliability gain, 30% fewer post-release defects). Also led legacy monolith-to-microservices refactors and built real-time Kafka/Spark ingestion pipelines for analytics/fraud detection.”
Staff RPA & Automation Engineer specializing in Financial Services
“Blue Prism RPA developer in a small FinTech-aligned team who owned ~20 production bots and drove both delivery and reliability. Built a shared VDI/locking design that cut infrastructure cost ~20–30% and routinely handled ServiceNow-driven production incidents end-to-end, including hotfixes and longer-term SDLC fixes. Also acted as a player-coach, training junior hires and maintaining high bot success rates (up to 99% within SLA).”
Mid-level AI Engineer specializing in LLM orchestration, RAG, and multi-agent systems
“Research Assistant at the University of Houston who built and live-deployed a production RAG system for 1000+ research documents, using hybrid retrieval (dense+BM25+RRF) with cross-encoder reranking and RAGAS-based evaluation; reported 66% MRR, 0.85+ faithfulness, and 68% lower LLM inference costs. Also built a deployed LangGraph multi-agent research system (Researcher/Critic/Writer) with tool integrations (Tavily, arXiv) and dual memory (ChromaDB + Neo4j), plus freelance automation work delivering a WhatsApp chatbot and n8n workflows for a wholesale clothing business.”
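The hybrid retrieval described above fuses dense and BM25 rankings with Reciprocal Rank Fusion (RRF) before cross-encoder reranking. RRF itself is simple enough to sketch; this is an illustrative Python version (document ids and function name are hypothetical, and k=60 is the constant from the original RRF paper, not necessarily what the candidate used):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists into one.

    rankings: list of ranked lists of document ids (best first).
    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly by multiple retrievers float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d3", "d1", "d2"]  # order from the dense (embedding) retriever
bm25 = ["d1", "d4", "d3"]   # order from the BM25 retriever
fused = rrf_fuse([dense, bm25])  # d1 wins: top-2 in both lists
```

Because RRF uses only ranks, not raw scores, it needs no score normalization between the dense and lexical retrievers, which is why it is a popular fusion choice.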
Mid-level Full-Stack Software Developer specializing in cloud-native microservices and AI/ML
“Backend engineer who optimized an AI-driven portfolio analytics/insights platform at Fidelity, addressing latency and traffic growth by decomposing services into microservices, improving inter-service communication, and tuning API/DB performance. Experienced scaling Python/FastAPI services with Docker + Kubernetes autoscaling, and strengthening security/privacy for sensitive client portfolio data used in LLM-based reporting.”
Mid-level AI/ML Engineer specializing in NLP, LLMs, and RAG systems
“Backend engineer who built and evolved a PHI-compliant RAG system (FastAPI + LangChain + embeddings/FAISS) for internal document search and summarization, delivering <400ms p95 latency at ~2,500 daily requests and measurable impact (30% faster investigations, +17% retrieval relevance). Demonstrates strong security and rollout discipline (RBAC/RLS/JWT, redaction/audits, shadow mode, dual writes, canaries) and a focus on reducing hallucination risk via grounded guardrails and confidence-based fallbacks.”