Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and Conversational AI
“Built a production RAG-based GenAI copilot backend at Aetna using Python/FastAPI, GPT-4, LangChain, and Azure AI Search, deployed on AKS with Prometheus/Grafana observability. Owned the system end-to-end (ingestion through deployment) and improved peak-time reliability by addressing vector search and embedding bottlenecks with Redis caching, index optimization, and async processing, plus added anti-hallucination guardrails via retrieval confidence thresholds.”
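The retrieval-confidence guardrail this blurb mentions can be sketched in a few lines; the names, threshold value, and `Hit` type below are illustrative assumptions, not details from the candidate's system:

```python
# Sketch of an anti-hallucination guardrail for a RAG pipeline:
# decline to answer when the best retrieval score is below a threshold.
# MIN_SCORE, Hit, and answer_or_decline are hypothetical names.
from dataclasses import dataclass

MIN_SCORE = 0.75  # illustrative similarity threshold

@dataclass
class Hit:
    text: str
    score: float  # e.g. cosine similarity in [0, 1]

def answer_or_decline(hits: list[Hit]) -> str:
    """Gate generation on retrieval confidence: if nothing retrieved is
    confident enough, refuse rather than risk hallucinating."""
    if not hits or max(h.score for h in hits) < MIN_SCORE:
        return "I don't have enough grounded context to answer that."
    top = sorted(hits, key=lambda h: -h.score)[:3]
    context = "\n".join(h.text for h in top)
    # In a real system this context would be injected into the LLM prompt.
    return f"[answer grounded in]:\n{context}"
```

The same gating idea also pairs naturally with the Redis caching mentioned above, since cached embeddings make the confidence check cheap at peak load.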
Junior Machine Learning Engineer specializing in LLM systems and GPU inference
“LLM/agent engineer who shipped a production RAG-based recommendation + explanation system that replaced a traditional recommender stack, delivering ~20% CTR lift (and +8% after a reliability iteration) with strong cold-start performance. Demonstrates strong production rigor: schema-constrained generation, typed tool calling, explicit state/orchestration, deep monitoring/feedback loops, and safe integration with messy ERP inventory/order data using normalization, idempotency, and conflict-resolution guardrails.”
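Schema-constrained generation, one of the production-rigor points above, typically means validating the model's JSON against a typed contract before anything touches downstream (e.g. ERP) systems. A minimal sketch, with a hypothetical schema:

```python
# Illustrative schema check for LLM tool-call output: parse the model's
# JSON and verify required fields and types before acting on it.
# SCHEMA and validate_tool_call are hypothetical names.
import json

SCHEMA = {"sku": str, "quantity": int, "reason": str}

def validate_tool_call(raw: str) -> dict:
    """Reject malformed or mis-typed LLM output instead of passing it on."""
    obj = json.loads(raw)  # raises ValueError on non-JSON
    for field, typ in SCHEMA.items():
        if field not in obj:
            raise ValueError(f"missing field: {field}")
        if not isinstance(obj[field], typ):
            raise ValueError(f"bad type for {field}: expected {typ.__name__}")
    return obj
```

Production systems often use a library like Pydantic for this; the hand-rolled version just makes the idea explicit.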
Intern Software Engineer specializing in edge AI deployment and distributed systems
“Full-stack engineer who built an enterprise search platform (Codlens) delivering natural-language Q&A over Jira/Slack using embeddings, vector DB search, re-ranking via reciprocal rank fusion (RRF), and LLM responses with source grounding. Also designed and benchmarked a distributed IAM system with Postgres transaction-log replication and Raft-based quorum consistency, reporting ~253 TPS at ~60ms latency in a multi-node setup. Experience spans early-stage startups (Zetic AI, Sagwara Capital) and large-scale orgs (Akamai, Atlassian).”
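RRF, the re-ranking step named in this blurb, is a simple, well-known fusion rule: each document scores the sum of 1/(k + rank) across the ranked lists it appears in (k = 60 is the conventional constant). A minimal sketch:

```python
# Minimal reciprocal rank fusion (RRF): merge several ranked result
# lists (e.g. keyword search and vector search) into one ranking.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents ranked highly by multiple retrievers float to the top without any score normalization, which is why RRF is a common default for hybrid search.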
Junior Full-Stack Software Engineer specializing in cloud-native microservices
“Backend engineer with hands-on IoT and AI product work: built a decoupled Raspberry Pi + AWS IoT Core weather monitoring backend and a Dockerized FastAPI LLM service on AWS ECS using OpenAI/HuggingFace with an emerging RAG layer. Also delivered measurable performance gains at DAZN by redesigning event-driven/serverless ingestion (SNS, S3->Lambda->DynamoDB), cutting latency ~30% and boosting throughput ~25% while automating ~90% of manual sync work.”
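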
Intern Software Engineer specializing in LLM agents and full-stack development
“Embedded C++ engineer with Bosch automotive infotainment experience, owning real-time audio middleware modules with strict latency/memory constraints. Strong in profiling/optimizing deterministic behavior, debugging hardware-specific intermittent issues, and building automated test + CI pipelines; currently ramping up on ROS2 concepts (DDS, nodes/topics/services) to transition toward robotics.”
Junior Full-Stack AI Engineer specializing in LLM apps and RAG systems
“Built and shipped a production LLM-powered 'Vet agent' that automates pet symptom intake across multimodal inputs (images/files/text/speech) and provides analysis/home-care guidance, reaching thousands of daily active users within two months. Demonstrates strong agent engineering fundamentals: state-machine orchestration with structured JSON, tool/schema validation, high-availability routing/failover, and rigorous offline/online evaluation loops with trace-driven reliability improvements.”
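The state-machine orchestration pattern this blurb names can be sketched as a transition table driven by structured events: each turn, the current state plus a typed JSON event determines the next state. The states and transitions below are hypothetical, not the candidate's actual flow:

```python
# Rough sketch of state-machine agent orchestration over structured
# (JSON-like) events. TRANSITIONS and step are illustrative names.
TRANSITIONS = {
    ("collect_symptoms", "symptoms_received"): "triage",
    ("triage", "needs_clarification"): "collect_symptoms",
    ("triage", "assessment_ready"): "deliver_guidance",
    ("deliver_guidance", "done"): "closed",
}

def step(state: str, event: dict) -> str:
    """Advance the intake flow; unknown events keep the current state,
    so a malformed model output can never jump the agent off-script."""
    return TRANSITIONS.get((state, event.get("type")), state)
```

Keeping the LLM's freedom inside each state, while an explicit table owns the flow, is what makes this kind of agent debuggable from traces.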
Junior Software Engineer specializing in backend, cloud, and machine learning systems
“Built Digipulse, a university project that ingested and clustered Bluesky tweet data at scale and used Gemini to generate near-real-time topic summaries, processing 1M+ tweets per day. Also brings Intel experience with Prometheus and Kubernetes, including production monitoring and incident troubleshooting.”
Intern AI/ML Engineer specializing in robotics and computer vision
“Worked on Sophia the humanoid robot, building production animation pipelines and enhancing human-robot interaction via perception and behavior orchestration. Experienced in stabilizing noisy perception-driven state transitions and designing smooth, user-centered behavioral flows, collaborating closely with artists, animators, and experience designers to translate creative intent into measurable system behavior.”
Mid-level Data Scientist / Machine Learning Engineer specializing in fraud, risk, and MLOps
“AI/ML practitioner with Northern Trust experience who has shipped production LLM systems (internal support assistant) using RAG, vector databases, orchestration (LangChain/custom pipelines), and rigorous monitoring/feedback loops. Also built AI-driven fraud detection/risk monitoring solutions in a regulated financial environment, emphasizing explainability (SHAP), audit readiness, and stakeholder trust through dashboards and clear communication.”
Intern Machine Learning Engineer specializing in NLP, RAG, and deepfake detection
“Early-career (fresher) candidate who built and deployed a production AI medical document chatbot using a RAG architecture (LangChain + Hugging Face LLM + Pinecone) with a Flask backend on AWS EC2 via Docker. Has experience troubleshooting real deployment constraints (model dependencies, disk space, container stability) and setting up continuous-style evaluation with fixed query test sets tracking relevance, latency, and error rate.”
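The continuous-style evaluation described here (a frozen query set replayed to track relevance, latency, and error rate) can be sketched as a small harness; `evaluate` and its callback signatures are illustrative assumptions:

```python
# Sketch of a fixed-query evaluation loop: replay a frozen test set,
# record per-query latency and errors, and score relevance with a
# caller-supplied judge function. Names are hypothetical.
import time

def evaluate(queries, answer_fn, judge_fn):
    """Return mean relevance, mean latency (s), and error rate."""
    relevance, latencies, errors = [], [], 0
    for q in queries:
        start = time.perf_counter()
        try:
            ans = answer_fn(q)
            relevance.append(judge_fn(q, ans))
        except Exception:
            errors += 1
        latencies.append(time.perf_counter() - start)
    n = len(queries)
    return {
        "mean_relevance": sum(relevance) / len(relevance) if relevance else 0.0,
        "mean_latency_s": sum(latencies) / n if n else 0.0,
        "error_rate": errors / n if n else 0.0,
    }
```

Running this on every deploy and diffing the report against the last run is what turns a one-off test set into the continuous evaluation the blurb describes.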
Mid-level NLP/LLM Researcher specializing in question answering and retrieval-augmented generation
“Built ToolDreamer, a framework for selecting relevant tools for LLM agents by training a retriever on LLM-generated reasoning traces, and has hands-on experience building multi-agent systems in AutoGen (MAG-V) focused on question generation and tool-trajectory verification. Currently works as an AI-guides supervisor at Penn State, regularly communicating AI concepts to non-technical stakeholders.”
Senior Software Engineer specializing in full-stack systems, data pipelines, and ML
“Built and productionized an autonomous research agent (AutoGPT) in a Docker/Kubernetes environment with Pinecone-based long-term memory and custom Python tools for analysis, visualization, and report drafting. Implemented layered guardrails (prompt templates, automated validation, self-critique loops, and monitoring) and achieved ~25% reduction in manual report generation time while scaling the workflow to support multiple concurrent users.”
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
“AI/ML engineer with HP experience building and productionizing an LLM-powered document intelligence platform (LangChain + Pinecone) to deliver semantic search and contextual Q&A across millions of enterprise support documents. Demonstrates strong MLOps and scaling expertise (Airflow, Kubernetes autoscaling, Triton GPU inference, monitoring with Prometheus/W&B) plus a structured approach to evaluation (A/B tests, shadow deployments, failover) and effective collaboration with non-technical stakeholders.”
“GenAI/data engineering practitioner with production experience across Equinix, Optum, and Citibank—built an Azure OpenAI (GPT-4) + LangChain document intelligence platform processing 1.5M+ docs/month and a HIPAA-compliant Airflow healthcare pipeline handling 5M+ claims/day. Also delivered a real-time fraud detection + explainability system using LightGBM and a fine-tuned T5 NLG component, improving fraud accuracy by 15%+ while partnering closely with compliance stakeholders.”
Mid-level AI Engineer specializing in Generative AI, MLOps, and NLP for finance and healthcare
“Built and deployed a secure, production LLM-based document summarization and risk-highlighting tool for financial auditors, running inside a private Azure environment to protect confidential data. Focused on reliability (hallucination mitigation via retrieval-based prompts and source citations) and validated performance through comparisons to auditor summaries plus a user pilot, cutting review time by about half.”
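Hallucination mitigation via retrieval-based prompts with source citations, as described here, usually comes down to how the prompt is assembled: confine the model to supplied excerpts and require inline source IDs. A minimal sketch with hypothetical names:

```python
# Illustrative assembly of a retrieval-grounded prompt with citations.
# build_prompt and the (source_id, text) convention are assumptions.
def build_prompt(question: str, passages: list[tuple[str, str]]) -> str:
    """passages: (source_id, text) pairs. The model is told to answer
    only from the excerpts and to cite them as [source_id]."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in passages)
    return (
        "Answer using ONLY the excerpts below. Cite sources as [id]. "
        "If the excerpts are insufficient, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The citations then let auditors spot-check each claim against the cited passage, which is what made the comparison against auditor-written summaries feasible.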
Mid-level GenAI/ML Engineer specializing in LLM applications and enterprise automation
“Built and shipped a production LLM-powered healthcare support agent at UnitedHealthGroup, using LangChain + FAISS RAG on AWS SageMaker with CloudWatch monitoring and human-in-the-loop fallbacks for safety. Strong focus on reliability engineering (confidence gating, retries/timeouts, caching) and continuous evaluation loops; reported ~40% improvement in query resolution efficiency while reducing manual support workload.”
Mid-level AI/ML Engineer specializing in GenAI, RAG pipelines, and cloud MLOps
“Built and deployed a production LLM + vector search clinical decision support system at UnitedHealth Group, retrieving medical evidence and patient context in real time for prior authorization and risk scoring. Strong in end-to-end RAG architecture (Hugging Face embeddings, Pinecone/FAISS, SageMaker, Redis) plus orchestration (Airflow/Kubeflow) and rigorous evaluation/monitoring, with demonstrated ability to align solutions with clinical operations stakeholders.”
Mid-level Full-Stack Engineer specializing in AI/ML data platforms for biotech and FinTech
“AI/ML full-stack practitioner in a small-scale manufacturing/lab operations environment who deployed a production ML system to improve blood cell order fulfillment by predicting yield/success from donor characteristics. Experienced building custom multi-agent orchestration (Python, LangChain/LangGraph, MCP) and balancing reliability, data quality constraints, and token/ROI economics while communicating tradeoffs to VP-level business stakeholders.”
Mid-level Machine Learning Engineer specializing in forecasting, NLP, and GenAI
“GenAI/ML engineer with production experience building multilingual LLM systems (English/Spanish) and RAG-based clinical documentation summarization at Walgreens, combining prompt engineering, structured output validation, and rigorous evaluation (ROUGE + pharmacist review). Also orchestrated end-to-end ML pipelines for demand forecasting using Apache Airflow, PySpark, and MLflow with scheduled retraining and production monitoring.”
Mid-level Machine Learning Engineer specializing in AI/LLM systems
“ML/LLM systems engineer who has owned AI support automation products end-to-end, including ServiceNow-integrated incident routing, RAG-based resolution suggestion systems, and production stabilization. Stands out for combining hands-on platform work across PySpark, AWS Glue, FastAPI, Kubernetes, and Pinecone with measurable operational impact, including 30-35% MTTR reduction and 25-30% improvement in first-touch resolution.”
Mid-level Software Engineer specializing in GenAI and backend systems
“Built and productionized an LLM-based PDF extraction pipeline for Medicaid policy documents by fine-tuning Gemini 2.0 Flash and deploying via Vertex AI, adding validation/guardrails to improve trust and reliability. Also built and scaled a SaaS platform (cnotes) for cable operators and regularly partners with customers and sales teams through interactive demos, rapid iteration, and real-time workflow debugging.”
Entry-Level Software Engineer specializing in ML and backend systems
“Built and deployed a production LLM-based real-time stance detection system for social media, fine-tuning Llama 3.1 on A100s with DeepSpeed ZeRO/FSDP and iteratively refining data to handle sarcasm and context-dependent meaning. Also has Kubernetes operations experience (Kafka/Logstash/Elasticsearch observability pipeline) and delivered an OCR automation project during a Worley India internship that saved 20+ hours/week for on-site energy safety stakeholders.”
Mid-Level Full-Stack Python Engineer specializing in cloud APIs and data/ML platforms
“Backend engineer at Goldman Sachs who deployed internal LLM-powered utilities to summarize operational logs/tickets, with a strong emphasis on data sensitivity and reliability. Built deterministic workflows with template-based prompts, confidence checks, and rule-based fallbacks, and used monitoring plus failure-rate metrics to tune performance; also has hands-on Temporal orchestration experience for resilient async backend jobs.”
Mid-Level Software Engineer specializing in Payments and Financial Services
“Software engineer with hands-on experience improving performance and reliability in financial workflows (settlements/loan processing), spanning React/TypeScript and Angular frontends plus Spring Boot microservices. Has delivered measurable latency improvements using PostgreSQL optimization and Redis caching, and has operated Kafka-based systems at scale with idempotent processing and backoff/retry strategies while iterating internal ops tooling with support/finance teams.”
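The idempotent-processing-with-backoff pattern this last blurb names is worth making concrete: deduplicate on a stable event ID so redelivered Kafka messages are no-ops, and retry transient failures with exponential backoff. The names and delays below are illustrative:

```python
# Sketch of idempotent event processing with exponential-backoff retry.
# process_once, the "id" field, and the delays are hypothetical choices.
import time

def process_once(event: dict, seen: set, handler, max_attempts: int = 3,
                 base_delay: float = 0.01):
    """Process an event at most once; retry the handler on transient errors."""
    event_id = event["id"]
    if event_id in seen:
        return "skipped"          # idempotent: duplicate delivery ignored
    for attempt in range(max_attempts):
        try:
            handler(event)
            seen.add(event_id)    # mark done only after success
            return "processed"
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```

In a real payments system the `seen` set would be a durable store (e.g. a unique-keyed table) rather than in-memory, so at-least-once delivery still yields exactly-once effects.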