Pre-screened and vetted.
Mid-level Data/ML Engineer specializing in NLP, GenAI, and scalable data pipelines
“AI/ML engineer with production experience building LLM-powered document intelligence and customer support systems in healthcare/insurance, emphasizing high-accuracy RAG, long-document processing, and robust monitoring/fallback mechanisms. Also automates and scales ML lifecycle workflows using Apache Airflow and Kubeflow, and partners closely with non-technical operations stakeholders to drive adoption.”
Mid-level Generative AI Engineer specializing in LLM agents and RAG systems
“Built and deployed a production LLM/RAG knowledge assistant integrating internal docs, wikis, and ticket histories to reduce tribal-knowledge dependency and repetitive questions. Emphasizes reliability via grounding plus a validation layer, and cut latency by more than 50% through vector index optimization, caching, quantization, and selective re-validation. Comfortable orchestrating end-to-end LLM/data workflows with Airflow, Prefect, and Dagster, including monitoring and alerting.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps
“Red Hat ML/LLM engineer who designed and deployed a production LLM-powered customer support automation system using RAG, reducing latency by 30% via PEFT and vector search optimization. Built security and governance into retrieval (access-level filtering, encrypted Pinecone/ChromaDB) and delivered SHAP-based explainability via a dashboard for non-technical stakeholders. Experienced orchestrating distributed ML/RAG pipelines across AWS SageMaker and OpenShift with Airflow/Prefect, plus multi-agent workflows using CrewAI and LangGraph.”
Mid-level Data Scientist specializing in NLP/LLMs, time series forecasting, and MLOps
“Data/ML practitioner with hands-on experience building NLP systems from prototype to production: delivered a Twitter sentiment classifier with robust preprocessing, SVM modeling, and Power BI reporting, and built entity-resolution pipelines for messy multi-source customer data (reporting ~95% improvement in unique entity identification). Also implemented semantic linking/search using SBERT embeddings with FAISS vector retrieval and domain fine-tuning (reported ~15% precision lift), and applies production workflow best practices (Airflow/Prefect, Docker, Azure ML/Databricks, Great Expectations).”
Mid-level AI/ML Engineer specializing in GenAI, LLMs, RAG, and MLOps
“Built and deployed a production LLM-powered RAG document intelligence/Q&A system for healthcare prior authorization, reducing manual medical document review time and improving decision efficiency. Strong in end-to-end LLM application engineering (LangChain/LangGraph), retrieval quality improvements (hybrid search, embedding tuning, chunking strategies), and rigorous evaluation/monitoring for reliability.”
Mid-level Data Scientist specializing in AI/ML, LLMs, and domain analytics
“BlackRock AI/ML engineer who built and owned a production LLM document intelligence system for regulatory and investment analysis end-to-end. They combined RAG, multi-agent validation, strong evaluation/monitoring, and reusable Python services to process 50K+ documents, cut review time 40–50%, and improve decision accuracy by about 25%.”
Mid-level AI/ML Engineer specializing in GenAI, NLP, and MLOps
“Built and deployed an enterprise GenAI knowledge assistant over thousands of internal PDFs/reports using a RAG stack (GPT-4 + Hugging Face embeddings + vector DB) to reduce manual search and SME escalations. Uses LangGraph/LangChain to orchestrate modular agent workflows with relevance filtering and fallback handling, and applies rigorous evaluation (golden datasets, edge cases, A/B tests) with production monitoring metrics.”
Mid-level AI/ML Engineer specializing in Generative AI and Conversational AI
“GenAI Engineer at Infosys who built and deployed a production multi-agent RAG system for a top-tier bank, scaling to ~50,000 queries/day with 99.9% uptime. Drove measurable gains (45% accuracy improvement, 30% API cost reduction) through open-source LLM fine-tuning, Pinecone indexing/retrieval optimization, and AWS-based MLOps/monitoring, and has experience enabling adoption via developer workshops and customer-facing collaboration.”
Principal Software Engineer specializing in AI/ML and cloud-native backend systems
“McKinsey data/ML practitioner who led production deployment of an entity resolution + semantic search platform for unstructured finance and healthcare data, integrating with legacy systems under HIPAA constraints. Deep hands-on stack across transformers (spaCy/HF BERT), embeddings + FAISS, and production MLOps/workflow tooling (Airflow, Docker, CI/CD, Prometheus/Grafana), with reported gains of +30% decision speed and +25% search relevance.”
Mid-level Software Engineer specializing in cloud-native microservices and AI-powered web applications
“Backend engineer who built and owned an AI-powered SMS survey platform for a nonprofit serving at-risk communities (internet-limited users), using Cloudflare Workers + Twilio and a state-machine survey engine. Scaled it to ~10k active users with near-zero downtime, added English/Spanish support, and iteratively improved LLM behavior (Claude 3.7 Sonnet) to handle nuanced, real-world SMS responses reliably.”
Mid-level Generative AI Engineer specializing in decision intelligence and RAG for regulated enterprises
“Healthcare GenAI engineer who built a HIPAA-compliant, auditable RAG-based claims decision support system at Molina Healthcare, processing 3M claims and delivering major impact (48% faster manual reviews, 43% higher decision accuracy). Deep hands-on experience with LangChain orchestration, vector search (ChromaDB/FAISS), embedding fine-tuning, and safety controls (confidence scoring, rule validation, human-in-the-loop escalation) for clinical workflows.”
Mid-level Machine Learning Engineer specializing in financial AI, NLP, and MLOps
“AI/ML engineer with experience at Accenture and Morgan Stanley, building production LLM systems (GPT-3 summarization) and finance-focused ML models (credit risk and trading anomaly detection). Combines MLOps depth (Docker/Kubernetes, AWS SageMaker/Glue/Lambda, MLflow, A/B testing, drift monitoring) with practical domain adaptation techniques like few-shot prompting and RAG/knowledge-base integration.”
Mid-level Software Engineer specializing in AI/ML and data platforms
“AI/ML engineer who built a production agentic system to automate computational research experiments (simulation execution, parameter exploration, and numerical analysis) and mitigated context-window failures using constrained tool-calling/prompt-chaining patterns in LangChain with OpenAI tool-enabled models. Also has adtech/big-data pipeline experience at InMobi, orchestrating Spark jobs in Airflow to filter bot-like user IDs and publish clean IDs to an online NoSQL store for live serving, plus Apache open-source collaboration experience.”
Intern-level Data Scientist and ML Engineer specializing in analytics and AI systems
“Early-career analytics candidate with hands-on experience in SQL/Python data pipelines, Tableau reporting, and marketing engagement analytics across internship and startup settings. Stands out for combining rigorous data quality practices with practical AI system design, including an end-to-end GPT-4 grading capstone that emphasized explainability and human oversight.”
Mid-level Data Engineer specializing in cloud data platforms and AI/ML pipelines
“Data-engineering-oriented candidate with hands-on experience building an agentic AI product and operational automation workflows. They described automating inventory-to-ERP discrepancy reconciliation with anomaly detection and daily reporting, and also have practical scraping/automation experience dealing with Cloudflare-protected sites using Selenium and Puppeteer.”
Intern Software Engineer specializing in AI and full-stack development
“Early-career software engineer with internship experience at CirrusLabs building a voice-enabled CRM workflow that integrated Google Text-to-Speech and GPT-based processing for automated deal creation. Stands out for a reliability-focused approach to AI integrations, including validation, structured logging, prompt refinement, and hardening asynchronous API/UI behavior in real-world application flows.”
Mid-level AI Engineer specializing in NLP, computer vision, and LLM applications
“LLM/RAG practitioner who productionized an LLM-driven customer communication and transaction understanding system at PayPal, emphasizing privacy/compliance guardrails and large-scale data normalization. Experienced in real-time debugging of hallucinations via retrieval pipeline tuning and in leading hands-on developer workshops and sales-aligned POCs to drive adoption.”
Junior Data Scientist specializing in ML research, NLP, and healthcare analytics
“Completed an Amazon externship building a GPT-4 + RAG pipeline to summarize themes from hundreds of employee reviews for workforce analytics aimed at improving warehouse retention. Emphasizes production-readiness through labeled-data evaluation, source attribution for explainability, human-in-the-loop review, and rigorous data cleaning/observability to debug real-world LLM workflow issues.”
Mid-level Full-Stack Software Engineer specializing in FinTech and cloud-native microservices
“Full-stack engineer with fintech/trading domain experience (Fidelity) and startup SaaS CRM/billing platform work (Zoho), building real-time portfolio analytics and trade-processing systems. Strong in microservices, event-driven architectures (Kafka/WebSockets), and AWS/Kubernetes operations with measurable performance gains (~34–35% latency reduction) and maintainability improvements (~40% faster deployments). Targeting a founding full-stack engineer role in NYC with meaningful equity.”
Mid-level AI/ML Engineer specializing in financial risk, fraud detection, and GenAI
“GenAI/ML engineer in Citigroup’s finance environment who has deployed production RAG systems for investment banking under strict privacy and model-risk constraints. Built an internal-VPC Llama2 + Pinecone + LangChain solution with NER redaction and citation-based verification to prevent hallucinations, delivering major time savings, and also partnered with global finance executives to ship an AI early-warning indicator for treasury/liquidity risk.”
Mid-level Backend Engineer specializing in AI infrastructure and distributed systems
“Backend/distributed-systems engineer with AI infrastructure experience who built an AI-driven video generation platform, focusing on an asynchronous FastAPI-based orchestration layer between user APIs and heavy inference services. Strong in production instrumentation and latency/concurrency optimization; actively learning ROS 2 but has not yet worked on physical robotics or ROS-based deployments.”
Mid-level Machine Learning Engineer specializing in Generative AI and RAG systems
“GenAI/LLM engineer with production deployments in both fintech and retail: built an AI-powered mortgage document analysis/automated underwriting pipeline at Fannie Mae (OCR + custom LLM) cutting underwriting review from 3–4 hours to under an hour with privacy-by-design controls. Also helped build Sephora’s GenAI product advisory bot using LangChain-orchestrated RAG (Azure GPT-4, Azure AI Search, MySQL HeatWave vector search), focusing on grounding, evaluation, and compliance-aware architecture choices.”
Mid-level AI/ML Engineer specializing in scalable ML, NLP, and MLOps
“ML/AI engineer with strong production depth across classical ML, MLOps, LLM/RAG, and scalable Python data platforms, with experience at Cisco and Accenture. Stands out for tying technical decisions to measurable business outcomes, including $1.2M annual savings, 40% faster support resolution, and broad internal adoption of shared engineering frameworks.”
Mid-level AI/ML Engineer specializing in cybersecurity and fraud analytics
“AI/ML engineer with production experience across both classical ML and Generative AI, including a real-time banking fraud detection platform at Deloitte and a RAG-based cybersecurity threat analysis feature at Accenture. Stands out for owning systems end-to-end—from feature pipelines and model tuning through deployment, monitoring, retraining, and API/platform reliability—with measurable impact on fraud accuracy, false positives, and SOC analyst efficiency.”