Pre-screened and vetted.
Mid-level Software Engineer specializing in backend systems, data pipelines, and GenAI automation
Director of Data Science specializing in ML, NLP/LLMs, and MLOps
Mid-level Software Engineer specializing in AI/ML, GenAI agents, and cloud microservices
Mid-level AI/ML Engineer specializing in scalable ML systems and cloud MLOps
Senior Machine Learning Engineer specializing in LLMs, agentic AI, and MLOps
Mid-level Back-End Software Engineer specializing in regulatory reporting (EMIR)
Mid-level Machine Learning Engineer specializing in NLP, LLMs, and deep learning
Mid-level Software Engineer specializing in cloud data platforms and full-stack web development
Mid-level AI/ML Engineer specializing in recommender systems, NLP, and MLOps
Mid-level Full-Stack Software Engineer specializing in cloud microservices and LLM/RAG systems
Senior Data Scientist specializing in Generative AI, NLP, and MLOps
Intern Software Engineer specializing in FinTech and AI platforms
“Systems-focused engineer who built an OS kernel with multithreading, priority scheduling, system calls, and synchronization primitives, and debugged race conditions end-to-end. While not yet hands-on with ROS/SLAM, they clearly connect low-level concurrency and scheduling decisions to deterministic, reliable robotics-style real-time workloads.”
Senior Full-Stack Software Engineer specializing in Python/Django and modern JavaScript
Intern AI/ML Engineer specializing in data science, NLP, and reinforcement learning
Mid-level Software Engineer specializing in backend systems and LLM-powered AI applications
Mid-level Software Engineer specializing in AI, data engineering, and cloud systems
Principal Data Scientist specializing in LLMs, RAG, and enterprise AI products
Junior Data Engineer specializing in Azure data platforms and GenAI analytics
“Data/ML practitioner with experience spanning medical imaging (retinal vessel analysis for hypertension/CVD risk prediction) and enterprise data engineering at Carl Zeiss. Built large-scale SAP data cleaning/validation pipelines (10M+ daily records, ~99% accuracy) and RAG-based semantic search with LangChain/vector DBs that cut manual querying by 82%, plus automation that reduced data onboarding from 8 hours to 12 minutes.”
Mid-level AI/ML Software Engineer specializing in agentic LLM systems
“Built and deployed a production LLM-powered multi-agent compliance copilot (life sciences/finance) using LangChain/LangGraph + RAG over vector databases, delivered via async FastAPI on Kubernetes. Emphasizes audit-ready, deterministic outputs with schema constraints and citations, plus rigorous evaluation/monitoring; reports 60%+ reduction in manual research time and successful production adoption.”
“LLM/agent workflow engineer with healthcare experience (CVS/CBS Health) who built and deployed a production call-insights platform using Azure OpenAI + LangChain/LangGraph, including sentiment and compliance checks. Demonstrates deep HIPAA/PHI handling expertise (tenant-contained processing, redaction, RBAC/encryption/audit logging) and production rigor (testing, eval sets, validation/retries, autoscaling) to scale to thousands of transcripts.”
Mid-level Data Scientist / AI-ML Engineer specializing in Generative AI and LLM applications
“Built a production GenAI-powered analytics assistant to reduce reliance on data analysts by enabling natural-language Q&A over Databricks/Power BI dashboards, backed by vector search (Pinecone/Milvus) and a Neo4j knowledge graph, including multimodal support via OpenAI Vision. Demonstrates strong real-world LLM reliability engineering with strict RAG, LangGraph multi-step verification, and Guardrails/custom validators, plus broad orchestration and production monitoring experience (Airflow, ADF, Step Functions, Kubernetes, Prometheus/CloudWatch).”
Director-level AI & Data Science leader specializing in GenAI, LLMs, and MLOps
“ML/NLP engineer currently working in NYC on a system that connects complex unstructured data sources to deliver personalized insights, using embeddings + vector DB retrieval and a RAG architecture (LangChain, Pinecone/OpenSearch). Strong focus on production constraints—especially low-latency retrieval—using FAISS/ANN, PCA, index partitioning, and Redis caching, plus PEFT fine-tuning (LoRA/QLoRA) and KPI/SLA-driven promotion to production.”