Pre-screened and vetted.
Staff Software Engineer specializing in Cloud Healthcare Data Platforms
“Backend/data engineer with deep healthcare data experience (FHIR, de-identification) across both GCP and AWS. Has built and operated production microservices and ETL pipelines (FastAPI, Dataflow, Glue) with strong reliability practices, and led modernization of a legacy SAS compliance reporting system to cloud services with validated parity and stakeholder-facing Looker comparisons.”
Senior Software Engineer specializing in AI infrastructure and distributed systems
Senior Backend Software Engineer specializing in cloud platforms and event-driven systems
Machine Learning Engineering Intern specializing in LLM agents and multimodal reasoning
“LLM/agent engineer who built a production code-generation agent at Corvic AI that lets non-technical users query CSV/tabular data in natural language by generating and executing Python. Focused on making LLM systems reliable and scalable via schema-aware validation, sandboxed execution with feedback-driven retries, prompt caching/embeddings, async execution, and high-throughput data processing with Polars; also partnered with Adobe product/marketing to ship brand-aligned AI content generation for email and push notifications.”
Senior Software Engineer specializing in distributed backend systems and streaming infrastructure
Senior Full-Stack Software Engineer specializing in scalable web platforms
Mid-level Software Engineer specializing in backend APIs, data pipelines, and cloud microservices
Mid-level AI/ML Engineer specializing in LLMs, multilingual NLP, and low-latency MLOps
Junior Data Scientist specializing in LLM agents, RAG, and reinforcement learning
“McKinsey practitioner who built and deployed production LLM systems for consultants/clients, including a Power BI-integrated multi-agent chatbot (RAG + text-to-SQL + formatting) with custom Python orchestration, verification loops, and a 100+ case eval set achieving ~95% consistency. Also delivered a taxonomy-mapper agent that standardized inconsistent labeling for C-suite stakeholders, cutting a process from >2 weeks to <30 minutes, and drove adoption through demos and business-focused communication.”
Senior AI Research Engineer specializing in LLM agents and large-scale ML
“AT&T Labs builder who deployed a production multi-agent LLM system that lets engineers ask natural-language questions and automatically generates deterministic, schema-grounded Snowflake SQL (200–400 lines) to detect anomalies in massive wireless/network event data (~11B events/day). Experienced with LangChain and Palantir Foundry orchestration, RAG-based result interpretation, and rigorous evaluation/monitoring loops to continuously improve reliability.”
Mid-level AI/ML Engineer specializing in Generative AI, LLM alignment, and RAG
“Built and productionized a real-time enterprise RAG pipeline to improve factual accuracy and reduce LLM hallucinations by grounding responses in constantly changing internal knowledge bases (policies, manuals, FAQs). Experienced in orchestrating end-to-end ML workflows (Airflow/Kubernetes), handling messy multi-format data with schema enforcement (Pydantic/Hydra), and maintaining freshness via streaming incremental embeddings plus batch refresh. Also delivers applied ML solutions with non-technical teams (marketing/CRM) for segmentation and personalized engagement.”
Senior Backend Engineer specializing in Python and AWS serverless/data pipelines
“Serverless-focused backend/data engineer who has delivered production Python services on AWS (FastAPI on Lambda/API Gateway) plus Glue-based ETL pipelines from S3 to relational databases. Strong in operational reliability (timeouts, retries, monitoring/alerts) and modernization work, including parallel-run parity validation for migrating legacy batch logic to Python services. Demonstrated measurable SQL tuning impact, cutting a key query from 15 minutes to under 3.”
Mid-level Data Engineer specializing in AI/ML platforms and cloud data pipelines
“Built and shipped an LLM-powered data quality assistant that generates maintainable validation checks from metadata and executes them via Great Expectations, exposed through FastAPI and integrated into Airflow-managed pipelines. Emphasizes production reliability (structured outputs, guardrails, monitoring, versioning, human review) and works closely with compliance/operations teams to deliver clear, auditable, user-friendly AI outputs.”
Junior Data Scientist specializing in Generative AI and agentic LLM systems
“LLM/agentic-systems builder who has shipped production tools for investment research and procurement insights, including a company screener that processes thousands of conference-listed companies using FireCrawl + Google Search + Gemini. Demonstrates strong orchestration expertise (LangGraph multi-agent graphs), performance optimization (async/batching to sub-30s), and pragmatic reliability/evaluation practices with stakeholder-friendly UX (real-time cost tracking and model/parameter toggles).”
Mid-level Software Engineer specializing in distributed systems on AWS
“Data/infra engineer with AWS DynamoDB experience who has shipped reliability-critical systems (Global Tables replica repair protocol) and customer-facing service rollouts using canary/percentage-based deployments, strong observability, and rollback strategies. Also built end-to-end Airflow pipelines producing weekly automated reports over ~10TB of advertising segment data, with rigorous week-over-week data quality validation.”
Junior Machine Learning Engineer specializing in LLMs, data pipelines, and MLOps
Senior Agentic AI & Backend Engineer specializing in LLM platforms and multi-agent systems
Senior Full-Stack Engineer specializing in payments and commerce platforms
Mid-level Machine Learning Engineer specializing in LLMs, ranking, and scalable ML systems
Senior Backend Engineer specializing in distributed systems and cloud microservices
“Backend/data engineer with experience at Nike building high-volume order orchestration and validation APIs using FastAPI microservices on AWS EKS with Kafka, Redis, and Postgres. Strong in production reliability (timeouts/retries/idempotency), GitOps (Argo CD) + Terraform deployments, and data pipelines (AWS Glue/S3), with hands-on incident ownership and legacy modernization into API-driven services.”
Senior Data Engineer specializing in real-time data platforms and lakehouse architectures
“Senior product-focused engineer who has built real-time, customer-facing web applications and a microservices backend (TypeScript/React/Node) using RabbitMQ, MongoDB, and Redis. Demonstrates strong operational maturity (idempotency, tracing/observability, backpressure) and built an internal console that became the primary tool for debugging, replaying jobs, and managing system behavior.”