Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in deep learning, NLP/LLMs, and MLOps
“Built and shipped a real-time oncology risk prediction system used by doctors during patient visits, trained on clinical data in AWS SageMaker and deployed via FastAPI with sub-second responses. Emphasizes clinician-trust features (SHAP explainability, validation checks) and HIPAA-compliant controls (encryption, RBAC, audit logging), runs Kubernetes-based production operations with autoscaling, monitoring, and drift/retraining workflows, and collaborated closely with oncologists at Flatiron Health.”
Engineering Leader and Former CTO specializing in scalable cloud platforms
“Entrepreneurial builder using Claude Code to rapidly prototype multiple product ideas and validate them with target users via social channels. Created and launched NotePulse (a lower-priced Notionlytics alternative), acquiring a dozen early users through Reddit-driven discovery and light paid experiments, with a strong emphasis on MVP scoping and product polish.”
Executive CTO & AI Architect specializing in regulated SaaS (InsurTech/Healthcare/FinTech)
“Insurance-tech CTO and repeat founder with 10+ years in insurance startups; was employee #4/CTO at Polly (formerly DealerPolicy) and helped scale it from a PowerPoint to 250 employees while raising $180M+. Currently building and selling AgentCanvas.ai—an extensible AI accelerator platform for large insurance agencies—after coding the product end-to-end and now running demos/POCs with prospective buyers.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps on AWS
“AI engineer who built a production RAG-based internal analyst tool at BlackRock, fine-tuning an LLM on proprietary financial data and adding four layers of guardrails (input/retrieval/generation/output) to improve grounding and reduce hallucinations. Implemented a LangChain-based multi-agent orchestration (7 major agents) deployed on AWS ECS, with reliability measured via internal human evaluation, LLM-as-judge, and RLHF/drift monitoring.”
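The four guardrail layers this profile names (input/retrieval/generation/output) can be sketched roughly as below. This is a minimal illustration of the pattern, not the candidate's actual code: every function name, threshold, and blocked-term list here is hypothetical.

```python
# Illustrative four-layer guardrail pipeline for a RAG assistant.
# All names and thresholds are hypothetical, not the candidate's implementation.

BLOCKED_TERMS = {"ssn", "password"}  # input layer: crude content/PII filter
MIN_RETRIEVAL_SCORE = 0.35           # retrieval layer: relevance floor


def check_input(query: str) -> str:
    """Layer 1: reject queries that trip basic content rules."""
    if any(term in query.lower() for term in BLOCKED_TERMS):
        raise ValueError("query rejected by input guardrail")
    return query


def check_retrieval(docs: list[tuple[str, float]]) -> list[str]:
    """Layer 2: drop low-relevance passages so the model isn't fed noise."""
    kept = [text for text, score in docs if score >= MIN_RETRIEVAL_SCORE]
    if not kept:
        raise LookupError("no sufficiently relevant context retrieved")
    return kept


def check_generation(answer: str, context: list[str]) -> str:
    """Layer 3: require the draft answer to overlap with retrieved context."""
    context_words = set(" ".join(context).lower().split())
    answer_words = set(answer.lower().split())
    overlap = len(answer_words & context_words) / max(len(answer_words), 1)
    if overlap < 0.2:  # hypothetical grounding threshold
        raise ValueError("answer rejected as insufficiently grounded")
    return answer


def check_output(answer: str) -> str:
    """Layer 4: final formatting pass before the user sees the answer."""
    return answer.strip()


def answer_query(query: str, retrieve, generate) -> str:
    """Run a query through all four guardrail layers in order."""
    query = check_input(query)
    context = check_retrieval(retrieve(query))
    draft = generate(query, context)
    return check_output(check_generation(draft, context))
```

The key design point is that each layer can independently refuse, so a failure surfaces as an explicit error rather than an ungrounded answer.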
Junior Data Engineer specializing in BI, governed metrics, and workflow automation
“Built and shipped LLM/OCR/NLP-driven document-intelligence workflows in operational environments (EnvoyX and UPS), emphasizing production readiness via explicit state-machine orchestration, confidence gates, and human-in-the-loop review. Demonstrated strong business impact in customs brokerage/document ingestion: 50% fewer customs rejects, 30% higher throughput, SLA adherence improved from 71% to 96%, and platform reliability reaching 99.6% with 78% fewer bad-data incidents.”
Mid-level Data Engineer specializing in cloud data pipelines and enterprise data platforms
“Data engineer/backend engineer who owns large-scale, real-time event pipelines on AWS end-to-end, including a petabyte-scale CDC ingestion flow from multiple Postgres DBs into Redshift. Re-architected a legacy DynamoDB+S3 approach into a Delta Lake + DuckDB/PyArrow-compatible design, improving performance dramatically (e.g., ~600s to ~10s for 1k records) and increasing reliability at high file volumes.”
Mid-level AI Engineer specializing in GenAI, NLP, and MLOps
“LLM/agentic-systems engineer with PayPal experience hardening an LLM-powered fraud support assistant from prototype to production, focusing on low-latency distributed architecture, rigorous evaluation/testing, and security/compliance. Comfortable in customer-facing and GTM contexts—runs technical demos/workshops, builds tailored pilots, and aligns sales/CS with engineering to close deals and drive adoption.”
Senior Python Full-Stack Developer specializing in cloud, data engineering, and ML/GenAI
“Backend/data engineer with hands-on production experience building FastAPI services on AWS and implementing strong reliability/observability (CloudWatch, ELK, correlation IDs, alarms). Has delivered serverless + container solutions with IaC (CloudFormation/Terraform) and Jenkins CI/CD, and built AWS Glue/PySpark pipelines into S3/Redshift with schema-evolution and data-quality safeguards; demonstrated large-scale SQL tuning (from 45 min to 3 min on a 500M-row workload).”
Principal Cloud & Infrastructure Engineer specializing in reliability and regulated data platforms
“Founder/CTO-type startup leader who has built cloud-native data and AI platforms from scratch while owning both technical vision and product direction. Brings rare end-to-end startup experience spanning zero-to-one building, growth-stage execution, and fundraising from early stage through exit, with a strong ability to translate technical complexity into clear investor narratives.”
Junior Data Analyst specializing in business analytics and BI
“Analytics-focused candidate with hands-on experience building SQL data pipelines and Python-based forecasting workflows for inventory and planning use cases. They emphasize data quality, stakeholder trust, and operational adoption, citing a 19% forecast accuracy improvement and strong experience translating analytics into dashboard-ready business metrics.”
Executive CIO/CTO/CDO specializing in data, AI/ML, and digital transformation
“Founder building a healthcare provider data management startup who has progressed from problem identification to product architecture, patent filing, prototype development, beta customer outreach, and angel fundraising. They also have experience performing technical assessments for VCs and approach company-building with a structured focus on customer demand, risk mitigation, IP protection, and candid core-team formation.”
Junior Machine Learning Engineer specializing in AI, computer vision, and data systems
“Built and owned an end-to-end AV operations automation and dashboarding platform for USC event operations, used daily to coordinate hundreds of live events. Delivered a React/TypeScript full-stack system integrating Smartsheet APIs with strong reliability practices (typed contracts, validation/fallbacks, safe rollouts) and experience with queue-based microservice patterns (idempotency, retries, DLQs, monitoring).”
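The queue-consumer patterns this profile lists (idempotency, retries, DLQs) fit together as in the sketch below. This is a generic illustration under assumed semantics; the class, field, and constant names are hypothetical, not drawn from the candidate's system.

```python
# Illustrative queue consumer combining idempotency, bounded retries, and a
# dead-letter queue. All names are hypothetical.
from collections import deque

MAX_ATTEMPTS = 3  # assumed retry budget per message


class Consumer:
    def __init__(self, handler):
        self.handler = handler
        self.processed_ids = set()   # idempotency: remember handled message IDs
        self.dead_letters = deque()  # DLQ: messages that exhausted all retries

    def consume(self, message: dict) -> None:
        msg_id = message["id"]
        if msg_id in self.processed_ids:
            return  # duplicate delivery: already handled, safe to drop
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                self.handler(message)
                self.processed_ids.add(msg_id)
                return
            except Exception:
                if attempt == MAX_ATTEMPTS:
                    self.dead_letters.append(message)  # park for manual review
```

Tracking processed IDs makes redelivery harmless (at-least-once delivery becomes effectively once), while the DLQ keeps poison messages from blocking the queue.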
Senior Data Engineer specializing in cloud analytics and data modernization
“Candidate has hands-on experience delivering production data and AI systems, including an AWS-based real-time data platform for a financial client at Deloitte and a production RAG workflow that cut manual search time by 40%. They stand out for combining strong data engineering depth with practical LLM governance, incident debugging, and stakeholder management across business and risk/compliance teams.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and MLOps
“Internship experience shipping production AI systems: built an end-to-end RAG platform (Python/FastAPI + LangChain/LangGraph + vector search) to answer support questions from unstructured internal docs, with a strong focus on hallucination prevention through confidence gating and rigorous offline/online evaluation. Also delivered an AI-driven personalization/analytics feature using an unsupervised clustering pipeline, iterating with PMs to align statistically strong clusters with actionable business segmentation.”
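The confidence-gating idea this profile highlights — refuse to answer when retrieval evidence is weak rather than risk a hallucination — can be sketched as follows. The threshold value and all function names are assumptions for illustration only.

```python
# Illustrative confidence gate on a RAG answer path: if no retrieved passage
# clears the score threshold, return a refusal instead of generating.
# Threshold and names are hypothetical.

CONFIDENCE_THRESHOLD = 0.6


def gated_answer(question: str, retrieve, generate) -> str:
    """Generate an answer only when retrieval confidence clears the gate."""
    hits = retrieve(question)  # expected shape: [(passage, score), ...]
    top_score = max((score for _, score in hits), default=0.0)
    if top_score < CONFIDENCE_THRESHOLD:
        return "I don't have enough information to answer that."
    context = [passage for passage, score in hits if score >= CONFIDENCE_THRESHOLD]
    return generate(question, context)
```

The refusal branch is what makes offline/online evaluation meaningful: gated answers can be scored separately from refusals when tuning the threshold.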
Mid-level Data Analyst specializing in healthcare and financial analytics
“Analytics professional with Deloitte experience building SQL and Python workflows for revenue, pipeline, and opportunity analytics at scale. They combine strong data engineering and modeling skills with business-facing delivery, citing impacts including 8-10% conversion improvement, ~$700K revenue protected, 12% YoY project acquisition growth, and 15% retention improvement in financial services.”
Junior data and product analyst specializing in machine learning and analytics
“Senior at the University of Michigan who led most of the technical build for a real client-facing Medicare fraud detection system with explainable ML and an analyst-ready Streamlit dashboard. Also builds practical LLM tools independently, including a market sentiment pipeline over Reddit/news data and a resume parser/grader, showing strong product instinct alongside applied ML and data engineering depth.”
Mid-level Full-Stack Java Developer specializing in cloud microservices and AI-driven platforms
“Software engineer with Intuit experience shipping an end-to-end real-time financial insights product on AWS, using event-driven architecture with Kafka and Spark Streaming to process millions of records with low latency. Also delivers customer-facing React + TypeScript dashboards and has hands-on production operations experience, including resolving a database scaling incident via read replicas, query tuning, and connection pooling.”
Mid-level Data Scientist / Machine Learning Engineer specializing in fraud, risk, and MLOps
“AI/ML practitioner with Northern Trust experience who has shipped production LLM systems (internal support assistant) using RAG, vector databases, orchestration (LangChain/custom pipelines), and rigorous monitoring/feedback loops. Also built AI-driven fraud detection/risk monitoring solutions in a regulated financial environment, emphasizing explainability (SHAP), audit readiness, and stakeholder trust through dashboards and clear communication.”
Mid-level AI/ML Engineer specializing in fraud detection and risk analytics in Financial Services
“At JP Morgan Chase, built and deployed a production LLM-powered RAG knowledge assistant to help fraud investigators and risk analysts quickly navigate regulatory updates and internal policies, reducing investigation delays and compliance risk. Strong focus on secure retrieval (RBAC filtering), reliability (layered testing + observability), and production constraints (latency/SLOs), with Airflow-orchestrated, auditable ML pipelines.”
Junior Data Scientist specializing in Generative AI and applied machine learning
“At Evoke Tech, built a production LLM ‘Testbench’ to quickly compare LLMs/embedding models and RAG strategies (semantic, hybrid BM25, re-ranking, HyDE, query expansion) to select optimal architectures for different client needs. Also developed a multi-agent, multimodal (voice/text) RAG system for live catalog retrieval and safe product recommendations using LangGraph/LangChain with LangSmith monitoring, and regularly translated PM/UX goals into concrete agent behaviors via demos and flowcharts.”
Senior Software Engineer specializing in full-stack systems, data pipelines, and ML
“Built and productionized an autonomous research agent (AutoGPT) in a Docker/Kubernetes environment with Pinecone-based long-term memory and custom Python tools for analysis, visualization, and report drafting. Implemented layered guardrails (prompt templates, automated validation, self-critique loops, and monitoring) and achieved ~25% reduction in manual report generation time while scaling the workflow to support multiple concurrent users.”
Senior Data Scientist specializing in ML, NLP, and GenAI analytics
“Built and deployed an LLM-powered analytics assistant enabling business users to ask questions in plain English and receive validated Spark SQL executed in Databricks, with a Streamlit/Flask UI. Addressed strict client schema-privacy constraints by implementing a RAG strategy and ultimately leveraging AWS Bedrock and fine-tuned reference docs. Also has production ML pipeline experience using Docker + Airflow and AWS (S3/ECS/EC2) for financial classification models.”
Mid-level AI/ML Engineer specializing in LLM fine-tuning, RAG, and MLOps
“AI/ML engineer with HP experience building and productionizing an LLM-powered document intelligence platform (LangChain + Pinecone) to deliver semantic search and contextual Q&A across millions of enterprise support documents. Demonstrates strong MLOps and scaling expertise (Airflow, Kubernetes autoscaling, Triton GPU inference, monitoring with Prometheus/W&B) plus a structured approach to evaluation (A/B tests, shadow deployments, failover) and effective collaboration with non-technical stakeholders.”
“GenAI/data engineering practitioner with production experience across Equinix, Optum, and Citibank—built an Azure OpenAI (GPT-4) + LangChain document intelligence platform processing 1.5M+ docs/month and a HIPAA-compliant Airflow healthcare pipeline handling 5M+ claims/day. Also delivered a real-time fraud detection + explainability system using LightGBM and a fine-tuned T5 NLG component, improving fraud detection accuracy by 15%+ while partnering closely with compliance stakeholders.”