Pre-screened and vetted.
Mid-level Machine Learning Engineer specializing in LLM systems and healthcare data automation
“React performance-focused engineer who contributed performance patches back to an open-source context+reducer state helper after profiling and fixing excessive re-renders in an enterprise project management platform at Easley Dunn Productions. Also built an end-to-end LLM-driven pipeline at Prime Healthcare to normalize millions of supply-chain records, reducing defects by 80% and saving 160+ hours/month.”
Junior Machine Learning Engineer specializing in LLMs, NLP, and computer vision
“Built a production multi-agent pharmaceutical intelligence system for US oncology (breast cancer) conference and news monitoring, automating MSL-style information gathering and summarization for pharma and healthcare stakeholders. Uses CrewAI + LangChain orchestration, custom scraping across ~15 pharma newsrooms, and a grounding-score evaluation approach (sentence transformers/cosine similarity) to mitigate hallucinations.”
Mid-level Data Scientist/ML Engineer specializing in healthcare AI and MLOps
“Designed and deployed an enterprise LLM-powered clinical/pharmacy policy knowledge assistant at CVS Health, replacing manual searches across PDFs/Word/SharePoint with a HIPAA-compliant RAG system. Built end-to-end ingestion and orchestration (Airflow + Azure ML/Data Lake + vector index) with PHI masking, versioned re-embedding, and production monitoring (Prometheus/Grafana), and partnered closely with clinicians/compliance to ensure policy-grounded, auditable answers.”
Mid-level Data & AI Engineer specializing in healthcare data pipelines and MLOps
“Built and deployed a production LLM-powered clinical note summarization system used by care managers to speed review of 5–20 page unstructured medical records. Implemented safety-focused validation (prompt constraints, rule-based and section-level checks, human-in-the-loop) to reduce hallucinations while maintaining low latency and meeting privacy/regulatory constraints, integrating via APIs into existing clinical tools.”
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices and data platforms
“Backend/ML integration engineer with experience at Accenture and Walmart building Flask-based analytics and prediction APIs on PostgreSQL/MySQL. Strong focus on performance and scalability—uses precomputed aggregates, Redis caching, query tuning (indexes/partitioning/EXPLAIN), and async/background processing; also designs secure multi-tenant isolation with JWT and schema/db-per-tenant strategies.”
Mid-level AI/ML Engineer specializing in Generative AI and NLP
“AI/LLM engineer with production experience building secure, scalable compliance-focused generative AI systems (GPT-3/4, BERT) including RAG over internal regulatory document bases. Has delivered end-to-end pipelines on AWS with PySpark/Airflow/Kubernetes/FastAPI, emphasizing privacy controls, monitoring, and iterative evaluation (A/B testing). Also partnered closely with bank compliance officers using prototypes to refine NLP summarization/classification and reduce document review time.”
Principal Data Scientist & Software Engineer specializing in space mission data systems
“Space/heliophysics ML engineer who built a PyTorch GRU model to propagate solar wind from L1 to the magnetopause with probabilistic outputs for uncertainty quantification, achieving ~25% better CRPS than standard approaches. Also developed production-grade Python ETL and an open-source telemetry processing package for a mission (LEXI), using Docker and GitHub Actions CI/CD and iterating with scientist/engineer stakeholders.”
Mid-level AI/ML Engineer specializing in healthcare ML and LLM/RAG systems
“AI/LLM engineer with recent production experience at UnitedHealth Group building an end-to-end RAG system over structured EMR data and unstructured clinical notes, including evidence retrieval, GPT/LLaMA-based reasoning, and a validation layer for reliability. Strong in orchestration (Kubeflow/Airflow/MLflow), prompt engineering for noisy healthcare text, and rigorous evaluation/monitoring with gold-standard benchmarking, plus close collaboration with clinical operations stakeholders.”
Intern Robotics/ML Engineer specializing in autonomy, networking, and systems software
“Robotics software engineer who built a lightweight, ROS-free distributed control and telemetry stack for a Caltrans long-range culvert inspection robot. Strong in integrating heterogeneous hardware (UART motor controllers, Ethernet sensors, MJPEG cameras) and delivering real-time operator data via FastAPI/WebSockets, including reverse-engineering undocumented protocols and debugging network-induced latency with control-loop redesign.”
Mid-level Full-Stack Software Engineer specializing in FinTech and cloud-native microservices
“Backend engineer with fintech/banking experience (e.g., Canara Bank) building secure Python/Flask microservices for financial reporting and unified data access. Strong in Postgres/SQLAlchemy performance optimization (including materialized views) and in productionizing ML services on AWS (Lambda/ECS/CloudWatch) with Docker, model registries, and blue-green deployments, plus multi-tenant isolation via JWT-based middleware.”
Mid-level AI/ML Engineer specializing in MLOps and LLM-powered applications
“AI/ML engineer with production experience building a RAG-based internal analytics assistant (Databricks + ADF ingestion, Pinecone vector store, LangChain orchestration) deployed via Docker on AWS SageMaker with CI/CD and MLflow. Strong focus on real-world constraints—latency/cost optimization (LoRA, ~60% compute reduction), hallucination control with citation grounding, and enterprise security/governance. Previously at Intuit, delivered an interpretable churn prediction system (PySpark/Databricks, Airflow/Azure ML) that improved retention targeting by ~12%.”
Mid-level Software Engineer specializing in Robotics, AI/ML, and XR
“Candidate states they have worked on many robotics software projects and have overcome many technical challenges, but declined to provide any project details during the screening and ended the interview early.”
Mid-level Software Developer specializing in Java, Cloud, and Microservices
“Backend/Python engineer who owned an end-to-end FastAPI + AWS internal natural-language document Q&A system (Textract extraction, embeddings/vector DB, LLM integration) with strong focus on reliability and latency. Hands-on with Kubernetes + GitOps (Argo CD, Helm, rolling updates/auto-rollback) and built/optimized Kafka streaming pipelines using Prometheus/Grafana. Also supported a zero-downtime on-prem to cloud migration with parallel run and gradual traffic cutover.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and MLOps in Financial Services
“ML/LLM engineer at Charles Schwab who built a production loan-advisor chatbot integrated with internal knowledge and loan-calculator APIs, adding strict numeric validation to prevent rate hallucinations and optimizing context to control costs. Also runs ~40 Airflow DAGs orchestrating retraining/ETL/drift monitoring with an automated Snowflake→SageMaker→auto-deploy pipeline, and uses rigorous testing plus canary rollouts tied to business metrics and compliance constraints.”
Mid-level Data Scientist specializing in ML, MLOps, and customer analytics
“ML/NLP practitioner focused on insurance/claims analytics for a large financial firm, working with millions of fragmented structured and unstructured records. Built production-grade pipelines for entity extraction, entity resolution, and semantic search using Sentence-BERT + vector DB, including fine-tuning with contrastive learning (reported ~15% recall lift) and scalable ETL/containerized deployment on Kubernetes.”
Senior Data Scientist / ML Engineer specializing in NLP, anomaly detection, and cloud ML platforms
“ML/NLP practitioner who built customer-feedback topic modeling (NMF + TF-IDF) to diagnose chatbot-to-agent handovers and drove product/ops changes that reduced operational costs by 20%. Also developed LSTM-based intent recognition using Word2Vec/GloVe embeddings for semantic linking, and deployed an LSTM autoencoder for fraud anomaly detection that cut false positives by 25% while capturing 15% more fraud in A/B testing.”
Junior AI Software Engineer specializing in GenAI and full-stack ML deployment
“Backend/Founding-Engineer-style builder who architected AESOP, a multi-agent distributed platform for biomedical literature evidence synthesis. Implemented an async FastAPI stack on AWS with LangGraph orchestration, Redis/Postgres+pgvector, and Celery-based background processing, plus defense-in-depth security (JWT refresh/rotation and DB-level isolation). Notable for hardening LLM workflows with multi-layer validation and convergence safeguards to prevent hallucinations and infinite agent loops.”
“Built an AI-driven insurance policy summarization platform at Marsh, taking it end-to-end from messy PDF ingestion/OCR and custom extraction through LLM fine-tuning and AWS SageMaker deployment. Delivered measurable impact (25% reduction in manual review time, 99% uptime) and demonstrated strong production MLOps/LLMOps practices with Airflow/Step Functions orchestration, rigorous evaluation (ROUGE + human review), and continuous monitoring for drift, latency, and hallucinations.”
Mid-level AI/ML Engineer specializing in GenAI agents, RAG pipelines, and MLOps
“AI/ML engineer who built a production RAG-based internal document intelligence assistant (LangChain + Pinecone) to let employees query enterprise reports in natural language. Demonstrated hands-on pipeline orchestration with Apache Airflow and tackled real production issues like retrieval grounding and latency using tuning, caching, and token optimization, while partnering closely with non-technical business stakeholders through iterative demos.”
Intern Data Scientist specializing in healthcare AI and experimentation
“Human-AI Design Lab practitioner who productionized a wearable-health anomaly detection system by evolving a standalone autoencoder into a hybrid autoencoder + GPT-based approach, backed by PySpark ETL and MLOps on AWS SageMaker/MLflow. Also has applied LLM troubleshooting experience (fine-tuned FLAN-T5 summarization) and partnered with BI teams to run A/B tests and improve retention via feature stores and experimentation.”
Mid-level Data Scientist specializing in Generative AI, RAG systems, and ML engineering
“AI/LLM engineer who built a production question-answering RAG system for a University of Massachusetts faculty success initiative, cutting service tickets by 70%. Strong end-to-end RAG implementation skills (LangChain, Qdrant, hybrid/HyDE retrieval, FastAPI) with rigorous evaluation (RAGAS, LLM-as-judge) and practical handling of constraints like API rate limits and cost. Prior cross-functional delivery experience collaborating with SMEs and business owners at TCS and IBM.”
Intern Full-Stack Software Engineer specializing in AWS serverless and real-time web apps
“New-grad/early-career engineer who led high-stakes modernization of a field-operations platform from Firebase to AWS using an incremental/dual-write strategy, achieving zero downtime and ~30–32% infra cost reduction while improving scalability. Also built and productionized an AI-native code assistant (LangChain + Pinecone RAG) with measurable online metrics and safety guardrails, and has experience working directly with CEO/CTO/CPO and embedded with customer teams to ship enterprise features quickly.”
Mid-level Software Engineer specializing in cloud, microservices, and AI/ML
“Backend/API engineer with ~4 years of experience building production services in .NET Core/PostgreSQL/Redis/Docker and optimizing real-world latency issues (claims a ~60% response-time improvement). Also built and owned an end-to-end RAG-based AI assistant using Python/FastAPI, OpenAI APIs, and Pinecone, plus agentic workflows with reliability guardrails (retries, confidence thresholds, monitoring). Currently pursuing a master’s degree and targeting a $150k base salary.”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and external data collection systems on AWS (S3/Lambda/SQS/Glue/EMR) using PySpark/SQL, serving curated datasets to Snowflake/Redshift for finance and fraud teams. Has operated a large-scale crawler ingesting millions of records/day with anti-bot tactics, schema versioning/quarantine, and CloudWatch/Datadog monitoring, and also shipped a versioned REST API with caching and query optimization.”