Pre-screened and vetted.
Mid-level Data Analyst specializing in ML, AI, and data visualization
Senior Data Engineer specializing in real-time pipelines, cloud data platforms, and healthcare analytics
Senior Data Scientist specializing in AWS ML solutions for healthcare, telecom, and e-commerce
Senior Data Analyst & Data Scientist specializing in healthcare, epidemiology, and predictive modeling
Mid-level AI/ML Engineer specializing in MLOps, NLP/CV, and fraud detection
Mid-level Machine Learning Engineer specializing in MLOps and healthcare analytics
Mid-level Data Scientist specializing in ML, NLP, and cloud deployment
Mid-level Data Analyst specializing in BI dashboards, SQL optimization, and cloud data pipelines
Mid-level Data Engineer specializing in lakehouse architectures and cloud ELT
Senior Marketing Analytics & Operations Analyst specializing in performance marketing and BI
“Paid media/demand gen marketer who owned a $50K/month multi-brand budget (five business units) across Google Ads and LinkedIn Ads. Ran structured A/B tests, made fast budget reallocations based on benchmarks (CTR/CPL/lead quality), and diagnosed performance declines via funnel and saturation analysis (frequency/overlap) to stabilize CPAs and restore volume.”
Mid-level DevOps & Cybersecurity Software Developer specializing in IAM/CIAM automation
“Frontend engineer who led the end-to-end UI for an internal employee catalog tool at Genetec, building React/TypeScript dashboards with complex search filters. Emphasizes tight product-owner feedback loops (weekly demos), Figma-based design alignment, and disciplined delivery practices using CI/CD, automated tests, and version tagging for rollouts/reverts.”
Mid-level AI/ML Engineer specializing in NLP, GenAI, and MLOps in healthcare and finance
“AI/ML engineer with CVS Health experience deploying production LLM systems in regulated healthcare settings, including a large-scale RAG solution (1M+ documents) built for compliance-grade, auditable policy/regulatory Q&A with strong anti-hallucination controls. Also delivered an NLP summarization system for physician notes/case narratives by partnering closely with non-technical care operations stakeholders and iterating via prototypes, dashboards, and feedback loops.”
Intern Full-Stack Software Engineer specializing in automation and data-driven systems
“Early-career engineer with Charles Schwab internship experience building and testing production-bound internal APIs, emphasizing architectural fit, stakeholder alignment, and systematic debugging. Also has academic Python/ML experience analyzing Oura Ring biometric data and exposure to multi-agent robotics through coursework and RoboSub.”
Mid-level Software Engineer specializing in ML, LLM apps, and cloud data systems
“Built a production SQL chatbot for access-log analytics that replaced manual custom report requests with natural-language querying, using LangGraph and a ChromaDB-backed RAG pipeline for grounded, consistent answers. Implemented a privacy-preserving design where the LLM never sees raw customer data (only query metadata) and has experience building multi-agent/tool-calling systems with LangGraph (DeepAgents), including solving sub-agent communication drift via self-reflection.”
Mid-level Data Scientist specializing in ML, NLP, and Generative AI
“Data engineering / ML practitioner with experience at MetLife building transformer-based sentiment analysis over large unstructured datasets and productionizing pipelines with Airflow/PySpark/Hadoop (reported 52% efficiency gain). Also implemented embedding-based semantic search using Pinecone/Weaviate to improve retrieval relevance and enable RAG for customer support and document matching use cases.”
Mid-level Data Scientist & Generative AI Engineer specializing in LLMs and RAG
“ML/NLP practitioner who built a retrieval-augmented generation (RAG) system for large financial and operational document sets using Sentence-Transformers (all-mpnet-base-v2) and a vector DB (e.g., Pinecone), with a strong focus on retrieval evaluation and chunking strategy optimization. Experienced in entity resolution (rules + embedding similarity with type-specific thresholds) and in productionizing scalable Python data workflows using Airflow/Dagster and Spark.”
Mid-level AI/ML Engineer specializing in LLMs, GenAI, and NLP
“AI/ML Engineer who built a production RAG-based LLM system for insurance policy documents, turning thousands of messy PDFs into a searchable index using LangChain, Azure AI Search vector indexes, hybrid retrieval, and FastAPI. Strong focus on evaluation (MRR/precision@k/recall@k, RAGAS) and performance optimization (vLLM), with prior clinical NLP experience using BERT-based NER validated on ground-truth datasets.”
Junior Data Scientist specializing in ML, LLMs, and RAG applications
“University hackathon finalist (2nd place) who built CareerSpark, a production-style multi-agent career guidance app in 24 hours using a hierarchical debate architecture with a moderator/judge agent. Has startup internship experience at LiveSpheres AI using LangChain for multi-LLM orchestration, and demonstrates a structured approach to testing/evaluation (golden sets, integration sims, latency/accuracy KPIs) plus strong non-technical stakeholder communication.”
Mid-level Data Scientist specializing in Generative AI and multimodal systems
“Recent J&J intern who built a conversational RAG agent and led a shift from a monolithic model to a modular RAG workflow, cutting turnaround from several days to under a second by tackling data fragmentation, context retention, and embedding/latency optimization. Also worked on a large (7B-parameter) multimodal VQA pipeline for healthcare research and stays current via NeurIPS/ICLR and open-source contributions.”
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and MLOps
“Built a secure, on-prem/private GPT assistant to replace manual SharePoint-style search across thousands of policies/SOPs/engineering docs, using a production RAG stack (LangChain/LangGraph, FAISS/Chroma, PyMuPDF+OCR, vLLM). Implemented layout-aware ingestion (including table-to-JSON) and a multi-agent retrieval/generation/verification workflow with strong observability and compliance guardrails, delivering ~70% reduction in search time.”