Pre-screened and vetted.
Mid-level Data Scientist / AI-ML Engineer specializing in RAG, MLOps, and real-time analytics
“Software/ML engineer who built a production automated job-finding and cold-email personalization system for Fortune 500 outreach, using JobSpy for dynamic scraping, LangChain orchestration, and LLM+vector DB semantic search with grounding/relevance metrics and guardrails. Also delivered a predictive investment analytics platform for financial advisors, communicating results via Tableau dashboards and portfolio KPIs like Sharpe ratio and drawdowns.”
Senior Data Analytics & Data Science professional specializing in Financial Services
“Worked on large financial analytics datasets combining complaint text, transaction logs, and demographics; built end-to-end NLP/ML pipelines (TF-IDF + Random Forest) and data integration in BigQuery with Tableau reporting, citing ~95–98% accuracy. Also implemented entity resolution with fuzzy matching and semantic linking using BERT sentence-transformer embeddings stored in FAISS, including fine-tuning on labeled pairs to improve search/linking relevance.”
Mid-level AI/ML Engineer specializing in Generative AI and MLOps
“Built and shipped a production RAG assistant using GPT-4, LangChain, and Pinecone/FAISS to search 50K+ institutional documents, with a strong focus on groundedness and hallucination reduction through retrieval optimization and re-ranking. Pairs this with a metrics-driven evaluation/monitoring approach (BLEU/ROUGE, manual sampling, logging) and workflow automation via Airflow, and has experience translating stakeholder needs into iterative AI prototypes.”
Mid-level ML & Data Engineer specializing in GenAI, graph modeling, and fraud/risk analytics
“Built a production AI fraud/risk scoring platform at BlueArc that ingests web business/product/site data, generates text+image embeddings, and connects entities in a graph to detect reuse patterns and links to known bad actors. Optimized for scale with incremental graph re-scoring and delivered investigator-friendly explainability by surfacing the exact signals/relationships behind each score; orchestrated workflows with Airflow and GCP event-driven components (Pub/Sub, Dataflow, Cloud Run) and has recent LLM workflow orchestration experience (retrieval, prompting, scoring).”
Mid-level AI/ML Engineer specializing in LLM, NLP, and MLOps
“AI/ML Engineer with 3+ years of experience spanning RAG pipelines, MLOps, large-scale data workflow automation, and resilient Playwright-based UI automation. At Black Hawk Network and Wipro, shipped production systems with strong observability and compliance controls, including cutting flaky automation failures from 30% to under 2% and automating reconciliation workflows processing 3+ TB/day.”
Mid-level AI Builder and Data Engineer specializing in GenAI and data pipelines
“Full-stack AI product engineer who personally built ViGenAir, a multimodal system that turns long-form video into ads using FastAPI, React, and agentic scoring. Stands out for handling complex 50GB+ media pipelines, re-architecting systems to eliminate OOM failures, and making opaque AI workflows usable through interactive visual UX that improved trust, speed, and retention.”
Staff Full-Stack & DevOps Engineer specializing in cloud-native platforms and AI
“Backend/data engineer focused on production Python and AWS: built FastAPI REST services and a containerized ECS Fargate + Lambda architecture deployed via Terraform/CI-CD. Strong in data engineering (Glue/S3/Parquet/RDS) and operational reliability (CloudWatch/SNS, retries, schema-evolution handling), with experience modernizing legacy SAS reporting into Python microservices using feature flags and parity validation.”
Junior Data Scientist specializing in agentic AI and RAG pipelines
“LLM/agentic systems builder who shipped production workflows at Angel Flight West and Eureka AI, combining LangGraph + RAG (Postgres/pgvector) with strong observability (LangSmith/Langfuse). Delivered large operational gains (address lookup cut from 10 minutes to 60 seconds; accuracy to 92%) and has a track record of quickly stabilizing customer-critical pipelines (Pydantic-enforced JSON for ETL) while partnering with sales/ops to drive adoption.”
Senior AI/ML Engineer specializing in financial risk, fraud detection, and GenAI analytics
“AI/ML engineer with experience at Northern Trust and Persistent Systems building production LLM + RAG systems for regulated financial use cases, including liquidity forecasting, anomaly detection, and credit scoring. Emphasizes compliance-first design with explainability (SHAP), traceability (MLflow), and hallucination controls (FAISS + citation-grounded prompting), and has delivered drift-triggered retraining pipelines using Airflow and Kubernetes while translating model outputs into business-ready marketing segments.”
Mid-level AI/ML Engineer specializing in healthcare imaging and GenAI/LLM systems
“Built and deployed a production LLM/RAG clinical document understanding and summarization system for healthcare, focused on reducing manual review time while meeting strict accuracy, latency, and compliance needs. Demonstrates strong MLOps/orchestration depth (Airflow, Kubernetes, Azure ML Pipelines) and a rigorous approach to hallucination mitigation through layered, source-grounded safeguards and stakeholder-driven requirements with physicians/compliance teams.”
Mid-level AI/ML & Data Engineer specializing in MLOps and cloud data pipelines
“AI/ML engineer (Merkle) with hands-on experience deploying RAG-based LLM applications and real-time recommendation engines into production. Strong in cloud/on-prem architectures, GPU autoscaling, caching, and network optimization—delivered measurable latency reductions (40–70%) and improved retrieval relevance by systematically benchmarking chunking/embedding configurations and validating pipelines via CI/CD.”
Mid-level Machine Learning Engineer specializing in data security and GenAI systems
“Built Hexagon’s production Text-to-CAD Copilot that converts text and rough sketches into editable CAD code, combining GraphRAG (Neo4j/LangChain) with a Gemini-powered vision module and multi-agent geometric validation—cutting manual modeling from a day to ~45 seconds and driving retrieval latency below 50ms. Also has large-scale GCP data/ML orchestration experience (Airflow/Cloud Composer, Dataflow, Pub/Sub, Snowflake) processing 50M+ daily records with drift monitoring and automated reliability controls.”
Mid-level AI/ML Engineer specializing in LLMs, RAG pipelines, and MLOps
“Data professional with ~4 years of experience, most recently at AIG (insurance), building ML/NLP systems for fraud detection and policy automation using transformers, CNNs, and clustering/anomaly detection. Also developed a RAG-based knowledge retrieval system, iterating across embedding models and moving to production based on precision and latency SLAs, then containerizing and deploying with SageMaker and CI/CD.”
Mid-level Data Scientist/Data Analyst specializing in ML, BI dashboards, and ETL pipelines
“Data/ML practitioner with experience at Humana and Hexaware, focused on turning messy, semi-structured datasets into production-ready pipelines. Built an age-prediction model from book ratings using heavy feature engineering and multiple regression models, and has hands-on entity resolution (deterministic + fuzzy matching) plus embeddings/vector DB approaches for linking and search relevance.”
Mid-level Data Engineer specializing in cloud data pipelines and machine learning
“Experience spans AWS-hosted Python/Flask web apps built in college and enterprise data work at General Motors, including PostgreSQL query optimization across millions of records and multi-tenant-style data isolation via group-based, column-level permission grants. Also built an AWS-hosted meat-price prediction dashboard using Dash/Plotly and ran large nightly data pipelines orchestrated with Apache Airflow.”
Mid-level Data Engineer specializing in multi-cloud real-time data pipelines
“Data engineer with healthcare/clinical trial domain experience who owned a 100TB+/month AWS pipeline end-to-end (Glue/S3/Redshift/Airflow) and drove measurable outcomes (20% lower latency, 99.9% reliability, 40% less manual reporting). Also built production data services and API-based ingestion on GCP (Cloud Run/Functions/BigQuery) with strong validation, versioning, and safe migration practices, and launched an early-stage RAG solution (LangChain + GPT-4) for researchers.”
Senior Performance Marketing Analyst specializing in paid search and marketing analytics
“Growth marketing creative lead with experience at Starz and 3.5 years at mute6, spanning subscription acquisition and Shopify-based DTC/eCommerce. Drives Meta/TikTok/YouTube performance by pairing end-to-end creative production and UGC direction with data-led iteration (CTR/CPA/ROAS), including fatigue diagnosis and rapid refreshes.”
Mid-Level Data/ML Engineer specializing in Generative AI and cloud data platforms
“Built and productionized an LLM-based financial document analysis system using a RAG pipeline, including robust ingestion/chunking/embedding workflows, vector DB retrieval, and an AWS-deployed FastAPI service containerized with Docker. Demonstrates strong applied expertise in improving retrieval quality and latency at scale, plus hands-on experience debugging agentic/LLM workflows with monitoring and trace-based analysis while supporting demos and customer-facing adoption.”
Junior AI and Backend Engineer specializing in LLM systems
“AI/LLM engineer who has shipped production RAG copilots and multi-agent workflows, including a real-time Llama3 (Ollama) copilot backend handling 12k+ concurrent queries at 99.9% uptime. Deep on orchestration (Langflow/Airflow/Kubernetes), reliability evaluation (hallucination detection, p95 latency, token cost), and monitoring (Prometheus/Grafana), with demonstrated stakeholder-facing analytics delivery via Tableau.”
Mid-level AI/ML Engineer specializing in RAG and conversational AI systems
“Built and owned a production RAG-based conversational AI system at Entera for real estate analysis, taking it from experimentation through AWS deployment, monitoring, and iterative improvement. Demonstrates strong practical judgment in retrieval design, LLM safety, and scalable Python service architecture, with measurable impact including 30-40% reduction in manual analysis time and roughly 30% better response accuracy.”
Junior Software Engineer specializing in backend systems and ML applications
“Full-stack engineer with hands-on experience building and shipping production web products across AI, frontend, backend, and DevOps. Notably built, during an internship, an end-to-end resume–job matching platform that processed 1,000+ resumes/day and cut recruiter screening effort by 60%, and later shipped an internal operations dashboard at CHS with measurable performance gains.”
Executive Data & AI Leader specializing in cloud-native platforms and data-intensive systems
“Data/ML and product leader with large-scale consumer and enterprise experience (including Walmart) who blends hands-on prototyping with executive stakeholder alignment. Has delivered measurable outcomes across personalization, semantic search/knowledge graphs, and fraud/security architecture, and has scaled organizations rapidly (30→180 in 12 months) by upskilling and building modern data/ML engineering capabilities.”
Senior Product Manager specializing in AI-driven SaaS and product-led growth
“Product-focused candidate with live B2C/product-led growth experience who translates free-to-play game mechanics (onboarding as a core loop, progressive unlocking, nudges, variable rewards) into real-world web/mobile products. Strong in live experimentation, KPI-driven iteration, and monetization/retention measurement (DAU/MAU, funnels, LTV-focused A/B testing).”
Mid-level Data Scientist / ML Engineer specializing in Generative AI, RAG, and MLOps
“Built and productionized a RAG-based LLM research assistant for biomedical and regulatory document search using Mixtral 7B on SageMaker, LangChain, and Milvus, cutting research time by ~40%. Has hands-on multi-cloud MLOps experience across AWS/Azure/GCP with Kubeflow/Airflow/Composer plus Terraform + ArgoCD, and applies rigorous evaluation/monitoring (latency, accuracy, hallucinations). Also partnered with a non-technical PM to deliver an insurance policy Q&A chatbot that reduced customer response time by 30%+.”