Pre-screened and vetted.
“Senior data scientist with ~5 years’ experience building production ML/NLP systems in finance (Wells Fargo) and deep learning for sensor analytics in connected vehicles (Medtronic). Has delivered end-to-end platforms combining time-series forecasting with transformer-based NLP, including automated drift monitoring/retraining (MLflow + Airflow) and standardized Docker/CI/CD deployments; achieved a reported 22% precision improvement after domain fine-tuning.”
Mid-level Data Scientist specializing in real-time fraud detection and MLOps
“ML/NLP engineer with experience at Charles Schwab building an NLP + graph (Neo4j) entity-resolution system to unify fragmented user/device/transaction data and improve downstream model quality and analyst querying. Has applied embeddings (SentenceTransformers + FAISS) with domain fine-tuning to boost hard-case matching recall by ~12% while maintaining precision, and has a track record of hardening scalable Python/Spark pipelines and productionizing fraud models via A/B tests and shadow-mode monitoring.”
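The embedding-plus-threshold matching described above can be sketched in miniature. Everything below is a hypothetical stand-in, not the candidate's code: a toy character-trigram vectorizer and brute-force cosine similarity replace SentenceTransformers and a FAISS index, and the names and threshold are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str, n: int = 3) -> Counter:
    """Toy character-trigram vector standing in for a SentenceTransformer embedding."""
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity over sparse trigram counts."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query: str, candidates: list[str], threshold: float = 0.5):
    """Return the closest candidate above a precision-preserving threshold, else None.
    The cutoff is what keeps recall gains from eroding precision on hard cases."""
    score, match = max((cosine(embed(query), embed(c)), c) for c in candidates)
    return match if score >= threshold else None
```

The threshold is the precision/recall dial: lowering it recovers more hard-case matches at the risk of false merges, which mirrors the trade-off the summary describes.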
Mid-level AI Engineer specializing in LLMs, RAG, and agentic platforms
“Built and shipped a production RAG-based assistant that lets parents ask natural-language questions about their child’s learning progress, using pgvector retrieval (child-id filtered) and Redis caching to hit ~180ms latency. Implemented real-world guardrails and compliance (Llama Guard, COPPA, retrieval thresholds, fallbacks) with 99.5% uptime, and ran human-in-the-loop eval loops that improved satisfaction from 3.8 to 4.2 while serving 60k+ monthly users and reducing costs significantly.”
Senior Data & Platform Engineer specializing in cloud-native streaming and distributed systems
“Financial data engineer who has built and operated high-volume batch + streaming pipelines (200–300 GB/day; 5–10k events/sec) using AWS, Spark/Delta, Airflow, Kafka, and Snowflake, with strong emphasis on data quality and reliability. Demonstrated measurable impact via 99.9% SLA adherence, major reductions in bad records/nulls, MTTR improvements, and significant latency/runtime/query performance gains; also built a distributed web-scraping system processing 5–10M records/day with anti-bot and schema-drift defenses.”
Mid-level Data Engineer specializing in multi-cloud data platforms for healthcare and finance
“Data engineer with Cigna experience building and operating an end-to-end AWS-based healthcare claims pipeline processing ~2TB/day, using Glue/Kafka/PySpark/SQL into Redshift. Strong focus on data quality and reliability (schema validation, monitoring/alerting, retries/checkpointing/backfills), reporting improved accuracy (~99%) and reduced latency, plus experience serving real-time Kafka/Spark data to downstream analytics with documented data contracts.”
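The schema-validation-with-dead-lettering practice named above can be sketched like this. The schema, field names, and records are invented for illustration; the real pipeline's contracts are not specified in the summary.

```python
# Hypothetical data contract for a claims record: field name -> required type.
SCHEMA = {"claim_id": str, "amount": float, "member_id": str}

def validate(record: dict) -> bool:
    """A record passes only if every contracted field is present and correctly typed."""
    return all(isinstance(record.get(field), ftype) for field, ftype in SCHEMA.items())

good: list[dict] = []
dead_letter: list[dict] = []  # quarantined for alerting/backfill rather than silently loaded
for rec in [
    {"claim_id": "c-1", "amount": 120.5, "member_id": "m-9"},
    {"claim_id": "c-2", "amount": "oops", "member_id": "m-9"},  # mistyped amount
]:
    (good if validate(rec) else dead_letter).append(rec)
```

Routing bad rows to a dead-letter path instead of dropping or loading them is what makes the accuracy figure measurable: quarantined counts become the monitoring signal.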
Intern Full-Stack Software Engineer specializing in AI/ML and cloud
“Built a Python-based geospatial machine learning backend for PFAS contamination risk mapping, including reproducible feature pipelines, ensemble modeling, and a FastAPI layer for visualization/analysis. Emphasizes data integrity and robustness (CRS/coverage checks, fail-fast validation) and has led safe backend refactors using feature flags, idempotent backfills, and Postgres RLS for secure, queryable results delivery.”
Mid-level AI Engineer specializing in LLMs, RAG, and healthcare AI
“Built and scaled an AI-powered voice/chat patient engagement platform at Penn Medicine from early prototype into production clinical workflows, focusing on latency, edge cases, and user trust. Strong in LLM reliability engineering (structured prompts, validation/fallbacks), real-time troubleshooting with observability, and cross-functional enablement through pilots, demos, and sales/customer partnership.”
Principal AI Systems Architect specializing in AI governance and audit-safe autonomous agents
“Backend engineer who architected and owned a mission-critical outage management/decision-support platform, replacing a legacy system that failed under load. Emphasizes auditability, deterministic validation, and server-side concurrency controls (section locking, scoped autosaves), plus redundancy/load balancing and monitoring to keep the system stable for 24/7 operations handling 1,500+ weekly outage events.”
Senior AI/ML Engineer specializing in production-grade LLM systems for regulated finance
“AI/LLM engineer with published work who built FinVet, a production financial misinformation detection system using multi-pipeline RAG, confidence-based voting, and evidence-backed outputs (F1 0.85, +37% vs baseline). Also built NexusForest-MCP, a Dockerized Model Context Protocol server exposing structured global deforestation/carbon data via SQL tools for reliable LLM tool use. Previously delivered borrower risk-rating (PD) models at BMO Financial Group that were validated and integrated into an enterprise credit system through close collaboration with credit officers and portfolio managers.”
Mid-level Data Engineer specializing in Lakehouse, Streaming, and ML/LLM data systems
“Built and productionized an enterprise retrieval-augmented generation platform for internal knowledge over large unstructured corpora, emphasizing trust via strict citation/grounding and hybrid retrieval (BM25 + FAISS + cross-encoder re-ranking). Demonstrates strong scaling and cost/latency optimization through incremental indexing/embedding and index partitioning, plus disciplined evaluation/observability practices. Has experience operationalizing pipelines with Airflow/Databricks/GitHub Actions and partnering closely with risk & compliance stakeholders on auditability requirements.”
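One common way to combine a BM25 pass and a dense (FAISS) pass, as described above, is Reciprocal Rank Fusion; the summary does not state which fusion scheme the candidate used, so this is a representative sketch with invented doc IDs, prior to any cross-encoder re-ranking.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: merge several ranked doc-id lists into one.
    Each list contributes 1 / (k + rank) per document; higher total wins."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a BM25 pass and a dense-vector pass over the same corpus.
bm25_hits = ["doc_a", "doc_c", "doc_b"]
dense_hits = ["doc_b", "doc_a", "doc_d"]

fused = rrf_fuse([bm25_hits, dense_hits])
```

RRF needs only ranks, not comparable scores, which is why it pairs well with heterogeneous retrievers; the fused shortlist is then cheap enough to hand to a cross-encoder for final ordering.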
Mid-level AI/ML Engineer specializing in generative AI, RAG platforms, and LLM agents
“AI/LLM engineer who has shipped 10+ production applications, including InvestIQ on GCP—a production-grade RAG due-diligence engine that ethically scrapes web/PDF sources, builds a ChromaDB knowledge base, and delivers analyst-style dashboards plus a citation-backed chat copilot. Deep focus on reliability (evidence-only answers, hard citations, refusal gating), retrieval tuning, and orchestration (Airflow/Cloud Composer), plus multi-agent systems (CrewAI with 7 specialized finance agents).”
Mid-level Data Scientist specializing in Generative AI, NLP, and MLOps
“Built and deployed an LLM-powered claims-document summarization system (insurance domain) that cut agent review time from 4–5 minutes to under 2 minutes and saved 1,200+ hours per quarter. Hands-on across orchestration and production infrastructure (Airflow retraining DAGs, Kubernetes, SageMaker endpoints, FastAPI) and recent RAG workflows using n8n + Pinecone, with a strong focus on reliability, cost, and explainability for non-technical stakeholders.”
Mid-level AI/ML Engineer specializing in cloud data engineering and GenAI
“AI/LLM engineer with production experience in legal tech: built a GPT-4 + LangChain RAG summarization system at Govpanel that reduced legal case-file review time by 50%+. Previously at LexisNexis, orchestrated end-to-end Airflow data/AI pipelines processing 5M+ legal documents daily, improving ETL runtime by 35% with robust validation, monitoring, and SLAs.”
Mid-level AI/ML Engineer specializing in NLP, RAG systems, and real-time risk modeling
“AI/ML Engineer with 4+ years of experience (Capital One, Odin Technologies) and a master’s in Data Analytics (4.0 GPA) who has deployed LLM/RAG systems to production for compliance/risk and document review. Strong in orchestration and MLOps (Airflow, Kubernetes, MLflow, GitHub Actions) and in tackling real-world LLM constraints like latency, context limits, and data privacy, with measurable impact (20%+ manual review reduction; 33% faster release cycles).”
Mid-level AI/ML Engineer specializing in deep learning, MLOps, and LLM applications
“Built and deployed production LLM assistants for internal Q&A and customer-feedback summarization, emphasizing reliability (RAG, prompt tuning, validation/whitelisting) and privacy safeguards. Improved adoption by adding explainable outputs and a user feedback mechanism, and has hands-on orchestration experience with Airflow and Azure Logic Apps.”
Mid-level Data Engineer specializing in cloud data platforms and real-time analytics
“Customer-facing data engineering professional who builds and deploys real-time reporting/dashboard solutions, gathering reporting and compliance requirements through direct stakeholder engagement. Experienced with Google Cloud IAM governance, secure integrations (encryption, audit logging), and fast production troubleshooting of ETL/pipeline failures with follow-on monitoring and automated recovery improvements; motivated by hands-on customer work that involves on-site travel.”
Senior Full-Stack Developer specializing in cloud-native microservices and AI/ML analytics
“Full-stack/backend engineer with deep insurance claims domain experience who built and operated a microservices + ETL platform (Java/Spring Boot + Python + Kafka/Databricks) processing 1M+ daily transactions. Combines production-grade reliability (99.7% uptime, zero-downtime blue/green releases, strong observability) with customer-facing UI delivery (AngularJS/React+TS dashboards and a hackathon-winning research chatbot).”
Senior Data Engineer specializing in data infrastructure and marketing/CRM analytics
“Salesforce-focused implementation/solutions engineer from Full Circle Insights who owned end-to-end campaign attribution and reporting deployments for 3–5 customers concurrently, including sandbox testing, KPI monitoring, and rollback-safe migrations from legacy reporting. Also builds personal multi-agent workflows and uses Claude Code to rapidly scaffold data/analytics scripts, such as an advertising optimization parser over CSV/XLSX inputs.”
Mid-Level Full-Stack/Backend Engineer specializing in AWS, APIs, and GenAI systems
“Backend engineer who built the core backend for Air Kitchens’ discovery/booking platform on AWS (Node + Python, DynamoDB, SQS/Lambda), optimizing for fast user-facing APIs and scalable async workflows. Introduced an AI matching service with a deterministic pre-filter + LLM ranking approach to balance latency against quality, and has hands-on experience with production security (JWT/RBAC/RLS), CI/CD, blue-green deployments, and staged migrations from Django to modular services.”
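The deterministic pre-filter + LLM ranking pattern described above can be sketched as a two-stage funnel. All names, fields, and the scoring rule below are hypothetical: a rating sort stands in for the actual LLM ranking call, which in production would only ever see the pre-filtered shortlist.

```python
from dataclasses import dataclass

@dataclass
class Kitchen:
    name: str
    city: str
    capacity: int
    rating: float

def prefilter(kitchens: list[Kitchen], city: str, min_capacity: int) -> list[Kitchen]:
    """Deterministic stage: cheap hard constraints shrink the candidate set
    before any model is invoked, bounding LLM latency and cost."""
    return [k for k in kitchens if k.city == city and k.capacity >= min_capacity]

def llm_rank(candidates: list[Kitchen], query: str) -> list[Kitchen]:
    """Stand-in for the LLM ranking call (query unused in this stub);
    here a simple rating sort plays the role of model-judged relevance."""
    return sorted(candidates, key=lambda k: k.rating, reverse=True)

kitchens = [
    Kitchen("A", "Austin", 10, 4.8),
    Kitchen("B", "Austin", 8, 4.9),
    Kitchen("C", "Dallas", 12, 4.7),
]
shortlist = prefilter(kitchens, city="Austin", min_capacity=5)
ranked = llm_rank(shortlist, "weekend pop-up for eight guests")
```

Because the expensive stage sees a small, constraint-satisfying shortlist, worst-case latency is bounded by the pre-filter's output size rather than the full catalog.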
Mid-level Full-Stack & Data Engineer specializing in AWS cloud and real-time streaming
“Backend engineer with experience at Cigna evolving REST API services backed by PostgreSQL, emphasizing reliability/correctness, scalability, and observability. Has hands-on production experience with FastAPI (contract-first design, Pydantic schemas), performance tuning (indexes, caching), and secure auth patterns (OAuth/JWT, RBAC, row-level security via Supabase), plus low-risk incremental rollouts using feature flags and dual writes.”
Senior Full-Stack Developer specializing in Python, AWS serverless, and data workflows
“Backend/data engineer from ALDI Tech Hub who modernized legacy analytics (Excel/SAS) into production-grade Python services on AWS serverless (FastAPI on Lambda behind API Gateway, with Step Functions). Strong in reliability and operations (Cognito auth, retries/timeouts, structured logging, CloudWatch alarms) and data pipelines (Glue ETL with schema evolution); delivered measurable SQL tuning gains (query time cut from 30s to 2s with a 70% CPU reduction).”
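The retries/timeouts discipline mentioned above typically means exponential backoff around transient failures; the sketch below is illustrative only (the candidate's actual retry policy is not specified), with a deliberately flaky stub function for demonstration.

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying on exception with exponential backoff.
    The last failure is re-raised so callers still see hard errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

calls = {"n": 0}

def flaky():
    """Hypothetical downstream call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient upstream error")
    return "ok"
```

In a Lambda setting the backoff budget must stay well inside the function timeout, which is why bounded attempts (rather than retry-forever) is the usual choice.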
Mid-level GenAI Engineer specializing in LLM fine-tuning, RAG, and MLOps
“Healthcare-focused LLM engineer who deployed a production triage and clinical knowledge retrieval assistant using RAG and LangGraph-orchestrated multi-agent workflows. Emphasizes clinical safety and compliance with robust hallucination controls, HIPAA/PHI protections (tokenization, encryption, audit logging, zero-retention), and human-in-the-loop escalation; reports a 75% latency reduction in a healthcare agent system.”
Intern Data Scientist specializing in ML engineering and LLM agentic workflows
“Built an agentic, multi-step LLM system that generates full-stack code for API integrations using LangChain orchestration, Pinecone/SentenceBERT RAG, and a human-in-the-loop feedback loop for iterative code refinement. Also collaborated with non-technical content writers and PMs during a Contentstack internship to deliver a Slack-based AI workflow that generates and brand-checks articles with one-click approvals.”