Senior Business Analyst specializing in AI and commercial banking analytics
“Analytics candidate with hands-on experience supporting a workforce system transformation from symplr to Oracle Fusion Time and Labor, using SQL and Python to turn operational HR, attendance, and payroll data into reporting-ready datasets. They emphasize performance optimization, reusable analytics pipelines, and metric consistency across dashboards, with project work focused on overtime reduction, workforce efficiency, and retention trends by department.”
Junior Data Analyst specializing in ML, NLP, and cloud data pipelines
“Built and deployed a GenAI-powered PhD career intelligence platform at NYU that maps academic backgrounds to career paths and converts long academic CVs into job-ready resumes. Stands out for treating LLM systems as structured production pipelines—combining NLP extraction, embeddings, orchestration, and AWS deployment—to improve recommendation quality and cut resume preparation time by 70%.”
Healthcare technology executive and architect with 20+ years leading enterprise platforms and digital transformation.
“Healthcare-focused founder in the R&D stage building an EHR and clinical staffing startup centered on value-based care. They have already market-tested the concept, are engaging Medicaid/Medicare leaders and presenting at industry conferences such as ViVE and HIMSS, and are focused on early-signal detection to improve patient outcomes while lowering utilization costs.”
Senior Software Engineer specializing in cloud automation and distributed systems
“Developer with experience across Drupal and Java/Spring Boot applications, using React/jQuery for UI and API-driven features. Has resolved production issues such as login failures (by tuning reverse-proxy timeouts) and data-pipeline inaccuracies (by correcting database queries), with a focus on performance and careful verification before making changes.”
“ML/GenAI engineer with recent CVS Health experience building a production RAG system over unstructured financial/research documents using LangChain, FAISS, and Pinecone, plus LoRA/PEFT fine-tuning of GPT/LLaMA for domain-aware summarization. Demonstrates strong applied MLOps and data engineering skills (Airflow/Prefect, Docker/Kubernetes, CI/CD, MLflow) and measurable impact (sub-second retrieval, ~40% better context retrieval, ~25% entity matching improvement).”
Senior Data Analyst specializing in data pipelines, web scraping, and legal data enrichment
“Data engineer focused on reliable, scalable analytics pipelines and external data collection. Has owned end-to-end pipelines processing 5–10M records/day, serving Snowflake data marts to Power BI/Tableau, and reports ~99% reliability through strong validation/monitoring. Also shipped versioned REST APIs for curated data with query optimization and caching.”
Senior Data & Backend Engineer specializing in cloud data pipelines and LLM/RAG systems
“Data engineer with end-to-end ownership of large-scale retail and clinical data ingestion/processing on AWS, including real-time streaming and batch pipelines. Delivered measurable outcomes: 20M daily transactions processed, latency cut from 4 hours to 5 minutes, ~70% fewer failures, and 120+ pipelines running at 99.8% reliability with full audit compliance.”
Mid-level Data Engineer specializing in big data pipelines and real-time streaming
“Data engineer who has owned end-to-end production pipelines processing a few million records/day, using Python/Airflow/SQL/PySpark with Snowflake serving to BI (Power BI). Built resilient external web data collection systems (anti-bot, schema-change detection, backfills) and shipped versioned REST APIs for internal consumers, improving pipeline success rates to 99% through monitoring, retries, and idempotent design.”
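For illustration, a minimal sketch of the idempotent, retryable stage pattern this blurb describes, with an in-memory ledger standing in for a durable state store (e.g., a control table in the warehouse); `load_batch` and all names are hypothetical, not the candidate's actual code.

```python
import random
import time

# Hypothetical in-memory ledger standing in for a durable state store;
# batch IDs already loaded are skipped, which is what makes re-runs
# and backfills safe.
_processed: set[str] = set()

def load_batch(batch_id: str, records: list[dict]) -> None:
    """Pretend loader; a real stage would write to Snowflake/BI serving."""
    if random.random() < 0.3:                    # simulate a transient failure
        raise ConnectionError("transient write failure")
    print(f"loaded {len(records)} records for {batch_id}")

def run_stage(batch_id: str, records: list[dict], max_attempts: int = 5) -> None:
    if batch_id in _processed:                   # idempotency guard
        print(f"skipping {batch_id}: already processed")
        return
    for attempt in range(1, max_attempts + 1):
        try:
            load_batch(batch_id, records)
            _processed.add(batch_id)             # mark done only on success
            return
        except ConnectionError as exc:
            if attempt == max_attempts:
                raise                            # surface to alerting after final attempt
            backoff = 0.2 * 2 ** attempt + random.random() / 10  # exponential backoff + jitter
            print(f"attempt {attempt} failed ({exc}); retrying in {backoff:.1f}s")
            time.sleep(backoff)

run_stage("2024-06-01", [{"id": 1}, {"id": 2}])
run_stage("2024-06-01", [{"id": 1}, {"id": 2}])  # second call is a no-op
```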
Mid-Level Data Engineer specializing in cloud data platforms and governed analytics
“Data engineer with Optum experience building end-to-end healthcare data pipelines for HL7/FHIR, processing millions of records daily across Kafka streaming and Databricks/Spark batch. Strong focus on data quality (schema enforcement/validations), reliability (Airflow monitoring/alerts), and analytics-ready serving in Snowflake powering Power BI/Tableau, with CI/CD via Git and Jenkins.”
Junior Software Engineer specializing in LLMs, ML, and full-stack development
“Built and shipped a production LLM-driven data harmonization/record-matching pipeline for pharmaceutical datasets, combining normalization, embeddings/vector search, and an LLM validation step. Emphasizes production reliability via guardrails, confidence thresholds, idempotent/retryable stages, and human-in-the-loop fallbacks, with monitoring of manual-review and error rates to reduce false positives.”
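A minimal sketch of that match-then-validate shape, with toy cosine similarity standing in for a real vector store and a stub in place of the LLM validation call; the threshold and all names are illustrative assumptions, not the candidate's implementation.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def llm_validate(record: str, candidate: str) -> float:
    """Stub for an LLM yes/no validation call returning a confidence score."""
    return 0.9 if record.split()[0] == candidate.split()[0] else 0.4

def match_record(record: str, record_vec, catalog, auto_threshold=0.85):
    # Stage 1: nearest neighbour by embedding similarity (vector-search stand-in).
    best_name, _ = max(catalog, key=lambda c: cosine(record_vec, c[1]))
    # Stage 2: LLM validation gate; low-confidence pairs route to human review
    # instead of being auto-accepted, which is what keeps false positives down.
    confidence = llm_validate(record, best_name)
    if confidence >= auto_threshold:
        return {"match": best_name, "confidence": confidence, "route": "auto"}
    return {"match": best_name, "confidence": confidence, "route": "human_review"}

catalog = [("acetaminophen 500mg tab", [0.9, 0.1]), ("ibuprofen 200mg tab", [0.1, 0.9])]
print(match_record("acetaminophen 500 mg tablet", [0.85, 0.15], catalog))
```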
Mid-level Machine Learning Engineer specializing in LLM systems and healthcare data automation
“React performance-focused engineer who contributed performance patches back to an open-source context+reducer state helper after profiling and fixing excessive re-renders in an enterprise project management platform at Easley Dunn Productions. Also built an end-to-end LLM-driven pipeline at Prime Healthcare to normalize millions of supply-chain records, reducing defects by 80% and saving 160+ hours/month.”
Mid-level Data Scientist/ML Engineer specializing in healthcare AI and MLOps
“Designed and deployed an enterprise LLM-powered clinical/pharmacy policy knowledge assistant at CVS Health, replacing manual searches across PDFs/Word/SharePoint with a HIPAA-compliant RAG system. Built end-to-end ingestion and orchestration (Airflow + Azure ML/Data Lake + vector index) with PHI masking, versioned re-embedding, and production monitoring (Prometheus/Grafana), and partnered closely with clinicians/compliance to ensure policy-grounded, auditable answers.”
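The PHI-masking and versioned re-embedding steps can be sketched as below; the regex patterns are illustrative stand-ins for a vetted de-identification service, and the version-key scheme is an assumption, not CVS's actual design.

```python
import hashlib
import re

# Illustrative PHI patterns only; a real HIPAA pipeline would rely on a
# vetted de-identification service rather than a handful of regexes.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_phi(text: str) -> str:
    """Replace PHI spans with labels before anything is embedded or indexed."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def embedding_version(chunk: str, model_tag: str = "embed-v2") -> str:
    """Version key: if either the chunk text or the embedding model changes,
    the key changes, so stale vectors can be found and re-embedded."""
    return hashlib.sha256(f"{model_tag}:{chunk}".encode()).hexdigest()[:16]

doc = "Patient MRN: 12345678, SSN 123-45-6789, denies chest pain. Call 555-123-4567."
masked = mask_phi(doc)
print(masked)                      # PHI labels only, no raw identifiers
print(embedding_version(masked))   # stable id used for versioned re-embedding
```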
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
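A minimal sketch of the schema-validation gate described here, with a stub in place of the Llama-2/GPT-4 extraction call; the rule fields are hypothetical and the real state-specific schemas would be far richer.

```python
import json

# Hypothetical shape for an extracted eligibility rule.
REQUIRED = {"state": str, "program": str, "max_income_usd": (int, float), "household_size": int}

def extract_rules_stub(document_text: str) -> str:
    """Stand-in for the LLM call that turns policy prose into JSON."""
    return '[{"state": "CA", "program": "LIHEAP", "max_income_usd": 39440, "household_size": 2}]'

def validate_rule(rule: dict) -> list[str]:
    errors = [f"missing field: {k}" for k in REQUIRED if k not in rule]
    errors += [
        f"bad type for {k}: {type(rule[k]).__name__}"
        for k, t in REQUIRED.items()
        if k in rule and not isinstance(rule[k], t)
    ]
    return errors

def ingest(document_text: str) -> list[dict]:
    rules, accepted = json.loads(extract_rules_stub(document_text)), []
    for rule in rules:
        problems = validate_rule(rule)
        if problems:
            # Quarantine rather than crash: frequent policy/format changes
            # should trigger alerts, not break the whole pipeline.
            print(f"quarantined rule {rule}: {problems}")
            continue
        accepted.append(rule)
    return accepted

print(ingest("state policy text"))
```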
Junior Software Engineer specializing in full-stack and QA automation
“Interned as a QA engineer at Amazon (Alexa Daily Essentials), owning end-to-end quality for AI-powered timer/stopwatch features at massive scale. Demonstrates a disciplined Jira-based workflow, automation-driven regression coverage, and strong device-matrix verification (Echo Show generations), with concrete examples of finding and driving resolution of complex UI/backend synchronization bugs.”
Mid-Level Software Engineer specializing in backend, data platforms, and FinTech systems
“Backend engineer with experience at HSBC and Machinations who has delivered major production performance wins (cutting large trade-file upload times from ~13–15s to ~2s) using chunked parallel processing with strong reliability controls. Also built and shipped an applied AI RAG workflow using Langflow + Cohere embeddings + FAISS with hosted/local LLM fallbacks (Hugging Face, Ollama) and production-grade guardrails, observability, and evaluation.”
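The chunked parallel processing pattern behind that upload win can be sketched with `concurrent.futures`; the chunk size, worker count, and `process_chunk` body are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def read_chunks(lines: list[str], chunk_size: int):
    """Yield fixed-size chunks so each worker handles a bounded slice."""
    for i in range(0, len(lines), chunk_size):
        yield i // chunk_size, lines[i:i + chunk_size]

def process_chunk(chunk_id: int, rows: list[str]) -> tuple[int, int]:
    """Hypothetical per-chunk work: parse/validate/persist one slice."""
    return chunk_id, len([r for r in rows if r.strip()])

trade_file = [f"trade,{i},EUR/USD,{100 + i}" for i in range(10_000)]

results, failed_chunks = {}, []
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {pool.submit(process_chunk, cid, rows): cid
               for cid, rows in read_chunks(trade_file, chunk_size=1_000)}
    for fut in as_completed(futures):
        try:
            cid, count = fut.result()
            results[cid] = count
        except Exception:                        # reliability control: isolate failures
            failed_chunks.append(futures[fut])   # one bad slice doesn't fail the file

print(f"processed {sum(results.values())} rows across {len(results)} chunks; "
      f"{len(failed_chunks)} chunks need retry")
```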
Mid-level Java Full-Stack Developer specializing in microservices and cloud-native web apps
“Full-stack engineer who has shipped and owned production analytics dashboards using Next.js App Router + TypeScript, combining server components for data-heavy pages with client components for interactive charts/filters. Also built a Temporal-orchestrated payment reconciliation workflow with versioning, idempotency, and exponential-backoff retries, and has hands-on Postgres query/index optimization using EXPLAIN ANALYZE.”
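The idempotency-plus-backoff combination in that reconciliation workflow can be sketched generically (not with Temporal's actual SDK); the ledger and key format below are assumptions for illustration.

```python
import time

_posted: dict[str, str] = {}   # idempotency ledger: key -> result id

def post_adjustment(idempotency_key: str, amount_cents: int) -> str:
    """Hypothetical downstream call; a duplicate key returns the prior result
    instead of double-posting, which is what makes retries safe."""
    if idempotency_key in _posted:
        return _posted[idempotency_key]
    result_id = f"adj-{len(_posted) + 1}"
    _posted[idempotency_key] = result_id
    return result_id

def with_backoff(fn, *args, attempts: int = 4, base_delay: float = 0.5):
    """Retry with exponential backoff: 0.5s, 1s, 2s, ... between attempts."""
    for attempt in range(attempts):
        try:
            return fn(*args)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# The same (key, amount) retried twice posts exactly one adjustment.
key = "recon:2024-06-01:acct-42"
print(with_backoff(post_adjustment, key, 1250))
print(with_backoff(post_adjustment, key, 1250))
```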
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer/backend data services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimizations and use of Delta Lake schema evolution plus CI/CD (GitHub Actions/Jenkins) and operational reliability patterns like CloudWatch monitoring and dead-letter queues.”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and external data collection systems on AWS (S3/Lambda/SQS/Glue/EMR) using PySpark/SQL, serving curated datasets to Snowflake/Redshift for finance and fraud teams. Has operated a large-scale crawler ingesting millions of records/day with anti-bot tactics, schema versioning/quarantine, and CloudWatch/Datadog monitoring, and also shipped a versioned REST API with caching and query optimization.”
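A minimal sketch of the schema-drift quarantine pattern mentioned here; the expected field set is a hypothetical contract, not the candidate's actual schema.

```python
EXPECTED_FIELDS = {"url", "title", "price", "scraped_at"}   # hypothetical contract

def route_record(record: dict) -> str:
    """Quarantine records whose shape drifts from the expected schema, so a
    site layout change degrades into an alert rather than silent bad data."""
    missing = EXPECTED_FIELDS - record.keys()
    extra = record.keys() - EXPECTED_FIELDS
    if missing:
        return f"quarantine: missing {sorted(missing)}"
    if extra:
        return f"quarantine: unexpected {sorted(extra)}"    # likely schema version bump
    return "accept"

good = {"url": "https://example.com/x", "title": "Widget",
        "price": 9.99, "scraped_at": "2024-06-01"}
drifted = {"url": "https://example.com/x", "title": "Widget",
           "price_usd": 9.99, "scraped_at": "2024-06-01"}
print(route_record(good))
print(route_record(drifted))
```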
Mid-level ML Data Engineer specializing in MLOps and scalable healthcare data pipelines
“Data/ML platform engineer with healthcare (Cigna) experience owning an end-to-end pipeline spanning Airflow + Debezium CDC ingestion, PySpark/SQL transformations, rigorous data quality gates, and feature-store/API serving for ML training and inference. Worked at 10+ TB scale and cites a ~30% latency reduction plus stronger reliability via idempotent design, monitoring, and backfill-safe reprocessing; also built pragmatic early-stage data pipelines at Frankenbuild Ventures.”
Mid-level Software Engineer specializing in cloud microservices and data pipelines
“Data engineer/platform builder who has owned production pipelines end-to-end processing millions of records/day, with strong emphasis on data quality (quarantine workflows) and reliability (monitoring, retries, incremental loads). Also designed large-scale external data collection/crawling with anti-bot handling and backfills, and shipped versioned REST data services optimized for performance and developer usability in an early-stage environment.”
Mid-Level Software Developer specializing in Java, Cloud, and Microservices
“Backend/Python engineer who owned an end-to-end FastAPI + AWS internal natural-language document Q&A system (Textract extraction, embeddings/vector DB, LLM integration) with a strong focus on reliability and latency. Hands-on with Kubernetes + GitOps (Argo CD, Helm, rolling updates/auto-rollback) and has built and optimized Kafka streaming pipelines monitored with Prometheus/Grafana. Also supported a zero-downtime on-prem-to-cloud migration with a parallel run and gradual traffic cutover.”
Senior Backend Software Engineer specializing in Java microservices, Kafka, and AWS
“AI engineer who shipped a production chat assistant for a storage company by building the underlying RAG-style knowledge base (document ingestion, chunking/embeddings, FAISS vector store) and an admin update interface to keep content current. Also has full-stack delivery experience (Python REST APIs + React/TypeScript UI) and AWS operations using Terraform/Jenkins, including handling a real production performance incident by optimizing DB queries and adding auto-scaling.”
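For illustration, the ingest-chunk-embed-index-search loop of such a knowledge base, assuming `faiss-cpu` and `numpy` are installed; the toy hashing embedder stands in for a real embedding model so the sketch runs anywhere.

```python
import faiss                      # assumes the faiss-cpu package is installed
import numpy as np

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(texts: list[str], dim: int = 64) -> np.ndarray:
    """Toy hashing embedder; the real system would call an embedding model."""
    out = np.zeros((len(texts), dim), dtype="float32")
    for row, t in enumerate(texts):
        for token in t.lower().split():
            out[row, hash(token) % dim] += 1.0
    faiss.normalize_L2(out)       # unit vectors so inner product == cosine
    return out

docs = ["Unit sizes range from 5x5 lockers to 10x30 drive-up units. Climate control available.",
        "Gate hours are 6am to 10pm daily; office hours differ on weekends."]
chunks = [c for d in docs for c in chunk(d, size=12)]

index = faiss.IndexFlatIP(64)     # exact inner-product index over the chunks
index.add(embed(chunks))

scores, ids = index.search(embed(["what are the gate hours"]), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {chunks[i]}")
```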
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and MLOps
“Data engineer at AT&T focused on large-scale telecom (5G/IoT) data platforms, owning end-to-end pipelines from Kafka/Azure ingestion through Databricks/Delta Lake transformations to serving analytics and ML. Has operated at very high volumes (~50+ TB/day) and delivered measurable performance gains (25–30% faster processing) plus improved reliability via Airflow monitoring, robust data quality checks, and resilient external data collection patterns (rate limiting, retries, dynamic schemas).”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer currently at American Airlines who built and owned end-to-end flight operations and booking data pipelines (batch + real-time) using Azure Data Factory, Kafka, Spark/Databricks, Synapse, and Snowflake—processing hundreds of GBs/day. Strong focus on reliability and data quality (idempotency, checkpointing, retries, validation/alerts) and delivered near-real-time analytics powering Power BI dashboards; previously helped stand up an early-stage data platform at Sysco on AWS (Glue/S3/Redshift) with Airflow and Jenkins CI/CD.”