Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud data pipelines for healthcare and financial services
“Data engineer with ~4 years of experience (Cigna) building and operating Azure Data Factory pipelines for healthcare claims/member/provider data at 2–3M records/day. Emphasizes reliability and downstream safety via schema/data-quality validation, quarantine workflows, idempotent processing, and backfills; also improved runtime ~20% through SQL optimization and served curated datasets through versioned views and well-documented, analyst-friendly interfaces.”
Mid-level Data Engineer specializing in cloud-native healthcare and enterprise data platforms
“Data Engineer (TCS) who owned an end-to-end CRM analytics pipeline for Bayer’s eSalesWeb integration, ingesting from Salesforce APIs/databases/S3 and serving analytics-ready datasets via PostgreSQL/S3 for Tableau. Drove measurable outcomes: ~60% reduction in manual data-quality effort, ~30% lower latency through SQL optimization, and ~35% improved stability via monitoring, retries, and idempotent processing.”
Executive Technology Leader (CTO) specializing in cloud, AI/ML, and scalable product platforms
“Technical leader and hands-on engineer with 20+ years of experience who has previously raised funding and exited a venture. Currently bootstrapping a new AI-focused startup with personal and family capital, taking a structured approach to financial planning and a relationship-driven approach to investor outreach.”
Mid-level Data Analyst specializing in financial and customer analytics
“Analytics professional with experience at KPMG and Robosoft Technologies, working across financial and customer engagement data. They combine SQL, Python, experimentation, and BI dashboards to turn messy multi-source data into decision-ready insights, including a pricing test that improved conversion rates by 9%.”
Senior Python Backend Engineer specializing in scalable APIs and cloud-native microservices
“Backend/data platform engineer who has built and operated a cloud-native media ingestion/processing platform in Python (Django/DRF, FastAPI) with Kafka, Postgres, and Redis, emphasizing multi-tenant security and reliability. Delivered AWS production systems combining EKS and Lambda with Terraform + GitHub Actions/Helm, and built Glue-based ETL pipelines with strong schema-evolution and data-quality practices; also migrated legacy SAS analytics to Python on AWS. Seeking fully remote roles with a $120K–$140K base range.”
Senior AI/ML & Full-Stack Engineer specializing in GenAI, RAG, and MLOps platforms
“Backend/data platform engineer who owned end-to-end production services for a fleet analytics/GenAI platform, spanning FastAPI microservices on Kubernetes and AWS (EKS + Lambda) event-driven workloads. Strong in reliability/observability (OpenTelemetry, circuit breakers, idempotency), data pipelines (Glue/Airflow/Snowflake), and measurable performance/cost wins (SQL query latency cut from 10s to <800ms at P95; ~30% compute cost reduction).”
Senior Data & Platform Engineer specializing in cloud-native streaming and distributed systems
“Financial data engineer who has built and operated high-volume batch + streaming pipelines (200–300 GB/day; 5–10k events/sec) using AWS, Spark/Delta, Airflow, Kafka, and Snowflake, with strong emphasis on data quality and reliability. Demonstrated measurable impact via 99.9% SLA adherence, major reductions in bad records/nulls, MTTR improvements, and significant latency/runtime/query performance gains; also built a distributed web-scraping system processing 5–10M records/day with anti-bot and schema-drift defenses.”
Mid-level Data Engineer specializing in multi-cloud data platforms for healthcare and finance
“Data engineer with Cigna experience building and operating an end-to-end AWS-based healthcare claims pipeline processing ~2TB/day, using Glue/Kafka/PySpark/SQL into Redshift. Strong focus on data quality and reliability (schema validation, monitoring/alerting, retries/checkpointing/backfills), achieving ~99% reported accuracy and reduced latency, plus experience serving real-time Kafka/Spark data to downstream analytics with documented data contracts.”
Mid-level Backend/AI Software Developer specializing in data pipelines for FinTech and healthcare
“Data engineer and backend data-services builder with end-to-end ownership of production pipelines for a Pfizer client, combining Python/SQL ingestion and transformation with strong data quality controls. Delivered measurable performance gains (~30% faster queries) and improved reliability through monitoring/alerting (Splunk, Prometheus/Grafana), structured logging, and incident response; also built internal REST APIs with versioning and caching and set up GitLab-based CI/CD with containerized deployments.”
Mid-level Data Engineer specializing in cloud ETL and real-time streaming
“Data engineer focused on AWS + Spark/Databricks pipelines, including an end-to-end nightly loan-data ingestion flow (~2.2M records) from Postgres/S3 through Glue and Databricks into a DWH with layered validation and alerting. Also built real-time streaming with Kafka + Spark Structured Streaming and a master’s project streaming Reddit data for sentiment analysis under ambiguous requirements and tight budget constraints.”
Mid-level Full-Stack Developer specializing in FinTech platforms and cloud-native microservices
“Backend engineer focused on AI-enabled systems, having built a production-style RAG pipeline (vector search + LLM) exposed via Python/Flask endpoints with strong observability and hallucination-reduction techniques. Demonstrates deep performance work in PostgreSQL/SQLAlchemy (5x faster analytics queries) and high-throughput optimization using Celery + Redis (latency reduced from 800ms to 120ms with 3x throughput), plus schema-per-tenant multi-tenancy with tenant-aware middleware and logging.”
Senior Workforce Analytics & WFM Leader specializing in contact center operations
“Operations-focused team lead currently managing 20 coordinators, with strong workforce management experience spanning forecasting, scheduling, KPI/staffing reporting, and executive-facing data presentations. Led a cross-functional Salesforce implementation and redesigned forecasting/workflow to support a newly created internal-promotion department, improving flexibility in coverage planning.”
Senior Data Engineer specializing in scalable data pipelines and API-driven data services
“Data engineer focused on building scalable, reliable end-to-end data pipelines and backend REST data services, spanning API ingestion plus batch/stream processing with Airflow, Kafka, Spark/PySpark, and SQL. Emphasizes strong data quality validation, monitoring/fault tolerance, and performance tuning for large datasets, with experience deploying in cloud environments using containerization and CI/CD.”
Mid-level Data Analyst specializing in financial services and fraud analytics
“Analytics candidate currently at Facteus with hands-on experience turning messy transactional data into trusted reporting layers in Snowflake and Power BI. They combine SQL and Python automation with strong validation, performance tuning, and stakeholder-facing metric design, including cohort-based retention and segmentation work that improved trust and adoption of analytics.”
Mid-level Data Analyst specializing in healthcare and business intelligence
“Healthcare analytics candidate with hands-on experience turning messy EHR, billing, and operational data into validated SQL datasets and automated Python/Airflow pipelines. They appear strongest in hospital KPI reporting—especially length of stay, readmissions, retention, and bed utilization—and have owned projects from metric definition through Power BI delivery and impact measurement.”
Senior Operations Analyst specializing in business intelligence and financial services
“Analytics-focused candidate with hands-on experience turning messy datasets into reporting-ready outputs using SQL, building reproducible Python workflows, and operationalizing metrics in R Shiny dashboards. They stand out for combining structured data analysis with NLP and segmentation in marketplace-style datasets such as Airbnb, real estate, and sports salary data to drive pricing, engagement, and demand insights.”
Mid-level Data Engineer specializing in Lakehouse, Streaming, and ML/LLM data systems
“Built and productionized an enterprise retrieval-augmented generation platform for internal knowledge over large unstructured corpora, emphasizing trust via strict citation/grounding and hybrid retrieval (BM25 + FAISS + cross-encoder re-ranking). Demonstrates strong scaling and cost/latency optimization through incremental indexing/embedding and index partitioning, plus disciplined evaluation/observability practices. Has experience operationalizing pipelines with Airflow/Databricks/GitHub Actions and partnering closely with risk & compliance stakeholders on auditability requirements.”
Mid-level AI/ML Engineer specializing in MLOps and LLM applications
“BNY Mellon engineer who has built and operated production AI systems end-to-end: a LangChain/Pinecone RAG platform scaled via FastAPI + Kubernetes to 1000 RPM with 99.9% uptime, supported by monitoring and data-drift detection. Also deep in data/infra orchestration (Airflow, Dagster, Terraform on AWS/EMR/EC2), processing 500GB+ daily and delivering measurable reliability and performance gains, plus strong compliance-facing model explainability using SHAP and Tableau.”
Executive People & Culture leader specializing in HR strategy, DEI, and org development
“HR leader (VP of HR at Joe Coffee Company; previously at Equinox) with hands-on experience implementing two HRIS migrations and configuring timekeeping/HR systems used as an org-wide hub (ATS, performance management, timekeeping, system of record). Drove a major compliance-focused overhaul of rest-break tracking and pay practices, using cost analysis and system configuration to reduce liability while minimizing employee dissatisfaction.”
Mid-level AI/ML Engineer specializing in cloud data engineering and GenAI
“AI/LLM engineer with production experience in legal tech: built a GPT-4 + LangChain RAG summarization system at Govpanel that reduced legal case-file review time by 50%+. Previously at LexisNexis, orchestrated end-to-end Airflow data/AI pipelines processing 5M+ legal documents daily, improving ETL runtime by 35% with robust validation, monitoring, and SLAs.”
Mid-level Software Engineer specializing in data pipelines and backend APIs
“Data engineer with Webster Bank experience owning end-to-end pipelines (APIs + databases) processing millions of records/day, improving data quality (25–30% fewer issues) and reliability (~99.9% successful runs). Built resilient external data ingestion/scraping systems (schema-change validation, idempotent backfills, monitoring/alerts) and shipped a FastAPI service exposing curated datasets with versioning and consistently low latency.”
Senior Data Engineer specializing in data infrastructure and marketing/CRM analytics
“Salesforce-focused implementation/solutions engineer from Full Circle Insights who owned end-to-end campaign attribution and reporting deployments for 3–5 customers concurrently, including sandbox testing, KPI monitoring, and rollback-safe migrations from legacy reporting. Also builds personal multi-agent workflows and uses Claude Code to rapidly scaffold data/analytics scripts, such as an advertising optimization parser over CSV/XLSX inputs.”
Mid-level Data Analyst/Data Engineer specializing in BI, ETL pipelines, and cloud analytics
“Data engineer focused on marketing/web analytics and external API pipelines, handling ~10M records/week. Built Azure-based ingestion and PySpark transformations with rigorous data quality checks, then served curated datasets into Synapse/Redshift for Power BI. Also designed an Airflow-orchestrated crypto REST API pipeline with monitoring, retries/exponential backoff, schema-change detection, and backfill-friendly reprocessing.”
Mid-level Data Analyst specializing in investment operations and financial reconciliation
“Analytics professional with Northern Trust experience focused on investment portfolio reconciliation and reporting. They combine SQL, Python, and Power BI to clean and validate high-volume financial data, automate manual processes, and align operations and accounting teams on shared metrics—driving roughly 20% improvement in reconciliation accuracy.”