Pre-screened and vetted.
Junior Data Engineer specializing in cloud ETL/ELT and lakehouse platforms
Mid-level Business Analyst specializing in banking, pharma, and enterprise systems
“Analytics professional with hands-on experience spanning enterprise supply chain data and workforce analytics. They’ve worked on a Manhattan Active WMS implementation for a pharmaceutical client, integrating MAWM, JD Edwards, and Boomi, and built SQL/Python/Tableau solutions for BankUnited/FIU to standardize retention and engagement reporting. Strong fit for roles requiring messy data wrangling, KPI operationalization, and stakeholder-trusted dashboards.”
Mid-level Business Analyst specializing in data analytics and BI
“Healthcare analytics professional with hands-on experience turning messy claims, eligibility, and utilization data into validated BI-ready models using SQL and Python. They combine strong data engineering and KPI design skills with stakeholder-facing delivery, including Power BI prototyping, retention metric operationalization, and analyses that supported care management interventions and cost-control decisions.”
Junior Data Scientist specializing in generative AI and RAG systems
“Data scientist at Guardian Airwaves building a RAG-powered quiz generator using Grok AI, with hands-on experience solving hard document-ingestion problems (PDFs with images/tables) via unstructured.io and LlamaIndex. Has deployed production systems on AWS EC2 and brings a pragmatic approach to agent reliability (human-in-the-loop, LLM-based eval, latency/cost metrics) while effectively translating RAG concepts to non-technical stakeholders.”
Mid-level Software/Data Engineer specializing in LLM apps, RAG pipelines, and cloud microservices
“Backend/data engineer who built an enterprise LLM assistant (AI Genie) at Broadband Insights using a LangChain + GPT-4 + Pinecone RAG pipeline to automate broadband analytics reporting. Developed Python/Dagster ETL processing 10M+ records/day and improved data freshness by 60%, with production-grade scalability patterns (async workers, containerized microservices, Kubernetes) and strong multi-tenant isolation practices.”
Mid-level Marketing Analytics & Growth Specialist in performance marketing
“Paid media specialist with nonprofit/fundraising experience at Keelworks Foundation, running high-spend acquisition across Google, Meta, and LinkedIn (and familiar with TikTok). Uses CRM-driven funnel segmentation and disciplined A/B testing across ads, landing pages, and email flows; has recovered stalled performance and delivered ~10% donation growth while tracking deep donor-quality KPIs (lead-to-donor, retention, avg gift).”
Senior Operations & Business Analyst specializing in process improvement and KPI reporting
“Quality/process professional with hands-on supplier quality experience across consumer hardware (electric skateboards) and construction/field services. Uses rigorous statistical and root-cause methods (Ishikawa, 5 Whys) to drive CAPA closure with overseas suppliers, adding inspection gates and IQC, and validating fixes through multi-month sampling follow-up.”
Intern-level Marketing Analytics professional specializing in GA4, experimentation, and BI reporting
“Outbound-focused business development profile spanning a family-owned construction materials business in India and an early-stage startup (Jetvoy) targeting Web3/travel partners. Built ICP, messaging, and a spreadsheet-based CRM from scratch, ran multi-channel campaigns (email/LinkedIn/calls), and leveraged tools like Sales Navigator, Apollo, HubSpot/Sheets, Zapier-style automations, and ChatGPT to improve reply/meeting rates through personalization.”
Senior Data Scientist specializing in LLM applications, RAG systems, and production ML
“Senior Data Scientist in consulting who has built production RAG systems for insurance/annuity document search at large scale (100K+ PDF pages), emphasizing grounded answers, guardrails, and low-latency retrieval. Experienced in end-to-end MLOps for LLM apps—monitoring, evaluation sets, drift handling, and safe rollouts—and in orchestrating complex pipelines with Prefect/Airflow and deploying services on Kubernetes.”
Mid-level Data Analyst specializing in analytics, BI, and predictive modeling
“Analytics professional with cross-domain experience spanning healthcare claims, logistics optimization, and customer booking funnels. They combine strong SQL/Python execution with stakeholder alignment and operational adoption, and can point to measurable impact, including 18% healthcare cost savings and 24% logistics savings.”
Mid-level Business Analyst specializing in analytics, e-commerce, and supply chain
“Marketing analytics candidate who combines strong SQL data engineering with Python automation to turn messy GA4, Instagram, and Postgres data into reliable reporting and decision tools. They’ve built cohort- and retention-based measurement frameworks that shifted teams away from vanity metrics, improved campaign allocation, and drove roughly 30% better two-week retention.”
Mid-level Software Engineer specializing in FinTech and LLM-powered data products
“Full-stack engineer with payments/settlement domain experience who modernized a payment tracking workflow from REST to GraphQL and delivered a production payment status dashboard using Next.js App Router + TypeScript. Strong in performance and reliability work (Postgres indexing/EXPLAIN ANALYZE, Redis caching, Datadog observability) and in durable event-driven processing with Kafka (DLQs, idempotency, reconciliation, event replay).”
Mid-level Data Engineer specializing in cloud data pipelines and Snowflake
“Data engineer who has owned production pipelines end-to-end, ingesting 50–100 GB/day from APIs/S3 and near-real-time Kafka into Snowflake with strong data quality gates (Great Expectations/dbt) and Airflow-based reliability (SLAs, alerting, dashboards). Also built a Snowflake-backed REST data API with caching/pagination and versioned endpoints, and designed a compliant, scalable web-scraping system with anti-bot handling and safe backfills.”
Entry-Level Finance Analyst specializing in equity research and valuation
“Investment-focused candidate with CFA Research Challenge experience who proactively sources and engages operators/founders, using structured Excel-based pipeline management and insight-driven follow-ups. Emphasizes primary research and synthesis into an investment thesis, with a repeatable discovery framework and clear next-step planning.”
Junior Data Analyst specializing in BI, analytics, and machine learning
“Analytics professional with hands-on experience turning messy Excel-based operational data into SQL/Python pipelines and Power BI dashboards, including a production bottleneck project that improved workflow efficiency by 20%. Also brings applied machine learning experience from a Databricks/PySpark loan risk scoring project using logistic regression and XGBoost on large-scale S3 data.”
Mid-level Data Analyst specializing in analytics, reporting, and operational insights
“Analytics candidate with hands-on experience turning messy retail/customer data into clean reporting tables using SQL on PostgreSQL, then extending the work into Python-based reusable analysis workflows. They have applied segmentation, cohort analysis, and retention metric design to support dashboards and improve targeting, engagement, and repeat purchase performance.”
Mid-level Data Engineer specializing in AI/ML, RAG systems, and cloud data pipelines
“Built a production lead-generation system using AI agents that research the internet for relevant leads and integrates RAG-based contact enrichment/shortlisting aligned to existing CRM data, enabling sales reps to focus more on selling. Also has hands-on AWS data orchestration experience (Glue, Step Functions) moving raw data into Redshift and evaluates agent performance with human-in-the-loop plus BLEU/perplexity metrics.”
Mid-level Data Engineer specializing in cloud data platforms and real-time pipelines
“Data engineer who has owned production pipelines end-to-end—from Kafka/Airflow ingestion through SQL/Python validation and dbt transformations into Redshift/BI. Also built and operated a large-scale distributed web scraping platform (50–100 sites daily, ~5–10M records/day) with Kubernetes, Kafka queues, robust retries/DLQ, anti-bot measures, and backfill-safe raw HTML storage.”
Mid-level Data Scientist specializing in ML, LLM pipelines, and MLOps
“Built and deployed a production LLM-driven document understanding pipeline using LangChain/LangGraph, focusing on reliability via step-by-step prompting, validation checks, and monitoring. Also partnered with non-technical marketing stakeholders at Heartland Community Network to deliver an XGBoost targeting model surfaced in Power BI, improving campaign conversion by 12%.”
Mid-level Business Analyst specializing in retention, churn, and revenue analytics
“Early-career data analyst with hands-on experience at SuperWorld building SQL and Python analytics pipelines for product and growth use cases. They stand out for turning messy event and transaction data into validated funnel datasets, automating reporting to cut manual effort by ~40%, and partnering with product and marketing teams on conversion and engagement metrics.”
Mid-level Data Engineer specializing in FinTech data platforms
“Backend-focused engineer with experience at Ramp, Easebuzz, and George Mason University, spanning data pipelines, workflow automation, and production reliability. Stands out for quantifiable performance gains, strong debugging instincts in distributed job systems, and translating ambiguous finance operations processes into measurable automation outcomes.”
Intern-level Data Scientist specializing in AI and full-stack applications
“Engineer with hands-on experience building production ML and Python backend systems, including a real-time social media monitoring pipeline handling 1000+ events per second and a prototype AI operations assistant for Seattle-Tacoma Airport. Stands out for combining reliability engineering, automation, and LLM/NLP-to-SQL work, with measurable impact such as improving uptime from 92% to 99.4%.”
Junior Data Engineer specializing in LLM agents and RAG pipelines
“Built and deployed ‘ApartmentFinder AI,’ a multi-agent system using Google ADK, Gemini, and Google Maps MCP to automate apartment shortlisting and commute-time analysis, cutting a 45–70 minute user workflow down to ~30 seconds. Also has strong delivery/process chops from serving as an SDLC Release Coordinator, managing 52+ releases and reducing SDLC issues by 84%.”
Mid-level Data Scientist specializing in NLP, recommender systems, and ML deployment
“At Provenbase, built and shipped a production LLM-powered semantic search and candidate matching platform (RAG with GPT-4/Gemini, multi-agent orchestration, Elasticsearch vector search) to scale sourcing across 10M+ candidate records and 1000+ data sources. Drove sub-second performance, cut LLM spend 30% with routing/caching, and improved recruiting outcomes (+45% sourcing accuracy; +38% visibility of underrepresented talent) through bias-aware ranking and tight collaboration with recruiting stakeholders.”