Pre-screened and vetted.
Mid-level Data Scientist specializing in real-time fraud detection and MLOps
“ML/NLP engineer with experience at Charles Schwab building an NLP + graph (Neo4j) entity-resolution system to unify fragmented user/device/transaction data and improve downstream model quality and analyst querying. Has applied embeddings (SentenceTransformers + FAISS) with domain fine-tuning to boost hard-case matching recall by ~12% while maintaining precision, and has a track record of hardening scalable Python/Spark pipelines and productionizing fraud models via A/B tests and shadow-mode monitoring.”
Mid-level Data Engineer specializing in multi-cloud data platforms for healthcare and finance
“Data engineer with Cigna experience building and operating an end-to-end AWS-based healthcare claims pipeline processing ~2TB/day, moving data through Glue/Kafka/PySpark/SQL into Redshift. Strong focus on data quality and reliability (schema validation, monitoring/alerting, retries/checkpointing/backfills), reporting improved accuracy (~99%) and reduced latency, plus experience serving real-time Kafka/Spark data to downstream analytics with documented data contracts.”
Mid-level Backend/AI Software Developer specializing in data pipelines for FinTech and healthcare
“Data engineer and backend data-services builder with end-to-end ownership of production pipelines for a Pfizer client, combining Python/SQL ingestion and transformation with strong data quality controls. Delivered measurable performance gains (~30% faster queries) and improved reliability through monitoring/alerting (Splunk, Prometheus/Grafana), structured logging, and incident response; also built internal REST APIs with versioning and caching and set up GitLab-based CI/CD with containerized deployments.”
Mid-level Data Engineer specializing in cloud ETL and real-time streaming
“Data engineer focused on AWS + Spark/Databricks pipelines, including an end-to-end nightly loan-data ingestion flow (~2.2M records) from Postgres/S3 through Glue and Databricks into a DWH with layered validation and alerting. Also built real-time streaming pipelines with Kafka + Spark Structured Streaming, plus a master’s project that streamed Reddit data for sentiment analysis under ambiguous requirements and tight budget constraints.”
Mid-level Software Engineer specializing in Java/Spring microservices and full-stack web apps
“Software/full-stack engineer focused on deploying and integrating microservice applications into production AWS and hybrid cloud/on-prem industrial environments. Demonstrated end-to-end troubleshooting by tracing intermittent user failures to network routing/packet loss caused by load balancer and NIC misconfiguration, then adding monitoring to prevent recurrence. Also delivers customer-specific Python extensions with strong validation, testing, and backward compatibility.”
Mid-level AI/ML Engineer specializing in FinTech risk, fraud detection, and GenAI/RAG systems
“Built and productionized Azure-based LLM/RAG systems for regulatory/compliance use cases, including automating analyst research and compliance report generation across large unstructured document sets. Demonstrates strong practical depth in hallucination mitigation, hybrid retrieval tuning (BM25 + embeddings), and production MLOps (Databricks, Cognitive Search, AKS, Airflow/MLflow), plus proven ability to deliver auditable, explainable solutions with non-technical compliance teams.”
Mid-level Full-Stack Developer specializing in cloud-native healthcare applications
“Full-stack engineer with recent experience at Amgen building an internal healthcare data validation/transformation and workflow automation service: Python/FastAPI backend with REST APIs plus a React UI, designed around a canonical contract-first model to handle inconsistent upstream data. Operates production systems on AWS (EC2/ELB/S3/CloudFront) with strong focus on observability (structured logs, correlation IDs) and safe CI/CD-driven migrations; also has experience shipping quickly in ambiguous environments at TCS.”
Mid-level Backend Software Engineer specializing in Python/FastAPI on AWS
“Backend engineer with healthcare domain experience building AI-driven radiology workflow systems. Evolved tightly coupled APIs into secure, reliable FastAPI-based services by moving heavy imaging/data processing into idempotent asynchronous pipelines with retries, feature-flagged incremental rollout, and strong data-integrity controls (constraints, backfills, validation). Strong focus on defense-in-depth security for sensitive patient data (OAuth2/JWT, RBAC, and database-level protections).”
Mid-level Full-Stack Software Engineer specializing in cloud-native distributed systems
“Backend/platform-focused engineer who has shipped production LLM agents for messy research dataset submissions, turning manual validation into an automated, reliable ingestion pipeline. Strong on production hardening (streaming large uploads, strict schema/function-calling outputs, idempotency, RBAC) plus eval/monitoring loops that improved data quality, reduced support burden, and increased adoption.”
Senior Data Engineer specializing in scalable data pipelines and API-driven data services
“Data engineer focused on building scalable, reliable end-to-end data pipelines and backend REST data services, spanning API ingestion plus batch/stream processing with Airflow, Kafka, Spark/PySpark, and SQL. Emphasizes strong data quality validation, monitoring/fault tolerance, and performance tuning for large datasets, with experience deploying in cloud environments using containerization and CI/CD.”
Executive business and technology leader specializing in SaaS, media, and digital transformation
“Candidate participated in Launch Factory's venture studio selection process, which introduced them to a formalized startup methodology; they then went on to found and recently exit their own business. They are highly motivated to keep building companies, with a clear emphasis on creating products that serve a community and validating market need through product-market fit.”
Entry-level Data Engineer specializing in ETL, analytics, and anomaly detection
“Worked on industrial pump analytics at SitePro, where they built an anomaly detector using messy sensor and pump data and used historical failure and maintenance cost analysis to make the business case to stakeholders. They combine SQL/Python data preparation with practical stakeholder communication around metrics like churn and operational impact.”
Mid-level Data Analyst specializing in financial services and fraud analytics
“Analytics candidate currently at Facteus with hands-on experience turning messy transactional data into trusted reporting layers in Snowflake and Power BI. They combine SQL and Python automation with strong validation, performance tuning, and stakeholder-facing metric design, including cohort-based retention and segmentation work that improved trust and adoption of analytics.”
Mid-level Data Analyst specializing in healthcare and business intelligence
“Healthcare analytics candidate with hands-on experience turning messy EHR, billing, and operational data into validated SQL datasets and automated Python/Airflow pipelines. They appear strongest in hospital KPI reporting—especially length of stay, readmissions, retention, and bed utilization—and have owned projects from metric definition through Power BI delivery and impact measurement.”
Mid-level Business Analyst specializing in operations data and reporting
“Candidate has hands-on project experience in healthcare analytics, using SQL, Python, and Power BI to analyze CMS hospital readmissions and HRRP penalty risk in Florida. Their work centers on turning messy CMS flat files into reporting-ready datasets, benchmarking hospitals against national references, and surfacing financial risk through dashboards.”
Mid-level Business Analyst specializing in healthcare and banking compliance
“Healthcare analytics professional with Cigna experience turning complex claims, eligibility, and provider data into trusted reporting layers using SQL, Python, and Power BI. Stands out for combining deep data-quality rigor with end-to-end ownership of operational analytics projects, including standardized retention/churn metric design and automated reporting workflows.”
Senior Software Engineer specializing in backend systems, AI/LLM integration, and cloud infrastructure
“Backend engineer with experience in highly regulated and high-stakes systems, including an airline crew messaging platform requiring near-zero-error real-time operations and a HIPAA-compliant mental health application built from an early-stage concept. They also show strong operational maturity, having owned a GoDaddy production incident through resolution and then led deployment pipeline improvements that reduced build failures by 40% and doubled deployment frequency.”
Entry-level Full-Stack Software Engineer specializing in FinTech and web applications
“Built end-to-end internal and user-facing automation/data features, including a Selenium-based BU course scraper with around 1,800 users and a CSV export system that became the company standard at Triple. Shows initiative in ambiguous environments, working directly with business stakeholders and resolving production infrastructure issues involving AWS and Terraform.”
Senior Operations Analyst specializing in business intelligence and financial services
“Analytics-focused candidate with hands-on experience turning messy datasets into reporting-ready outputs using SQL, building reproducible Python workflows, and operationalizing metrics in R Shiny dashboards. They stand out for combining structured data analysis with NLP and segmentation in marketplace-style datasets such as Airbnb, real estate, and sports salary data to drive pricing, engagement, and demand insights.”
Junior Software Engineer specializing in AI, data, and full-stack applications
“Builder with a mix of backend engineering, product instinct, and startup execution: they shipped a legal BI platform from scratch that handled 1,000+ cases, cut reporting time by 80%, and saved $30K annually. They also move quickly in ambiguous environments, from launching a roommate app across iOS/Android after user discovery to building a RAG system with a 50+ case evaluation suite and a cloud dev environment in under 48 hours.”
Mid-level Full-Stack Software Engineer specializing in microservices and scalable backend systems
“Backend/microservices engineer (Java/Spring Boot, Kafka, Angular microfrontends) with Teradata experience building distributed analytics/query routing platforms and delivering 20–30% latency reductions through event-driven redesign and reliability hardening. Also built and shipped an end-to-end multimodal medical imaging AI feature (LLaVA/Mistral 7B + LoRA) with production guardrails like confidence-based human review, drift monitoring, and audit logs.”
Junior Full-Stack & ML Engineer specializing in research tooling and applied machine learning
“Full-stack engineer and ML assistant in UC Irvine’s CS department who deployed a lab project showcase platform and integrated on-demand execution of computational projects using Docker for isolation. Also built and optimized Linux cloud/cluster test automation for research, diagnosing RAM and network sync bottlenecks, and later led development of a Python-based predictive analytics tool for musicians using probabilistic graphical models and flexible data pipelines.”
Mid-level AI Engineer specializing in GenAI, LLM integration, and RAG pipelines
“Built and led deployment of an autonomous, self-correcting multi-agent knowledge retrieval and validation system at HCA Healthcare to reduce heavy manual research/validation in clinical/compliance documentation. Deeply focused on production reliability and cost—used LangGraph StateGraph orchestration plus ONNX/CUDA/quantization to cut GPU costs by 25%, and partnered with the Compliance VP using real-time contradiction-rate dashboards to hit a 40% automation goal without compromising compliance.”
Mid-level AI/ML Engineer specializing in cloud data engineering and GenAI
“AI/LLM engineer with production experience in legal tech: built a GPT-4 + LangChain RAG summarization system at Govpanel that reduced legal case-file review time by 50%+. Previously at LexisNexis, orchestrated end-to-end Airflow data/AI pipelines processing 5M+ legal documents daily, improving ETL runtime by 35% with robust validation, monitoring, and SLAs.”