Pre-screened and vetted.
Mid-level Data Analyst specializing in BI, analytics, and healthcare data
“Analytics professional at Optum with hands-on experience turning messy healthcare claims data from SQL, Excel, and CRM systems into validated reporting datasets and Power BI dashboards. Also built reproducible Python workflows for claims analysis and owned an end-to-end project that improved claims processing efficiency through metric design, segmentation, and stakeholder-driven operational improvements.”
Mid-level AI/ML Engineer specializing in GenAI, NLP, and financial systems
“GenAI/ML engineer with hands-on experience building production financial intelligence and document summarization systems at Citibank. Stands out for combining LLM fine-tuning, hybrid RAG, multi-agent workflows, and strong MLOps/observability practices to deliver measurable business impact, including 60% faster analyst retrieval, 31% higher precision, and 99%+ uptime.”
Senior AI/ML Engineer specializing in Generative AI, LLMs, and MLOps
“Telecom (Verizon) AI/ML practitioner who built a production multimodal system that ingests messy customer issue reports (calls, chats, emails, screenshots, videos) and turns them into confidence-scored incident summaries with reproducible steps and evidence links. Also built KPI/alarm-to-ticket correlation to rank likely root-cause domains (RAN/Core/Transport), cutting triage from hours to minutes and improving MTTR.”
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices
“Full-stack engineer who owned end-to-end delivery of a customer-facing financial services web platform and built internal tooling for engineering teams. Strong in microservices and event-driven systems (Kafka/RabbitMQ), distributed transaction management (saga), and production performance and observability, achieving a ~40% backend response-time improvement through database and query optimization.”
Mid-level Full-Stack Software Developer specializing in cloud-native microservices
“Full-stack engineer with enterprise experience at Metasystems Inc. (and Qualcomm) building high-traffic, security-sensitive systems; owned a secure transaction processing module end-to-end using Java/Spring Boot, Python/Django, and React. Strong AWS production operations (EKS/ECS/Lambda/RDS/DynamoDB) with IaC (Terraform/CloudFormation), observability, and reliability patterns; also delivered resilient ETL/integration pipelines with idempotency/retries/backfills and achieved a 50% deployment-time reduction through CI/CD and modular refactoring.”
Mid-level AI/ML Engineer specializing in Generative AI and data engineering
“IBM engineer who built and deployed a production RAG-based LLM assistant using LangChain/FAISS with a fine-tuned LLaMA model, served via FastAPI microservices on Kubernetes, achieving 99%+ uptime. Demonstrates strong practical expertise in reducing hallucinations (semantic chunking + metadata-driven retrieval) and managing latency, plus mature MLOps practices (Airflow/dbt pipelines, MLflow tracking, monitoring, A/B and shadow deployments) and effective collaboration with non-technical stakeholders.”
Mid-level Data Engineer specializing in cloud ETL/ELT and lakehouse architecture
“Data engineer focused on sales/marketing analytics pipelines, owning ingestion from CRMs/ad platforms through warehouse serving and dashboards at hundreds of thousands of records/day. Built reliability-focused systems including dbt/SQL/Python data quality gates with alerting, a resilient web-scraping pipeline (retries/backoff, anti-bot tactics, schema-change detection, backfills), and a versioned internal REST API with caching and strong developer usability.”
Mid-level Data Engineer specializing in real-time streaming and cloud data platforms
“Data engineer with Wells Fargo experience owning an end-to-end lakehouse ETL pipeline on Databricks/Azure Data Factory, processing ~480GB daily and implementing robust data quality/reconciliation across 40+ tables to reach ~99.3% reliability. Strong in performance optimization (cut runtime 5.5h→3.8h), CI/CD and monitoring, and resilient external/API ingestion with retries, schema validation, and backfills.”
Senior Data Engineer specializing in Spark, Kafka, and Databricks Lakehouse platforms
“Data engineer at Fidelity who built and operated a real-time financial transactions lakehouse on AWS/Databricks, processing millions of records daily with Kafka streaming. Demonstrated strong reliability and data quality practices (watermarking, idempotent Delta writes, validation/reconciliation, observability) and delivered measurable improvements (~30% faster jobs and ~30% fewer data issues) while enabling trusted gold-layer analytics for downstream teams.”
Mid-level Data Analyst specializing in business intelligence and cloud data platforms
“Healthcare analytics professional with TCS/Humana experience turning messy claims and eligibility data into reliable reporting assets using SQL and Python. Combines strong data engineering and analytics execution with stakeholder management, including cutting monthly claims reporting from half a day to under 5 minutes through automation and driving a provider outreach effort that reduced claim rejection rates by about 20%.”
Mid-level AI/ML Engineer specializing in healthcare NLP and MLOps
“ML/AI engineer with healthcare payer experience (Signal Healthcare, Cigna) who has shipped production fraud/claims prediction systems using Python/TensorFlow and exposed them via FastAPI/Flask microservices integrated with EHR and Salesforce. Emphasizes operational reliability and trust: Airflow-orchestrated pipelines with data quality gates, plus SHAP-based interpretability, A/B testing, and drift/debug workflows, backed by reported outcomes of 22% lower false payouts and 17% higher model accuracy.”
Mid-level Data Engineer specializing in cloud lakehouse/warehouse pipelines
“Data engineer with HCA Healthcare experience building and operating end-to-end AWS-based pipelines for clinical and operational reporting (50–100 GB/day), serving curated data into Redshift/Snowflake for Power BI/Tableau. Emphasizes production reliability (Airflow SLAs/retries/alerting, logging/observability) and strong data quality controls (reconciliations, schema/null/duplicate checks), and has shipped versioned REST APIs to expose warehouse data to downstream systems.”
Mid-level Data Engineer specializing in cloud lakehouse platforms and ETL/ELT
“Accenture data engineer who greenfielded a supply-chain lakehouse platform, building an end-to-end medallion/Delta pipeline ingesting ~1.4TB/day from 17+ ERP/WMS/TMS/shipment sources. Delivered Gold datasets to Redshift/Synapse/Databricks SQL powering Power BI/Tableau with a 99.5% SLA, while cutting runtime 30% and cloud costs 16% through Spark/Delta optimizations and robust data quality controls.”
Senior Data Engineer specializing in cloud data platforms and BI reporting
Mid-level Data Engineer specializing in cloud data pipelines and healthcare analytics
Mid-level Data Engineer specializing in scalable real-time data platforms
Principal Full-Stack Architect specializing in AI, cloud, and enterprise platforms
Mid-level AI/ML Engineer specializing in MLOps, real-time pipelines, and cloud deployment
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Senior Backend Engineer specializing in cloud-native microservices for FinTech and Healthcare