Pre-screened and vetted.
Mid-level Full-Stack .NET Developer specializing in microservices and cloud deployments
Mid-level Java Full-Stack Developer specializing in microservices and cloud platforms
Mid-level Software Engineer specializing in cloud-native microservices
Senior Data Engineer specializing in AWS cloud data platforms and streaming analytics
Mid-level Java Full-Stack Developer specializing in microservices and cloud-native systems
Senior Full-Stack Software Engineer specializing in cloud-native Java microservices
Senior Java Full-Stack Developer specializing in banking and cloud microservices
Senior Power BI Developer specializing in healthcare and financial analytics
Senior Full-Stack Developer specializing in .NET, Azure, and enterprise platforms
Mid-level AI Engineer specializing in production LLM, RAG, and agentic AI systems
Senior Software Test Engineer specializing in automation, API, performance, and accessibility testing
Senior Full-Stack Java Developer specializing in Spring Boot microservices and cloud platforms
Senior AI Python Engineer specializing in Generative AI and MLOps
Senior Data Engineer specializing in cloud data platforms and ML pipelines
“Data engineer focused on AWS-based enterprise data platforms, owning end-to-end pipelines from multi-source batch/stream ingestion (Glue/Kinesis/StreamSets/Airflow) through PySpark transformations into curated datasets for Redshift/Snowflake. Emphasizes production reliability with strong monitoring/observability and data-quality gates, and reports a ~30% performance improvement along with tighter SLAs and lower latency after optimization.”
Mid-level Full-Stack Java Engineer specializing in microservices, React, and Azure
“Full-stack engineer with hands-on ownership of a real-time loyalty rewards notification system at Dell, spanning React UI, Spring Boot/Node microservices, Kafka event processing, and Oracle/Postgres persistence. Strong production operations experience across AKS/Azure DevOps and AWS (EC2/RDS/S3, autoscaling, CloudWatch), including resolving peak-load Kafka lag and API latency incidents through scaling and performance tuning.”
Mid-level Data Engineer specializing in cloud data pipelines for healthcare and financial services
“Data engineer with ~4 years of experience (Cigna) building and operating Azure Data Factory pipelines for healthcare claims/member/provider data at 2–3M records/day. Emphasizes reliability and downstream safety via schema and data-quality validation, quarantine workflows, idempotent processing, and backfills; also cut runtime by ~20% through SQL optimization and served curated datasets through versioned views and well-documented, analyst-friendly interfaces.”
Mid-level Software Engineer specializing in cloud microservices and data processing
“Data-focused engineer who has built near-real-time trending-news sentiment pipelines end-to-end (API/web ingestion, validation, transformations, and dashboard serving) and implemented reliability patterns such as retries with exponential backoff and backfills. Also shipped Java/Spring Boot REST APIs backed by SQL with indexing and pagination, and stood up an early-stage QR-based attendance MVP on Firebase, hardening it iteratively through logging and validation.”
Mid-level Full-Stack .NET Engineer specializing in Sitecore and cloud-native microservices
“Backend/web API engineer with hands-on experience deploying .NET Core APIs to Azure App Service and stabilizing production systems through disciplined log-driven troubleshooting, environment configuration management, and SQL performance tuning (execution plans, query rewrites, indexing). Has also debugged cross-layer incidents involving DB locks and network latency, and has adapted Python/XML automation scripts to customer-specific requirements while improving their logging and performance.”
Senior Full-Stack .NET Developer specializing in cloud-native web applications
“Backend/ML systems engineer who built a Flask + PostgreSQL internal ticketing platform and demonstrates strong database/ORM performance depth (indexes, partitioning, RLS multi-tenancy). Notably optimized a high-throughput attachment OCR/embedding pipeline with batching, deduplication, and Redis caching, cutting median latency from 45s to 10s and reducing worker cost by 35% while increasing throughput 4x.”