Pre-screened and vetted.
Mid-level Full-Stack Java Developer specializing in cloud-native microservices
“Full-stack Java developer with IBM and Epic Systems experience modernizing legacy enterprise apps into microservices and delivering customer-facing healthcare claims workflows at scale (2M+ transactions/day). Strong blend of product engineering (APIs + React/TypeScript UI) and production operations on AWS, including performance incident remediation via query optimization, indexing, and autoscaling.”
Mid-level Data Scientist specializing in MLOps, LLM/RAG applications, and deep learning
“Built and deployed a production compliance automation RAG system at Citi that generates citation-backed, schema-validated risk summaries for regulatory document review. Emphasizes regulated-environment reliability with retrieval-only grounding, abstention, confidence thresholds, and immutable audit logging, plus orchestration using LangChain/LangGraph and Airflow. Reported ~60% reduction in compliance review effort while maintaining high precision and traceability.”
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and real-time fraud detection
“GenAI/ML engineer who has shipped production agentic systems in highly regulated and high-throughput environments, including an AWS Bedrock-based fraud/compliance workflow at U.S. Bank with PII redaction and hallucination detection that cut investigation time by 50%+. Also built and evaluated RAG and recommendation systems at Target, using RAGAS-driven testing, hybrid retrieval with re-ranking, and SHAP explainability dashboards to align model behavior with merchandising business KPIs.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics platforms
“Data engineer with healthcare and enterprise experience (Molina Healthcare, Dell Technologies) building and operating high-volume batch + streaming pipelines across AWS and Azure. Strong focus on data quality (schema validation, fail-fast checks), reliability (monitoring/alerts, retries), and performance tuning (Spark/partitioning), with measurable runtime reduction and improved downstream trust.”
Mid-level Data Engineer specializing in cloud data pipelines and financial services warehousing
“Data engineer (Charles Schwab) who took ownership of an unstable, ambiguous nightly financial data pipeline and rebuilt it into a reliable, incremental AWS Glue/Airflow/Redshift system feeding Power BI. Created a custom Python data-quality framework with hard-stop gating and schema drift detection, improving data integrity to 99.9%, cutting runtime by ~20%, and reducing incidents and tickets (35% fewer schema-related dashboard incidents; 30% fewer investigations).”
Mid-level Software Engineer specializing in Java microservices and AWS cloud-native systems
“Full-stack engineer who has owned customer-critical analytics and course intelligence platforms end-to-end (React/TypeScript + Node/Express + SQL), including an internal self-serve Reporting & Analytics Center adopted by 1,000+ users. Demonstrates strong systems thinking across performance (2× faster heavy reports), reliability (feature flags, testing), and distributed architecture (RabbitMQ microservices with idempotency, DLQs, and correlation-ID observability).”
Mid-level Data Engineer specializing in real-time pipelines and cloud data platforms
“Backend engineer with hands-on experience building secure Python/Flask services (sessions, JWT, RBAC) and optimizing PostgreSQL/SQLAlchemy performance, including custom SQL using CTEs/window functions profiled via EXPLAIN ANALYZE. Also integrates LLM features via OpenAI/Azure into backend systems and improves scalability with RabbitMQ-driven async processing, caching, and multi-tenant data isolation patterns.”
Senior Machine Learning Engineer specializing in MLOps and NLP/GenAI
“Built a production LLM-agent framework for a startup that performs daily financial/trading analysis by combining live market data with internal tools, including a centralized memory module to prevent context drift and reduce hallucinations. Also implemented an Airflow-orchestrated retail price forecasting pipeline deployed to AWS endpoints, scaling parallel workloads via Kubernetes Executor and validating systems with rigorous functional + LLM-specific metrics and cross-team collaboration.”
Mid-level Machine Learning & Data Infrastructure Engineer specializing in MLOps on AWS
“Built and deployed a fine-tuned Qwen 2.5 14B model into production at Dextr.ai as the backbone for hotel-operations agentic workflows, running on AWS EKS with Triton and TensorRT-LLM. Demonstrates strong cost-aware LLM engineering (QLoRA, FP8/BF16 on H100) plus rigorous benchmarking/observability (Prometheus, LangSmith) with reported sub-30ms TTNT. Previously handled long-running ETL orchestration with Airflow at GE Healthcare and Lowe's.”
Mid-level Data Engineer specializing in cloud lakehouse and streaming platforms
“Data engineer focused on building production-grade pipelines on AWS (Kafka/Kinesis/Glue/S3) through to curated serving layers in Snowflake and Delta Lake. Emphasizes automated data quality validation (PySpark + CI/CD), modular dbt transformations for analytics (customer spending, risk metrics), and operational reliability with CloudWatch and DLQs; data consumed by BI tools and ML pipelines for fraud detection and risk analytics.”
Mid-level Data Engineer specializing in multi-cloud real-time and batch data pipelines
“Data engineer with healthcare domain experience who owned 100M+ record pipelines end-to-end (Kafka/Kinesis/ADF → PySpark/dbt validation → Spark SQL transforms → Snowflake/Power BI serving). Built production-grade reliability practices (Airflow orchestration, CloudWatch/Grafana monitoring, pytest + contract/regression tests, idempotent ingestion/backfills) and delivered measurable improvements: 35% lower latency and 40% better query performance.”
Mid-level Data Engineer specializing in capital markets post-trade data platforms
“Data/streaming engineer in capital markets who led an end-to-end trade settlement data product (Kafka→MongoDB→data lake) with rigorous data-quality logic and ~$175K first-year operational impact. Also built a low-latency Go-based CME market data engine feeding SOFR curve generation, using MSK on EKS with performance tuning (idempotency, compression, partitioning) to achieve sub-100ms delivery.”
Junior Data Analyst specializing in financial and operational analytics
“Analytics professional with experience at KPMG turning messy operational and financial data from SQL Server and AWS S3 into clean reporting datasets and automated Python workflows. Combines SQL, Python, Power BI, and experimentation methods to deliver stakeholder-aligned KPI dashboards and marketing performance insights with a strong focus on data integrity and reproducibility.”
Junior Software Engineer specializing in full-stack, data engineering, and mobile apps
“Built production LLM agents at Hivenue and Amazon, spanning consumer booking automation and internal data-query/reporting workflows. Stands out for combining conversational UX with strong reliability engineering—strict tool use, state machines, schema validation, idempotency, and evaluation pipelines—and can point to measurable impact including a 21% reduction in time to book and a 12% conversion lift.”
Senior Data Scientist and AI/ML Engineer specializing in GenAI and cloud ML
“ML/AI engineer with hands-on experience owning systems from experimentation through deployment and monitoring, including a Bank of Montreal project that improved timely interventions by 12%. Also brings GenAI/RAG experience with evaluation and safety guardrails, plus clinical NLP pipeline work extracting medication data from notes for patient risk prediction.”
Senior Software Engineer specializing in distributed systems and FinTech
“Data/analytics-focused engineer who builds end-to-end KPI reporting and validation products used daily by plant leads and leadership to track yield, downtime, and defects. Combines Python/SQL + Power BI data pipelines with strong data-quality practices (automated validation, monitoring/alerts) and has experience designing scalable frontend architecture in TypeScript/React and working in distributed/microservices-style data systems.”
Senior Full-Stack & Mobile Engineer specializing in Node.js and React
“Backend engineer with TaskRabbit experience building and operating payment/booking services in Python/Django on AWS (ECS + Lambda) with Kafka/SQS eventing. Demonstrates strong production reliability and incident ownership in high-stakes payment flows (idempotency, strict timeouts, retries, monitoring/alerting) plus data/ETL work in AWS Glue and measurable SQL performance wins.”
Senior Full-Stack Developer specializing in Python, cloud microservices, and AI/ML
“Backend/data engineer with hands-on production experience across GCP and AWS: built FastAPI microservices on Cloud Run and delivered AWS Lambda + ECS Fargate systems with Terraform/GitHub Actions. Strong in data engineering (Glue/Spark, S3/Redshift) and modernization (SAS to Python/SQL), with proven reliability and incident ownership—including cutting a 20+ minute reporting query to under 2 minutes.”
Senior Python Developer specializing in data engineering, MLOps, and cloud platforms
“Backend/data engineer with production experience building secure Django/DRF APIs (JWT RS256 + rotating refresh tokens), background processing with Celery, and strong reliability practices (timeouts, retries/backoff, structured logging, audit trails). Has delivered AWS solutions spanning Lambda + ECS with IaC/CI-CD and built Glue/PySpark ETL pipelines with schema evolution and data-quality quarantine patterns; also modernized a legacy SAS pipeline to Python/PySpark with parallel-run parity validation and phased rollout.”
Mid-level Generative AI Engineer specializing in LLM apps, RAG, and MLOps
“LLM/GenAI engineer with U.S. Bank experience building a production financial-document intelligence platform using LangChain/LangGraph, GPT-4, and Amazon OpenSearch. Delivered a RAG-based assistant for compliance/audit teams with grounded, cited answers, focusing on reducing hallucinations and latency, and deployed securely on AWS (SageMaker/EKS) with CI/CD and evaluation tooling (LangSmith, RAGAS).”
Mid-level Data Engineer specializing in cloud data platforms, Spark, and streaming pipelines
“Data/MLOps engineer (Cognizant background) who owned an AWS/Airflow/Snowflake healthcare transactions pipeline processing ~8–10M records/day and cut pipeline/data-quality incidents by ~33%. Also built and deployed a production FastAPI model-inference service on Kubernetes (Docker, HPA) with strong observability (Prometheus/Grafana), versioned endpoints, and resilient backfill/idempotent external data ingestion patterns.”
Mid-level Data Engineer specializing in cloud data pipelines and streaming
“Data engineer with experience at Wells Fargo and Accenture owning end-to-end production pipelines processing hundreds of millions of transactional/risk records daily. Strong focus on data quality and reliability (reconciliation checks, schema drift detection, CloudWatch alerting) plus Spark performance tuning and idempotent backfills using Delta Lake/merge logic across AWS (S3/EMR/Databricks/Redshift) and Azure (ADF/Azure DevOps/Azure Monitor).”
Mid-level Data Engineer specializing in AWS/Azure pipelines and streaming analytics
“Data engineer with experience across healthcare and geospatial risk systems, owning end-to-end pipelines from ingestion through serving on AWS/Azure stacks. Built HIPAA-compliant data quality gates and CDC for millions of daily claims, and also delivered a real-time wildfire risk platform with 20-minute refresh cycles and a 60% data accuracy lift. Strong in streaming (Kafka), Spark performance tuning, and production-grade orchestration/CI/CD (Airflow, Docker, Jenkins, GitHub Actions, Terraform).”
Senior Data Engineer specializing in cloud data platforms and automated data quality
“Data engineer at CenterPoint Energy who built and operated multiple production-grade GCP data systems: a daily Snowflake→BigQuery replication framework (150+ tables) with Monte Carlo/Atlan-driven observability and schema-drift protection, plus a FastAPI metrics service for pipeline health. Demonstrated measurable impact (40% faster dashboard queries, 70% less manual refresh work, zero data loss) and strong operational rigor (scaling Cloud Run jobs, SAP SLT reconciliation, quarantine patterns, CI/CD via GitHub Actions + Terraform).”