Pre-screened and vetted.
Mid-level Machine Learning & Data Engineer specializing in MLOps and cloud data platforms
Mid-level Software Engineer specializing in backend and full-stack systems
Mid-level Data Engineer specializing in cloud lakehouse and streaming analytics
Senior AI & Machine Learning Engineer specializing in NLP, GenAI, and MLOps
“ML/GenAI practitioner with healthcare domain depth who built and deployed a production cervical-cancer EMR classification system using a hybrid rules + medical BERT approach, optimized for high recall under severe class imbalance and PHI constraints. Experienced running end-to-end production ML/LLM pipelines with Apache Airflow (validation, promotion/rollback, monitoring, retraining) and partnering closely with clinicians to calibrate thresholds and implement human-in-the-loop review.”
Mid-level Software Engineer specializing in cloud-native systems, automation, and LLM-enabled robotics
“React-focused engineer who built a full-stack analytics/test-metrics dashboard (React frontend + Python backend) and turned common UI pieces (data tables, filter panels, chart wrappers) into a reusable internal component library with docs, examples, and basic tests. Strong on profiling-driven performance optimization (React Profiler, memoization) and on owning ambiguous internal-tool projects end-to-end; now planning to package internal patterns into public open-source components.”
Mid-level AI/ML Engineer specializing in NLP, MLOps, and Generative AI
“Built and deployed a production generative AI chatbot at NVIDIA using LangChain + GPT-3 integrated with internal data sources, cutting response time nearly in half and improving CSAT by ~12 points. Also delivered LLM-driven QA tools by fine-tuning Hugging Face transformer models and deploying via an AWS-based pipeline (Lambda/Glue/S3) with orchestration (Airflow/Step Functions), CI/CD, Kubernetes, and monitoring (MLflow/Splunk/Power BI).”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with experience at Moderna and Block owning high-volume (≈10TB/day) production pipelines on AWS, using Kafka/S3/Glue/dbt/Snowflake with strong data quality and observability practices (schema validation, anomaly detection, CloudWatch monitoring). Also built external financial API ingestion with Airflow retries, throttling/token rotation, and schema versioning, and helped stand up an early-stage biomedical data platform with CI/CD and incident debugging.”
Intern Full-Stack/AI Software Engineer specializing in GenAI and cloud microservices
“Backend engineer who owned the AI/data pipeline layer for an EV-charging management platform (Ampure Intelligence), ingesting real-time charger telemetry via OCPP and serving FastAPI APIs to web/mobile clients. Strong in production reliability for asynchronous systems (state reconciliation, idempotency), Kubernetes GitOps (ArgoCD), Kafka streaming, and zero-downtime cloud-to-on-prem migrations; also improved LSTM-based forecasting through targeted preprocessing.”
Mid-level Software Engineer specializing in AWS distributed systems and microservices
“Backend/ML-systems engineer with experience (including Amazon) building real-time face recognition services using PyTorch (MTCNN/FaceNet) and AWS (SQS/S3/Lambda/EC2) with a focus on low latency, burst handling, and cost control. Also led a revenue-critical legacy pricing workflow migration to a serverless event-driven architecture using strangler-pattern rollout, simulation-based validation, and strong security practices (JWT/RBAC/RLS).”
Senior Data Engineer specializing in cloud lakehouse and real-time streaming pipelines
“Senior data engineer with experience in both healthcare (CVS Health) and financial services (Bank of America), building large-scale Azure lakehouse pipelines (30+ EHR sources, ~5TB) and real-time streaming services (Event Hubs/Kafka) for patient vitals. Strong focus on reliability and data quality (Great Expectations, monitoring/alerting, schema drift automation), with measurable outcomes like 50% runtime reduction and 99%+ uptime for regulatory reporting pipelines.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
“Data engineer with Intuit experience owning end-to-end, high-volume financial data pipelines (API/S3 ingestion, Airflow orchestration, Spark/PySpark + SQL transforms, Snowflake marts). Strong focus on reliability and data quality—achieved 99.8% SLA and cut discrepancies by 35% using Great Expectations, reconciliation, schema versioning, and automated backfills; also built near real-time Kafka/API data services with CI/CD and observability.”
Mid-level Data Engineer specializing in large-scale analytics platforms
“Data/Backend engineer with experience at Naukri building large-scale analytics products over a 130M+ user base, including Spark/Airflow pipelines and Kafka-based clickstream validation with Confluent Schema Registry. Also built an audience segmentation backend (Athena/S3 + Spring Boot APIs) for non-technical internal teams and recently shipped a GenAI customer data audit system (FastAPI/Postgres/Llama) that cut sales-planning validation from ~3 months to ~1 week.”
Mid-level Software Engineer specializing in cloud, backend systems, and microservices
“Full-stack engineer with hands-on ownership of a customer-facing advanced performance metrics experience in the Amazon S3 console, spanning React UI, Python/Node services, Redshift/RDS data access, and AWS IaC and CI/CD with CloudWatch/Route53 operational readiness. Demonstrates strong production instincts around resilience (partial failures, multi-region inconsistencies), progressive rollouts/feature flags, and reliable ETL/integration patterns (idempotency, backfills, reconciliation).”
Intern Software Engineer specializing in data pipelines and full-stack web development
“Interned at Radar (geolocation infrastructure), where they automated multiple geospatial data ingestion pipelines (including US/Canadian address ingestion), orchestrating Spark (Scala) jobs via Python-based Airflow and using GitOps-style CI/CD workflows.”
Mid-level Software Engineer specializing in cloud platforms, data engineering, and distributed systems
“Full-stack engineer who built and owned an AI-assisted job-matching dashboard in Next.js App Router/TypeScript, keeping LLM logic server-side and improving performance via deduplication, caching/revalidation, and streaming (35% fewer duplicate LLM calls; 40% faster first render). Also has strong data/backend chops: designed Postgres models and optimized queries at million-record scale (1.8s to 120ms) and built durable AWS multi-region telemetry workflows with idempotency, retries, and monitoring.”
Senior Data Engineer specializing in cloud data platforms and real-time pipelines
“Data engineer focused on reliability and observability, building end-to-end pipelines processing millions of records/day from sources like S3 and Kafka. Has hands-on experience with Airflow-based data quality automation, PySpark/Databricks transformations, and shipping versioned Python REST APIs deployed via Docker/Kubernetes with CI/CD (Jenkins) and monitoring (CloudWatch/Azure Logs).”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer with healthcare (CVS Health) experience who migrated production PySpark workloads to native BigQuery SQL and built a Great Expectations-based validation microservice on GKE (Flask + REST) integrated into Cloud Composer. Has operated high-volume pipelines (~300–400GB/day) and designed external vendor ingestion on AWS (Lambda/Step Functions/Glue) with schema-drift detection, alerting, and backfill-safe controls to protect downstream Snowflake/BigQuery tables.”
Senior Data Scientist/Generative AI Engineer specializing in fraud, risk, and MLOps
“Built and deployed a production LLM/RAG fraud investigation system to replace manual investigator workflows, combining transaction data, historical cases, and policy documents with agent-style steps and LoRA fine-tuning. Demonstrates strong reliability engineering (grounding, citations, abstention paths), performance optimization (retrieval/indexing/caching), and end-to-end MLOps orchestration using Azure ML Pipelines/MLflow plus Kubernetes/Argo with canary and rollback deployments.”
Mid-level Data Scientist specializing in NLP, LLMs, and cloud ML platforms
“LLM/MLOps engineer who has shipped production systems for complaint intelligence and contact-center NLU, including LoRA/RLHF-tuned LLaMA models deployed on GKE with vLLM and Vertex AI batch pipelines to BigQuery. Demonstrates strong practical focus on hallucination control, data imbalance mitigation, and production monitoring (Langfuse) with regression testing and canary rollouts, plus experience orchestrating complex workflows with AWS Step Functions.”
Mid-level AI/ML Engineer specializing in GenAI, RAG, and enterprise data platforms
“Built and shipped a production LLM-powered RAG assistant for enterprise internal document search (PDFs, knowledge bases, structured data), addressing real-world issues like noisy documents, hallucinations, and latency with grounded prompting, retrieval-confidence fallbacks, and performance optimizations. Also partnered with compliance and business teams at JPMC to deliver a solution aligned with regulatory constraints, supported by monitoring, feedback loops, and systematic evaluation.”
Mid-level Software Engineer specializing in AWS, DevOps automation, and data platforms
“Engineer with Securonix experience deploying and operating production microservices and real-time data-processing systems at high throughput. Led AWS infrastructure, CI/CD, monitoring, and customer-driven customization for a threat-report classification solution, including rule adjustments and model retraining based on live client feedback.”
Mid-level Data Scientist/Data Engineer specializing in ML pipelines and insurance/healthcare analytics
“Built a production assistive-vision iPhone app to help visually impaired users find grocery items, training a custom YOLO detector on 2,000+ self-collected/annotated images and deploying via CoreML with a cloud multimodal LLM for navigation instructions. Brings hands-on AWS serverless + ECS container deployment (CDK/GitHub Actions) and a disciplined approach to AI workflow reliability (state-machine design, offline evals, stress tests, logging/metrics), plus experience communicating model insights to non-technical stakeholders (MOTER Technologies).”