Pre-screened and vetted.
Senior ML Engineer & Data Scientist specializing in LLM agents, retrieval/ranking, and MLOps
“Machine Learning Engineer currently at Webster Bank building an enterprise-scale LLM agent for Temenos Journey Manager/Maestro, using RAG-style multi-stage retrieval with FAISS/Pinecone, hybrid dense+sparse search, and LoRA fine-tuning optimized via NDCG/MAP and A/B testing. Previously handled messy incident/telemetry data at Deuta Werke GmbH with deterministic + fuzzy entity resolution, and has strong production data engineering experience across Spark/Hadoop and Python ETL systems.”
Mid-level Data Engineer specializing in cloud lakehouse and streaming platforms
“Data engineer focused on building production-grade pipelines on AWS (Kafka/Kinesis/Glue/S3) through to curated serving layers in Snowflake and Delta Lake. Emphasizes automated data quality validation (PySpark + CI/CD), modular dbt transformations for analytics (customer spending, risk metrics), and operational reliability with CloudWatch and DLQs, with outputs consumed by BI tools and ML pipelines for fraud detection and risk analytics.”
Mid-level Data Engineer specializing in multi-cloud real-time and batch data pipelines
“Data engineer with healthcare domain experience who owned 100M+ record pipelines end-to-end (Kafka/Kinesis/ADF → PySpark/dbt validation → Spark SQL transforms → Snowflake/Power BI serving). Built production-grade reliability practices (Airflow orchestration, CloudWatch/Grafana monitoring, pytest + contract/regression tests, idempotent ingestion/backfills) and delivered measurable improvements: 35% lower latency and 40% better query performance.”
Senior Data Scientist and AI/ML Engineer specializing in GenAI and cloud ML
“ML/AI engineer with hands-on experience owning systems from experimentation through deployment and monitoring, including a Bank of Montreal project that increased timely interventions by 12%. Also brings GenAI/RAG experience with evaluation and safety guardrails, plus clinical NLP pipeline work extracting medication data from notes for patient risk prediction.”
Mid-level Python Backend Engineer specializing in cloud-native and AI-powered systems
“Backend/AI engineer who has shipped an LLM-powered enterprise support-ticket agent at Comcast, building a production-grade microservices pipeline (FastAPI, SQS, Redis) with strong observability (OpenTelemetry/Splunk/Prometheus/Grafana) and reliability patterns (async, caching, circuit breakers, idempotency). Demonstrated quantified impact at scale—processing 10k+ tickets/day while improving response SLAs and routing accuracy through evaluation and human feedback loops.”
Senior Full-Stack Developer specializing in Python, cloud microservices, and AI/ML
“Backend/data engineer with hands-on production experience across GCP and AWS: built FastAPI microservices on Cloud Run and delivered AWS Lambda + ECS Fargate systems with Terraform/GitHub Actions. Strong in data engineering (Glue/Spark, S3/Redshift) and modernization (SAS to Python/SQL), with proven reliability and incident ownership—including cutting a 20+ minute reporting query to under 2 minutes.”
Mid-level Data Engineer specializing in cloud data platforms, Spark, and streaming pipelines
“Data/MLOps engineer (Cognizant background) who owned an AWS/Airflow/Snowflake healthcare transactions pipeline processing ~8–10M records/day and cut pipeline/data-quality incidents by ~33%. Also built and deployed a production FastAPI model-inference service on Kubernetes (Docker, HPA) with strong observability (Prometheus/Grafana), versioned endpoints, and resilient backfill/idempotent external data ingestion patterns.”
Senior Full-Stack Software Developer specializing in IoT and cloud systems
“Frontend-focused engineer who built a full movie recommendation system from concept to production, comparing classic collaborative filtering with LLM-based recommendation approaches on AWS. Emphasizes scalable architecture, strict TypeScript data contracts, and high-quality Next.js/React UI patterns (defensive states, scoped state management, performance optimization) with disciplined QA and feature-flagged rollouts.”
Mid-level AI/ML Engineer specializing in fraud detection and risk analytics in Financial Services
“Finance-domain ML/LLM engineer who has shipped production systems including a RAG-based financial insights assistant with a custom post-generation validation layer that verifies atomic claims against retrieved source text to prevent hallucinations in compliance-critical workflows. Also built large-scale MLOps automation on AWS using Kubeflow + MLflow + CI/CD for fraud detection and credit risk models processing 500M+ transactions/day with a 99.99% uptime goal, and partnered closely with JP Morgan risk/compliance stakeholders on NLP-driven compliance monitoring.”
Executive Systems Architect specializing in distributed edge-to-cloud and real-time data platforms
“Has worked across multiple startup stages from pre-funding through Series D and emphasizes rigorous idea validation through direct conversations with both end users and purchasing decision-makers. Interested in applying NLP to automate summarization/abstracting of highly technical articles, with a balanced view of entrepreneurship that prioritizes health and family.”
“Built and deployed a production RAG-based internal knowledge assistant that let analysts query company documents in natural language, using LangChain/LangGraph with Pinecone and a FastAPI service for integration. Emphasizes reliability in production through hallucination mitigation (retrieval tuning + prompt guardrails) and measurable evaluation/monitoring (accuracy, latency, task completion, hallucination rate), iterating based on user feedback.”
Mid-level Full-Stack Developer specializing in FinTech and cloud-native web apps
“Backend engineer who built a containerized Flask service powering an engineering metrics dashboard by syncing GitHub and Jira data into PostgreSQL, with strong emphasis on schema design, query performance, caching, and background processing. Has hands-on experience with SaaS multi-tenancy (tenant scoping + Postgres RLS) and integrating AI/ML inference via separate model-serving services (FastAPI + TensorFlow Serving) and external APIs (OpenAI/Hugging Face/PyTorch).”
Senior Site Reliability Engineer specializing in multi-cloud Kubernetes and DevSecOps
“Cloud/Kubernetes-focused production engineer with experience running 99.95% uptime platforms across AWS/Azure/GCP. Strong in incident response and performance troubleshooting (including a 30% MTTR reduction), and in building secure CI/CD and Terraform-based IaC for AKS/GKE microservices with robust change controls and rollback practices. Notably does not have direct IBM Power/AIX/VIOS/HMC or PowerHA/HACMP ownership.”
Mid-level Java Full-Stack Developer specializing in banking and telecom platforms
“Frontend-focused engineer with experience at T-Mobile and U.S. Bank who maintained a TypeScript utility library (types, tests, build pipeline, and docs) adopted by multiple teams, and improved React workflow performance by refactoring components and optimizing data fetching. Known for pragmatic cross-team support—reproducing issues quickly, shipping well-tested fixes, and managing changes carefully to avoid breaking downstream apps.”
Senior Data & Backend Engineer specializing in cloud data pipelines and LLM/RAG systems
“Data engineer with end-to-end ownership of large-scale retail and clinical data ingestion/processing on AWS, including real-time streaming and batch pipelines. Delivered measurable outcomes: 20M daily transactions processed, latency cut from 4 hours to 5 minutes, ~70% fewer failures, and 120+ pipelines running at 99.8% reliability with full audit compliance.”
Mid-level Data Scientist specializing in Generative AI, MLOps, and cloud data platforms
“GenAI/ML engineer (CitiusTech) who has deployed production RAG systems for compliance/operations document Q&A, using Pinecone + FastAPI microservices on Kubernetes with strong monitoring and guardrails. Also built a GenAI-powered incident triage/routing solution in collaboration with non-technical stakeholders, achieving 35% faster response times and 40% fewer misclassified tickets, and has hands-on orchestration experience with Airflow and AutoSys.”
Intern Software Engineer specializing in cloud, big data, and test automation
“Internship experience at Qualitest building and deploying an LLM-powered test automation system that reduced manual test creation effort and improved efficiency by ~40%. Demonstrates strong production engineering for LLM systems (timeouts/retries/monitoring/caching, prompt optimization, batching) and has scaled workflows to 100+ concurrent jobs; also has orchestration experience with AWS Step Functions and Kubernetes.”
Mid-level Data Engineer specializing in cloud data platforms and governed analytics
“Data engineer with Optum experience building end-to-end healthcare data pipelines for HL7/FHIR, processing millions of records daily across Kafka streaming and Databricks/Spark batch. Strong focus on data quality (schema enforcement/validations), reliability (Airflow monitoring/alerts), and analytics-ready serving in Snowflake powering Power BI/Tableau, with CI/CD via Git and Jenkins.”
Mid-level Cloud Data Engineer specializing in Azure/AWS pipelines and medallion architecture
“Data engineer focused on reliability and data quality, owning end-to-end pipelines processing ~100k–300k records/day. Implemented robust validation and monitoring that cut reporting issues by ~30%, and built stable external data collection with anti-bot measures, backfills, and schema-change detection while maintaining backward-compatible internal data services.”
Mid-level Full-Stack Software Engineer specializing in healthcare, cloud, and data platforms
“Backend/platform engineer who owned a real-time customer analytics microservice stack in Python/FastAPI with Kafka streaming into PostgreSQL, including schema enforcement (Avro) and high-throughput optimizations. Strong Kubernetes + GitOps practitioner (EKS/GKE, Helm, Argo CD) who has handled CI/CD reliability issues with automated pre-deploy checks and rollbacks, and supported major migrations (on-prem to AWS; VM to EKS) with blue-green cutover planning.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and external data collection systems on AWS (S3/Lambda/SQS/Glue/EMR) using PySpark/SQL, serving curated datasets to Snowflake/Redshift for finance and fraud teams. Has operated a large-scale crawler ingesting millions of records/day with anti-bot tactics, schema versioning/quarantine, and CloudWatch/Datadog monitoring, and also shipped a versioned REST API with caching and query optimization.”
Mid-level AI/ML Engineer specializing in LLM, RAG/GraphRAG, and fraud analytics
“LLM/agent engineer who has deployed a production internal assistant to reduce employee inquiry resolution time while maintaining regulatory compliance. Experienced with RAG, hallucination risk triage, and graph-based orchestration (LangGraph) for enterprise/banking-style workflows, emphasizing schema-validated, citation-backed, tool-constrained agent designs and tight collaboration with non-technical business/compliance stakeholders.”
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and MLOps
“Data engineer at AT&T focused on large-scale telecom (5G/IoT) data platforms, owning end-to-end pipelines from Kafka/Azure ingestion through Databricks/Delta Lake transformations to serving analytics and ML. Has operated at very high volumes (~50+ TB/day) and delivered measurable performance gains (25–30% faster processing) plus improved reliability via Airflow monitoring, robust data quality checks, and resilient external data collection patterns (rate limiting, retries, dynamic schemas).”