Pre-screened and vetted.
Mid-level QA Analyst specializing in manual, automation, API, and backend testing
“QA engineer with 4+ years testing UX/UI for enterprise, data-driven, and legacy platforms (including Dell Technologies), partnering closely with developers, product, and business through Agile sprints. Experienced validating end-to-end behavior across UI, REST APIs (Postman), and databases (SQL), and automating regression with Selenium (Java)/TestNG. Notable work includes diagnosing search ranking/pagination issues by correlating UI behavior with API responses and metadata consistency.”
Mid-level AI/ML Engineer specializing in Generative AI, RAG, and real-time fraud detection
“GenAI/ML engineer who has shipped production agentic systems in highly regulated and high-throughput environments, including an AWS Bedrock-based fraud/compliance workflow at U.S. Bank with PII redaction and hallucination detection that cut investigation time by 50%+. Also built and evaluated RAG and recommendation systems at Target, using RAGAS-driven testing, hybrid retrieval with re-ranking, and SHAP explainability dashboards to align model behavior with merchandising business KPIs.”
Mid-level GenAI Engineer specializing in production AI agents and evaluation pipelines
“Built and shipped a production LLM-powered internal operations automation platform using LangChain RAG (Pinecone) and FastAPI microservices, deployed on AWS EKS, serving 10k+ daily interactions. Implemented a rigorous evaluation/observability stack (golden datasets, prompt regression tests, MLflow, retrieval metrics, hallucination monitoring) that drove hallucinations below 2% and improved reliability, and partnered closely with non-technical ops leaders to cut manual lookup work by 60%+.”
Mid-level Full-Stack Developer specializing in cloud-native microservices and distributed systems
“Software engineer with hands-on ownership of both fintech checkout improvements (saved payment methods/one-click checkout with tokenization and feature-flag rollouts) and production LLM/RAG systems for customer support. Demonstrates strong operational rigor via guardrails, evaluation loops integrated into CI/CD, and scalable data pipelines handling messy PDFs/CSVs/logs with reliability and observability.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics platforms
“Data engineer with healthcare and enterprise experience (Molina Healthcare, Dell Technologies) building and operating high-volume batch + streaming pipelines across AWS and Azure. Strong focus on data quality (schema validation, fail-fast checks), reliability (monitoring/alerts, retries), and performance tuning (Spark/partitioning), with measurable runtime reduction and improved downstream trust.”
Senior Software Engineer specializing in distributed systems and FinTech
“Data/analytics-focused engineer who builds end-to-end KPI reporting and validation products used daily by plant leads and leadership to track yield, downtime, and defects. Combines Python/SQL + Power BI data pipelines with strong data-quality practices (automated validation, monitoring/alerts) and has experience designing scalable frontend architecture in TypeScript/React and working in distributed/microservices-style data systems.”
Junior Backend Software Engineer specializing in conversational AI and cloud APIs
“Backend/ML-focused software engineer who built and evolved a Python/FastAPI backend for a large-scale conversational AI platform, decoupling API and inference services to improve stability and deployment velocity. Experienced in production hardening (timeouts/fallbacks/monitoring), secure multi-tenant systems (JWT/RBAC/RLS), and low-risk migrations using shadow deployments and incremental traffic ramp-ups.”
Mid-level Full-Stack Software Engineer specializing in Java/Spring microservices and AWS
“Backend/platform engineer who has owned a real-time business analytics dashboard backend (Python/Flask/MongoDB) and built Kafka event-streaming pipelines with idempotent processing and DLQs. Strong DevOps/GitOps experience deploying containerized microservices to AWS EKS with CI/CD (Jenkins/GitHub Actions/CodePipeline) and ArgoCD auto-sync/drift detection, plus hands-on support for phased hybrid cloud/on-prem migrations using feature flags and replication.”
Junior Full-Stack Software Engineer specializing in Django, AWS, and AI/ML
“Full-stack engineer who built and owned an AI-powered personal statement editor in Next.js (App Router + TypeScript), including dynamic routing, server-side data fetching, and typed API route handlers. Post-launch, handled production monitoring/debugging and shipped reliability/performance upgrades (rate limiting, retries, rollback, DB indexing), reporting a 40% latency reduction from Suspense/streaming and React concurrency patterns. Also implemented a durable Temporal-orchestrated AI document workflow with robust retry/idempotency strategies.”
Mid-level Full-Stack Java Developer specializing in enterprise SaaS and FinTech
“Software engineer with fintech/retirement-fund domain experience who led an internal dashboard consolidating fund transactions, approvals, and reporting into a single workflow tool. Strong in full-stack delivery (React + REST APIs + DB optimization) and in scaling/cleaning messy operational data via modular ETL pipelines (Python/Node), iterating post-launch with performance improvements like caching, pagination, and enhanced filtering.”
Mid-level Data Analytics & ML Engineer specializing in NLP, LLMs, and cloud data platforms
“At KPMG, built and productionized a secure RAG-based LLM assistant that lets business and risk stakeholders query data warehouses in natural language, reducing dependence on data engineers for ad-hoc analysis. Demonstrates strong production rigor (Airflow orchestration, CI/CD, containerization), retrieval/embedding tuning (rechunking, semantic abstraction for structured data), and reliability controls (confidence thresholds, refusal behavior, monitoring and canary evals).”
Mid-level Data Scientist / ML Engineer specializing in streaming ML systems for healthcare and IoT
“ML/GenAI engineer with production experience building an LLM-powered governance layer that summarizes verified drift/performance signals into validation reports and release notes, designed for regulated environments with de-identification and non-blocking fallbacks. Strong Airflow-based orchestration background across healthcare and finance, integrating Databricks/Spark and MLflow for scalable retraining/monitoring. Demonstrated ability to partner with non-technical healthcare operations teams to deliver actionable risk-scoring outputs via dashboards and automated reporting.”
Staff Software Engineer / Technical Architect specializing in cloud data platforms and GenAI agents
“Small-team builder of Promethium’s ‘Mantra’ next-gen agentic text-to-SQL engine, using vector DB + LangGraph tooling and SQL validation/evaluation to improve query accuracy. Experienced in diagnosing production LLM workflow failures via LangSmith traces and in running hands-on developer workshops and pre-sales POCs with live debugging and real customer data.”
Junior Data Engineer / Analyst specializing in AI/ML data infrastructure
“Built and deployed a compliance-sensitive LLM pipeline that extracts rebate logic from hospital–supplier medical contracts, using multi-layer redaction (regex/NER/dictionary), schema-validated structured outputs, and secure placeholder reinsertion. Hosted models on Amazon Bedrock to avoid retraining on sensitive data and improved both accuracy and cost by splitting the workflow into a lightweight section classifier plus a fine-tuned extraction model, orchestrated with LangChain and evaluated via layered, test-driven agent assessments.”
Mid-level AI/ML Engineer specializing in fraud detection and healthcare predictive analytics
“Built and deployed a production LLM-powered calorie-counting chatbot that turns plain-English meal descriptions into normalized food entities, quantities, and calorie estimates using a hybrid transformer + rule-engine pipeline. Emphasizes reliability with schema/constraint guardrails, confidence-based routing (including embedding similarity search fallbacks), and strong observability/metrics (hallucination rate, calibration, latency, cost). Partnered closely with nutritionists to encode domain standards into mappings and validation logic.”
Entry-level AI/ML Engineer specializing in LLMs, RAG, and DevOps automation
“Built and owned a production-scale AI-driven software release/version intelligence platform orchestrated via GitHub Actions that tracks 1000+ upstream repositories and automatically generates SLA-bound JIRA upgrade tickets for hardened container images. Replaced brittle regex/PEP440 parsing with an LLM-based semantic filtering layer plus deterministic validation to handle noisy/inconsistent GitHub tags at scale, with monitoring for coverage, latency, and correctness validated against upstream ground truth.”
Mid-level AI/ML Engineer specializing in LLM, RAG/GraphRAG, and fraud analytics
“LLM/agent engineer who has deployed a production internal assistant to reduce employee inquiry resolution time while maintaining regulatory compliance. Experienced with RAG, hallucination risk triage, and graph-based orchestration (LangGraph) for enterprise/banking-style workflows, emphasizing schema-validated, citation-backed, tool-constrained agent designs and tight collaboration with non-technical business/compliance stakeholders.”
Mid-level Applied AI Engineer specializing in agentic LLM workflows
“Master’s candidate in Data Science (UHV) with 4+ years in AI engineering building production LLM and multimodal systems. Designed an LLM-powered workflow automation platform using RAG over vector stores with guardrails (schema/output validation, fallbacks) and a rigorous evaluation/monitoring framework including drift tracking and shadow deployments. Experienced orchestrating large-scale vision-language pipelines with Airflow and Kubernetes (OCR, distributed training) and partnering with non-technical ops stakeholders to cut cycle time and reduce errors.”
Mid-level Solutions & Pre-Sales Manager specializing in HRMS, analytics, and multi-cloud AI
“Enterprise implementation/deployment specialist focused on HRMS and payroll systems across APAC customers, combining cloud/hybrid (AWS/Azure/GCP) integration work with strong client-facing delivery. Demonstrated ability to debug complex production issues across application, database, and network layers (e.g., isolating VPN/router congestion) and to tailor Python-based data cleaning/scoring/utilities to customer-specific workflows.”
Mid-level Data & AI Engineer specializing in healthcare data pipelines and MLOps
“Built and deployed a production LLM-powered clinical note summarization system used by care managers to speed review of 5–20 page unstructured medical records. Implemented safety-focused validation (prompt constraints, rule-based and section-level checks, human-in-the-loop) to reduce hallucinations while maintaining low latency and meeting privacy/regulatory constraints, integrating via APIs into existing clinical tools.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer and backend data-services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimization, Delta Lake schema evolution, CI/CD (GitHub Actions/Jenkins), and operational reliability patterns such as CloudWatch monitoring and dead-letter queues.”
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and MLOps
“Data engineer at AT&T focused on large-scale telecom (5G/IoT) data platforms, owning end-to-end pipelines from Kafka/Azure ingestion through Databricks/Delta Lake transformations to serving analytics and ML. Has operated at very high volumes (~50+ TB/day) and delivered measurable performance gains (25–30% faster processing) plus improved reliability via Airflow monitoring, robust data quality checks, and resilient external data collection patterns (rate limiting, retries, dynamic schemas).”