Pre-screened and vetted.
Mid-level AI Engineer specializing in LLMs, MLOps, and healthcare NLP
“Built a production, real-time clinical documentation system at HCA that converts doctor–patient conversations into structured clinical summaries using speech-to-text, LLM summarization, and RAG. Demonstrated measurable gains from medical-domain fine-tuning (clinical concept recall +18%, ROUGE-L from 0.62 to 0.74) while meeting HIPAA constraints via PHI anonymization and encryption, and deployed via Docker/FastAPI with CI/CD and monitoring.”
Mid-level Business Analyst specializing in supply chain and logistics
“Analytics professional with hands-on experience in supply chain and logistics transformation, including enterprise data preparation in SQL, Python automation, and Power BI reporting. Highlights ownership of end-to-end digitization work at Blue Dart: defining operational metrics, aligning cross-functional stakeholders, and delivering measurable gains in transparency, reporting efficiency, and implementation quality.”
Mid-level AI & Machine Learning Engineer specializing in FinTech
“ML/AI engineer with hands-on experience building production systems in financial services, including a real-time underwriting analytics platform at Hartford Financial Services. Stands out for combining classic ML, low-latency API deployment, monitoring, and emerging LLM/RAG design patterns, with measurable impact including 20% better decision accuracy, sub-200ms latency, and 5M+ records processed daily.”
“Software engineer currently building AI-powered backend systems for interview analysis, with end-to-end ownership of an LLM-based monitoring platform. Stands out for combining practical product delivery in an ambiguous early-stage environment with measurable impact: over 40% reduction in manual review effort and roughly 20% lower inference cost.”
Senior AI/Machine Learning Engineer specializing in production ML and IoT platforms
“Backend/cloud engineer who built an AWS serverless IoT system that computes Bluetooth beacon locations from telemetry using heavy scientific Python (NumPy/SciPy/pandas) packaged as Dockerized Lambda, integrated with Java microservices and scheduled batch orchestration. Has deep AWS delivery experience (CI/CD with Code* tools, CloudFormation, cost controls) and has led high-severity incident response including CloudTrail forensics and infrastructure recovery after a compromised-keys crypto-mining attack.”
Mid-level Full-Stack Software Developer specializing in cloud-native microservices
“Full-stack engineer with enterprise experience at Metasystems Inc. (and Qualcomm) building high-traffic, security-sensitive systems—owned a secure transaction processing module end-to-end using Java/Spring Boot, Python/Django, and React. Strong AWS production operations (EKS/ECS/Lambda/RDS/DynamoDB) with IaC (Terraform/CloudFormation), observability, and reliability patterns; also delivered resilient ETL/integration pipelines with idempotency/retries/backfills and achieved a 50% deployment-time reduction through CI/CD and modular refactoring.”
Intern Data Scientist specializing in AI, analytics, and cloud data engineering
“Built a production multimodal LLM-based vendor risk assessment platform that ingests SOC reports and other documents, uses a strict RAG pipeline with grounded evidence (page/paragraph citations), and dramatically reduces analyst review time. Experienced with LangGraph/LangChain/AutoGen for stateful, fault-tolerant agent workflows, and emphasizes reliability (schema validation, guardrails) plus low-latency delivery (~1–2s) through hybrid retrieval, reranking, caching, and model tiering.”
Mid-level Backend Software Developer specializing in cloud-native microservices
“LLM-focused engineer who has shipped multiple production-grade AI reliability systems: an LLM output validation/monitoring service (FastAPI) with prompt versioning and failure analytics, plus a RAG feature using embeddings/vector DBs with retrieval thresholds, schema/context validation, and safe fallbacks. Strong in evaluation loops (groundedness, schema accuracy, human review) and scalable pipelines for messy document ingestion with observability and early detection of data quality issues.”
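The retrieval-threshold-and-safe-fallback pattern this blurb describes can be sketched roughly as follows (a minimal illustration with hypothetical function names, not the candidate's actual code):

```python
# Illustrative sketch: answer only when retrieval confidence clears a
# threshold, otherwise return a safe fallback instead of risking a
# hallucinated answer. `retrieve` and `generate` are assumed callables.

def answer_with_fallback(query, retrieve, generate, min_score=0.75,
                         fallback="I don't have enough context to answer that."):
    """Return a grounded answer only when retrieval confidence is high enough."""
    hits = retrieve(query)  # list of (passage, score) pairs, best first
    strong = [(p, s) for p, s in hits if s >= min_score]
    if not strong:
        return fallback  # safe fallback path: no low-confidence generation
    context = "\n".join(p for p, _ in strong)
    return generate(query, context)
```

In practice the same gate is often paired with schema validation on the generated output before it is returned to the caller.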
Mid-level Data Engineer specializing in real-time streaming and cloud data platforms
“Data engineer with Wells Fargo experience owning an end-to-end lakehouse ETL pipeline on Databricks/Azure Data Factory, processing ~480GB daily and implementing robust data quality/reconciliation across 40+ tables to reach ~99.3% reliability. Strong in performance optimization (cut runtime 5.5h→3.8h), CI/CD and monitoring, and resilient external/API ingestion with retries, schema validation, and backfills.”
Mid-level AI/ML Engineer specializing in GenAI and financial risk & compliance analytics
“Built and deployed a production LLM-powered financial risk and compliance platform to reduce manual trade exception handling and speed up insights from regulatory documents. Implemented a LangChain multi-agent workflow with structured/unstructured data integration (Redshift + vector DB) and emphasized hallucination reduction for regulatory safety using Amazon Bedrock. Strong MLOps/orchestration background across Kubernetes, Airflow, Jenkins, and monitoring/testing with MLflow, Evidently AI, and PyTest.”
Senior Data Engineer specializing in Spark, Kafka, and Databricks Lakehouse platforms
“Data engineer at Fidelity who built and operated a real-time financial transactions lakehouse on AWS/Databricks, processing millions of records daily with Kafka streaming. Demonstrated strong reliability and data quality practices (watermarking, idempotent Delta writes, validation/reconciliation, observability) and delivered measurable improvements (~30% faster jobs and ~30% fewer data issues) while enabling trusted gold-layer analytics for downstream teams.”
Mid-level AI/ML & Full-Stack Engineer specializing in LLM agents and medical RAG systems
“Full-stack engineer at an early-stage startup building an agentic AI application for enterprise systems, combining customer-facing Next.js/React UI work (30% faster load times) with backend/workflow orchestration using FastAPI + n8n, Redis, and RabbitMQ. Previously at Deloitte USI, built BDD Selenium/Java automation and managed 200+ defects end-to-end using JIRA/JAMA to support on-time production releases.”
Senior Full-Stack Developer specializing in Python, AWS serverless, and data workflows
“Backend/data engineer from ALDI Tech Hub who modernized legacy analytics (Excel/SAS) into production-grade Python services on AWS serverless (FastAPI on Lambda behind API Gateway with Step Functions). Strong in reliability and operations (Cognito auth, retries/timeouts, structured logging, CloudWatch alarms) and data pipelines (Glue ETL with schema evolution); delivered measurable SQL tuning gains (query time from 30s to 2s with a 70% CPU reduction).”
Mid-level Full-Stack Software Engineer specializing in FinTech and microservices
“Backend engineer with experience at Discover, Dell, and Carpus building high-concurrency microservices and secure APIs. Delivered measurable impact in fintech workflows by integrating credit bureaus (TransUnion/Experian), cutting loan processing from days to minutes and reducing latency 65% through PostgreSQL tuning and caching. Strong in production security patterns (JWT/RBAC, Postgres row-level security for multi-tenant isolation) and low-risk migrations (shadow mode + incremental rollout).”
Mid-level Full-Stack Software Engineer specializing in AI and data applications
“Analytics-focused candidate with experience building SQL/Python pipelines and dashboards for donor, campaign, and website performance reporting. Has worked with messy multi-source data, standardized metric definitions, and delivered automated reporting that reportedly reduced manual effort by about 80%.”
Senior AI/ML Engineer specializing in Generative AI, NLP, and regulated industries
“Built end-to-end ML and GenAI systems at Northern Trust, including a production RAG-based document intelligence platform for financial reports and contracts. Stands out for combining strong MLOps execution with practical product judgment—improving forecast accuracy by 22%, document review accuracy by 38%, and cutting deployment time by 45% while keeping latency and reliability production-ready.”
Senior Software Engineer specializing in AI/ML and LLM platform delivery
“ML/AI engineer with strong production ownership across predictive ML and Generative AI systems. Has delivered measurable business impact through real-time churn/drop-off prediction, RAG-based document QA, and scalable LLM optimization, with a consistent focus on reliability, safety, latency, and developer productivity.”
Entry-level Full-Stack Engineer specializing in distributed systems and ML platforms
“Early-career/new-grad candidate who built TrendScout AI, an evidence-first market intelligence agent that ingests messy news, extracts entities/events, builds a Neo4j knowledge graph, and answers questions via RAG with citations. Achieved ~95% retrieval relevance by combining ChromaDB semantic search with graph-based retrieval and validating outputs through human evaluation and guardrails to prevent hallucinations.”
Mid-level AI/ML Engineer specializing in healthcare NLP and MLOps
“ML/AI engineer with healthcare payer experience (Signal Healthcare, Cigna) who has shipped production fraud/claims prediction systems using Python/TensorFlow and exposed them via FastAPI/Flask microservices integrated with EHR and Salesforce. Emphasizes operational reliability and trust—Airflow-orchestrated pipelines with data quality gates plus SHAP-based interpretability, A/B testing, and drift/debug workflows—backed by reported outcomes of 22% lower false payouts and 17% higher model accuracy.”
Senior Software Engineer specializing in backend systems, microservices, and AI-enhanced workflows
“Significant contributor/maintainer to an open-source JavaScript event-tracking client SDK, owning API consistency/backward compatibility, high-load batching and retry/backoff improvements, and test/CI + documentation upgrades. Diagnosed production-like issues (missing events under load) via reproduction and logging, then reduced GC pressure and improved predictability with a ring-buffer-based batching redesign while actively triaging issues and reviewing PRs.”
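The ring-buffer batching idea mentioned above, reusing a preallocated, fixed-size buffer so high-load batching does not allocate per event, can be sketched like this (an illustrative Python sketch of the concept; the SDK itself is JavaScript and its actual design may differ):

```python
# Illustrative sketch: a fixed, preallocated buffer that is reused across
# batches, which is the core of reducing GC pressure in a batching hot path.
# Names here are hypothetical.

class RingBatcher:
    def __init__(self, capacity, flush):
        self.buf = [None] * capacity   # preallocated slots, reused across batches
        self.capacity = capacity
        self.size = 0
        self.flush = flush             # callback that ships a full batch

    def add(self, event):
        self.buf[self.size] = event
        self.size += 1
        if self.size == self.capacity:
            self.flush(self.buf[:self.size])  # copy out, then reuse the slots
            self.size = 0

    def drain(self):
        """Flush any partial batch, e.g. on a timer tick or shutdown."""
        if self.size:
            self.flush(self.buf[:self.size])
            self.size = 0
```

A production version would typically add wrap-around head/tail indices and an overflow policy (drop oldest vs. apply backpressure), but the buffer-reuse principle is the same.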
Intern Full-Stack/Backend Software Engineer specializing in test automation and web systems
“Backend/ML engineer who built an end-to-end greenwashing detection system for corporate ESG reports: Python preprocessing pipeline, logistic regression + fine-tuned DistilBERT models, and a Dockerized FastAPI inference service optimized for latency. Internship experience maintaining GitLab CI/CD for TypeScript services (Jest/Playwright), improving pipeline stability and test determinism; familiar with Kubernetes/GitOps concepts and AWS CLI/SSO.”
Senior Software Engineer specializing in cloud-native microservices and AI-enabled platforms
“Infrastructure/operations engineer with hands-on production IBM Power/AIX (AIX 7.x, VIOS, HMC) and PowerHA/HACMP clustering experience, including DLPAR changes, failover testing, and incident recovery. Also delivers modern cloud DevOps work—GitHub Actions CI/CD for Docker-to-Kubernetes on AWS and Terraform-based provisioning of core AWS infrastructure (VPC/EKS/RDS/IAM) with controlled rollouts and drift checks.”
Mid-level Full-Stack Engineer specializing in cloud data platforms and LLM-powered apps
“Full-stack engineer with healthcare and finance experience who has owned end-to-end production systems across Azure and AWS. Built a real-time clinical dashboard at Centene (React + FastAPI + Azure Event Hubs) that cut data latency from ~12 minutes to under 1 minute and was associated with a 30% reduction in intervention delays. Also delivered MVPs in high-ambiguity environments at Accenture during monolith-to-microservices migration, improving uptime and maintainability with measurable results.”
Mid-level Data Engineer specializing in AWS cloud data platforms
“Data engineer with Charter Communications experience modernizing large-scale AWS data lake pipelines: ingesting S3 data, validating against legacy systems, transforming with PySpark/Spark SQL, and serving via Iceberg/Delta tables. Worked at 50M–300M record scale, delivered >99.5% data match, and built monitoring/alerting (CloudWatch/SNS) plus retry orchestration (Step Functions) and data quality gates (Great Expectations).”
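The retry-orchestration-plus-quality-gate pattern named in this blurb (and several others above) can be sketched in a few lines (an illustrative sketch with hypothetical helpers, not the candidate's Step Functions/Great Expectations setup):

```python
# Illustrative sketch: retry transient ingestion failures with exponential
# backoff, then apply a data quality gate before accepting the batch.
# `fetch` and `validate` are assumed callables; `sleep` is injectable for tests.
import time

def ingest_with_retry(fetch, validate, max_attempts=3, base_delay=1.0,
                      sleep=time.sleep):
    """Fetch a batch, retrying transient failures; reject invalid batches."""
    for attempt in range(max_attempts):
        try:
            batch = fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise                              # exhausted retries
            sleep(base_delay * (2 ** attempt))     # exponential backoff
            continue
        if not validate(batch):
            raise ValueError("data quality gate failed")
        return batch
```

Managed orchestrators (e.g. Step Functions) express the same retry/backoff policy declaratively, with the quality gate as a separate state after ingestion.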