Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in NLP, MLOps, and scalable data pipelines
“Built and shipped a production LLM-powered personalized client engagement assistant in the financial domain, balancing real-time recommendations with strict privacy/compliance requirements. Demonstrates strong MLOps/LLMOps depth (Airflow + MLflow, containerized microservices, drift monitoring) and a privacy-by-design approach validated in collaboration with risk and compliance teams.”
Mid-level Data Engineer specializing in streaming and cloud data platforms for financial services
“Data engineering-focused candidate (internship/project experience) who built end-to-end pipelines processing a few million transactional records/day for fraud detection and reporting, using Airflow, Python/SQL, and PySpark with strong emphasis on data quality gates, idempotency, and monitoring. Also implemented an external web/API data collection system with anti-bot tactics and schema-change quarantine, and shipped a versioned Flask API to serve curated warehouse data.”
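The quality-gate and idempotency pattern this profile references can be sketched in plain Python. This is a minimal illustration, not the candidate's actual pipeline: the record shape (`txn_id`, `amount`, `ts`), the gate rules, and the in-memory "warehouse" are all assumptions made for the example.

```python
"""Sketch of a data-quality gate with idempotent loading.

Illustrative assumptions (not from the profile): records are dicts
with 'txn_id', 'amount', and 'ts' fields; the 'warehouse' is an
in-memory dict standing in for a real table keyed on txn_id.
"""

REQUIRED_FIELDS = {"txn_id": str, "amount": float, "ts": str}


def passes_gate(record: dict) -> bool:
    """Quality gate: every required field present with the right type."""
    return all(
        isinstance(record.get(name), typ) for name, typ in REQUIRED_FIELDS.items()
    )


def load_idempotently(records: list, warehouse: dict, quarantine: list) -> dict:
    """Upsert keyed on txn_id, so re-running the same batch is a no-op."""
    for rec in records:
        if not passes_gate(rec):
            quarantine.append(rec)      # schema-change quarantine
            continue
        warehouse[rec["txn_id"]] = rec  # keyed write => idempotent replay
    return warehouse


batch = [
    {"txn_id": "t1", "amount": 10.0, "ts": "2024-01-01"},
    {"txn_id": "t2", "amount": "bad", "ts": "2024-01-01"},  # fails the gate
]
wh, q = {}, []
load_idempotently(batch, wh, q)
load_idempotently(batch, wh, q)  # replay: warehouse unchanged, t2 re-quarantined
```

The keyed upsert is what makes reruns safe: replaying the batch overwrites `t1` with identical data rather than duplicating it, while the malformed `t2` is diverted each time instead of poisoning the warehouse.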
Senior Software Engineer specializing in cloud infrastructure and platform engineering
“Backend engineer with deep experience in security and access-management platforms at JPMorgan Chase, including owning automation for migrating 50+ engineering teams from CyberArk to HashiCorp Vault. Stands out for combining regulated-environment rigor, infrastructure automation, and production operations with practical AI integration in internal access workflows.”
Senior Cloud & DevOps Engineer specializing in AWS and Kubernetes
“AIX/IBM Power Systems engineer with hands-on production incident leadership in a regulated banking environment, using deep OS-level tooling to diagnose CPU entitlement and memory pressure issues. Experienced with HMC/vHMC, VIOS, and zero-downtime DLPAR resizing, plus PowerHA/HACMP clustering and validated failover testing. Also drives migration readiness via Bash/Python automation (60% manual-effort reduction) and phased AIX cloud/hybrid cutovers.”
“Built and deployed a production LLM-powered RAG assistant for semiconductor manufacturing failure analysis, reducing engineer triage effort by grounding outputs in retrieved evidence and gating responses with SPC + ML signals (LSTM anomaly scores, XGBoost probabilities). Experienced with LangChain/LangGraph to ship reliable, observable multi-step agents with branching/fallback logic, and evaluates impact using both technical metrics and business KPIs like mean time to triage and downtime reduction.”
Senior Data Scientist / ML Engineer specializing in cloud ML pipelines and GenAI
“ML/NLP practitioner with experience building a transformer-failure prediction system that combines sensor signals with unstructured maintenance comments using LLM-based extraction and similarity validation. Strong emphasis on production readiness—data leakage controls, SQL-driven data quality tiers, and rigorous bias/fairness validation (including contract/spec evaluation across diverse company profiles).”
Executive Technology Leader (CTO/Chief Architect) specializing in AI, FinTech, and scalable platforms
“Serial entrepreneur who built Verb Technology from a garage startup to a Nasdaq IPO, raising multiple rounds of capital along the way. Invented interactive live-streaming technology that was later acquired by Amazon, and demonstrated rapid product/market responsiveness during COVID by prototyping and launching a user-facing solution while tightly managing AWS costs.”
Senior Data Engineer specializing in cloud lakehouse platforms and streaming analytics
“Data engineer focused on fraud and banking analytics who has owned end-to-end batch + streaming pipelines at very large scale (hundreds of millions of records/day). Built robust data quality/observability layers (schema validation, anomaly detection, alerting) and delivered low-latency serving via AWS Lambda/API Gateway with DynamoDB + Redis, plus external data ingestion/scraping pipelines orchestrated in Airflow with anti-bot protections.”
Senior Full-Stack Software Engineer specializing in microservices and cloud-native systems
“Backend/infra engineer with experience across Nestle, J.P. Morgan, and Capgemini, combining ML systems work (YOLOv8/PyTorch object detection with TFLite edge deployment) with production-grade cloud/Kubernetes operations. Has delivered measurable impact via AWS migrations (25% cost reduction, 99.9% availability), microservice modernization (35% faster processing), and low-latency Kafka streaming for financial dashboards (<100ms) using DLQs and idempotent consumers.”
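The DLQ + idempotent-consumer pattern named in this blurb can be shown without a real Kafka cluster. In the sketch below the "topic" is just a list and processing is simulated; the message ids, handler, and failure rule are illustrative assumptions, not the candidate's implementation.

```python
"""Sketch of the dead-letter-queue + idempotent-consumer pattern.
No real Kafka here: the 'topic' is a list, and the handler and
failure rule are illustrative assumptions."""


def consume(messages, seen_ids, dlq, handler):
    """Process each message at most once per id; route failures to a DLQ."""
    for msg in messages:
        if msg["id"] in seen_ids:    # duplicate delivery => skip (idempotent)
            continue
        try:
            handler(msg)
            seen_ids.add(msg["id"])  # mark processed only after success
        except Exception:
            dlq.append(msg)          # dead-letter queue for poison messages


processed = []


def handler(msg):
    if msg["value"] < 0:             # simulated poison message
        raise ValueError("bad payload")
    processed.append(msg["value"])


topic = [
    {"id": 1, "value": 10},
    {"id": 1, "value": 10},          # at-least-once delivery => duplicate
    {"id": 2, "value": -5},          # poison message
]
seen, dlq = set(), []
consume(topic, seen, dlq, handler)
```

The two halves of the pattern cover the two standard streaming failure modes: dedup-by-id absorbs at-least-once redelivery, and the DLQ keeps a single bad payload from blocking the rest of the partition.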
Mid-level AI/ML Engineer specializing in Generative AI, LLMOps, and MLOps
“Built and deployed an AWS-based LLM/RAG ticket triage and knowledge retrieval system (Pinecone/FAISS + Step Functions + MLflow) that cut support resolution time by 20%. Demonstrates strong production focus on hallucination reduction, PII security, and low-latency orchestration, with measurable evaluation improvements (e.g., ~25% grounding accuracy gain via re-ranking) and proven collaboration with support operations stakeholders.”
Mid-level Data Engineer specializing in real-time analytics and regulated domains
“Data platform engineer focused on large-scale, real-time fraud systems, with hands-on ownership of streaming architectures using Kafka, Spark, Snowflake, and Databricks. Stands out for combining performance tuning and platform automation with LLM/RAG-based enrichment, delivering measurable gains in latency, fraud accuracy, false positives, and analyst decision speed.”
Mid-level AI/ML Engineer specializing in FinTech and Generative AI
“AI/ML engineer with hands-on ownership of enterprise LLM deployments at Freshworks, including a large-scale RAG chatbot serving 15,000+ users across six departments. Stands out for combining deep production engineering skills—AWS microservices, Kubernetes, observability, retrieval quality, and faithfulness evaluation—with strong cross-functional stakeholder leadership and prior large-scale fraud data pipeline experience at Socure.”
Mid-level Software Engineer specializing in cloud-native backend and AI systems
“Candidate takes a disciplined, developer-in-the-loop approach to AI-assisted coding, using AI primarily for brainstorming, suggestions, and optimization while retaining full ownership of architecture and final code decisions. Also actively stays current on AI developments through research papers, communities, and emerging tools.”
“Built and productionized an LLM-powered PDF document Q&A system to eliminate manual searching through long documents, focusing on scalability and answer reliability. Implemented semantic chunking (using headings/paragraphs/tables), overlap, and preprocessing/quality checks to reduce hallucinations, and orchestrated the end-to-end pipeline with Airflow using retries, alerts, and parallel tasks.”
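The structure-aware chunking with overlap described above could look roughly like this. The splitting rules (headings marked with `#`), chunk size, and overlap width are illustrative assumptions, not the candidate's actual preprocessing.

```python
"""Sketch of structure-aware chunking with overlap for a document
Q&A pipeline. Assumptions: the document is plain text where
headings start with '#'; sizes are illustrative."""


def chunk_by_structure(text: str, max_words: int = 50, overlap_words: int = 10):
    """Split on headings, then pack into word-bounded overlapping chunks."""
    # 1. Split into semantic units: each heading starts a new unit.
    units, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            units.append(" ".join(current))
            current = []
        if line.strip():
            current.append(line.strip())
    if current:
        units.append(" ".join(current))

    # 2. Pack units into chunks of <= max_words, carrying overlap forward.
    chunks, words = [], []
    for unit in units:
        words.extend(unit.split())
        while len(words) >= max_words:
            chunks.append(" ".join(words[:max_words]))
            words = words[max_words - overlap_words:]  # keep trailing overlap
    if words:
        chunks.append(" ".join(words))
    return chunks


doc = "# Intro\n" + "alpha " * 60 + "\n# Methods\n" + "beta " * 30
chunks = chunk_by_structure(doc)
```

The overlap is the detail that matters for retrieval quality: the last `overlap_words` of each chunk reappear at the start of the next, so an answer spanning a chunk boundary is still fully contained in at least one chunk.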
Senior Data Engineer specializing in Azure Lakehouse, Databricks/Spark, and Snowflake
“Data engineer/platform builder with experience across PwC and Liberty Mutual delivering high-volume, production-grade pipelines and real-time data services. Has owned end-to-end streaming + batch architectures on AWS and Azure, including web scraping systems, with quantified reliability gains (99.9% availability, 90%+ error reduction, 30% latency reduction) and strong observability/CI-CD practices.”
Mid-level AI/ML Engineer specializing in healthcare NLP and MLOps
“Healthcare/clinical ML practitioner who built and productionized ClinicalBERT-based pipelines to extract and standardize oncology EHR data, improving downstream model F1 from 0.81 to 0.92 while controlling training cost via LoRA/QLoRA. Experienced orchestrating real-time AWS ETL/ML workflows (Glue, Lambda, SageMaker) and partnering with clinicians using SHAP-based interpretability, contributing to an 18% reduction in readmissions and full adoption.”
Senior Data Engineer specializing in cloud data platforms and regulated analytics
“Data engineer at Capital One building AWS-based real-time and batch pipelines and backend data services for financial/fraud use cases. Has owned end-to-end pipelines processing millions of records/day, implemented dbt/Great Expectations quality gates, and tuned Redshift/Snowflake workloads (cutting query latency ~22–25% and reducing pipeline failures ~30–40%) while supporting 15+ downstream consumers.”
Mid-level Data Engineer specializing in cloud ETL pipelines (Azure, AWS, GCP)
“Data engineer/backend developer who owned end-to-end pipelines and external data collection systems, including API ingestion and large-scale web scraping. Worked at ~50M records/month scale, improving processing speed by 20% and reducing reporting errors by 15%, and shipped a Rust-based internal data API with versioning, caching, and strong validation/observability practices.”
Senior Engineering Manager specializing in platform, data/ML, and identity/access systems
“Senior engineering leader from Goodyear’s AndGo startup-like division who scaled the org from 12 to 30+ across pod-based teams and introduced an Architect Guild/ARD governance model. Led a 4-month Europe launch requiring AWS regional infrastructure, GDPR compliance, i18n/l10n, and new EMEA reporting pipelines, and has hands-on depth in API performance, incident response, and GraphQL/Hasura adoption to boost product velocity.”
Mid-level Backend/Platform Engineer specializing in data pipelines, reliability, and AI-assisted ingestion
“Backend engineer who built and scaled a blockchain-based e-voting platform at early-stage startup Elemential Labs, balancing decentralization with real-world operability by centralizing control-plane components while keeping the ledger immutable. Has hands-on experience migrating high-throughput ingestion from Kafka to AWS Kinesis with parallel cutover, strengthening data integrity and read-after-write consistency (Elasticsearch), and hardening pipelines against silent data-quality failures via anomaly detection and self-healing automation.”
Executive Engineering Leader specializing in platform, DX, and customer growth systems
“Builder/technical leader who was brought into Finicity to turn a credit-improvement concept into a viable product—architected, staffed, and launched what became Experian Boost. Delivered a major North American product launch in ~6 months, scaling to ~50,000 new users per day at launch and solving complex ML classification and distributed processing/order-of-operations challenges on AWS.”
Junior Data Infrastructure Software Engineer specializing in distributed pipelines and AI extraction
Junior Full-Stack Engineer specializing in blockchain, cloud, and data platforms
Mid-level Full-Stack Developer specializing in enterprise banking applications