Pre-screened and vetted.
Senior Data Engineer specializing in Azure, Databricks, and BI/ETL platforms
Senior Java Full-Stack Developer specializing in cloud-native microservices and FinTech
Executive Engineering Leader specializing in data platforms and SaaS
Mid-level Machine Learning Engineer specializing in Generative AI and RAG systems
“LLM/ML engineer who has shipped an enterprise RAG-based Q&A system (LangChain/LlamaIndex, FAISS + Azure Cognitive Search, GPT-3.5/4 via OpenAI/Azure OpenAI) to production on Docker + Kubernetes/OpenShift, addressing hallucinations, retrieval quality, latency/cost, and RBAC/IAM security. Also partnered with operations leaders to turn manual reporting into an LLM-powered summarization and forecasting dashboard driven by real KPIs and iterative stakeholder feedback.”
Mid-level Forward Deployed Engineer specializing in AI automation for finance and data platforms
“LLM/agentic-workflow specialist with healthcare deployment experience who has taken LLM-based automation from prototype to production using operator-in-the-loop validation, RAG-style retrieval, RBAC, and monitoring for sensitive-data compliance. Demonstrated real-time incident resolution (retrieval timeouts caused by a network/proxy misconfiguration) and strong GTM support: hands-on developer workshops and sales demos that translate technical safeguards and real-time ETL into measurable ROI (a 70% reduction in ops effort, ~$200K/year in savings).”
Mid-level AI Solutions Engineer specializing in enterprise GenAI and automation
“Built and shipped multiple production LLM/agentic systems, including an agentic RAG NL-to-SQL analytics app that cut manual reporting from 9 hours/week to 15 minutes by grounding generation in schema-aware retrieval with robust fallback/monitoring. Also implemented a LangChain supervisor-orchestrated enterprise IT automation agent that routes requests for search, identity validation, and action execution, and created a RAG search tool spanning Jira/Confluence/SharePoint for operations stakeholders.”
Mid-level Data & AI Engineer specializing in data engineering, analytics, and LLM/RAG apps
“Built a production RAG-based ‘unified assistant’ that consolidates siloed company documents into a single chatbot while enforcing fine-grained access control via RBAC/metadata filtering with OAuth2/JWT. Experienced in orchestrating LLM workflows with LangChain/LangGraph + FastAPI (async + caching) and in measuring performance via retrieval accuracy and response-time SLAs. Also delivered a churn analytics solution with dashboards and automated retention campaigns using n8n.”
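The access-control pattern this profile describes — RBAC/metadata filtering applied to retrieved chunks before they reach the LLM — can be sketched in a few lines. This is a minimal illustrative sketch, not the candidate's code; the `Doc` shape, `retrieve` function, and role claims are all hypothetical stand-ins for a real vector store and decoded OAuth2/JWT token.

```python
# Hypothetical sketch: RBAC-style metadata filtering in front of vector retrieval.
# Each chunk carries ACL metadata; the caller's JWT roles gate what is retrievable.
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    score: float                                      # similarity score from the vector store
    allowed_roles: set = field(default_factory=set)   # ACL metadata on the chunk

def retrieve(candidates, claims, k=3):
    """Keep only chunks the caller's roles permit, then take the top-k by score."""
    roles = set(claims.get("roles", []))
    visible = [d for d in candidates if d.allowed_roles & roles]
    return sorted(visible, key=lambda d: d.score, reverse=True)[:k]

docs = [
    Doc("HR policy", 0.9, {"hr"}),
    Doc("Eng runbook", 0.8, {"eng"}),
    Doc("All-hands notes", 0.7, {"hr", "eng"}),
]
claims = {"sub": "alice", "roles": ["eng"]}            # decoded from an OAuth2/JWT token
print([d.text for d in retrieve(docs, claims)])        # only eng-visible chunks
```

Filtering before ranking (rather than post-filtering the LLM's answer) is what makes the access control "fine-grained": a user can never see content their roles do not cover, regardless of the query.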
Mid-level AI/ML Engineer specializing in Generative AI and healthcare data
“Built and deployed a production RAG-based document Q&A system on Azure OpenAI to help business teams search thousands of PDFs/Word files, using Qdrant vector search, MongoDB, and a Flask API. Demonstrates strong production engineering (streaming large-file ingestion, parallel preprocessing, monitoring/retries) plus systematic prompt/embedding/chunking experimentation to improve accuracy and reduce hallucinations, and has hands-on orchestration experience with ADF/Airflow/Databricks/Synapse.”
Senior AI/ML Engineer specializing in Generative AI and RAG
“ML/NLP practitioner at Morf Health focused on unifying fragmented healthcare data by linking structured patient/encounter records with unstructured clinical notes. Has hands-on experience with transformer embeddings, vector databases, and domain fine-tuning, plus rigorous evaluation (precision/recall) and human-in-the-loop validation with clinical SMEs to make pipelines production-grade.”
Mid-level Full-Stack Python Developer & Data Engineer specializing in ETL and web platforms
“Backend engineer who led major modernization efforts at GoDaddy, migrating legacy Perl services to Python/FastAPI with an incremental rollout strategy, containerization (Docker/Kubernetes), and CI/CD (Jenkins/GitHub Actions). Strong focus on secure, reliable API design (JWT, RBAC, PostgreSQL row-level security), rigorous testing, and data integrity—plus experience hardening an automated web-scraping pipeline against changing site structures and downtime.”
Mid-level Data Analyst specializing in financial risk and healthcare analytics
“AI/ML engineer focused on real-time, production-grade LLM systems, bringing a robotics-adjacent mindset to latency/accuracy tradeoffs and modular pipelines. Built a scalable RAG-based assistant orchestrated as microservices on Kubernetes with Kafka async messaging, ONNX/quantization optimizations, and monitoring (Prometheus/Grafana), citing a ~35% reduction in hallucinations; has also experimented with ROS Noetic and Gazebo.”
Mid-level Data Analyst specializing in AI/ML and advanced analytics
“Accenture data/ML practitioner who deployed a retail churn prediction and BERT-based sentiment analysis system to production, integrating behavioral + feedback data and operationalizing it with ETL automation, orchestration, and CI/CD. Experienced managing 2TB+ multi-source data, monitoring drift in Databricks, and translating results into Power BI dashboards for marketing teams (including K-means customer segmentation).”
Senior Full-Stack Java Developer specializing in cloud-native microservices
“Backend/platform engineer with production ownership of high-volume transaction analytics and fraud monitoring services built in Java/Spring Boot. Has scaled data processing platforms (including healthcare datasets) and operated Kafka-based event pipelines with schema versioning, deduplication, and replay/backfill workflows, using strong observability via CloudWatch/Grafana and CI/CD with Jenkins.”
Junior Data Scientist specializing in fraud analytics and cloud data platforms
“Built and deployed production LLM-powered document summarization/classification systems using embeddings, vector databases (RAG-style retrieval), and automated evaluation (BERTScore/ROUGE), with a focus on monitoring and scalable cloud pipelines. Also partnered with a fraud analytics team to deliver a transaction anomaly detection solution, translating model outputs into Power BI dashboards and actionable KPIs while iterating on thresholds and alerts based on stakeholder feedback.”
Senior Full-Stack Java Engineer specializing in cloud-native microservices and FinTech
“Backend engineer who owned a Python task management API with JWT auth, async notifications, and performance work (DB optimization/caching) to handle high volumes. Led an on-prem to Azure private cloud migration at Morgan Stanley using GitOps and IaC (Terraform/ARM) with phased rollout and rollback planning. Also built a Kafka real-time streaming pipeline with exactly-once/idempotent consumers and Prometheus/Grafana monitoring.”
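The exactly-once/idempotent consumer pattern mentioned above can be sketched minimally: with Kafka's at-least-once delivery, redelivered messages must be deduplicated by event id so side effects are applied exactly once. The event shape and `handle` function here are hypothetical, not the candidate's implementation.

```python
# Hypothetical sketch of an idempotent consumer: processing is keyed by event id,
# so a redelivered message (at-least-once delivery) is applied exactly once.
processed: set[str] = set()
balance = 0

def handle(event: dict) -> None:
    global balance
    if event["id"] in processed:       # duplicate delivery: skip the side effect
        return
    balance += event["amount"]
    processed.add(event["id"])         # record only after the effect is applied

# Simulated stream in which event e1 is delivered twice:
for e in [{"id": "e1", "amount": 10}, {"id": "e2", "amount": 5}, {"id": "e1", "amount": 10}]:
    handle(e)
print(balance)  # 15 despite the redelivered e1
```

In production the `processed` set would live in a durable store (or the dedupe would ride on Kafka transactional ids), but the invariant is the same: the effect and the dedupe record must commit together.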
Mid-level Data Analyst specializing in healthcare and financial analytics
“Built and productionized an LLM-powered clinical documentation and insights pipeline at Cardinal Health using LangChain + GPT-4 with RAG to summarize long clinical notes, extract medication/dosage entities, and generate structured SQL-ready outputs for downstream analytics. Emphasizes clinical reliability via labeled benchmarking (precision/recall/F1), shadow deployments, clinician human-in-the-loop review, and ongoing monitoring/orchestration with Airflow, Lambda, S3, Postgres, and Power BI.”
Mid-level AI/ML Engineer specializing in NLP, computer vision, and Generative AI
“Built and deployed a production LLM-powered clinical insights/summarization assistant for healthcare teams, including a Spark+Airflow pipeline, fine-tuned transformer models, and a FastAPI Docker service on AWS. Demonstrates strong MLOps/LLMOps depth (Airflow on Kubernetes, custom AWS operators/IAM, MLflow, CloudWatch) and practical reliability work like hallucination mitigation, confidence scoring, and retrieval-backed evaluation with shadow deployments.”
Mid-level Data Engineer specializing in cloud data platforms, Spark, and streaming pipelines
“Data/MLOps engineer (Cognizant background) who owned an AWS/Airflow/Snowflake healthcare transactions pipeline processing ~8–10M records/day and cut pipeline/data-quality incidents by ~33%. Also built and deployed a production FastAPI model-inference service on Kubernetes (Docker, HPA) with strong observability (Prometheus/Grafana), versioned endpoints, and resilient backfill and idempotent ingestion patterns for external data.”
Mid-level Data Engineer specializing in cloud ETL/ELT and healthcare analytics
“Healthcare-focused data engineer/ML practitioner with experience at Lightbeam Health Solutions and Humana building production entity-resolution and semantic similarity pipelines across EMR, lab, and claims data. Uses NLP/ML (spaCy, scikit-learn, BioBERT/LightGBM) plus Snowflake/Airflow and vector search (Pinecone) to improve linkage accuracy (reported 90%) and semantic match quality (reported +12–15%), while reducing manual cleanup by 40%+.”
Senior AI Engineer specializing in Generative AI and RAG applications
“AI engineer who has shipped production LLM systems across customer service and marketing use cases, including a RAG app on Azure OpenAI that speeds retrieval with Redis caching tied to Okta sessions. Also implemented a LangGraph multi-agent workflow that pulls image context from Figma to generate structured HTML marketing emails, adding a verification agent to improve image-selection accuracy while optimizing solution cost for business stakeholders.”
Mid-level AI/ML Engineer specializing in LLM, RAG/GraphRAG, and fraud analytics
“LLM/agent engineer who has deployed a production internal assistant to reduce employee inquiry resolution time while maintaining regulatory compliance. Experienced with RAG, hallucination risk triage, and graph-based orchestration (LangGraph) for enterprise/banking-style workflows, emphasizing schema-validated, citation-backed, tool-constrained agent designs and tight collaboration with non-technical business/compliance stakeholders.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and MLOps in Financial Services
“ML/LLM engineer at Charles Schwab who built a production loan-advisor chatbot integrated with internal knowledge and loan-calculator APIs, adding strict numeric validation to prevent rate hallucinations and optimizing context to control costs. Also runs ~40 Airflow DAGs orchestrating retraining/ETL/drift monitoring with an automated Snowflake→SageMaker→auto-deploy pipeline, and uses rigorous testing plus canary rollouts tied to business metrics and compliance constraints.”
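The "strict numeric validation" this profile mentions — checking any rate the model quotes against a trusted calculator before the answer ships — could look like the following minimal sketch. The loan types, rates, and function names are all hypothetical illustrations, not Schwab internals.

```python
# Hypothetical sketch of strict numeric validation: every percentage the model
# quotes is re-checked against a trusted calculator before the answer is shown.
import re

def trusted_rate(loan_type: str) -> float:
    # Stand-in for the internal loan-calculator API (illustrative values only).
    return {"mortgage_30yr": 6.75, "auto_60mo": 7.10}[loan_type]

def validate_rates(answer: str, loan_type: str, tol: float = 0.01) -> bool:
    """Reject a draft answer if any quoted percentage drifts from the source of truth."""
    quoted = [float(m) for m in re.findall(r"(\d+\.\d+)\s*%", answer)]
    truth = trusted_rate(loan_type)
    return all(abs(q - truth) <= tol for q in quoted)

draft = "Our current 30-year mortgage rate is 6.75%."
print(validate_rates(draft, "mortgage_30yr"))                 # matches the calculator
print(validate_rates("The rate is 5.99%.", "mortgage_30yr"))  # hallucinated rate, rejected
```

Gating on the calculator's output rather than trusting the model's arithmetic is what keeps hallucinated rates out of customer-facing answers; a failed check would trigger a regeneration or a templated fallback.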