Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in LLMs, RAG, and agentic AI systems
Mid-level Data Scientist / Software Engineer specializing in AI automation and cloud microservices
Junior Data Scientist specializing in cybersecurity and AI/ML
Mid-level Machine Learning Engineer specializing in LLMs and RAG systems
Intern Data Scientist / ML Engineer specializing in predictive modeling and data pipelines
Mid-level AI/ML Engineer specializing in MLOps, streaming data, and NLP/CV
Mid-level Data Analyst specializing in marketing analytics and machine learning
Mid-level AI/ML Engineer specializing in GenAI, RAG, and multi-agent LLM systems
Junior Data Engineer specializing in Azure, CRM data pipelines, and marketing personalization
“LLM/AI engineer who has deployed production RAG-based conversational analytics and Text-to-SQL systems over Snowflake and curated data marts, with an emphasis on enterprise-grade guardrails for accuracy, security, and cost. Notable for a structured approach to reducing hallucinations (curated metric/table registry, SQL validation, RBAC, and citation-backed responses) and for building resilient, observable multi-step agent workflows using LangChain/LlamaIndex and Airflow.”
Junior Data & AI Engineer specializing in cloud AI and analytics
“Built production AI backend systems in healthcare and e-commerce, including a healthcare agent that automated clinical workflows like medication refills, immunizations, and scheduling using FHIR APIs and cloud-native infrastructure. Strong in end-to-end backend ownership, LLM orchestration, and adding guardrails/validation for high-stakes and customer-facing AI workflows.”
Junior Data Analyst specializing in business analytics and machine learning
“Analytics-focused candidate with hands-on project experience in SQL data preparation and Python-based churn modeling. Demonstrated a practical approach to turning messy multi-source data into reporting tables, rigorously validating data quality, and translating churn insights into targeted retention strategies.”
Mid-level Data Engineer / Software Engineer specializing in streaming and cloud data platforms
“Backend engineer with deep Kafka/FastAPI microservices experience who redesigned a notification pipeline to cut end-to-end latency from ~5s to ~3s (including custom partition assignment and consumer tuning). Led a high-stakes ClickUp-to-Oracle migration of 1M+ records using idempotent ETL, reconciliation, and shadow deployment to achieve >99% integrity with zero downtime, and has hands-on production security implementation with Django/DRF (JWT + RBAC).”
Mid-level AI Engineer and Data Scientist specializing in LLM agents and RAG systems
“Built a production-grade LLM evaluation and regression system that stress-tests models across hundreds of iterations, combining LLM-as-judge, semantic similarity, statistical metrics, and rule-based checks, with results delivered via stakeholder-friendly HTML reports and dashboards. Experienced in orchestrating multi-agent RAG workflows with LangChain/LangGraph and event-driven GenAI pipelines in n8n that integrate OCR, speech-to-text, and external APIs, with a strong emphasis on reliability, observability, and explainable failures.”
Mid-level Data Scientist specializing in Generative AI and LLMOps
“Built a production-grade, semi-automated document recognition and classification system for large volumes of scanned PDFs, starting with little to no labeled data and handling highly variable scan quality. Deployed on AWS using SageMaker + Docker and orchestrated on EKS with a microservices design that scales CPU-heavy OCR separately from GPU inference, with strong reliability controls (validation, fallbacks, retries, readiness probes).”
Mid-level Full-Stack Software Engineer specializing in cloud-native apps and ML services
“Software engineer who deployed and stabilized a real-time analytics platform at Senecio Software, focusing on production reliability, observability, and performance under load. Experienced in debugging issues that span distributed services and networking (e.g., tracing timeouts to packet loss caused by a misconfiguration) and in extending Python (FastAPI/Django) APIs for customer-specific analytics features in a configurable, maintainable way.”
Mid-level AI Data Engineer specializing in GenAI, RAG, and cloud data pipelines
“LLM/agentic AI builder who deployed a production ITSM automation agent on Google ADK integrating ServiceNow and FreshService, with strong safety guardrails (human-approval gating and runbook-only command execution) and rigorous evaluation (500 synthetic tickets; 80%+ false-positive reduction). Also partnered with finance to deliver an AI agent that automated invoice/SOW retrieval and monthly reporting to account managers, reducing manual back-and-forth.”
Mid-level AI Engineer specializing in Generative AI and LLM systems
“Built and deployed a production-grade, multi-agent Text-to-SQL assistant that lets non-technical stakeholders query large enterprise databases in natural language. Uses Pinecone-based schema retrieval + LLM reasoning (Gemini/Claude/GPT) with a dedicated validation agent (schema/syntax checks and safe dry runs) to reduce hallucinations and improve reliability, while optimizing latency and cost via async execution and embedding caching.”
Mid-level AI Engineer specializing in ML, LLM applications, and data automation
“Data/ML practitioner who has built a production RAG-based knowledge assistant integrated into Microsoft 365 and internal dashboards, helping employees query internal documents in plain English. Experienced in orchestrating and hardening ETL pipelines with Airflow and Azure Data Factory (validation, retries, monitoring), and in running end-to-end model evaluation and production performance tracking via Power BI.”
Mid-level Data Engineer specializing in cloud-native batch and streaming pipelines
“Data/ML platform engineer with ~6 years in financial services and enterprise data platforms, building regulated fraud/credit-risk pipelines on AWS (Airflow, EMR/Spark, MLflow) and an Azure lakehouse ingesting 50+ sources and serving ~100M records/day. Also led an early-stage deployment of a RAG-based internal AI search tool using AWS Bedrock and LangChain with automated evaluation to validate LLM accuracy.”