Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud-native ETL/ELT and Snowflake analytics platforms
Senior Data Engineer specializing in cloud lakehouse platforms for banking and healthcare
Mid-level AI/ML Engineer specializing in GenAI, computer vision, and MLOps
“AI engineer with experience taking a GPT-4-powered GenAI career coach toward production on Azure AI Foundry, re-architecting the backend with hybrid (vector + keyword) search and RAG optimizations to cut latency by 50%. Also has client-facing TCS experience building healthcare ETL pipelines and delivering error-free monthly reports, plus current work analyzing agentic system reasoning traces and guardrail drift as an AI research fellow.”
Intern Robotics Software Engineer specializing in ROS2 multi-robot autonomy
“Robotics intern at the University of Delaware who built and debugged ROS2-based multi-robot coordination systems, focusing on real-time reliability (timestamp alignment, latency/jitter instrumentation, QoS/executor tuning). Also improved SLAM stability by fixing LiDAR/encoder synchronization and tuning state-estimation parameters, with a simulation-first workflow using Gazebo and Docker/CI for reproducible deployments.”
Mid-level Machine Learning Engineer specializing in fraud detection and LLM systems
“At FiVerity, built and deployed a production LLM/RAG-based Information Gathering Tool for credit union fraud analysts that generates auditable investigation summaries from verified evidence. Focused on high-stakes constraints—hallucination prevention, cross-entity leakage controls, compliance/PII-safe monitoring, and latency—while also shipping customer-facing agentic workflows using CrewAI and LangGraph in close partnership with fraud and compliance stakeholders.”
Mid-level AI/ML Engineer specializing in MLOps, NLP, and real-time ML pipelines
“Built a production, real-time insurance claims document-understanding and fraud-detection pipeline using TensorFlow + fine-tuned BERT, deployed on AWS (SageMaker/Lambda/API Gateway) with automated retraining via MLflow and Jenkins. Addressed noisy documents and latency using augmentation and model distillation (3x faster), cutting claims ops manual review by ~50% and reducing fraudulent payouts.”
Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms
“AI/ML engineer who led production deployment of a multimodal (text/video/image) RAG system on GCP using Gemini 2.5 + Vertex AI Vector Search, scaling to 10M+ documents with sub-second latency and +40% retrieval accuracy. Strong MLOps/orchestration background (Kubernetes, CI/CD, Airflow, MLflow) with proven impact on reliability (75% fewer incidents) and deployment speed (92% faster), plus experience delivering explainable ML (XGBoost + SHAP + Tableau) to non-technical retail stakeholders.”
Mid-level Data Engineer specializing in healthcare data platforms and MLOps
“ML/NLP practitioner with healthcare payer experience at HCSC, focused on connecting messy unstructured clinical notes to structured claims/provider data to improve fraud-analytics workflows. Has hands-on experience fine-tuning transformers in AWS SageMaker, building large-scale embedding search with FAISS, and implementing robust entity resolution using golden datasets, precision/recall calibration, and production monitoring for drift.”
Mid-level Data Engineer specializing in cloud data platforms and AI/ML analytics
“Backend/data engineer in healthcare who built an AWS-based clinical analytics platform from scratch (DynamoDB/S3/Airflow/dbt) with sub-second clinician query goals, 99.9% uptime, and HIPAA-grade controls (KMS encryption, IAM RBAC, audit trails). Also modernized ML delivery by replacing a manual 4-hour deployment with a 30-minute Docker/GitHub Actions CI/CD pipeline using parallel runs, parity testing, and rollback, and caught critical EHR data edge cases (date formats/timezones) that could have impacted patient care.”
Junior Software Engineer specializing in full-stack web and cloud systems
“Co-op engineer at EnFi who built and maintained a multi-tenant prompt library and LLM workflow tooling used by internal teams and external enterprise clients. Led TypeScript/React package design and standardized a typed workflow abstraction across disparate implementations (React, Go, JSON), improving reliability and developer adoption. Delivered measurable performance gains (~25% latency reduction) and owned end-to-end execution including docs, demos, debugging, and deployment.”
Junior AI/ML Engineer specializing in LLM agents and RAG systems
“Backend/data engineer who built a production-ready multi-agent financial intelligence system (Mycroft) that orchestrates specialized AI agents to analyze real-time market data using FastAPI and Pinecone vector search. Brings strong security/reliability instincts (rate limiting, JWT/OAuth2, retries/backoff, health checks) and has caught high-impact data integrity issues in financial migrations (timezone normalization across global legacy systems).”
Senior Full-Stack/Platform Engineer specializing in fintech modernization and AI-native support workflows
“Forward-deployed, full-stack/platform engineer who owns production features end-to-end across frontend, backend, data, and infrastructure (AWS serverless, Terraform, React). Has modernized critical fintech/payment systems (zero-downtime monolith-to-microservices with Kafka event sourcing) and productionized AI-native support workflows (LLM + RAG on Pinecone) with measurable gains in latency, incidents, CSAT, and support efficiency.”
Mid-level AI/ML Engineer specializing in LLMs, RAG pipelines, and cloud MLOps
“Built and deployed a production LLM/RAG system at CVS to automate clinical document review, addressing PHI compliance, retrieval accuracy, and latency; achieved a 35–40% reduction in review effort through chunking and FP16/INT8 optimization. Also has experience translating AI outputs into actionable insights for non-technical stakeholders (sports analysts).”
Mid-level GenAI & Data Engineer specializing in agentic AI systems and AWS Bedrock
“At onedata, built and deployed an LLM-powered, multi-agent analytics platform on AWS Bedrock that lets users create Amazon QuickSight dashboards through natural-language conversation, cutting dashboard build time from ~30 minutes to ~5 minutes. Strong in production concerns (observability, token/cost tracking, model tradeoffs) and in bridging business + technical work, owning pre-sales pitching through delivery with an engineering management background focused on AI product management.”
Mid-level Full-Stack Developer specializing in cloud-native APIs and data workflows
“Built and owned end-to-end ordering and inventory/order management systems for a wholesale distributor, delivering an MVP quickly and iterating based on direct observation of daily users. Experienced with TypeScript/React + Node.js layered architectures and microservices using RabbitMQ, including real-world scaling issues (duplicates, backpressure) and observability practices (correlation IDs, structured logging).”
Mid-level Backend/Data Engineer specializing in Python API services and Azure data pipelines
“Backend/data engineer who builds Python (FastAPI) data-processing API services for internal analytics/reporting, emphasizing modular architecture, async performance tuning, and reliability patterns (health checks, retries, observability). Also migrated legacy on-prem ETL pipelines to Azure using ADF/Data Lake/Functions and implemented a near-real-time ingestion flow with Event Hubs plus watermarking to handle late events and deduplication.”
Mid-level AI/ML Engineer specializing in predictive modeling and cloud ML pipelines
“LLM engineer/data engineer who has deployed production RAG systems for internal-document Q&A, building end-to-end ingestion, embedding, vector search, and FastAPI serving while actively reducing hallucinations and latency through rigorous retrieval tuning and caching. Also experienced in orchestrating cloud data pipelines (Airflow, AWS Glue, Azure Data Factory) and partnering with non-technical business teams to deliver AI solutions like automated document review.”
Mid-level AI Engineer specializing in LLMs, RAG, and data engineering
“AI Engineer Co-Op at Northeastern University who built a production Patient Persona Chat Bot to help nursing students practice clinical interactions, fine-tuning Llama 3 and integrating a LangChain + Pinecone RAG pipeline deployed on Amazon Bedrock. Emphasizes clinical accuracy and reliability with guardrails, retrieval filtering, and continuous evaluation, and also brings strong data engineering/orchestration experience (Airflow, EMR/PySpark, ADF, dbt, Databricks, Snowflake).”
Junior AI & Data Engineer specializing in LLM systems and analytics platforms
“Backend/ML engineer who built a job-search automation SaaS using a modular Selenium ETL pipeline, rigorous testing/observability, and a cost-optimized two-pass LLM ranking approach. Has led high-integrity data extraction from messy multi-city PDF records (95% integrity) and managed modular production rollouts for a 20+ engineer team, with a strong security focus (deny-by-default, row-level access control) in an AI-assisted grading platform.”
Mid-level ML & Data Engineer specializing in GenAI, graph modeling, and fraud/risk analytics
“Built a production AI fraud/risk scoring platform at BlueArc that ingests web business/product/site data, generates text+image embeddings, and connects entities in a graph to detect reuse patterns and links to known bad actors. Optimized for scale with incremental graph re-scoring and delivered investigator-friendly explainability by surfacing the exact signals/relationships behind each score; orchestrated workflows with Airflow and GCP event-driven components (Pub/Sub, Dataflow, Cloud Run) and has recent LLM workflow orchestration experience (retrieval, prompting, scoring).”
Mid-level AI/ML & Data Engineer specializing in MLOps and cloud data pipelines
“AI/ML engineer (Merkle) with hands-on experience deploying RAG-based LLM applications and real-time recommendation engines into production. Strong in cloud/on-prem architectures, GPU autoscaling, caching, and network optimization—delivered measurable latency reductions (40–70%) and improved retrieval relevance by systematically benchmarking chunking/embedding configurations and validating pipelines via CI/CD.”
Mid-level Data Engineer specializing in cloud data pipelines and machine learning
“Experience spans AWS-hosted Python/Flask web apps built in college and enterprise data work at General Motors, including PostgreSQL query optimization on millions of records and multi-tenant-style data isolation using group-based, column-level permission grants. Also built an AWS-hosted meat price prediction dashboard using Dash/Plotly and ran large nightly data pipelines orchestrated with Apache Airflow.”