Pre-screened and vetted.
Mid-level Software Engineer specializing in backend systems and event-driven data pipelines
Mid-level Full-Stack Java Developer specializing in cloud-native microservices
Senior Data Engineer specializing in cloud data pipelines and big data platforms
Mid-level Software Engineer specializing in backend services and data engineering
Mid-level Data Analyst specializing in analytics, BI, and data engineering
Mid-level Software Engineer specializing in full-stack systems, AI, and cybersecurity
Mid-level Full-Stack Software Engineer specializing in type-safe systems
Entry-level Software Engineer specializing in AI systems and embedded computing
Mid-level Software Engineer specializing in geospatial AI and cloud security automation
“Cloud engineer and cloud OS SME (Chevron) who productionized large-scale security remediation—using Tanium and Ansible to address CIS benchmark noncompliance across 5,000+ servers with robust logging and RCA handoffs. Also drives adoption of a geospatial AI refinery inspection product by consolidating siloed imagery into an enterprise geospatial database, and presents internally on agentic/LLM tooling (LangChain/LangGraph, LangSmith observability).”
Senior Data Engineer specializing in cloud data platforms and real-time streaming
“Data engineer focused on building reliable, production-grade data systems end-to-end: batch and real-time pipelines (Airflow/Kafka/Spark) with strong data quality, monitoring/alerting, and incident response. Has experience integrating external API/web data with retries, throttling, and schema-change handling, and serving curated datasets to analytics (Power BI) and backend consumers with performance optimizations like Redis caching.”
Senior Software Development Engineer specializing in backend systems and data pipelines
“Backend-focused engineer with healthcare and finance experience (Cardinal Health, JPMorgan) who has owned end-to-end data flows powering dashboards, emphasizing strong validation/data quality and measurable frontend performance gains. Has shipped Spring Boot REST APIs with versioning and Swagger docs, and has stood up an MVP task management system with GitHub Actions CI/CD, Docker, and AWS EC2 deployment.”
Mid-level AI/ML Engineer specializing in agentic AI and production ML systems
“ML/AI engineer with hands-on experience shipping production computer vision and GenAI systems, including a fabric defect detection platform that combined vision models with agentic LLM workflows to reach 89% human-inspector agreement at 200 ms latency. Also built a RAG-based code QA tool for developers and emphasizes production monitoring, evaluation, caching, and reusable Python service design.”
Mid-level AI Engineer specializing in GenAI and RAG systems
“AI engineer who built a production e-commerce system that analyzes product images alongside sales and demographic data to generate actionable creative recommendations, now used by 20+ clients. Also built orchestrated document/agent pipelines (Airflow, LangGraph) including a compliance drift detector auditing 401 compliance documents, with an emphasis on traceability, logging, and production integration.”
Intern Software Engineer specializing in AI/LLMs and full-stack development
“AI/ML infrastructure-focused engineer who has built production RAG systems from scratch (Supabase/pgvector + OpenAI embeddings) and iterated using formal eval metrics to improve retrieval quality. Also debugged real-time audio issues in a LiveKit-based pipeline by correlating packet loss with VAD behavior, and has deep experience building and maintaining inherently brittle, customer-specific financial platform integrations in Python/Playwright, handling 2FA, redirects, token refresh, and rate limits.”
Mid-level Data & AI Engineer specializing in data engineering, analytics, and LLM/RAG apps
“Built a production RAG-based “unified assistant” that consolidates siloed company documents into a single chatbot while enforcing fine-grained access control via RBAC/metadata filtering with OAuth2/JWT. Experienced orchestrating LLM workflows with LangChain/LangGraph + FastAPI (async + caching) and measuring performance via retrieval accuracy and response-time SLAs. Also delivered a churn analytics solution with dashboards and automated retention campaigns using n8n.”
Mid-level Data & Machine Learning Engineer specializing in production ML and data platforms
“Built and deployed a production LLM system that scraped Google Maps menu photos, extracted structured prices via OpenAI, and cross-validated them against website-scraped data to automate data-quality verification at scale (replacing costly manual contractor checks). Demonstrates strong reliability instincts—precision-first prompting, output gating with image-quality metadata, and fuzzy matching/RAG techniques—plus solid orchestration (Dagster/Airflow) and observability (Sentry, Prometheus/Grafana).”
Senior Data Engineer specializing in cloud data platforms and real-time streaming
“Data engineer in healthcare (HCA) who owned end-to-end Azure-based pipelines at very large scale (50M+ daily claims/patient records). Strong focus on reliability: schema-drift fail-fast validation, quarantine layers, and Python/SQL data quality checks that reduced data issues by ~25%, plus performance tuning in Databricks/PySpark and versioned serving in Synapse for downstream consumers.”
Senior Data Scientist specializing in data engineering and analytics
“Data/NLP practitioner with experience in both financial services (Truist) and government (USDA), including an NLP-driven analysis of EU regulations to anticipate US regulatory focus and a major redesign and cleaning of complex public pathogen lab-test datasets. Built production data-quality pipelines with Dagster, Pandera, and Azure Synapse, and is comfortable validating hypotheses with historical backtesting and SME-driven quality controls.”
Mid-level Full-Stack Python Developer & Data Engineer specializing in ETL and web platforms
“Backend engineer who led major modernization efforts at GoDaddy, migrating legacy Perl services to Python/FastAPI with an incremental rollout strategy, containerization (Docker/Kubernetes), and CI/CD (Jenkins/GitHub Actions). Strong focus on secure, reliable API design (JWT, RBAC, PostgreSQL row-level security), rigorous testing, and data integrity—plus experience hardening an automated web-scraping pipeline against changing site structures and downtime.”
Intern Full-Stack Engineer specializing in web applications and AI
“Engineer with hands-on experience both using AI coding agents in production and building AI systems, including chatbot development and BERT fine-tuning at Atos. At Groupr, they applied strong systems judgment to live operational workflows, manually validating concurrency decisions for an admin portal supporting 500+ orders per day.”
Entry-level Software Engineer specializing in full-stack development and machine learning
“Master’s CS candidate with backend internship experience modernizing live operational workflows at NatWest/NetWess, focusing on reliability improvements, safer CI/CD deployments, and incremental refactors using feature flags and rollback paths. Built FastAPI-based APIs with strong security patterns (JWT + 2FA/TOTP, centralized authorization, RLS) and demonstrated attention to edge cases like idempotency and data consistency in a Netflix-clone project.”
Senior ML Engineer & Data Scientist specializing in LLM agents, retrieval/ranking, and MLOps
“Machine Learning Engineer currently at Webster Bank building an enterprise-scale LLM agent for Temenos Journey Manager/Maestro, using RAG-style multi-stage retrieval with FAISS/Pinecone, hybrid dense+sparse search, and LoRA fine-tuning optimized via NDCG/MAP and A/B testing. Previously handled messy incident/telemetry data at Deuta Werke GmbH with deterministic + fuzzy entity resolution, and has strong production data engineering experience across Spark/Hadoop and Python ETL systems.”
Mid-level Software Engineer specializing in AI, full-stack systems, and platform engineering
“Full-stack/AI engineer with experience spanning supply-chain product deployments, biomedical agentic search, and research-grade RAG evaluation. Stands out for owning customer-facing migrations at scale (including 216,000 historical shipments), building measurable LLM systems, and pairing AI experimentation with rigorous evals, rollout controls, and auditability.”