Pre-screened and vetted.
Senior Data & Platform Engineer specializing in cloud-native streaming and distributed systems
“Financial data engineer who has built and operated high-volume batch + streaming pipelines (200–300 GB/day; 5–10k events/sec) using AWS, Spark/Delta, Airflow, Kafka, and Snowflake, with strong emphasis on data quality and reliability. Demonstrated measurable impact via 99.9% SLA adherence, major reductions in bad records/nulls, MTTR improvements, and significant latency/runtime/query performance gains; also built a distributed web-scraping system processing 5–10M records/day with anti-bot and schema-drift defenses.”
Mid-Level Backend Engineer specializing in SaaS, FinTech, and AI document intelligence
“Full-stack engineer who built an AI-driven document analysis and processing workflow end-to-end, including large-document ingestion, queued async processing, and low-latency retrieval for user-facing flows. Demonstrated practical performance tuning (moving heavy work off request path, polling, caching) and Postgres optimization validated with EXPLAIN ANALYZE, plus durable workflow resilience via retries and dead-letter queues.”
Mid-level Backend/AI Software Developer specializing in data pipelines for FinTech and healthcare
“Data engineer/backend data services builder with end-to-end ownership of production pipelines for a Pfizer client, combining Python/SQL ingestion and transformation with strong data quality controls. Delivered measurable performance gains (~30% faster queries) and improved reliability through monitoring/alerting (Splunk, Prometheus/Grafana), structured logging, and incident response; also built internal REST APIs with versioning and caching and set up GitLab-based CI/CD with containerized deployments.”
Entry-level Full-Stack Software Engineer specializing in FinTech and web applications
“Built end-to-end internal and user-facing automation/data features, including a Selenium-based BU course scraper with around 1,800 users and a CSV export system that became the company standard at Triple. Shows initiative in ambiguous environments, working directly with business stakeholders and resolving production infrastructure issues involving AWS and Terraform.”
Executive technology leader specializing in SaaS product and engineering
“Entrepreneur building an AI-driven outbound growth and automation startup, currently self-funding development through consulting while validating market demand. Stands out for combining technical workflow automation with practical go-to-market experimentation, including deliverability-focused outbound systems and webinar partnerships with local experts to drive trust and conversion.”
Junior Electrical & Computer Engineering student specializing in robotics, embedded systems, and ML
“DXArts PhD researcher and recent UW capstone contributor building autonomous robotics systems with ROS2 (SLAM Toolbox, Nav2) and Gazebo simulation. Currently focused on integrating a 9-DOF SparkFun IMU with motor controls on Raspberry Pi, and developing OpenCV ArUco-marker tracking for an automated BlueROV that can locate and retrieve underwater targets in collaboration with mechanical engineering.”
Mid-level Full-Stack Developer specializing in scalable web apps and AI/ML systems
“Built a healthcare app backend and supporting product pieces from scratch for Maverick Health—covering database schema, API structure, Node.js implementation, and UI design in Figma—while targeting 10,000 patients and keeping AWS run costs to ~$20–$30/month. Shipped an Android closed beta on Google Play and handled real-world launch hurdles like privacy policy compliance and push notification infrastructure.”
“Senior Unity UI/UX developer who led XR interaction design for a VR meetings/events platform (VRChat/Horizons-like), including a modular radial menu that served as the app’s primary control system. Also built a Pokémon Go-style Unity AR experience for a major shopping centre end-to-end, tackling real-world marker scanning challenges (lighting, reflections, installation) through iterative prototyping and testing.”
Senior AI Engineer specializing in Agentic AI and distributed systems
“LLM/agentic workflow engineer with healthcare domain experience who built a HIPAA-compliant multi-agent RAG system for clinical review automation at UnitedHealth Group, achieving 92% precision and cutting latency 40% through async orchestration and Redis semantic caching. Also has strong data engineering orchestration background (Airflow on AWS EMR with Great Expectations) and a proven clinician-in-the-loop feedback process that improved model faithfulness by 18%.”
Junior Software Engineer specializing in AI/ML and Full-Stack Development
“Built production LLM tooling focused on reproducibility and verification by enforcing JSON schemas and using multi-step checks with tools like Firecrawl and Perplexity. Also implemented the containerized infrastructure layer for a 9-agent app on K3s, handling rolling updates and uptime requirements, and has experience advising a non-technical builder on search grounding and LLM data-flow design.”
Mid-level Software Engineer specializing in data pipelines and backend APIs
“Data engineer with Webster Bank experience owning end-to-end pipelines (APIs + databases) processing millions of records/day, improving data quality (25–30% fewer issues) and reliability (~99.9% successful runs). Built resilient external data ingestion/scraping systems (schema-change validation, idempotent backfills, monitoring/alerts) and shipped a FastAPI service exposing curated datasets with versioning and consistently low latency.”
Mid-Level Full-Stack Software Engineer specializing in web apps, data pipelines, and ML
“Software engineer who owned an Order Management System end-to-end at Reliance Jio, improving large-table performance via UI virtualization shipped behind feature flags and refined through direct ops-user observation. Also built an OCR automation tool at Piramal Realty using Python/Tesseract with validation and manual correction fallbacks, driving adoption by operations teams. Experienced integrating with Kafka-based microservices and improving observability using structured logging and correlation IDs.”
Mid-level AI/ML Engineer specializing in generative AI, RAG platforms, and LLM agents
“AI/LLM engineer who has shipped 10+ production applications, including InvestIQ on GCP—a production-grade RAG due-diligence engine that ethically scrapes web/PDF sources, builds a ChromaDB knowledge base, and delivers analyst-style dashboards plus a citation-backed chat copilot. Deep focus on reliability (evidence-only answers, hard citations, refusal gating), retrieval tuning, and orchestration (Airflow/Cloud Composer), plus multi-agent systems (CrewAI with 7 specialized finance agents).”
Mid-Level Software Engineer specializing in blockchain, data pipelines, and functional programming
“Built an LLM-powered production agent that scraped X.com data to categorize and profile accounts, deployed in Docker with automatic relaunch for high uptime. Interview details were limited, but the candidate does have hands-on experience shipping an AI automation workflow.”
Senior Software Engineer specializing in AI systems and platform engineering
“Backend/AI engineer with experience owning production systems in fintech and product startups, including a predictive scaling platform that cut AWS spend by 40% and a social-intelligence feature, built from an ambiguous brief, that doubled MRR from $50K to $100K. Also building AI search and document-processing workflows, with reported 99.7% extraction accuracy and hands-on use of both classical forecasting and modern LLM stacks.”
Junior Data Scientist/Data Engineer specializing in ML pipelines and analytics
“Machine Learning Intern at Docsumo who delivered a customer-facing fraud-detection solution end-to-end: rebuilt the pipeline, deployed a Random Forest model, and shipped a Python/Flask microservice on AWS SageMaker. Drove measurable production impact (precision +30%, processing time cut in half, manual review -60%, customer satisfaction +15%) and demonstrated strong customer integration and live-incident response skills.”
Mid-level AI Engineer specializing in LLM orchestration, RAG, and multi-agent systems
“Research Assistant at the University of Houston who built and live-deployed a production RAG system for 1000+ research documents, using hybrid retrieval (dense+BM25+RRF) with cross-encoder reranking and RAGAS-based evaluation; reported 66% MRR, 0.85+ faithfulness, and 68% lower LLM inference costs. Also built a deployed LangGraph multi-agent research system (Researcher/Critic/Writer) with tool integrations (Tavily, arXiv) and dual memory (ChromaDB + Neo4j), plus freelance automation work delivering a WhatsApp chatbot and n8n workflows for a wholesale clothing business.”
Mid-level Data Engineer specializing in cloud ETL/ELT and lakehouse architecture
“Data engineer focused on sales/marketing analytics pipelines, owning ingestion from CRMs/ad platforms through warehouse serving and dashboards at ~hundreds of thousands of records/day. Built reliability-focused systems including dbt/SQL/Python data quality gates with alerting, a resilient web-scraping pipeline (retries/backoff, anti-bot tactics, schema-change detection, backfills), and a versioned internal REST API with caching and strong developer usability.”
Mid-Level Full-Stack Software Engineer specializing in AI/ML and cloud-native systems
“At BondiTech, built and deployed customer-facing backend improvements for enterprise dashboards handling 1M+ records, redesigning a .NET/Entity Framework API with server-side pagination/filtering and feature-flagged rollout to cut latency from ~15s to ~2s. Experienced integrating customer systems into existing APIs, including stabilizing a legacy CRM sync by normalizing inconsistent IDs, handling strict rate limits with batching, and adding DLQs plus reconciliation reporting.”
Mid-level Software Engineer specializing in backend systems and workflow automation
“Early-career AI engineer currently pursuing a Master's, with hands-on experience building and improving RAG pipelines using LangChain. Stands out for moving beyond naive retrieval into multi-step retrieval and feedback-loop designs that reduce hallucinations, and is now exploring multi-agent systems with distinct retrieval, coding, and validation roles.”
Mid-level Full-Stack Software Engineer specializing in cloud and data engineering
“Backend engineer with experience at Cigna evolving REST API services backed by PostgreSQL, emphasizing reliability/correctness, scalability, and observability. Has hands-on production experience with FastAPI (contract-first design, Pydantic schemas), performance tuning (indexes, caching), and secure auth patterns (OAuth/JWT, RBAC, row-level security via Supabase), plus low-risk incremental rollouts using feature flags and dual writes.”
Mid-level Data Scientist specializing in predictive analytics and LLM-powered data pipelines
“Early-career engineer from BNP Paribas who drove a large-scale observability modernization—selecting and implementing Prometheus/Grafana for a 2000+ server estate, then productionizing it on Kubernetes via Docker/Jenkins. Known for hands-on demos, strong documentation/templates, and pragmatic troubleshooting (including custom Python metrics) that improved visibility and cut debugging time by ~60%.”
Mid-level Data Engineer specializing in cloud lakehouse/warehouse pipelines
“Data engineer with HCA Healthcare experience building and operating end-to-end AWS-based pipelines for clinical and operational reporting (50–100 GB/day), serving curated data into Redshift/Snowflake for Power BI/Tableau. Emphasizes production reliability (Airflow SLAs/retries/alerting, logging/observability) and strong data quality controls (reconciliations, schema/null/duplicate checks), and has shipped versioned REST APIs to expose warehouse data to downstream systems.”
Senior Infrastructure Engineer specializing in FinTech and crypto payments
“Backend/full-stack engineer who built an autonomous trading system end-to-end, emphasizing correctness and long-term reliability via reconciliation, idempotent order handling, and production observability (Sentry, structured logs, alerts). Also built and maintained external-facing APIs (Firebase auth, Swagger docs) and resilient Selenium-based e-commerce scraping/automation with anti-bot and UI-change hardening.”