Pre-screened and vetted.
Mid-level Software Development Engineer specializing in backend systems and ML platforms
Mid-level Technical Game Designer specializing in gameplay systems and Live Ops balancing
Mid-level Full-Stack Developer specializing in AWS modernization and Java/Angular
Junior Software Engineer specializing in full-stack development, cloud infrastructure, and applied AI
“Master’s student at UC San Diego who built an LLM-powered healthcare chatbot for patient history-taking and sepsis-related outputs, using a Node.js backend integrated with FastAPI for RAG/LLM interactions and a Flutter client. Also brings healthcare-AI startup experience: deploying on AWS (ECS/Terraform/Docker) and implementing Kubernetes autoscaling to improve efficiency and reduce costs, with strong iterative evaluation in collaboration with a physician.”
Mid-level Software Engineer specializing in real-time data pipelines and ML deployment
“Ticketmaster data engineer who built CDC-driven Kafka pipelines feeding Snowflake for analytics and data science teams. Hands-on in production operations: scaled Kafka during sudden playoff-driven transaction spikes and improved monitoring to enable preemptive scaling. Known for using small-batch experiments and quantitative metrics to align stakeholders and drive cost-saving architecture changes (e.g., buffering to reduce AWS Lambda invocation frequency).”
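The buffering idea mentioned above (batching records so the downstream AWS Lambda is invoked far less often) can be sketched roughly like this. This is an illustrative stand-in, not the candidate's actual code; the class name, parameters, and the list used in place of a real Lambda call are all assumptions.

```python
import time

class InvocationBuffer:
    """Accumulate records and flush them in batches, trading a little
    latency for far fewer downstream invocations (illustrative sketch)."""

    def __init__(self, flush, max_size=100, max_age_s=5.0):
        self.flush = flush          # callable that would invoke Lambda with a batch
        self.max_size = max_size    # flush when this many records are buffered
        self.max_age_s = max_age_s  # ...or when the oldest record is this old
        self._buf = []
        self._first_ts = None

    def add(self, record):
        if not self._buf:
            self._first_ts = time.monotonic()
        self._buf.append(record)
        too_old = time.monotonic() - self._first_ts >= self.max_age_s
        if len(self._buf) >= self.max_size or too_old:
            self._flush()

    def _flush(self):
        if self._buf:
            self.flush(list(self._buf))
            self._buf.clear()

# Hypothetical usage: 25 records with batch size 10 yields two full
# flushes; the remaining 5 records wait for the size or age trigger.
batches = []
buf = InvocationBuffer(batches.append, max_size=10, max_age_s=60)
for i in range(25):
    buf.add(i)
# batches now holds 2 batches of 10 records each
```

One invocation per batch instead of one per record is where the cost saving comes from; the `max_age_s` bound keeps worst-case latency predictable during quiet periods.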
Mid-level Applied AI Engineer specializing in LLM agents, RAG, and model alignment
“Applied Scientist with legal-tech experience who builds production LLM systems. Created and deployed Quibo AI, a LangGraph-based multi-agent pipeline that turns large markdown/Jupyter inputs into polished blogs and social posts, overcoming context limits via ChromaDB + HyDE RAG. Also built a large-scale iterative code-evolution workflow using multi-model orchestration (GPT/Claude/Gemini) with testing, debugging loops, and evaluation/observability practices.”
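The HyDE retrieval step mentioned above can be sketched as follows: instead of embedding the raw query, an LLM first writes a hypothetical answer, and that answer's embedding drives nearest-neighbor search. The LLM call and the embedding below are toy stand-ins (a real pipeline would use a chat model and ChromaDB); all names here are illustrative assumptions, not the candidate's code.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hypothetical_answer(query):
    # Stub for the LLM: HyDE asks the model to *write* a plausible answer
    # document, which is usually closer to the corpus than the short query.
    return f"notebook cells are converted into blog sections about {query}"

def hyde_retrieve(query, corpus, k=1):
    hypo_vec = embed(hypothetical_answer(query))
    ranked = sorted(corpus, key=lambda doc: cosine(hypo_vec, embed(doc)),
                    reverse=True)
    return ranked[:k]

corpus = [
    "converting jupyter notebook cells into blog sections",
    "quarterly sales figures for the retail team",
]
print(hyde_retrieve("publishing notebooks", corpus))
```

The short query alone shares almost no vocabulary with the relevant document; the generated hypothetical answer does, which is the core of the HyDE trick.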
Junior Data/Backend Engineer specializing in distributed systems and streaming pipelines
Mid-level Backend Engineer specializing in cloud-native distributed systems and data pipelines
Mid-level AI & Machine Learning Engineer specializing in production ML and LLM applications
Mid-level Full-Stack Software Engineer specializing in FinTech and data platforms
Senior Software Engineer specializing in high-throughput systems across FinTech, e-commerce, and data platforms
Mid-level AI/ML Engineer specializing in production ML, NLP, and computer vision
Senior Software Engineer specializing in cloud-native microservices and real-time data pipelines
Senior Data Engineer specializing in cloud lakehouse platforms and healthcare data
Mid-Level Software Engineer specializing in data infrastructure and LLM applications
Senior AI/ML Engineer specializing in production AI systems for healthcare and finance
Senior Data Scientist / ML Engineer specializing in GenAI, LLMs, and NLP
“ML/NLP engineer focused on production GenAI and data linking systems: built a large-scale RAG pipeline over millions of support docs using LangChain/Pinecone and added a LangGraph-based validation layer to cut hallucinations by ~40%. Also built scalable PySpark entity resolution (95%+ accuracy) and fine-tuned Sentence-BERT embeddings with contrastive learning for a ~30% relevance lift, with strong CI/CD and observability practices (OpenTelemetry, Prometheus/Grafana).”
Intern Full-Stack/AI Software Engineer specializing in GenAI and cloud microservices
“Backend engineer who owned the AI/data pipeline layer for an EV-charging management platform (Ampure Intelligence), ingesting real-time charger telemetry via OCPP and serving FastAPI APIs to web/mobile clients. Strong in production reliability for asynchronous systems (state reconciliation, idempotency), Kubernetes GitOps (ArgoCD), Kafka streaming, and zero-downtime cloud-to-on-prem migrations; also improved LSTM-based forecasting through targeted preprocessing.”
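The idempotency pattern called out above (safe handling of redelivered telemetry messages) can be sketched like this. It is a minimal illustration under assumed names; a production system would persist seen keys in a store such as Redis or a database rather than in memory.

```python
class IdempotentHandler:
    """Apply each message's side effect at most once by tracking
    idempotency keys (illustrative sketch, in-memory only)."""

    def __init__(self, apply):
        self.apply = apply      # side-effecting handler, e.g. update charger state
        self._seen = set()

    def handle(self, message):
        key = message["idempotency_key"]
        if key in self._seen:
            return False        # duplicate delivery: skip the side effect
        self.apply(message)
        self._seen.add(key)
        return True

# Hypothetical usage: a redelivered message must not apply twice.
applied = []
handler = IdempotentHandler(applied.append)
msg = {"idempotency_key": "txn-1", "status": "Charging"}
handler.handle(msg)
handler.handle(msg)   # redelivery is ignored; applied still has one entry
```

With at-least-once delivery (Kafka, OCPP retries), deduplicating on a stable key is what makes reprocessing and reconciliation safe.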
Mid-level AI/ML Engineer specializing in Databricks, MLOps, and real-time fraud detection
“ML/LLM engineer building a production real-time fraud-detection system for financial transactions, using a two-tier architecture (fast ML + GPT) to deliver both low-latency decisions and analyst-friendly risk explanations. Experienced in orchestrating end-to-end retraining, drift monitoring, and automated model promotion with Databricks Jobs/Workflows and MLflow, and in partnering closely with fraud analysts to tune alerts, thresholds, and dashboards.”
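The two-tier shape described above can be sketched as follows: a cheap scorer runs on every transaction, and only flagged ones pay for an LLM-generated explanation. Both tiers are stubbed here (a toy rule instead of the ML model, a format string instead of a GPT call); the field names and threshold are illustrative assumptions.

```python
def fast_score(txn):
    # Tier 1 stub: stands in for a low-latency ML model scoring every transaction.
    score = 0.0
    if txn["amount"] > 5000:
        score += 0.6
    if txn["country"] != txn["home_country"]:
        score += 0.3
    return score

def explain_with_llm(txn, score):
    # Tier 2 stub: in the real system this would be a GPT call producing
    # an analyst-friendly narrative, invoked only for flagged transactions.
    return f"Risk {score:.2f}: amount {txn['amount']} originating in {txn['country']}"

def decide(txn, threshold=0.5):
    score = fast_score(txn)             # fast path runs on every transaction
    if score < threshold:
        return {"decision": "approve", "score": score}
    return {                            # only flagged transactions pay LLM latency
        "decision": "review",
        "score": score,
        "explanation": explain_with_llm(txn, score),
    }

print(decide({"amount": 9000, "country": "FR", "home_country": "US"}))
print(decide({"amount": 20, "country": "US", "home_country": "US"}))
```

Keeping the LLM off the hot path is what lets the system give sub-second decisions while still producing readable explanations for the analyst queue.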
Senior Data Engineer specializing in cloud lakehouse and real-time streaming pipelines
“Senior data engineer with experience in both healthcare (CVS Health) and financial services (Bank of America), building large-scale Azure lakehouse pipelines (30+ EHR sources, ~5TB) and real-time streaming services (Event Hubs/Kafka) for patient vitals. Strong focus on reliability and data quality (Great Expectations, monitoring/alerting, schema drift automation), with measurable outcomes like 50% runtime reduction and 99%+ uptime for regulatory reporting pipelines.”