Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in LLM applications and cloud-native systems
“LLM engineer who has shipped production AI systems, including an RFP requirements extraction platform (OpenAI o4-mini + Azure AI Search + FastAPI) achieving 90%+ extraction accuracy and a ~5x throughput gain via grounding, structured outputs, parallelization, and caching. Also partnered with legal/compliance stakeholders at Nexteer Automotive to deliver an AI document comparison tool with traceability and confidence indicators, adopted by non-technical users and saving ~2 FTEs of review time.”
Mid-level Software Engineer specializing in cloud-native systems, automation, and LLM-enabled robotics
“React-focused engineer who built a full-stack analytics/test-metrics dashboard (React frontend + Python backend) and turned common UI pieces (data tables, filter panels, chart wrappers) into a reusable internal component library with docs, examples, and basic tests. Strong on profiling-driven performance optimization (React Profiler, memoization) and on owning ambiguous internal-tool projects end-to-end; now planning to package internal patterns into public open-source components.”
Mid-level Software Engineer specializing in Python automation, DevOps, and microservices
“Backend-focused engineer who built an internal wiki LLM chatbot end-to-end using FastAPI, Kubernetes, and ChromaDB vector search, including frontend integration. Also has strong DevOps/migration experience—automating large work-item and repo migrations (Jira/FogBugz/ADO on-prem to cloud) via Python scripts, JSON mappings, REST APIs, and validation test suites.”
Mid-level Software Engineer specializing in ML platforms and cloud-native backend systems
“Software engineer with experience at Google and the City and County of San Francisco building production AI systems, including a RAG-based internal support chatbot and ML-driven ticket priority tagging. Has scaled data/ML platforms with Airflow on GCP (1M+ records/day, 99.9% SLA) and deployed multi-component systems with Docker and Kubernetes (GKE), using modern LLM tooling (LangChain/CrewAI, Claude/OpenAI, Pinecone/ChromaDB, Bedrock/Ollama).”
Mid-level AI/ML Engineer specializing in NLP, RAG, and MLOps
“Built a production LLM/RAG-based “model excellence scoring” system at Uber to automatically evaluate hundreds of ML models, standardizing quality assessment and cutting evaluation time from days to minutes on GCP. Also delivered an NLP document classification solution for insurance claims at Globe Life, partnering closely with compliance/operations and improving routing accuracy from ~85% under manual review to 93% with the model.”
Intern Software Engineer specializing in distributed systems and security
“Built a production LLM-powered analyst assistant at Discern Security to speed up SOC investigations using a RAG pipeline over security vendor documentation (Python PDF ingestion, vector search). Demonstrates deep LLM engineering in a security-critical setting: structure-aware chunking with custom table parsing, grounded/cited responses, prompt-injection defenses, and post-generation validation, verified via golden datasets and adversarial testing; the tool is used daily by analysts.”
Senior Data Scientist specializing in GenAI agents and causal inference
“Built and deployed a production healthcare medical review agent that automates call-transcript summarization and medication reconciliation using a hybrid deterministic + LangGraph-orchestrated LLM workflow. Demonstrates strong reliability engineering (guardrails, schema validation, confidence thresholds, golden/adversarial eval, Langfuse monitoring) in a regulated environment, delivering 60% lower latency and 70%+ efficiency gains while partnering closely with care managers and operations.”
Intern Full-Stack Software Engineer specializing in cloud data pipelines and internal tools
“Built an internal Meta tool (HiVA Bot) for notification customization and end-to-end task tracking around advertiser-reported issues, including chat-thread creation, org-hierarchy opt-ins, SLA reminders, and search/typeahead features. Implemented the system with a Java/Spring Boot microservices approach and asynchronous patterns, and supported adoption via internal wiki documentation.”
Mid-level Full-Stack Engineer specializing in scalable APIs, cloud infrastructure, and GenAI apps
“Backend/platform engineer with experience across edtech, logistics, and AWS internal systems—owned a production course recommender end-to-end (model serving + APIs + caching/observability), delivering a 30% CTR lift and 20% lower latency. Has scaled real-time delivery visibility/rerouting on Kubernetes/EKS to sub-200ms P95 during demand spikes and built billion-events/day telemetry pipelines on AWS (Kinesis Firehose, Lambda, S3, Redshift) with schema evolution, dedupe, and replay support.”
Senior Full-Stack Python Developer specializing in FinTech and cloud-native systems
Intern Data Scientist specializing in GenAI, RAG, and MLOps
Entry-level Software Engineer specializing in embedded systems and machine learning
Intern Full-Stack Software Engineer specializing in real-time systems and GenAI applications
Mid-level AI/ML Engineer specializing in financial services and generative AI
Senior Generative AI Engineer specializing in LLM platforms, RAG, and MLOps
Mid-level AI/ML Engineer specializing in LLM fine-tuning, NLP, and MLOps
Senior Generative AI & Machine Learning Engineer specializing in LLMs and MLOps