Pre-screened and vetted in Texas.
Mid-level Software Engineer specializing in full-stack web and backend systems
Junior Software Engineer specializing in microservices and FinTech payments
Junior AI/ML Software Engineer specializing in LLM agents and RAG systems
“AI/backend engineer at Canon who helped build and operate an internal production LLM platform that acts as a secure middle layer between users and models, defending against jailbreaks and prompt injection while enabling RAG, memory, and responses grounded in company data. Experienced with LangChain/LangGraph orchestration, vector-DB retrieval, and reliability practices (testing, monitoring, adversarial prompting) to run high-throughput, low-latency AI workflows in production.”
Mid-level AI Data Engineer specializing in GenAI, RAG, and cloud data pipelines
“LLM/agentic AI builder who deployed a production ITSM automation agent on Google ADK integrating ServiceNow and FreshService, with strong safety guardrails (human-approval gating, runbook-only command execution) and rigorous evaluation (500 synthetic tickets; 80%+ reduction in false positives). Also partnered with finance to deliver an AI agent that automated invoice/SOW retrieval and monthly reporting to account managers, cutting manual back-and-forth.”
Mid-level Software Engineer specializing in ML, optimization, and robotics
Mid-level Machine Learning Engineer specializing in reinforcement learning and autonomous systems
Mid-level Full-Stack Engineer specializing in cloud-native and Generative AI systems
Mid-level Software Systems Engineer specializing in cloud infrastructure and AI applications
Intern Full-Stack Software Engineer specializing in web development, data pipelines, and cloud
Entry-level AI/ML Engineer specializing in Generative AI, LLMs, and MLOps
“Built and productionized the MediCloud LLM microservice platform, which lets clinicians query medical data in natural language, orchestrating multi-step RAG-style workflows with LangChain and evaluating/debugging with LangSmith. Delivered measurable gains (consistency ~70%→90%, +20 pts; latency ~2.0s→1.1s, roughly 45% faster) by implementing structured prompts, fallback logic across multiple LLMs, hybrid retrieval tuning, and AWS Lambda performance optimizations (package size, async invocation, caching).”
Junior Software/AI Engineer specializing in GPU-accelerated HPC and machine learning