Pre-screened and vetted.
Mid-level AI/ML Data Engineer specializing in analytics, ML pipelines, and LLM applications
Mid-level Full-Stack Developer specializing in FinTech and fraud detection
Mid-level AI Data Scientist specializing in financial risk, fraud detection, and NLP/LLM systems
Intern Software Engineer specializing in cloud governance and distributed systems
Junior Multimodal AI & Systems Engineer specializing in robotics and cloud infrastructure
Junior AI Product Engineer specializing in LLM workflows and analytics automation
Mid-level AI/ML Engineer and Developer Educator specializing in GenAI, RAG, and AI community building
Mid-level AI/ML Engineer specializing in MLOps, distributed ML, and RAG pipelines
Senior Data Scientist specializing in Generative AI, NLP, and MLOps
Senior Machine Learning Engineer specializing in NLP, Generative AI, and healthcare/legal AI
Mid-level Software Engineer specializing in AI platforms and backend systems
Executive CTO/VP Engineering specializing in high-performance AI, data systems, and distributed infrastructure
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and fraud detection
“At PwC, built and productionized an agentic RAG enterprise search assistant over 6M internal documents (8M embeddings), deployed across AWS and GCP. Drove major retrieval gains (72%→92% precision via BM25+dense hybrid with RRF and cross-encoder re-ranking), reduced hallucinations 30%, achieved <2s latency at 50–60K queries/month, and cut support tickets 30%—boosting adoption to 2,500 users by adding source-cited answers.”
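The retrieval gain above comes from fusing sparse (BM25) and dense rankings. A minimal sketch of the reciprocal rank fusion step named in the blurb, with illustrative document IDs and the conventional k=60 constant (the actual PwC pipeline and its parameters are not public):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each document's fused score is the sum over lists of 1/(k + rank),
    where rank is 1-based; k=60 is the commonly used default.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; a cross-encoder re-ranker would run after this.
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings from a BM25 (sparse) and a dense retriever.
bm25 = ["d3", "d1", "d7", "d2"]
dense = ["d1", "d5", "d3", "d9"]
fused = rrf_fuse([bm25, dense])
```

Note that "d1" wins despite topping only one list, because RRF rewards documents ranked well by both retrievers.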
Mid-level Data Engineer specializing in cloud data platforms and real-time streaming
“Worked on onboarding a Middle East logistics client processing thousands of invoices/month, building a production-ready pipeline that routes known vendor PDFs to deterministic regex parsers via Tax ID matching and falls back to LlamaParse for unknown layouts. Added financial consistency validation plus human-in-the-loop review and logging/metrics to continuously reduce LLM usage and improve template coverage.”
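The routing pattern described above (deterministic regex parsers keyed by Tax ID, LLM parsing only as a fallback) can be sketched as follows; the registry, field patterns, and the `llamaparse_fallback` stub are all hypothetical stand-ins, not the client's actual templates:

```python
import re

# Hypothetical registry: Tax ID -> regex parsers for that vendor's known layout.
VENDOR_PARSERS = {
    "TAX-12345": {
        "invoice_no": re.compile(r"Invoice No\.?\s*:\s*(\S+)"),
        "total": re.compile(r"Total Due\s*:\s*([\d.,]+)"),
    },
}

def llamaparse_fallback(text):
    # Placeholder for the LLM-based parser used for unknown layouts.
    return {"parser": "llm", "raw": text}

def route_invoice(tax_id, text):
    """Route known vendors to deterministic regex parsing; else fall back to the LLM."""
    patterns = VENDOR_PARSERS.get(tax_id)
    if patterns is None:
        return llamaparse_fallback(text)
    fields = {name: (m.group(1) if (m := rx.search(text)) else None)
              for name, rx in patterns.items()}
    fields["parser"] = "regex"
    return fields

doc = "Invoice No: INV-9 Total Due: 1,250.00"
result = route_invoice("TAX-12345", doc)
```

Logging which branch fired per invoice is what lets the team grow the template registry and steadily shrink LLM usage, as the blurb notes.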
Junior Full-Stack/ML Engineer specializing in LLM applications and cloud deployment
“Full-stack developer with capstone and project experience delivering production-ready systems in unstructured environments, including a Faculty Tracking system for real departmental use. Strong in React performance debugging (re-render optimization with useMemo), Prisma-backed multi-database setups (MySQL local / SQL Server production on a UCI Health VM), and end-user support workflows that feed back into improved Help documentation.”
Mid-level Backend Software Engineer specializing in FinTech APIs and microservices
“Backend/event-driven systems engineer who built an end-to-end ‘software robot’ for AI-driven invoice processing: FastAPI ingestion + OCR integration + classification mapping, with a strong emphasis on reliability (idempotency, retries) and scalability (background workers, event-driven architecture). Experienced in production-grade distributed systems tooling (Kafka, Docker/Kubernetes, GitHub Actions, ArgoCD) and real-time debugging via tracing/telemetry; expects $10k–$12k/month.”
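The reliability pairing named above (idempotency plus retries) is a standard event-driven pattern; a minimal sketch with an in-memory key store standing in for Redis or a database table (names and backoff values are illustrative):

```python
import time

_processed = {}  # idempotency store: key -> result (in-memory stand-in for Redis/DB)

def process_once(idempotency_key, handler, payload, retries=3, backoff=0.1):
    """Run handler(payload) at most once per key, retrying transient failures."""
    if idempotency_key in _processed:          # duplicate delivery: return cached result
        return _processed[idempotency_key]
    last_err = None
    for attempt in range(retries):
        try:
            result = handler(payload)
            _processed[idempotency_key] = result
            return result
        except Exception as err:               # real code would catch narrower error types
            last_err = err
            time.sleep(backoff * 2 ** attempt) # exponential backoff between attempts
    raise last_err

calls = {"n": 0}
def handler(payload):
    calls["n"] += 1
    return payload.upper()

first = process_once("evt-1", handler, "invoice")
second = process_once("evt-1", handler, "invoice")  # duplicate: handler not re-run
```

With at-least-once delivery from a broker like Kafka, the idempotency key makes redelivered events harmless while retries absorb transient OCR or downstream failures.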
Intern Machine Learning Engineer specializing in LLMs, RAG, and vision-language systems
“Robotics ML/software engineer focused on Vision-Language-Action control for 7-DoF robots, replacing tokenized action decoding with continuous regression heads (including a logit-weighted expectation approach) to improve stability and real-time behavior. Strong in ROS1/ROS2 systems integration and debugging closed-loop manipulation issues via latency instrumentation, QoS-aware distributed messaging, and sim-to-real validation using Gazebo/Unity, Docker, and CI pipelines.”
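The logit-weighted expectation mentioned above replaces argmax token decoding with a smooth expectation over discretized action bins; a minimal sketch with made-up bins and logits (the candidate's actual model and bin layout are not described beyond this):

```python
import math

def expected_action(logits, bin_centers):
    """Continuous action as the softmax-weighted expectation over bin centers.

    Instead of argmax-decoding one discrete action token, weight each bin's
    center value by its softmax probability and sum, yielding a smooth output.
    """
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return sum(p / z * c for p, c in zip(exps, bin_centers))

# Three illustrative bins over a joint-velocity range [-1, 1].
action = expected_action([0.0, 2.0, 0.0], [-1.0, 0.0, 1.0])
```

Because the output varies continuously with the logits, small changes in the model's beliefs no longer cause discrete jumps between adjacent action tokens, which is the stability benefit the blurb claims.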
Mid-level Software Engineer specializing in LLM agents and full-stack systems
“At Esri, building a production LLM-powered WebGIS AI framework that embeds an AI assistant into web maps and routes natural-language requests to ArcGIS JavaScript SDK functions via a LangGraph-orchestrated multi-agent system. Emphasizes production reliability and scale (strict tool calling/JSON, live schema validation, query guardrails) and rigorous evaluation/observability using LangSmith, offline prompt datasets, and latency/tool-call accuracy tracking.”
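The "strict tool calling/JSON, live schema validation" guardrails above amount to validating every model-emitted tool call before dispatch. A plain-Python sketch of that gate (not the Esri system, and without LangGraph; the `zoom_to` tool and its schema are invented for illustration):

```python
import json

# Hypothetical tool registry mapping tool names to ArcGIS-SDK-style handlers.
TOOLS = {
    "zoom_to": {
        "required": {"layer", "scale"},
        "fn": lambda a: f"zoomed to {a['layer']} @ {a['scale']}",
    },
}

def dispatch(tool_call_json):
    """Strictly validate an LLM tool call (JSON shape + schema) before dispatching."""
    try:
        call = json.loads(tool_call_json)
    except json.JSONDecodeError:
        return {"error": "malformed JSON"}       # guardrail: reject non-JSON model output
    spec = TOOLS.get(call.get("tool"))
    if spec is None:
        return {"error": "unknown tool"}         # guardrail: no arbitrary function names
    args = call.get("args", {})
    missing = spec["required"] - args.keys()
    if missing:
        return {"error": f"missing args: {sorted(missing)}"}
    return {"result": spec["fn"](args)}

ok = dispatch('{"tool": "zoom_to", "args": {"layer": "parcels", "scale": 5000}}')
bad = dispatch('{"tool": "zoom_to", "args": {"layer": "parcels"}}')
```

Every rejection path returns a structured error the agent can recover from, which is also what makes tool-call accuracy trackable offline.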
“Built and deployed a production RAG-based LLM Q&A and summarization platform for internal documents, emphasizing grounded answers with structured prompting and citations to reduce hallucinations. Experienced orchestrating end-to-end LLM workflows with LangChain plus cloud pipelines (Azure ML Pipelines, AWS), and runs iterative evaluation using both metrics (accuracy/hallucination/latency/cost) and real user feedback to drive reliability.”
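The iterative evaluation loop above tracks accuracy, hallucination, latency, and cost per run; a minimal sketch of aggregating such per-query evaluation records (the records and field names here are made up for illustration):

```python
# Illustrative per-query evaluation records for a RAG Q&A run.
records = [
    {"correct": True,  "hallucinated": False, "latency_s": 1.2, "cost_usd": 0.004},
    {"correct": True,  "hallucinated": True,  "latency_s": 2.1, "cost_usd": 0.006},
    {"correct": False, "hallucinated": False, "latency_s": 0.9, "cost_usd": 0.003},
]

def summarize(records):
    """Roll per-query records up into the run-level metrics named above."""
    n = len(records)
    return {
        "accuracy": sum(r["correct"] for r in records) / n,
        "hallucination_rate": sum(r["hallucinated"] for r in records) / n,
        "avg_latency_s": sum(r["latency_s"] for r in records) / n,
        "total_cost_usd": sum(r["cost_usd"] for r in records),
    }

summary = summarize(records)
```

Comparing these summaries across prompt or retrieval changes, alongside user feedback, is what turns the metric list into a reliability loop.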