Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud ETL, streaming, and data warehousing
Mid-level Full-Stack .NET Developer specializing in Angular, Azure, and AI integrations
Senior Full-Stack Software Engineer specializing in cloud-native FinTech and data pipelines
Mid-level AI Engineer specializing in LLMs, agentic systems, and MLOps
“AI-focused engineer with Infosys experience building Azure/.NET chatbot applications and recent hands-on work with FastAPI/LangChain. Built a hackathon multi-agent legal counsel system that showcases agent orchestration, and emphasizes production readiness via Docker, GitHub Actions CI/CD, pytest automation, and adversarial simulations for auditable AI behavior. No direct robotics/ROS experience to date.”
Mid-level AI/ML Engineer specializing in fraud detection, credit risk, and NLP
“Built and deployed a production LLM-powered university support chatbot on Azure using a RAG pipeline, focusing on reducing hallucinations, improving latency, and handling ambiguous queries via confidence checks and clarification prompts. Also has hands-on orchestration experience (Airflow/Azure Data Factory), including hardening a demand-forecasting ingestion workflow with sensors, retries, and automated alerts, and uses a metrics-driven testing/monitoring approach for reliable AI agents.”
Director-level FinTech and Product Leader specializing in AI recruiting and iOS voice apps
“Built a production voice-first fitness coaching/logging system (‘Hey Coach’) that routes headphone audio, transcribes speech, and uses LLMs to parse workouts into structured data with human-in-the-loop QA and A/B-tested improvements. Also brings a commercial background: led capital markets at a Series A fintech securing $250M+ in commitments and drove early GTM at WorkHQ.com, landing its first Fortune 500 customer.”
Mid-level AI/ML Engineer specializing in LLMs, MLOps, and Azure
“AI/ML engineer who led Impacter AI’s production deployment of a specialized outreach LLM (CharmedLLM) fine-tuned on GPT-4.1, cutting API costs ~40% while boosting outreach effectiveness ~60%. Built the supporting MLOps and data infrastructure (MLflow, Kubernetes, PySpark, Kafka) and has agentic AI experience from University of Dayton, using LangChain + RAG and vector search (Pinecone) to improve reliability and reduce hallucinations.”
Executive AI/ML & Platform Technology Leader specializing in LLMs, GraphRAG, and security
Mid-level Backend Engineer specializing in cloud-native microservices and FinTech systems
Senior Software Engineer / DevOps specializing in cloud-native distributed systems
Senior Machine Learning Engineer specializing in LLMs, RAG, and Computer Vision
“Built a production LLM-powered clinical note summarization and retrieval system that structures patient/provider/payer discussions into standardized outputs (symptoms, treatments, clinical codes, and prior-auth decisions) and stores notes as embeddings for hybrid search and proactive prior-authorization prediction. Experienced with LangChain/LangGraph orchestration, RAG, and grounding against medical code databases, and has communicated model feasibility/limitations to business stakeholders (Virtusa/Comcast).”
Mid-level Software Engineer specializing in full-stack web, Go microservices, and AI integrations
“Backend/LLM engineer who ships production internal tooling end-to-end: automated data-request processing with monitoring-driven improvements (better error diagnostics and lower latency via query/index tuning). Also built a RAG-based internal Q&A system over company docs and operational logs with guardrails (similarity thresholds, fallbacks, response limits) and an eval loop using real user queries and human review to drive prompt/retrieval changes.”
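The guardrail pattern this profile describes (similarity thresholds, fallbacks, response limits) can be sketched in a few lines. This is a minimal illustration with hypothetical names and assumed cutoffs, not the candidate's actual implementation: an answer is generated only when the best retrieval score clears a threshold; otherwise the system returns a canned fallback rather than risking a hallucination.

```python
# Hypothetical sketch of a RAG guardrail: threshold-gated retrieval with a
# fallback response and a hard cap on answer length. Names and values are
# illustrative assumptions, not taken from the candidate's system.

SIMILARITY_THRESHOLD = 0.75   # assumed cutoff, tuned from real user queries
MAX_RESPONSE_CHARS = 800      # assumed response limit

FALLBACK = "I couldn't find a confident answer in the docs; escalating to a human."

def answer_with_guardrails(query, retrieve, generate):
    """retrieve(query) -> list of (similarity, passage); generate(query, passages) -> str."""
    hits = retrieve(query)
    # Guardrail 1: refuse to answer when no passage is similar enough.
    if not hits or max(score for score, _ in hits) < SIMILARITY_THRESHOLD:
        return FALLBACK
    # Keep only passages above the threshold so generation stays grounded.
    passages = [p for score, p in hits if score >= SIMILARITY_THRESHOLD]
    # Guardrail 2: cap response length.
    return generate(query, passages)[:MAX_RESPONSE_CHARS]
```

In the eval loop the blurb mentions, real user queries would be replayed through this function and the fallback rate and human-review verdicts would drive changes to the threshold, the retriever, and the prompt.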
Senior Software Engineer specializing in enterprise platforms and data engineering
“Backend/data platform engineer who owned an enterprise Django REST + PostgreSQL reporting backend and built Python ETL pipelines to normalize 3M+ legacy customer records, improving data reliability by 85%. Strong Kubernetes/GitOps practitioner (Helm, ArgoCD, Jenkins/GitHub Actions) with real-world production debugging experience, plus Kafka streaming at 5M events/day and a zero-downtime monolith-to-event-driven microservices migration on AWS that cut infra costs by 42%.”
Mid-level Data Scientist specializing in GenAI, RAG, and forecasting
“ML/NLP engineer focused on large-scale data linking for e-commerce-style catalogs and customer records, combining transformer embeddings (BERT/Sentence-BERT), NER, and FAISS-based vector search. Has delivered measurable lifts (e.g., +30% matching accuracy, Precision@10 62%→84%) and built production-grade, scalable pipelines in Airflow/PySpark with strong data quality and schema-drift handling.”
Mid-level Data Scientist specializing in credit risk, fraud detection, and ESG analytics
“AI/LLM practitioner who has deployed production chatbots across e-commerce, HRMS, and real estate, focusing on retrieval-first workflows for factual tasks like product and property search. Optimized intent understanding and significantly improved latency by using lightweight embeddings and tuning the inference pipeline on Groq (Llama 3.3), while applying modular orchestration and measurable production evaluation.”
Mid-level GenAI Engineer specializing in RAG systems and AI agents
“LLM/agentic systems builder who has deployed production solutions for a resource management firm, using an MCP-driven architecture with Neo4j + Elasticsearch and a ChatGPT frontend to generate candidate/company “SmartPacks” and answer entity Q&A. Also built a LangGraph/LangSmith-orchestrated multi-agent workflow that automates data-infra change requests end-to-end (impact analysis, SQL + tests, and PR creation), and delivered a ~60% latency reduction through TTL-based context caching while improving accuracy via a business data dictionary.”
Junior Data Engineer specializing in data pipelines and streaming ingestion
“Backend/data platform engineer who owned a near-real-time patient feedback ingestion system, building a FastAPI + Kafka service with Snowflake/Airflow orchestration. Demonstrates strong production Kubernetes/GitOps practices on AWS EKS (Helm, Argo CD, Sealed Secrets) and solved real-time data integrity issues via idempotent processing with Redis.”
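The idempotent-processing approach mentioned here is a standard pattern worth spelling out: each Kafka message carries a unique ID, and a set-if-not-exists key marks it processed so redeliveries are skipped. The sketch below is an assumption-laden illustration (hypothetical event shape and key names); an in-memory dict stands in for Redis, which in production would be a `SET key NX EX <ttl>` call.

```python
# Hypothetical sketch of idempotent event processing. DedupStore stands in for
# Redis SET-with-NX; the event shape ({"id": ..., "text": ...}) is assumed.

class DedupStore:
    """In-memory stand-in for Redis `SET key NX` (set only if absent)."""
    def __init__(self):
        self._seen = set()

    def mark_if_new(self, key):
        # Atomic in real Redis; here a simple membership check suffices.
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

def process_events(events, store, handle):
    """Apply handle() exactly once per unique event ID, skipping redeliveries."""
    processed = 0
    for event in events:
        if store.mark_if_new(f"feedback:{event['id']}"):  # idempotency key
            handle(event)
            processed += 1
    return processed
```

With this shape, Kafka's at-least-once delivery becomes effectively exactly-once at the application level: a redelivered message finds its key already set and is dropped before it can double-count patient feedback.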
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time ~40% while reducing config drift. Also supported a legacy on-prem data warehouse/backend migration to GCP using phased migration and parallel validation to meet strict reliability/SLA needs.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics engineering
“Built and deployed a production LLM-powered demand and churn forecasting system for an e-commerce client, combining open-source LLMs (LLaMA/Mistral) and Sentence-BERT embeddings to generate business-friendly explanations of forecast drivers. Strong focus on data quality and model trust (validation, baselines, segmented monitoring) and production reliability via Airflow-orchestrated pipelines with readiness checks, retries, and ongoing drift monitoring and A/B testing.”
Mid-level Software Engineer specializing in cloud and FinTech systems
“Backend/AI engineer who has built and operated production Node.js/Express services on AWS (Postgres/Redis) and has hands-on experience shipping an AI-powered support agent using RAG (Pinecone + LLM) with grounding checks and evaluation for hallucination rate. Demonstrates strong production reliability/performance debugging, including reducing peak latency from ~2s back to sub-300ms through query and caching optimizations, plus designing agent workflows with retries and human-in-the-loop escalation.”
Senior Engineering Manager specializing in AI platforms and cloud-native backend systems
“Player-coach engineering leader who stayed hands-on (coding/reviews) while leading delivery, including designing an event-driven AI workflow engine with explicit state modeling and robust retries. Built near-real-time enterprise analytics for campaign measurement and drove reliability/process improvements (observability, incident runbooks, release management). Introduced lightweight CI/CD and automated testing to cut release time by ~40% while maintaining quality.”