Pre-screened and vetted.
Mid-level Machine Learning Engineer specializing in NLP and cloud MLOps
“Built and deployed a production LLM-powered internal documentation assistant using embeddings, a vector database, and a RAG pipeline to reduce time spent searching PDFs/manuals. Experienced in orchestrating end-to-end LLM workflows with Airflow/LangChain, improving reliability via monitoring/error handling, and driving measurable quality through retrieval and hallucination-focused evaluation metrics.”
Mid-level Machine Learning Engineer specializing in MLOps, NLP, and Computer Vision
“ML/AI engineer with production experience across retail and healthcare: built a real-time computer-vision shelf monitoring system at Walmart and cut edge inference latency by ~30% using TensorRT/ONNX and pruning. Also partnered with CVS Health clinical/pharmacy teams to deliver a medication-adherence predictive model, using Streamlit explainability dashboards and achieving an 18% adherence improvement.”
Senior Data Engineer specializing in Azure Lakehouse, Databricks/Spark, and Snowflake
“Data engineer/platform builder with experience across PwC and Liberty Mutual delivering high-volume, production-grade pipelines and real-time data services. Has owned end-to-end streaming + batch architectures on AWS and Azure, including web scraping systems, with quantified reliability gains (99.9% availability, 90%+ error reduction, 30% latency reduction) and strong observability/CI-CD practices.”
Mid-level Full-Stack Developer specializing in FinTech and enterprise web platforms
“Financial-services AI engineer who shipped a production investment research assistant using RAG over internal research reports, SEC filings, and meeting transcripts, with a strong emphasis on truthfulness and guardrails. Built a structured evaluation loop (200+ golden test cases, RAG Triad metrics) that directly improved retrieval quality (e.g., fixing year-mismatch retrieval, boosting sensitive-query performance by 18% and cutting hallucinations to near zero) and scaled ingestion to ~10k messy documents with RabbitMQ + OpenTelemetry.”
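The golden-test evaluation loop described above could be sketched roughly as follows. This is a minimal illustration, not the candidate's actual code: the case structure, the year-extraction heuristic for the year-mismatch check, and the toy retriever are all invented for the example.

```python
import re
from dataclasses import dataclass

@dataclass
class GoldenCase:
    """One golden evaluation case: a query plus the doc IDs that must be retrieved."""
    query: str
    expected_doc_ids: set

def year_of(text: str):
    """Extract a four-digit year if present (heuristic behind the year-mismatch check)."""
    m = re.search(r"\b(19|20)\d{2}\b", text)
    return m.group(0) if m else None

def evaluate(retrieve, cases):
    """Run each golden case through the retriever; flag misses and year mismatches."""
    failures = []
    for case in cases:
        docs = retrieve(case.query)  # -> list of (doc_id, doc_text)
        got_ids = {doc_id for doc_id, _ in docs}
        if not case.expected_doc_ids <= got_ids:
            failures.append((case.query, "missing expected docs"))
        q_year = year_of(case.query)
        if q_year and any(year_of(t) not in (None, q_year) for _, t in docs):
            failures.append((case.query, "year-mismatch in retrieved docs"))
    return failures

# Toy retriever standing in for a real vector-store lookup.
corpus = {
    "r1": "Revenue commentary for fiscal year 2023.",
    "r2": "Revenue commentary for fiscal year 2022.",
}
def retrieve(query):
    q_year = year_of(query)
    return [(i, t) for i, t in corpus.items() if year_of(t) == q_year]

cases = [GoldenCase("What drove 2023 revenue?", {"r1"})]
print(evaluate(retrieve, cases))  # [] when retrieval respects the query year
```

In a real pipeline the retrieval miss and mismatch flags would feed RAG-Triad-style metrics rather than a simple failure list.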
Junior Machine Learning Engineer specializing in MLOps and statistical modeling
“Integration engineer at ES Foundry who led deployment of ELsentinel, a production solar-cell quality monitoring system based on electroluminescence (EL) images, using a Swin Transformer classifier (>0.8 F1 across 15+ classes) plus a real-time prediction dashboard. Strong at solving messy labeling/data-quality problems in collaboration with process teams and at shipping ML systems despite limited compute/infrastructure.”
Mid-Level Software Engineer specializing in backend microservices and FinTech data pipelines
“Backend engineer at Goldman Sachs who built LLM-powered reconciliation/reporting services and high-throughput Kafka pipelines (8M+ events/day). Strong in production-grade Python/FastAPI microservices on Kubernetes with GitOps-style CI/CD, plus experience migrating legacy reporting/settlement services onto an internal Kubernetes platform using shadow deployments and gradual cutovers.”
Mid-level Data Engineer specializing in cloud data pipelines and real-time streaming
“Data engineer with PNC Bank experience owning high-volume financial transaction pipelines end-to-end (Kafka/REST ingestion through Spark/Glue transformations to Redshift serving) for risk and fraud analytics. Built strong reliability and data quality practices (Great Expectations, reconciliation, Airflow alerting, idempotent retries, incremental/windowed processing), reporting 40% ingestion efficiency gains and ~99.9% data accuracy.”
Senior Software Engineer specializing in low-latency ad targeting and distributed backend systems
“Backend/platform engineer who built a high-scale audience segmentation and real-time targeting system using Spark/Glue + S3/Hudi and low-latency API services backed by Redis/relational stores. Demonstrates strong production rigor: Spark performance tuning to eliminate OOM failures, API idempotency/caching to cut p95 latency ~40%, and careful dual-run/feature-flag migrations with reconciliation and rollback runbooks. Experienced implementing layered security with JWT/OAuth, RBAC/ABAC, and database row-level security to prevent privilege escalation.”
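The API idempotency/caching pattern credited above with the p95 latency cut can be sketched roughly like this. An in-memory dict stands in for what would be Redis in production, and all names are illustrative assumptions, not the candidate's code:

```python
import hashlib
import json

class IdempotentHandler:
    """Caches responses by idempotency key so retried requests are not re-processed.

    In production the cache would live in Redis with a TTL; a dict stands in here.
    """
    def __init__(self, process):
        self.process = process   # the expensive / side-effecting operation
        self.cache = {}

    @staticmethod
    def key_for(payload: dict) -> str:
        # Derive a stable key from the request body (clients may also supply one).
        body = json.dumps(payload, sort_keys=True).encode()
        return hashlib.sha256(body).hexdigest()

    def handle(self, payload: dict):
        key = self.key_for(payload)
        if key in self.cache:            # retry or duplicate: serve cached result
            return self.cache[key], True
        result = self.process(payload)   # first delivery: do the real work
        self.cache[key] = result
        return result, False

# Demo: a side-effecting operation that must run at most once per logical request.
calls = []
def create_segment(payload):
    calls.append(payload)
    return {"segment_id": len(calls)}

handler = IdempotentHandler(create_segment)
first, was_cached = handler.handle({"audience": "gamers"})
again, was_cached2 = handler.handle({"audience": "gamers"})
print(first == again, was_cached, was_cached2, len(calls))  # True False True 1
```

Serving duplicates from cache both protects downstream stores from double writes and skips the expensive path entirely on retries, which is where the latency win comes from.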
Intern Machine Learning Engineer specializing in LLMs, MLOps, and NLP
“Built and deployed a production LLM-driven Dungeons & Dragons game where the model acts as a dungeon master, adding a structured combat system and a macro-state tree to ensure campaigns converge to a clear ending. Fine-tuned Gemini 2.5 Flash on Vertex AI and deployed on GCP with Kubernetes, using RAG over DnD rules/spells plus multi-agent orchestration (intent-based routing between narrative and combat agents) to reduce hallucinations and improve reliability.”
Mid-level Software Engineer specializing in backend systems and AI automation
“Built a production Python microservice around Grafana Loki focused on reliability, with checkpointing, idempotency, replay tooling, tracing, and alerting to prevent data loss and silent lag. Also has hands-on experience hardening brittle Playwright automations against dynamic UIs, auth expiry, rate limits, MFA, and bot-detection constraints, plus turning tribal-knowledge SOPs into explicit state-machine-driven workflows.”
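Turning a tribal-knowledge SOP into an explicit state-machine-driven workflow, as described above, might look like this minimal sketch. The states and events are invented for illustration; the point is that any undocumented path becomes an explicit error instead of silent drift:

```python
# Minimal sketch: an SOP encoded as an explicit state machine.
# Each state maps an allowed event to the next state; anything else is
# rejected, surfacing undocumented "tribal knowledge" paths as errors.
TRANSITIONS = {
    "logged_out":   {"login_ok": "dashboard", "mfa_required": "awaiting_mfa"},
    "awaiting_mfa": {"mfa_ok": "dashboard", "mfa_timeout": "logged_out"},
    "dashboard":    {"open_report": "report_view"},
    "report_view":  {"export_done": "done"},
}

def run_sop(events, start="logged_out"):
    """Drive the workflow through a sequence of observed events."""
    state = start
    for event in events:
        allowed = TRANSITIONS.get(state, {})
        if event not in allowed:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        state = allowed[event]
    return state

print(run_sop(["mfa_required", "mfa_ok", "open_report", "export_done"]))  # done
```

In a browser-automation setting, each event would be emitted by a Playwright step, so auth expiry or an unexpected MFA prompt lands in a named state with a defined recovery transition rather than an ad-hoc retry.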
Intern Software Engineer specializing in AI and full-stack development
“Early-career software engineer with internship experience at CirrusLabs building a voice-enabled CRM workflow that integrated Google Text-to-Speech and GPT-based processing for automated deal creation. Stands out for a reliability-focused approach to AI integrations, including validation, structured logging, prompt refinement, and hardening asynchronous API/UI behavior in real-world application flows.”
Mid-level AI/ML Engineer specializing in multimodal AI and recommendation systems
“ML/AI engineer with hands-on ownership of a production LLM/RAG system at Goldman Sachs, focused on workflow automation and large-scale document search for operational teams. Combines strong MLOps and backend engineering skills with practical GenAI evaluation and safety practices, and cites measurable impact including 22% better task guidance accuracy and sub-second search across millions of records.”
Mid-level Software Engineer specializing in cloud platforms, SRE, and ML-powered engineering tools
“Platform-focused engineer/technical program leader working in silicon/wafer validation environments, with hands-on experience securing access to sensitive test results and engineering tooling. Has implemented RBAC/least-privilege controls with Azure Entra ID, Key Vault, PAM and integrated Checkmarx into dev workflows, while also deploying ML services on AKS using Bicep/Helm/Docker and Azure DevOps CI/CD with strong monitoring and incident response practices.”
Mid-Level AI Engineer specializing in NLP, computer vision, and LLM applications
“LLM/RAG practitioner who productionized an LLM-driven customer communication and transaction understanding system at PayPal, emphasizing privacy/compliance guardrails and large-scale data normalization. Experienced in real-time debugging of hallucinations via retrieval pipeline tuning and in leading hands-on developer workshops and sales-aligned POCs to drive adoption.”
Mid-level Back-End Python Developer specializing in cloud-native microservices and FinTech
“Backend engineer focused on building production-ready Python services (Flask/FastAPI) with strong performance and scalability instincts—Celery/Redis background processing, robust multi-tenant isolation (Postgres RLS), and pragmatic CI/Docker operations. Demonstrated measurable DB optimization impact (cut a critical analytics query from ~1–2s to ~100–150ms) and has hands-on experience integrating LLM/ML workflows (OpenAI, LangChain, embeddings, Redis/FAISS vector stores) without degrading API responsiveness.”
Senior Software Engineer specializing in Cloud DevOps and AWS automation
“Backend/automation engineer who led the design of an OOP Python test automation framework for AWS infrastructure (Behave + Jenkins), cutting regression effort from weeks to a 3–4 hour run. Has hands-on cloud and DevOps experience across AWS (boto3, ECS, AMI automation via GitHub Actions) plus data/migration work including on-prem-to-cloud Oracle Retail DB migration with rollback planning and a Kafka + ML fraud-detection streaming pipeline.”
Junior Full-Stack Software Engineer specializing in AI data systems
“Full-stack engineer with strong DevOps/AWS production experience who builds and operates multi-agent AI systems end-to-end (Streamlit/Python through Docker/Kubernetes and ECS/Fargate). Has delivered measurable outcomes: sub-2s latency and ~92% routing accuracy for an AI wellness assistant, shipped an AI-for-BI prototype in under 6 weeks cutting analysis time ~40%, and improved pipeline iteration speed ~35% via modularization and CI/regression checks.”
Senior Full-Stack Engineer specializing in AI/LLM and cloud-native SaaS
“Software engineer with strong end-to-end ownership across frontend, backend, data, and infrastructure, including real-time systems (Kafka/Postgres) and observability (Datadog). Built and productionized an AI-native RAG support assistant (OpenAI embeddings + Pinecone) with prompt/guardrail design, achieving 48% agent adoption and 30% faster responses. Experienced in legacy modernization and reliability work using feature flags, event/transaction replay, and rapid embedded delivery.”
Senior Software Engineer specializing in connected vehicle platforms and real-time data systems
“Open-source maintainer of KafkaJSUI, a Vue.js-based Kafka browser UI, focused on making large-topic exploration fast and responsive. Delivered major performance wins (incremental fetching, virtualized lists, WebSocket streaming, backpressure, Web Worker offloading) cutting load times to sub-200ms, and also strengthened CI and developer documentation while handling community-reported issues end-to-end.”
Senior Software Engineer specializing in AI/ML and data systems
“Built and shipped production LLM/AI agent systems including an NL-to-SQL query agent with semantic search and Redis-based caching, using schema-aware prompting and threshold validation to reduce hallucinations. Has orchestration experience running ML microservices on Kubernetes and automating event-driven insurance (P&C) workflows (claims/policy + fraud checks), reporting ~60% manual overhead reduction and ~99% uptime, with strong monitoring/drift-detection and business-facing Power BI reporting.”
Mid-Level Software Engineer specializing in data pipelines, APIs, and ML
“Software engineer whose recent work includes co-designing and building a ‘Shared Profile’ feature for a social event-planning app (Again, Sometime). Previously at Pure Storage, set up Docker-standardized Ubuntu/Python environments to simulate hardware testbeds and support workload/performance regression testing for other engineering teams; no robotics/ROS experience.”
Senior Engineering Manager specializing in platform, data/ML, and identity/access systems
“Senior engineering leader from Goodyear’s AndGo startup-like division who scaled the org from 12 to 30+ across pod-based teams and introduced an Architect Guild/ARD governance model. Led a 4-month Europe launch requiring AWS regional infrastructure, GDPR compliance, i18n/l10n, and new EMEA reporting pipelines, and has hands-on depth in API performance, incident response, and GraphQL/Hasura adoption to boost product velocity.”
Senior Full-Stack Software Engineer specializing in Python, FastAPI/Django, and Azure
“Backend/data engineer with production experience building real-time IoT telemetry pipelines for wind/solar assets at Siemens (FastAPI on Azure Event Hubs/Service Bus, Cosmos DB + SQL Server) and deploying GPS/fleet telematics microservices on AWS ECS Fargate with Terraform and blue/green CI/CD. Demonstrated strong reliability and performance chops, including an SQL query optimization from 30s to under 100ms and ownership of a Kafka pipeline incident resolved in ~20 minutes.”
Mid-level Data Engineer specializing in lakehouse ETL and analytics engineering
“Data engineer with strong end-to-end ownership of production lakehouse pipelines (Snowflake + Databricks + Airflow + dbt + Great Expectations), handling 8M+ records/month and 500K+ daily CDC updates. Delivered measurable reliability and efficiency gains (41% cost reduction, freshness improved from 4h to 30m, 35% fewer downstream incidents) and has experience building a lakehouse platform from scratch across 12 source systems.”