Pre-screened and vetted.
Mid-level Full-Stack Software Engineer specializing in FinTech and backend platforms
“Built an AI-native legal research platform that automated analysis across 100,000+ dense legal documents, combining LLM workflows, async backend architecture, and conversational retrieval in production. Also brings cross-domain experience in investment-analysis agents and healthcare claims/billing systems, with a strong emphasis on reliability, deterministic orchestration, and safe handling of messy operational data.”
Mid-level Software Engineer in Test specializing in AI and healthcare platforms
“QA/data pipeline engineer with hands-on AI product building experience, spanning enterprise AWS migration testing for the Belgian postal service and personal multi-agent systems in fintech and recruiting. Stands out for combining rigorous validation and production stability work with modern LLM orchestration, guardrails, and messy-document normalization workflows.”
Junior Data Engineer / Analyst specializing in AI/ML data infrastructure
“Built and deployed a compliance-sensitive LLM pipeline that extracts rebate logic from hospital–supplier medical contracts, using multi-layer redaction (regex/NER/dictionary), schema-validated structured outputs, and secure placeholder reinsertion. Hosted models on Amazon Bedrock to avoid retraining on sensitive data and improved both accuracy and cost by splitting the workflow into a lightweight section classifier plus a fine-tuned extraction model, orchestrated with LangChain and evaluated via layered, test-driven agent assessments.”
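The redact-then-reinsert flow described above follows a common shape; a minimal sketch of the regex layer only (the patterns, function names, and placeholder format are illustrative assumptions, not the candidate's actual implementation):

```python
import re

# Hypothetical regex layer of a multi-layer redaction pipeline:
# sensitive spans are swapped for stable placeholders before the text
# is sent to an LLM, then the originals are reinserted afterwards.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace sensitive spans with placeholders; return text + mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping

def reinsert(text, mapping):
    """Restore the original spans after the LLM response comes back."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text
```

The NER and dictionary layers would run the same swap-and-record loop with different matchers; the mapping never leaves the trust boundary, which is what makes hosting the model on Bedrock without retraining on sensitive data workable.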
Mid-level AI/ML Engineer specializing in fraud detection and healthcare predictive analytics
“Built and deployed a production LLM-powered calorie-counting chatbot that turns plain-English meal descriptions into normalized food entities, quantities, and calorie estimates using a hybrid transformer + rule-engine pipeline. Emphasizes reliability with schema/constraint guardrails, confidence-based routing (including embedding similarity search fallbacks), and strong observability/metrics (hallucination rate, calibration, latency, cost). Partnered closely with nutritionists to encode domain standards into mappings and validation logic.”
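The schema-guardrail plus confidence-routing combination mentioned above can be sketched roughly as follows (the field names, sanity ranges, and 0.8 threshold are hypothetical stand-ins, not details from the candidate's system):

```python
# Hypothetical sketch of schema guardrails + confidence-based routing:
# a model's structured output is accepted only if it passes shape and
# sanity checks AND clears a confidence threshold; anything else is
# routed to a fallback (e.g. an embedding similarity lookup).

CONFIDENCE_THRESHOLD = 0.8
REQUIRED_FIELDS = {"food", "quantity", "calories", "confidence"}

def validate(entry):
    """Schema/constraint guardrail: required fields plus sanity ranges."""
    if not REQUIRED_FIELDS <= entry.keys():
        return False
    return entry["quantity"] > 0 and 0 <= entry["calories"] <= 5000

def route(entry, similarity_fallback):
    """Accept high-confidence valid entries; otherwise use the fallback."""
    if validate(entry) and entry["confidence"] >= CONFIDENCE_THRESHOLD:
        return entry
    return similarity_fallback(entry)
```

Tracking how often each branch fires is what makes the observability metrics the blurb cites (hallucination rate, calibration) measurable in the first place.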
Mid-level Applied AI Engineer specializing in agentic LLM workflows
“Master’s-in-Data-Science candidate (UHV) with 4+ years in AI engineering building production LLM and multimodal systems. Designed an LLM-powered workflow automation platform using RAG over vector stores with guardrails (schema/output validation, fallbacks) and a rigorous evaluation/monitoring framework including drift tracking and shadow deployments. Experienced orchestrating large-scale vision-language pipelines with Airflow and Kubernetes (OCR, distributed training) and partnering with non-technical ops stakeholders to cut cycle time and reduce errors.”
Mid-level Data & AI Engineer specializing in healthcare data pipelines and MLOps
“Built and deployed a production LLM-powered clinical note summarization system used by care managers to speed review of 5–20 page unstructured medical records. Implemented safety-focused validation (prompt constraints, rule-based and section-level checks, human-in-the-loop) to reduce hallucinations while maintaining low latency and meeting privacy/regulatory constraints, integrating via APIs into existing clinical tools.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer/backend data services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimizations and use of Delta Lake schema evolution plus CI/CD (GitHub Actions/Jenkins) and operational reliability patterns like CloudWatch monitoring and dead-letter queues.”
Intern Full-Stack/Software Engineer specializing in web apps, cloud, and data/ML systems
“Built and productionized LLM-driven content intelligence/SEO agents for a high-traffic media platform, automating tagging/summarization/metadata with FastAPI + async orchestration and strict JSON-schema outputs. Demonstrated measurable impact (40% faster publishing, +20% organic traffic in 3 months) and strong reliability practices (offline evals, shadow mode, canaries, fallbacks, idempotency, and monitoring).”
Mid-level Data Engineer specializing in cloud ETL/ELT and big data pipelines
“Data engineer focused on production-grade pipelines and data services: ingests millions of records/day into S3, performs SQL/Python quality validation and PySpark/SQL transformations, and serves curated datasets via Athena/Redshift. Has experience hardening external data collection with retries/rate-limit handling and shipping versioned internal data APIs with backward compatibility, monitoring, and CI/CD in early-stage environments.”
Mid-level ML Data Engineer specializing in MLOps and scalable healthcare data pipelines
“Data/ML platform engineer with healthcare (Cigna) experience owning an end-to-end pipeline spanning Airflow + Debezium CDC ingestion, PySpark/SQL transformations, rigorous data quality gates, and feature-store/API serving for ML training and inference. Worked at 10+ TB scale and cites a ~30% latency reduction plus stronger reliability via idempotent design, monitoring, and backfill-safe reprocessing; also built pragmatic early-stage data pipelines at Frankenbuild Ventures.”
Mid-level Software Engineer specializing in cloud microservices and data pipelines
“Data engineer/platform builder who has owned production pipelines end-to-end processing millions of records/day, with strong emphasis on data quality (quarantine workflows) and reliability (monitoring, retries, incremental loads). Also designed large-scale external data collection/crawling with anti-bot handling and backfills, and shipped versioned REST data services optimized for performance and developer usability in an early-stage environment.”
Mid-level AI/ML Engineer specializing in LLM, RAG/GraphRAG, and fraud analytics
“LLM/agent engineer who has deployed a production internal assistant to reduce employee inquiry resolution time while maintaining regulatory compliance. Experienced with RAG, hallucination risk triage, and graph-based orchestration (LangGraph) for enterprise/banking-style workflows, emphasizing schema-validated, citation-backed, tool-constrained agent designs and tight collaboration with non-technical business/compliance stakeholders.”
Entry-level AI/ML Engineer specializing in LLMs, RAG, and DevOps automation
“Built and owned a production-scale AI-driven software release/version intelligence platform orchestrated via GitHub Actions that tracks 1000+ upstream repositories and automatically generates SLA-bound JIRA upgrade tickets for hardened container images. Replaced brittle regex/PEP440 parsing with an LLM-based semantic filtering layer plus deterministic validation to handle noisy/inconsistent GitHub tags at scale, with monitoring for coverage, latency, and correctness validated against upstream ground truth.”
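The "LLM semantic filter plus deterministic validation" design above has a characteristic second stage; a rough sketch of that deterministic side (the strict X.Y.Z rule is an illustrative stand-in for whatever canonical version grammar the real system enforces):

```python
import re

# Hypothetical deterministic validation layer: whatever candidate tags an
# LLM-based semantic filter proposes from noisy GitHub releases, only
# strings that normalize to a strict X.Y.Z version are allowed through.

VERSION_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def normalize_tag(tag):
    """Return a canonical 'X.Y.Z' string, or None if the tag is invalid."""
    match = VERSION_RE.match(tag.strip())
    if not match:
        return None
    return ".".join(match.groups())

def validate_candidates(llm_candidates):
    """Keep only tags the deterministic check accepts, deduplicated."""
    seen, valid = set(), []
    for tag in llm_candidates:
        version = normalize_tag(tag)
        if version and version not in seen:
            seen.add(version)
            valid.append(version)
    return valid
```

Keeping the final gate deterministic is what lets correctness be validated against upstream ground truth, as the blurb describes, even though the filtering layer is probabilistic.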
Mid-level Data Engineer specializing in cloud lakehouse, streaming, and MLOps
“Data engineer at AT&T focused on large-scale telecom (5G/IoT) data platforms, owning end-to-end pipelines from Kafka/Azure ingestion through Databricks/Delta Lake transformations to serving analytics and ML. Has operated at very high volumes (~50+ TB/day) and delivered measurable performance gains (25–30% faster processing) plus improved reliability via Airflow monitoring, robust data quality checks, and resilient external data collection patterns (rate limiting, retries, dynamic schemas).”
Junior Data Analyst specializing in analytics, BI, and machine learning
“Analytics-focused candidate with experience owning end-to-end data projects across AI transcription, retail forecasting, and transportation revenue analytics. They combine strong SQL/Python pipeline skills with dashboarding and stakeholder alignment, citing measurable impact including 60% lower ETL latency, 18% better forecast accuracy, and 25% operational efficiency gains.”
Mid-level Data Analyst and Product professional specializing in FinTech and AI applications
“Payments/product-focused operator with hands-on experience owning complex bank connectivity deployments at Paystand, including a migration that raised connection success from under 50% to 79%. Also built a production-grade multi-agent document intelligence system on AWS Bedrock for structured enterprise document extraction, combining real-world fintech domain pain points with modern LLM architecture.”
Junior Backend and ML Engineer specializing in distributed systems and LLM infrastructure
“Backend engineer with strong ownership across authentication, API infrastructure, and AI-powered document workflows. They built and operated a production auth microservice supporting 10,000+ users with measurable latency and security improvements, and also shipped hackathon and applied-AI systems including legal document and medical document retrieval/Q&A products.”
Mid-level Software Engineer specializing in backend microservices and cloud platforms
“Backend engineer in healthcare data systems who has owned production pipelines end-to-end, from ingesting patient and claims data to serving it through secure APIs. Brings a strong mix of Python, SQL, microservices, cloud deployment, and data reliability practices, with measurable performance gains and experience building resilient integrations with external data sources.”
Entry-level Software Engineer specializing in Java backend and distributed systems
“Built and shipped a production AI mock-interview platform where an LLM ‘interviewer’ generates role-specific questions, runs multi-turn follow-ups, and outputs structured scoring/feedback using RAG. Demonstrates strong production-readiness practices (Prometheus/Grafana monitoring, retries/timeouts, fallback models/templates, schema validation, replay-based debugging) and achieved 99%+ availability with ~40% higher session completion. Also has experience integrating agents with messy SAP/ERP-style data sources using a data middleware layer, cleaning/validation, and idempotent write safeguards.”
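The retries/timeouts-with-fallback pattern cited in that blurb is a standard reliability shape; a minimal sketch (the call signature, backoff values, and template string are hypothetical):

```python
import time

# Hypothetical retry-with-fallback wrapper: try the primary model a few
# times with exponential backoff, then degrade to a fallback model, and
# finally to a static template so the session never hard-fails.

def ask_with_fallback(prompt, primary, fallback, retries=3, backoff=0.5):
    for attempt in range(retries):
        try:
            return primary(prompt)
        except Exception:
            time.sleep(backoff * (2 ** attempt))
    try:
        return fallback(prompt)
    except Exception:
        return "Let's move on to the next question."  # static template
```

Combined with schema validation on each response, this is the kind of layered degradation that makes a 99%+ availability figure plausible for an LLM-backed session.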
Mid-level QA Automation Engineer / SDET specializing in web and API test automation
“QA automation engineer with Silicon Valley Bank experience owning end-to-end test automation for customer onboarding and payments. Built a hybrid framework (Playwright UI, REST Assured API, Python + SQL backend validation) integrated into Azure DevOps CI/CD with PR gating and nightly regressions, catching high-risk issues like silent API contract breaks and missing payment ledger entries before release.”
Junior Full-Stack & ML Engineer specializing in LLM data extraction and robotics
Intern Software Engineer specializing in AI agents and computer vision
Senior Data Engineer specializing in cloud data platforms and LLM/RAG solutions