Pre-screened and vetted.
“ML engineer/data scientist who deployed a production credit risk + insurance claims triage platform at Hartford Financial, combining XGBoost default prediction with BERT-based document classification. Demonstrated strong MLOps skills by cutting inference latency to sub-500ms and building drift monitoring plus automated retraining/deployment pipelines (MLflow, CloudWatch, GitHub Actions, SageMaker) with human-in-the-loop review and SHAP-based explainability for underwriting adoption.”
Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms
“AI/ML engineer who led production deployment of a multimodal (text/video/image) RAG system on GCP using Gemini 2.5 + Vertex AI Vector Search, scaling to 10M+ documents with sub-second latency and +40% retrieval accuracy. Strong MLOps/orchestration background (Kubernetes, CI/CD, Airflow, MLflow) with proven impact on reliability (75% fewer incidents) and deployment speed (92% faster), plus experience delivering explainable ML (XGBoost + SHAP + Tableau) to non-technical retail stakeholders.”
Mid-level Data Scientist specializing in cloud ML, MLOps, and predictive analytics
“NLP/ML engineer with hands-on healthcare and support-ticket text experience, building clinical-note structuring and semantic linking systems using spaCy, BERT clinical embeddings, and FAISS. Emphasizes production-grade delivery (Airflow/Databricks, PySpark, Docker, AWS/FastAPI/Lambda) and rigorous validation via clinician-labeled datasets, retrieval metrics, and user feedback.”
Mid-level Data Engineer specializing in healthcare data platforms and MLOps
“ML/NLP practitioner with healthcare payer experience at HCSC, focused on connecting messy unstructured clinical notes to structured claims/provider data to improve fraud-analytics workflows. Has hands-on experience fine-tuning transformers in AWS SageMaker, building large-scale embedding search with FAISS, and implementing robust entity resolution using golden datasets, precision/recall calibration, and production monitoring for drift.”
Mid-level Machine Learning Engineer specializing in LLM apps, RAG pipelines, and MLOps
“Software engineer with connected-car/automotive production experience who owned an end-to-end remote door lock/unlock feature and introduced unit testing (GTest) plus rig/simulator validation. Also built and productionized an AI-native AWS cloud cost assistant (Lex + GPT-based LLM + Lambda + RAG/vector DB) with guardrails and achieved 94% evaluation accuracy. Helped replace a third-party solution with an in-house build, saving the company ~€9M.”
Senior Full-Stack AI Engineer specializing in Generative AI and FinTech
“Backend engineer who built and owned an AI-powered financial research product end-to-end, using a typed NestJS/GraphQL backend with LangGraph-style agent routing to produce sourced, structured financial analysis. Emphasizes finance-grade correctness (Zod validation, metric registries, unit/empty-result guardrails) while keeping latency low via batching, caching, and fast token streaming, and has led incremental migrations using strangler/feature-flag/shadow traffic patterns.”
Executive CTO / Software Architect specializing in GenAI, FinTech, and PropTech
“Entrepreneur/fintech product builder who raised a $100K pre-seed from ex-Google/Microsoft execs and built a real-time, direct-to-vendor bill pay micropayments platform. Previously helped scale Norton LifeLock to 1M users (2003) and also created Karma LA, a fraud-resistant, verified donation system (including VA veteran verification) aimed at improving trust and conversion in giving.”
Mid-level Data Engineer specializing in cloud data platforms and AI/ML analytics
“Backend/data engineer in healthcare who built an AWS-based clinical analytics platform from scratch (DynamoDB/S3/Airflow/dbt) with sub-second query targets for clinicians, 99.9% uptime, and HIPAA-grade controls (KMS encryption, IAM RBAC, audit trails). Also modernized ML delivery by replacing a manual 4-hour deployment with a 30-minute Docker/GitHub Actions CI/CD pipeline using parallel runs, parity testing, and rollback, and caught critical EHR data edge cases (date formats/timezones) that could have impacted patient care.”
Mid-level AI & Machine Learning Engineer specializing in Generative AI and MLOps
“Built a production GPT-4/LangChain/Pinecone RAG ‘AI Copilot’ at Northern Trust to automate financial report generation and analyst Q&A over internal structured (SQL warehouse) and unstructured policy data. Focused on real-world production challenges—grounding and latency—achieving major speed gains (seconds to milliseconds) via MiniLM embedding optimization and Redis caching, and implemented rigorous testing/evaluation with MLflow-backed metrics while aligning compliance and finance stakeholders for deployment.”
Mid-level Machine Learning Engineer specializing in LLMs, GenAI, and Computer Vision
“LLM/agent engineer who built a production multi-agent research automation system using LangGraph (planner, retriever with FAISS, supervisor, evaluator) with structured outputs and citation tracking for traceable reports. Emphasizes reliability and operations—LangSmith-based observability, multi-level testing, hallucination mitigation, and latency/cost controls—plus prior experience as a Computer Vision Software Engineer at Deepsight AI Labs working directly with non-technical customers.”
Mid-level AI/ML Software Engineer specializing in data pipelines, BI dashboards, and computer vision
“Graduate Assistant Intern at Friends University who built and deployed a GenAI-driven requirement understanding system that automates extraction and semantic grouping of technical requirements from large unstructured documents. Demonstrates strong LLM engineering rigor (golden datasets, regression testing, post-processing validation) and production-minded delivery using LangChain/LlamaIndex orchestration, FastAPI microservices, Docker, and cloud deployment.”
Senior Data Scientist/Software Engineer specializing in ML systems and cloud DevOps
“AI software engineer with experience spanning LLM/RAG production systems and regulated fintech infrastructure. Built an end-to-end natural-language-to-SQL analytics assistant (Weaviate + GPT-4 + Supabase) shipped as an API with 92% accuracy and major time savings for non-technical users, and also owned demand-forecasting and CI/CD/containerization improvements for a Bank of America core banking deployment at Infosys.”
Mid-level AI/ML Engineer specializing in GenAI and cloud MLOps
“Applied LLMs to high-stakes domains (wildfire risk for emergency teams and loan approval via a fine-tuned IBM Granite model), with a strong focus on reliability—using RAG-based cross-validation to reduce hallucinations and continuous ingestion pipelines (MODIS satellite imagery via AWS Lambda) to keep data current. Experienced in production orchestration and MLOps-style workflows using Airflow, AWS Step Functions, and SageMaker Pipelines, and collaborates closely with analysts on KPI-driven evaluation.”
Mid-level AI/ML Engineer specializing in LLMs, RAG pipelines, and cloud MLOps
“Built and deployed a production LLM/RAG system at CVS to automate clinical document review, addressing PHI compliance, retrieval accuracy, and latency; achieved a 35–40% reduction in review effort through chunking and FP16/INT8 optimization. Also has experience translating AI outputs into actionable insights for non-technical stakeholders (sports analysts).”
Mid-level AI/ML Engineer specializing in fraud detection and healthcare predictive analytics
“ML/AI engineer with production experience in high-scale banking fraud detection at Truist, building an end-to-end pipeline (Airflow/AWS Glue/Snowflake, PyTorch/sklearn) with automated retraining and Kubernetes-based deployment; delivered measurable gains (22% fewer false positives, 15% higher recall) and reduced manual ops ~40%. Also partnered with clinicians at Kellton to deploy an LLM system for summarizing/classifying clinical notes, improving review time and decision speed.”
Mid-level Machine Learning Engineer specializing in deep learning and generative AI
“ML/NLP engineer with hands-on experience building production systems for unstructured insurance claims and customer data linking. Delivered measurable impact at scale (millions of documents), combining transformer-based NLP, vector search (FAISS/Pinecone), and human-in-the-loop validation, and has strong production workflow/observability practices (Airflow, AWS Batch, Grafana/Prometheus).”
Principal Data Scientist specializing in cybersecurity ML and MLOps
“ML/NLP engineer (Beyond Identity) who built production semantic search and entity-resolution systems over internal security documentation, using LDA + BERT embeddings with FAISS/Pinecone to cut search time by 30%. Also scaled a real-time anomaly detection pipeline to millions of events/day with Spark and AWS Lambda, with strong emphasis on measurable validation (Precision@k, MRR, F1, ARI).”
Mid-level AI/ML Engineer specializing in fraud detection and NLP
“Built production AI/RAG-style systems for message Q&A and insurance claims workflows, combining data ingestion, indexing/retrieval, and LLM integration with fallback modes. Has hands-on orchestration experience (Airflow, Prefect, LangChain) and cites large operational gains (claims processing reduced to ~45 seconds; manual review -50%; false alerts -30%) through automated, monitored pipelines and close collaboration with non-technical stakeholders.”
“AI/ML engineer with banking domain experience (M&T Bank) who built a production credit-risk prediction and reporting platform combining ML models (XGBoost/TensorFlow) with a RAG pipeline (LangChain + GPT-4) over compliance documents. Delivered measurable impact (≈20% better risk detection/precision, 50% less manual reporting) and productionized workflows on Vertex AI/Kubeflow with CI/CD and monitoring; also implemented embedding-based semantic search using FAISS/Pinecone.”
Mid-level AI/ML Engineer specializing in healthcare ML and generative AI
“AI/LLM engineer at Humana who built and deployed a HIPAA-aware RAG system for clinical record retrieval, cutting search time dramatically and improving retrieval efficiency by 30%. Experienced with Spark-scale data preprocessing, QLoRA fine-tuning, LangChain orchestration, and MLflow+SageMaker integration, with a strong testing/evaluation discipline (A/B tests, human eval) to hit 95%+ accuracy and production latency targets.”
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps for financial services
“Built and deployed a production Llama 3-based RAG document Q&A system using FAISS, addressing context-window limits through chunking and keeping retrieval accurate by regularly refreshing embeddings. Has hands-on orchestration experience with LangChain and LlamaIndex for multi-step LLM workflows (including memory management) and collaborates with non-technical teams (e.g., marketing) to deliver AI solutions like recommendation systems.”
Principal Enterprise Architect specializing in AI, cloud modernization, and cybersecurity
“Senior technologist (25 years of experience) who served as chief architect/CTO for a patented software startup that was acquired. Strong at building scalable, robust, technology-agnostic systems and translating technical value into investor-ready narratives (forecasts, roadmaps, documentation). Currently prefers joining an existing founding team as a key technical leader/mentor rather than founding a venture solo.”
Junior Software Engineer specializing in AI/ML, data pipelines, and cloud APIs
“Hands-on AI/LLM practitioner who built a RAG-based customer support chatbot and tackled production issues like data chunking complexity and response-time lag. Uses techniques such as overlapping chunks, semantic search, context engineering, and query routing, and has experience presenting technical demos/workshops to developer audiences.”
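For context, the overlapping-chunk technique this candidate cites is a standard RAG preprocessing step: adjacent chunks share a margin of text so that a sentence split at a boundary still appears whole in at least one chunk. A minimal illustrative sketch (the `chunk_size`/`overlap` values are assumptions, not the candidate's actual parameters):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks whose edges overlap, so content
    straddling a chunk boundary survives intact in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # advance by less than chunk_size to create overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Production systems typically chunk on token or sentence boundaries rather than raw characters, but the sliding-window-with-overlap idea is the same.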
Mid-level AI/ML Engineer specializing in LLMs, RAG, and MLOps
“Built a production RAG-based healthcare chatbot to retrieve patient medical documents spread across multiple platforms, reducing manual and error-prone searching. Implemented semantic search with custom embeddings (Hugging Face) and Pinecone, deployed via FastAPI/Docker on AWS SageMaker with MLflow tracking, and optimized fine-tuning cost using LoRA while orchestrating retraining pipelines in Airflow.”