Pre-screened and vetted.
Mid-level Data Scientist & AI Engineer specializing in RAG, agentic AI, and production ML
“AI/data engineer who built a production LLM-powered schema drift detection system (LangChain/LangGraph) to catch semantic data changes before they break downstream analytics/ML. Deployed on AWS with Docker/S3 and implemented an LLM-as-a-judge evaluation framework to improve trust, reduce hallucinations, and control false positives/alert fatigue. Collaborated with non-technical risk/business analytics stakeholders at EY by delivering human-readable drift explanations that improved confidence in financial analytics dashboards.”
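The structural half of the drift-detection pattern described above can be sketched without the LLM layer: compare successive schema snapshots and emit human-readable explanations for each change. This is a minimal illustration, not the candidate's actual system; in their design an LLM additionally judges whether a change is semantically meaningful, and all names here are hypothetical.

```python
# Minimal schema drift detector: compares two schema snapshots
# ({column_name: type_name}) and produces human-readable drift
# explanations of the kind surfaced to non-technical stakeholders.

def explain_schema_drift(old: dict, new: dict) -> list:
    explanations = []
    for col in sorted(old.keys() - new.keys()):
        explanations.append(
            f"Column '{col}' was removed; downstream joins on it will fail.")
    for col in sorted(new.keys() - old.keys()):
        explanations.append(
            f"New column '{col}' ({new[col]}) appeared; models trained "
            f"without it may underperform.")
    for col in sorted(old.keys() & new.keys()):
        if old[col] != new[col]:
            explanations.append(
                f"Column '{col}' changed type from {old[col]} to {new[col]}; "
                f"casts may silently truncate values.")
    return explanations
```

In the described system, each explanation would then be scored by an LLM-as-a-judge before alerting, which is how false positives and alert fatigue are kept in check.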
Senior Data & Backend Engineer specializing in cloud data pipelines and LLM/RAG systems
“Data engineer with end-to-end ownership of large-scale retail and clinical data ingestion/processing on AWS, including real-time streaming and batch pipelines. Delivered measurable outcomes: 20M daily transactions processed, latency cut from 4 hours to 5 minutes, ~70% fewer failures, and 120+ pipelines running at 99.8% reliability with full audit compliance.”
Senior Full-Stack & GenAI Engineer specializing in healthcare and financial services
“Built and deployed a production LLM-powered customer support assistant using a RAG backend in Python, focused on deflecting repetitive Tier-1 tickets and reducing resolution time. Demonstrates strong production engineering instincts around reliability (confidence scoring + human fallback), scalability/cost optimization (multi-stage pipelines), and workflow orchestration/observability (LangChain, custom DAGs, structured logging, step metrics).”
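The "confidence scoring + human fallback" reliability pattern mentioned above can be sketched as a small router: replies above a threshold ship directly, everything else escalates to a human, and simple step metrics track the deflection rate. The class and field names are hypothetical, not taken from the candidate's codebase.

```python
# Confidence-gated reply with human fallback: answers above a
# threshold ship directly; everything else is escalated, and simple
# step metrics are kept so deflection rate can be monitored.

from dataclasses import dataclass, field

@dataclass
class SupportRouter:
    threshold: float = 0.75
    metrics: dict = field(default_factory=lambda: {"deflected": 0, "escalated": 0})

    def route(self, answer: str, confidence: float) -> dict:
        if confidence >= self.threshold:
            self.metrics["deflected"] += 1
            return {"handler": "assistant", "reply": answer}
        self.metrics["escalated"] += 1
        return {"handler": "human", "reply": None}
```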
Senior Software Engineer specializing in AI-driven marketing and data platforms
“Backend/data engineer who builds production FastAPI microservices and AWS serverless/Glue pipelines for SMS analytics and marketing segmentation. Led a legacy batch modernization into modular services (FastAPI + Glue/Athena + ClickHouse) using shadow-mode parity checks, feature flags, and incremental rollout. Demonstrated measurable performance wins (12s to sub-second SQL; ~40% CPU reduction) and strong incident ownership with proactive schema-drift prevention.”
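The shadow-mode parity check used in the modernization above follows a well-known shape: run legacy and new implementations on the same input, always serve the legacy result, and record mismatches for review before promoting the new path behind a feature flag. A minimal sketch with hypothetical function names:

```python
# Shadow-mode parity check: the new implementation runs alongside the
# legacy one; production always serves the legacy answer, and any
# disagreement or crash in the shadow path is recorded for review.

def shadow_compare(legacy_fn, new_fn, payload, mismatches: list):
    legacy_result = legacy_fn(payload)
    try:
        new_result = new_fn(payload)
        if new_result != legacy_result:
            mismatches.append(
                {"input": payload, "legacy": legacy_result, "new": new_result})
    except Exception as exc:  # the shadow path must never break production
        mismatches.append({"input": payload, "error": repr(exc)})
    return legacy_result
```

Once the mismatch log stays empty over representative traffic, the feature flag can flip incrementally to the new service.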
Mid-level AI/ML Engineer specializing in fraud detection and healthcare predictive analytics
“Built and deployed a production LLM-powered calorie-counting chatbot that turns plain-English meal descriptions into normalized food entities, quantities, and calorie estimates using a hybrid transformer + rule-engine pipeline. Emphasizes reliability with schema/constraint guardrails, confidence-based routing (including embedding similarity search fallbacks), and strong observability/metrics (hallucination rate, calibration, latency, cost). Partnered closely with nutritionists to encode domain standards into mappings and validation logic.”
Mid-level AI/ML Engineer specializing in LLM, RAG/GraphRAG, and fraud analytics
“LLM/agent engineer who has deployed a production internal assistant to reduce employee inquiry resolution time while maintaining regulatory compliance. Experienced with RAG, hallucination risk triage, and graph-based orchestration (LangGraph) for enterprise/banking-style workflows, emphasizing schema-validated, citation-backed, tool-constrained agent designs and tight collaboration with non-technical business/compliance stakeholders.”
Mid-level Data Scientist specializing in NLP, LLMs, and RAG systems
“Built and deployed a production-style vision-language pipeline that generates structured medical reports from chest X-rays using BioViLT embeddings, an image-text alignment module, and BiGPT fine-tuned with LoRA, delivered via Streamlit and hosted on AWS EC2. Also has collaboration experience presenting EDA findings, feature importance, and model performance to Ford managers while working with vehicle-parts data at Bimcon.”
Senior Agile/Product Delivery Leader specializing in enterprise transformation, data and cybersecurity
“Built a web-based multiplayer Sudoku game in JavaScript (supporting up to 6 teams of up to 5 players each) and demonstrates strong product/analytics orientation. Uses a KPI-driven approach (DAU/WAU, ARPU, session duration, LTV) and structured prioritization methods (MoSCoW, story mapping, cost of delay, DFV) to iterate toward targets; seeking a remote role around $70k/year.”
Mid-Level Full-Stack Software Engineer specializing in healthcare, cloud, and data platforms
“Backend/platform engineer who owned a real-time customer analytics microservice stack in Python/FastAPI with Kafka streaming into PostgreSQL, including schema enforcement (Avro) and high-throughput optimizations. Strong Kubernetes + GitOps practitioner (EKS/GKE, Helm, Argo CD) who has handled CI/CD reliability issues with automated pre-deploy checks and rollbacks, and supported major migrations (on-prem to AWS; VM to EKS) with blue-green cutover planning.”
Senior Software Developer specializing in AI/ML automation and cloud-native systems
“ML/MLOps practitioner who built production systems for telecom network analytics, including an automated labeling + multi-label Random Forest solution that cut labeling effort by 90% and sped up RCA. Led an Ericsson auto-deployment platform using Airflow, Azure IoT Hub, Docker, and Celery to orchestrate 120+ containerized ML/rule-based deployments, saving ~80 hours of setup per deployment.”
Mid-level Machine Learning Engineer specializing in LLM systems and healthcare data automation
“React performance-focused engineer who contributed performance patches back to an open-source context+reducer state helper after profiling and fixing excessive re-renders in an enterprise project management platform at Easley Dunn Productions. Also built an end-to-end LLM-driven pipeline at Prime Healthcare to normalize millions of supply-chain records, reducing defects by 80% and saving 160+ hours/month.”
Mid-level Data & AI Engineer specializing in healthcare data pipelines and MLOps
“Built and deployed a production LLM-powered clinical note summarization system used by care managers to speed review of 5–20 page unstructured medical records. Implemented safety-focused validation (prompt constraints, rule-based and section-level checks, human-in-the-loop) to reduce hallucinations while maintaining low latency and meeting privacy/regulatory constraints, integrating via APIs into existing clinical tools.”
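One rule-based validation layer of the kind described above can be sketched as a grounding check: every number in a generated summary must appear in the source record, otherwise the summary is flagged for human-in-the-loop review. This is an illustrative pattern, not the candidate's exact implementation, and the function names are hypothetical.

```python
# Section-level hallucination check: numbers in the generated summary
# that do not appear anywhere in the source record mark the summary
# for human review instead of being shown to care managers.

import re

NUMBER = re.compile(r"\d+(?:\.\d+)?")

def ungrounded_numbers(summary: str, source: str) -> list:
    source_numbers = set(NUMBER.findall(source))
    return [n for n in NUMBER.findall(summary) if n not in source_numbers]

def needs_review(summary: str, source: str) -> bool:
    return bool(ungrounded_numbers(summary, source))
```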
Mid-level AI/ML Engineer specializing in Generative AI and NLP
“AI/LLM engineer with production experience building secure, scalable compliance-focused generative AI systems (GPT-3/4, BERT) including RAG over internal regulatory document bases. Has delivered end-to-end pipelines on AWS with PySpark/Airflow/Kubernetes/FastAPI, emphasizing privacy controls, monitoring, and iterative evaluation (A/B testing). Also partnered closely with bank compliance officers using prototypes to refine NLP summarization/classification and reduce document review time.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy/format changes without breaking the pipeline, and partnered with business/analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
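The schema-validation step described above, where LLM-extracted JSON is checked before it enters the Glue/PySpark stage, can be sketched as a type check against an expected shape; malformed extractions are rejected with reasons so an alert fires instead of the pipeline breaking. The field names below are hypothetical examples, not the platform's real schema.

```python
# Validate LLM-extracted eligibility rules against an expected shape
# before downstream processing; each problem is reported so automated
# checks/alerts can fire when a state changes its policy format.

REQUIRED_FIELDS = {"state": str, "program": str, "max_income": (int, float)}

def validate_rule(rule: dict) -> list:
    errors = []
    for name, expected in REQUIRED_FIELDS.items():
        if name not in rule:
            errors.append(f"missing field '{name}'")
        elif not isinstance(rule[name], expected):
            errors.append(
                f"field '{name}' has type {type(rule[name]).__name__}")
    return errors
```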
Mid-level AI/ML Engineer specializing in MLOps and LLM-powered applications
“AI/ML engineer with production experience building a RAG-based internal analytics assistant (Databricks + ADF ingestion, Pinecone vector store, LangChain orchestration) deployed via Docker on AWS SageMaker with CI/CD and MLflow. Strong focus on real-world constraints—latency/cost optimization (LoRA ~60% compute reduction), hallucination control with citation grounding, and enterprise security/governance. Previously at Intuit, delivered an interpretable churn prediction system (PySpark/Databricks, Airflow/Azure ML) that improved retention targeting ~12%.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and MLOps in Financial Services
“ML/LLM engineer at Charles Schwab who built a production loan-advisor chatbot integrated with internal knowledge and loan-calculator APIs, adding strict numeric validation to prevent rate hallucinations and optimizing context to control costs. Also runs ~40 Airflow DAGs orchestrating retraining/ETL/drift monitoring with an automated Snowflake→SageMaker→auto-deploy pipeline, and uses rigorous testing plus canary rollouts tied to business metrics and compliance constraints.”
Mid-level Data Scientist specializing in ML, MLOps, and customer analytics
“ML/NLP practitioner focused on insurance/claims analytics for a large financial firm, working with millions of fragmented structured and unstructured records. Built production-grade pipelines for entity extraction, entity resolution, and semantic search using Sentence-BERT + vector DB, including fine-tuning with contrastive learning (reported ~15% recall lift) and scalable ETL/containerized deployment on Kubernetes.”
Senior Data Scientist / ML Engineer specializing in NLP, anomaly detection, and cloud ML platforms
“ML/NLP practitioner who built customer-feedback topic modeling (NMF + TF-IDF) to diagnose chatbot-to-agent handovers and drove product/ops changes that reduced operational costs by 20%. Also developed LSTM-based intent recognition using Word2Vec/GloVe embeddings for semantic linking, and deployed an LSTM autoencoder for fraud anomaly detection that cut false positives by 25% while capturing 15% more fraud in A/B testing.”
“Built an AI-driven insurance policy summarization platform at Marsh, taking it end-to-end from messy PDF ingestion/OCR and custom extraction through LLM fine-tuning and AWS SageMaker deployment. Delivered measurable impact (25% reduction in manual review time, 99% uptime) and demonstrated strong production MLOps/LLMOps practices with Airflow/Step Functions orchestration, rigorous evaluation (ROUGE + human review), and continuous monitoring for drift, latency, and hallucinations.”
Senior Backend Software Engineer specializing in Java microservices, Kafka, and AWS
“AI engineer who shipped a production chat assistant for a storage company by building the underlying RAG-style knowledge base (document ingestion, chunking/embeddings, FAISS vector store) and an admin update interface to keep content current. Also has full-stack delivery experience (Python REST APIs + React/TypeScript UI) and AWS operations using Terraform/Jenkins, including handling a real production performance incident by optimizing DB queries and adding auto-scaling.”
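The ingestion-and-chunking step behind a knowledge base like the one above is usually an overlapping sliding window, so that answers spanning a chunk boundary remain retrievable once embeddings land in FAISS. A minimal sketch (parameter defaults are illustrative, not the candidate's settings):

```python
# Overlapping text chunking of the kind used before embedding
# documents into a vector store: fixed-size windows with overlap so
# content near a boundary appears in two chunks.

def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Each chunk would then be embedded and indexed; the admin update interface described above re-runs this step whenever content changes.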
Mid-level AI/ML Engineer specializing in GenAI agents, RAG pipelines, and MLOps
“AI/ML engineer who built a production RAG-based internal document intelligence assistant (LangChain + Pinecone) to let employees query enterprise reports in natural language. Demonstrated hands-on pipeline orchestration with Apache Airflow and tackled real production issues like retrieval grounding and latency using tuning, caching, and token optimization, while partnering closely with non-technical business stakeholders through iterative demos.”
Intern Data Scientist specializing in healthcare AI and experimentation
“Human-AI Design Lab practitioner who productionized a wearable-health anomaly detection system by evolving a standalone autoencoder into a hybrid autoencoder + GPT-based approach, backed by PySpark ETL and MLOps on AWS SageMaker/MLflow. Also has applied LLM troubleshooting experience (fine-tuned FLAN-T5 summarization) and partnered with BI teams to run A/B tests and improve retention via feature stores and experimentation.”
Mid-level Data Scientist specializing in Generative AI, RAG systems, and ML engineering
“AI/LLM engineer who built a production question-answering RAG system for a University of Massachusetts faculty success initiative, cutting service tickets by 70%. Strong end-to-end RAG implementation skills (LangChain, Qdrant, hybrid/HyDE retrieval, FastAPI) with rigorous evaluation (RAGAS, LLM-as-judge) and practical handling of constraints like API rate limits and cost. Prior cross-functional delivery experience collaborating with SMEs and business owners at TCS and IBM.”
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer/backend data services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimizations and use of Delta Lake schema evolution plus CI/CD (GitHub Actions/Jenkins) and operational reliability patterns like CloudWatch monitoring and dead-letter queues.”
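The dead-letter-queue reliability pattern mentioned above has a simple core: messages that still fail after a few retries are parked for inspection rather than blocking the stream or being lost. A minimal in-process sketch with hypothetical names (a real deployment would use the broker's DLQ support, e.g. a separate Kafka topic or SQS queue):

```python
# Dead-letter queue pattern: each message is retried a few times;
# persistent failures are parked with their error for later replay
# instead of halting the pipeline.

def consume(messages, process, dead_letters: list, max_retries: int = 3):
    processed = []
    for msg in messages:
        for attempt in range(1, max_retries + 1):
            try:
                processed.append(process(msg))
                break
            except Exception as exc:
                if attempt == max_retries:
                    dead_letters.append({"message": msg, "error": repr(exc)})
    return processed
```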