Pre-screened and vetted.
“At Liberty Mutual, built a production underwriting decision assistant combining LLM reasoning with quantitative models and strong auditability. Implemented a claims-based response verification pipeline that cut hallucinations from 18% to 3% and materially improved user trust/validation scores. Experienced orchestrating ML/LLM workflows end-to-end with Airflow, Kubeflow Pipelines, and Jenkins, including SLA-focused pipeline hardening.”
Intern Data Scientist specializing in Generative AI and NLP
“Backend/AI engineer with internship experience building an AI-powered financial insights platform (FastAPI, Redis, BigQuery) and prior HCL experience leading a monolith-to-microservices refactor (Flask, Kafka) using blue-green deployments. Demonstrates strong performance/security focus (OAuth/JWT/RBAC, encryption) and measurable impact on latency, downtime, and ML model reliability; MVP was submitted to Google’s accelerator program.”
Mid-level AI Engineer specializing in NLP and production ML systems
“AI/LLM engineer who has shipped production RAG chatbots using LangChain/OpenAI with FAISS and FastAPI, focusing on real-world constraints like context windows, concurrency, and latency (reported ~40% latency reduction and <2s average response). Experienced orchestrating AI pipelines with Celery and fault-tolerant long-running workflows with Temporal, and has applied NLP model tradeoff testing (Word2Vec vs BERT) to drive measurable accuracy gains.”
Junior Software Engineer specializing in backend, cloud, and LLM-powered search
“Python backend engineer (BetterWorld Technology) who owns microservice systems end-to-end on Azure, including Kubernetes deployments, CI/CD, and production monitoring/alerting. Has hands-on experience integrating SQL/NoSQL stores (including Cosmos DB vector search and graph workflows) and has built a Kafka + Spark Streaming pipeline into Snowflake with a reported 40% latency reduction.”
Mid-level Data Engineer specializing in AI/ML, RAG systems, and cloud data pipelines
“Built a production lead-generation system using AI agents that scour the web for relevant leads, with RAG-based contact enrichment and shortlisting aligned to existing CRM data, freeing sales reps to focus on selling. Also has hands-on AWS data orchestration experience (Glue, Step Functions) moving raw data into Redshift, and evaluates agent performance with human-in-the-loop review plus BLEU/perplexity metrics.”
Mid-level Data Engineer specializing in cloud data platforms and real-time pipelines
“Data engineer who has owned production pipelines end-to-end—from Kafka/Airflow ingestion through SQL/Python validation and dbt transformations into Redshift/BI. Also built and operated a large-scale distributed web scraping platform (50–100 sites daily, ~5–10M records/day) with Kubernetes, Kafka queues, robust retries/DLQ, anti-bot measures, and backfill-safe raw HTML storage.”
Junior Machine Learning Engineer specializing in multimodal systems and LLMs
“Built and productionized a domain-specific LLM-powered RAG knowledge assistant at JerseyStem for answering questions over large internal document corpora, owning the full stack from FAISS retrieval and LoRA/QLoRA fine-tuning to AWS autoscaling GPU deployment. Drove measurable gains (28% accuracy lift, 25% latency reduction) and improved reliability through hybrid retrieval, grounded decoding, preference-model reranking, and Airflow-orchestrated pipelines (35% faster runtime), while partnering closely with non-technical stakeholders to define success metrics and ensure adoption.”
Mid-level Data Engineer specializing in cloud data platforms and ETL automation
“Data engineer who has owned high-volume production pipelines end-to-end (200–300 GB/day) on AWS, implementing strong data quality/observability and achieving 99.9% reliability while cutting data issues ~33%. Also built a large-scale external data collection system ingesting millions of records/day with anti-bot/rate-limit handling and backfill tooling, and shipped a versioned REST service exposing curated Snowflake data to downstream teams.”
Entry-level Software Engineer specializing in AI, data engineering, and cloud DevOps
“Product-minded full-stack engineer with strong React/TypeScript, serverless AWS, and Postgres depth, highlighted by owning real-time personalization and onboarding experiences at mParticle. Stands out for combining deep performance debugging with measurable product impact—improving activation by 28%, reducing time-to-insights by 35%, and building reusable internal platform primitives adopted by 12 teams.”
Mid-level MLOps Engineer specializing in production machine learning systems
“Built an end-to-end churn prediction platform at Freddi's Flowers spanning Spark ETL on AWS, model serving, monitoring, and a stakeholder-facing dashboard. Stands out for combining MLOps rigor with product thinking—adding explainability, action-oriented workflows, and config-driven multi-tenant architecture while improving latency and automating drift response.”
Junior NLP/ML Engineer specializing in LLMs and retrieval-augmented generation
Mid-level Data Analyst specializing in AML, fraud detection, and cloud data pipelines
Mid-level Software Engineer specializing in full-stack and distributed systems
Mid-level Software Engineer specializing in backend, cloud, and distributed systems
Mid-level Software Engineer specializing in LLM and RAG applications
Mid-level Data Engineer specializing in cloud ETL and big data pipelines
Mid-level Data Scientist/AI Engineer specializing in cloud LLMs, NLP, and scalable data pipelines
Senior Data Engineer specializing in Azure/AWS lakehouse and real-time analytics
Senior Data Architect specializing in cloud data platforms (GCP/AWS)
Mid-level Data Scientist specializing in AI/ML, data engineering, and sensor systems
Junior Backend & ML Engineer specializing in distributed systems and MLOps
Mid-level Backend Software Engineer specializing in Java/Spring Boot and cloud APIs
Mid-level AI Engineer specializing in LLMs, RAG pipelines, and multimodal automation