Pre-screened and vetted.
Mid-level Data Analyst/Data Engineer specializing in machine learning and NLP
Mid-level Data Engineer specializing in cloud ETL, streaming, and data warehousing
Mid-level Data Engineer specializing in AWS, Snowflake, Databricks, and PySpark
Mid-level Data Engineer specializing in cloud and big data ETL/ELT
“Data engineer in financial services (Northern Trust) who has worked across ingestion, transformation, data quality, orchestration, and serving on AWS (S3/Glue/EMR) with Airflow. Highlights include processing ~15M transactions with validation/anomaly detection for regulatory reporting and improving Snowflake query performance by 27% for risk/compliance reporting. Also built a personal real-time streaming service (FastAPI, Kafka, Redis, Cassandra) and uses production reliability patterns like blue-green/atomic swaps and robust retry strategies.”
Mid-level AI Engineer specializing in agentic AI, LLM systems, and healthcare AI
“Healthcare-focused ML/AI engineer who has built production voice agents and clinical question-answering systems end-to-end, from experimentation through deployment, observability, and iteration. Particularly strong in making LLM systems reliable in real workflows via RAG, fine-tuning, guardrails, evaluation pipelines, and shared Python tooling; cites ~20% clinical QA accuracy gains and ~40% faster physician decision turnaround.”
Mid-level Data Engineer specializing in cloud data platforms and streaming pipelines
Mid-level Data Engineer specializing in cloud data pipelines and streaming analytics
Mid-level Data Engineer specializing in AI/ML, streaming, and lakehouse architectures
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Mid-level sales and data professional specializing in FinTech, telecom, and insurance
Senior Data Engineer specializing in AWS-based data pipelines and multi-tenant SaaS
Junior Data Engineer specializing in data pipelines and streaming ingestion
“Backend/data platform engineer who owned a near-real-time patient feedback ingestion system, building a FastAPI + Kafka service with Snowflake/Airflow orchestration. Demonstrates strong production Kubernetes/GitOps practices on AWS EKS (Helm, Argo CD, Sealed Secrets) and solved real-time data integrity issues via idempotent processing with Redis.”
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time ~40% while reducing config drift. Also supported the migration of a legacy on-prem data warehouse/backend to GCP, using a phased cutover with parallel validation to meet strict reliability/SLA requirements.”
Mid-level Data Engineer specializing in cloud data pipelines and analytics engineering
“Built and deployed a production LLM-powered demand and churn forecasting system for an e-commerce client, combining open-source LLMs (LLaMA/Mistral) and Sentence-BERT embeddings to generate business-friendly explanations of forecast drivers. Strong focus on data quality and model trust (validation, baselines, segmented monitoring) and on production reliability via Airflow-orchestrated pipelines with readiness checks, retries, ongoing drift monitoring, and A/B testing.”
Mid-level Data Scientist specializing in ML, data engineering, and real-time analytics
Senior Full-Stack Software Engineer specializing in web apps, integrations, and data pipelines
Mid-level Data Engineer specializing in FinTech and AI-ready data platforms
Junior Data Engineer specializing in cloud ETL/ELT and lakehouse platforms
Mid-level Generative AI & ML Engineer specializing in production LLM and RAG systems
“AI/ML engineer who shipped a production blood-test report understanding and personalized supplement recommendation product, using a LangGraph multi-agent pipeline on AWS serverless with OCR via Bedrock and RAG over vetted clinical research. Also built end-to-end recommender system pipelines at ASANTe using Airflow (ingestion, embeddings/features, training, registry, batch scoring/monitoring) with KPI reporting to Tableau, and a strong focus on safety, evaluation, and measurable reliability.”
Mid-level Software/Data Engineer specializing in LLM apps, RAG pipelines, and cloud microservices
“Backend/data engineer who built an enterprise LLM assistant (AI Genie) at Broadband Insights using a LangChain + GPT-4 + Pinecone RAG pipeline to automate broadband analytics reporting. Developed Python/Dagster ETL processing 10M+ records/day and improved data freshness by 60%, with production-grade scalability patterns (async workers, containerized microservices, Kubernetes) and strong multi-tenant isolation practices.”
Mid-level GenAI/Data Engineer specializing in LLMs, RAG systems, and fraud detection
“ML/NLP engineer with banking domain experience who built a GenAI-powered fraud detection and risk intelligence system at Origin Bank, combining RAG (LangChain + FAISS), fine-tuned BERT NER, and GPT-4/Sentence-BERT embeddings. Delivered measurable impact (25% higher fraud detection accuracy, 40% less manual review) and emphasizes production-grade pipelines on AWS SageMaker/Airflow with strong data validation and scalable PySpark processing.”
Junior Software Engineer specializing in full-stack and AI systems
“Built and shipped an LLM-powered support agent for Community Dreams Foundation that automated intake and Jira ticket creation using RAG, structured outputs, and strong production guardrails. Demonstrated practical production AI experience with 92% routing accuracy, 98% uptime, and a 31% improvement in first-response accuracy, plus hands-on work in observability, evals, idempotency, and failure handling.”
Mid-level Data Engineer specializing in cloud data pipelines and Snowflake
“Data engineer who has owned production pipelines end-to-end, ingesting 50–100 GB/day from APIs/S3 and near-real-time Kafka into Snowflake with strong data quality gates (Great Expectations/dbt) and Airflow-based reliability (SLAs, alerting, dashboards). Also built a Snowflake-backed REST data API with caching/pagination and versioned endpoints, and designed a compliant, scalable web-scraping system with anti-bot handling and safe backfills.”