Pre-screened and vetted.
Mid-level Data Engineer specializing in cloud ETL, streaming, and data warehousing
Mid-level Data Engineer specializing in Cloud & Big Data ETL/ELT
“Data engineer in financial services (Northern Trust) who has worked across ingestion, transformation, data quality, orchestration, and serving on AWS (S3/Glue/EMR) with Airflow. Highlights include processing ~15M transactions with validation/anomaly detection for regulatory reporting and improving Snowflake query performance by 27% for risk/compliance reporting. Also built a personal real-time streaming service (FastAPI, Kafka, Redis, Cassandra) and uses production reliability patterns like blue-green/atomic swaps and robust retry strategies.”
Senior Product Manager / Project Manager specializing in data platforms, BI, and cloud transformation
Mid-level Data Engineer specializing in AWS data platforms and streaming pipelines
Mid-level Cloud/DevOps Engineer specializing in AWS automation and CI/CD
“AWS Cloud DevOps Engineer focused on production Linux environments, building secure CI/CD pipelines (Jenkins/GitHub) to deploy Dockerized services to AWS ECS and automating infrastructure with Terraform/CloudFormation. Strong in operational troubleshooting and scaling (CloudWatch-driven performance remediation, Auto Scaling/ELB, multi-AZ HA patterns), but explicitly does not have IBM Power/AIX or PowerHA/HACMP experience.”
Mid-level Data Engineer specializing in cloud data pipelines and big data platforms
“Data engineer with ~4 years of experience building Python-based data ingestion/processing services and real-time streaming pipelines (Kafka/PubSub + Spark Structured Streaming). Has deployed containerized data applications on Kubernetes with GitLab CI/Jenkins pipelines and applied GitOps to cut deployment time ~40% while reducing config drift. Also supported a legacy on-prem data warehouse/backend migration to GCP using phased migration and parallel validation to meet strict reliability/SLA needs.”
Mid-level Data Scientist specializing in Generative AI, RAG systems, and MLOps
Senior Cloud Software Engineer specializing in AWS microservices and DevOps
Mid-level Data Engineer specializing in FinTech and AI-ready data platforms
Junior Data Engineer specializing in cloud ETL/ELT and lakehouse platforms
Mid-level Generative AI & ML Engineer specializing in production LLM and RAG systems
“AI/ML engineer who shipped a production blood-test report understanding and personalized supplement recommendation product, using a LangGraph multi-agent pipeline on AWS serverless with OCR via Bedrock and RAG over vetted clinical research. Also built end-to-end recommender system pipelines at ASANTe using Airflow (ingestion, embeddings/features, training, registry, batch scoring/monitoring) with KPI reporting to Tableau, with a strong focus on safety, evaluation, and measurable reliability.”
Senior AI/ML Engineer specializing in Generative AI and healthcare analytics
“ML/AI engineer with strong healthcare insurance domain depth who has owned fraud detection and LLM claims products end-to-end in production. Stands out for combining modern MLOps and RAG architecture with measurable business impact, including millions in fraud savings, 40% faster analysis, and reusable platform tooling that accelerated multiple teams.”
Senior Software Engineer specializing in cloud-scale distributed systems and data platforms
“LLM/RAG-focused engineer who repeatedly takes agentic workflows from impressive demos to dependable production using rigorous evals, SLOs, and deep observability. Has led high-impact incident mitigation (22-minute MTTR during a major sale) and developer enablement workshops, and partnered with sales to close a $410k ARR enterprise deal with a tailored RAG pilot (FastAPI/pgvector/Okta/InfoSec-ready).”
Mid-level GenAI/Data Engineer specializing in LLMs, RAG systems, and fraud detection
“ML/NLP engineer with banking domain experience who built a GenAI-powered fraud detection and risk intelligence system at Origin Bank, combining RAG (LangChain + FAISS), fine-tuned BERT NER, and GPT-4/Sentence-BERT embeddings. Delivered measurable impact (25% higher fraud detection accuracy, 40% less manual review) and emphasizes production-grade pipelines on AWS SageMaker/Airflow with strong data validation and scalable PySpark processing.”
Mid-level Data Engineer specializing in cloud data platforms and real-time pipelines
“Data engineer who has owned production pipelines end-to-end—from Kafka/Airflow ingestion through SQL/Python validation and dbt transformations into Redshift/BI. Also built and operated a large-scale distributed web scraping platform (50–100 sites daily, ~5–10M records/day) with Kubernetes, Kafka queues, robust retries/DLQ, anti-bot measures, and backfill-safe raw HTML storage.”
Junior Backend Software Engineer specializing in search, data systems, and LLM applications
“Built and deployed a full-stack web product for international football fans visiting the U.S. for FIFA, owning everything from crawling and aggregating event data to frontend, backend, deployment, and maintenance. Particularly strong in data-heavy product work, using LLMs, Google Maps API, and SQL/RPC patterns to improve data quality, speed implementation, and support a polished user experience.”
Senior Data Engineer specializing in cloud lakehouse and AI/ML pipelines
Junior Data Engineer specializing in cloud data pipelines and warehousing
Mid-level Machine Learning Engineer specializing in production ML, MLOps, and LLM retrieval systems
Mid-level Cloud Engineer specializing in AWS & Azure infrastructure automation