Vetted Data Engineers in Illinois

Pre-screened and vetted in Illinois.

SY

Mid-level Data Engineer specializing in healthcare data platforms and MLOps

Chicago, IL · 3y exp
Health Care Service Corporation · Wichita State University

ML/NLP practitioner with healthcare payer experience at HCSC, focused on connecting messy unstructured clinical notes to structured claims/provider data to improve fraud-analytics workflows. Has hands-on experience fine-tuning transformers in AWS SageMaker, building large-scale embedding search with FAISS, and implementing robust entity resolution using golden datasets, precision/recall calibration, and production monitoring for drift.

Teja Babu Mandaloju

Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms

Chicago, USA · 5y exp
Vosyn · University of North Texas

AI/ML engineer who led production deployment of a multimodal (text/video/image) RAG system on GCP using Gemini 2.5 + Vertex AI Vector Search, scaling to 10M+ documents with sub-second latency and +40% retrieval accuracy. Strong MLOps/orchestration background (Kubernetes, CI/CD, Airflow, MLflow) with proven impact on reliability (75% fewer incidents) and deployment speed (92% faster), plus experience delivering explainable ML (XGBoost + SHAP + Tableau) to non-technical retail stakeholders.

MD

Meet Doshi

Screened

Mid-level Data Engineer specializing in cloud data platforms and AI/ML analytics

Chicago, IL · 4y exp
EDNA · Northeastern University

Backend/data engineer in healthcare who built an AWS-based clinical analytics platform from scratch (DynamoDB/S3/Airflow/dbt) with sub-second query targets for clinicians, 99.9% uptime, and HIPAA-grade controls (KMS encryption, IAM RBAC, audit trails). Also modernized ML delivery by replacing a manual 4-hour deployment with a 30-minute Docker/GitHub Actions CI/CD pipeline using parallel runs, parity testing, and rollback, and caught critical EHR data edge cases (date formats/timezones) that could have impacted patient care.

GM

Mid-level Data Engineer specializing in Azure, Spark, and scalable ETL/ELT pipelines

Charleston, IL · 4y exp
Eastern Illinois University · Eastern Illinois University

Data engineer with banking FP&A experience who led an end-to-end migration of 10+ TB from Teradata to Azure (ADF + Data Lake + Databricks/PySpark + Synapse). Emphasizes reliability (multi-stage validation, monitoring/alerts) and performance (Spark tuning, incremental loads, autoscaling), reporting ~99.5% pipeline reliability while supporting downstream consumers with stable schemas and clear change management.

DA

Mid-level Data Engineer specializing in cloud data pipelines and full-stack analytics

Chicago, IL · 4y exp
Starplot · Illinois Institute of Technology
VV

Mid-level Data Scientist specializing in Generative AI, LLMs, and MLOps

Chicago, Illinois · 5y exp
Enigma · Lewis University
TB

Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms

Chicago, USA · 6y exp
Vosyn · University of North Texas
KN

Mid-level Data Scientist specializing in ML, NLP, and MLOps

Chicago, IL · 4y exp
McKesson · DePaul University
SJ

Senior Data Scientist specializing in NLP and LLM applications

Chicago, IL · 8y exp
Global Action Alliance · Illinois Institute of Technology
SP

Mid-level Data Engineer specializing in FinTech and AI-ready data platforms

Illinois, USA · 3y exp
Northern Trust · Lindsey Wilson College
VB

Mid-level Data Engineer specializing in cloud data pipelines for Healthcare and FinTech

Chicago, IL · 5y exp
Tenet Healthcare · Eastern Illinois University
HK

Mid-level Data Engineer specializing in cloud ETL and big data pipelines

Naperville, IL · 4y exp
eAlliance Corporation · Lewis University

Data engineer focused on building reliable, production-grade pipelines and data services end-to-end, including a 50+ GB/day pipeline ingesting from APIs/files into Snowflake with PySpark/SQL transformations. Emphasizes strong data quality controls, monitoring/retries, and performance optimization, and has also shipped a Python data API with caching and backward-compatible versioning.

