Pre-screened and vetted in Illinois.
Mid-level Data Engineer specializing in healthcare data platforms and MLOps
“ML/NLP practitioner with healthcare payer experience at HCSC, focused on connecting messy unstructured clinical notes to structured claims/provider data to improve fraud-analytics workflows. Has hands-on experience fine-tuning transformers on AWS SageMaker, building large-scale embedding search with FAISS, and implementing robust entity resolution using golden datasets, precision/recall calibration, and production monitoring for drift.”
Mid-level Data Scientist/MLOps Engineer specializing in NLP, GenAI, and cloud ML platforms
“AI/ML engineer who led production deployment of a multimodal (text/video/image) RAG system on GCP using Gemini 2.5 + Vertex AI Vector Search, scaling to 10M+ documents with sub-second latency and a 40% gain in retrieval accuracy. Strong MLOps/orchestration background (Kubernetes, CI/CD, Airflow, MLflow) with proven impact on reliability (75% fewer incidents) and deployment speed (92% faster), plus experience delivering explainable ML (XGBoost + SHAP + Tableau) to non-technical retail stakeholders.”
Mid-level Data Engineer specializing in cloud data platforms and AI/ML analytics
“Backend/data engineer in healthcare who built an AWS-based clinical analytics platform from scratch (DynamoDB/S3/Airflow/dbt) with sub-second query targets for clinicians, 99.9% uptime, and HIPAA-grade controls (KMS encryption, IAM RBAC, audit trails). Also modernized ML delivery by replacing a manual 4-hour deployment with a 30-minute Docker/GitHub Actions CI/CD pipeline using parallel runs, parity testing, and rollback, and caught critical EHR data edge cases (date formats/timezones) that could have impacted patient care.”
Mid-level Data Engineer specializing in Azure, Spark, and scalable ETL/ELT pipelines
“Data engineer with banking FP&A experience who led an end-to-end migration of 10+ TB from Teradata to Azure (ADF + Data Lake + Databricks/PySpark + Synapse). Emphasizes reliability (multi-stage validation, monitoring/alerts) and performance (Spark tuning, incremental loads, autoscaling), reporting ~99.5% pipeline reliability while supporting downstream consumers with stable schemas and clear change management.”
Mid-level Data Engineer specializing in cloud data pipelines and full-stack analytics
Mid-level Data Scientist specializing in Generative AI, LLMs, and MLOps
Mid-level Data Engineer specializing in FinTech and AI-ready data platforms
Mid-level Data Engineer specializing in cloud data pipelines for Healthcare and FinTech
Mid-level Data Engineer specializing in cloud ETL and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and data services end-to-end, including a 50+ GB/day pipeline ingesting from APIs/files into Snowflake with PySpark/SQL transformations. Emphasizes strong data quality controls, monitoring/retries, and performance optimization, and has also shipped a Python data API with caching and backward-compatible versioning.”