Pre-screened and vetted in Illinois.
Mid-level Data Engineer specializing in cloud data platforms and AI/ML analytics
“Backend/data engineer in healthcare who built an AWS-based clinical analytics platform from scratch (DynamoDB/S3/Airflow/dbt) with sub-second query targets for clinicians, 99.9% uptime, and HIPAA-grade controls (KMS encryption, IAM RBAC, audit trails). Also modernized ML delivery by replacing a manual four-hour deployment with a 30-minute Docker/GitHub Actions CI/CD pipeline featuring parallel runs, parity testing, and rollback, and caught critical EHR data edge cases (date formats, timezones) that could have affected patient care.”
Mid-level Data Engineer specializing in Azure, Spark, and scalable ETL/ELT pipelines
“Data engineer with banking FP&A experience who led an end-to-end migration of 10+ TB from Teradata to Azure (ADF + Data Lake + Databricks/PySpark + Synapse). Emphasizes reliability (multi-stage validation, monitoring/alerts) and performance (Spark tuning, incremental loads, autoscaling), achieving ~99.5% pipeline uptime while supporting downstream consumers with stable schemas and clear change management.”
Mid-level Data Engineer specializing in cloud data pipelines and full-stack analytics
Mid-level Data Scientist specializing in Generative AI, LLMs, and MLOps
Mid-level Data Scientist specializing in healthcare and fraud analytics with GenAI/LLM expertise
Mid-level Data Engineer specializing in FinTech and AI-ready data platforms
Senior Data Scientist specializing in fraud detection and credit risk modeling
Junior Data Analyst specializing in ML, analytics engineering, and data visualization
Mid-level Data Engineer specializing in cloud data pipelines for Healthcare and FinTech
Intern Data Scientist specializing in GenAI agents, RAG, and ML platforms
“LLM/agent systems builder who deployed a production hybrid router for immerso.ai that dynamically selects retrieval, reasoning, or generative pathways, achieving an 82% lift in factual accuracy. Deep hands-on experience optimizing local Mistral 7B inference (4–5 bit GGUF quantization, KV-cache reuse) and building reliable RAG/agent workflows with LangChain/LangGraph/AutoGen across GCP Cloud Run and AWS (ECS/Lambda).”
Junior Data Analyst specializing in reporting, BI dashboards, and financial data analysis
Mid-level Data Engineer specializing in cloud ETL and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and data services end-to-end, including a 50+ GB/day pipeline ingesting from APIs and files into Snowflake with PySpark/SQL transformations. Emphasizes strong data quality controls, monitoring and retries, and performance optimization, and has also shipped a Python data API with caching and backward-compatible versioning.”
Mid-level Sales Development & Engagement Analyst specializing in CRM-driven healthcare marketing
“SDR from Knowledge Services who reports ~$800K in influenced pipeline over the last 2–3 quarters, booking 10–15 qualified meetings per month at a ~25% conversion rate. Uses an ICP-driven outbound process (account research, segmentation and ranking, cadences) and gap-focused discovery messaging to secure demos and pass qualified opportunities to AEs.”
Mid-level Data Scientist specializing in computer vision and behavioral analytics