Pre-screened and vetted.
Mid-level Full-Stack Java Developer specializing in cloud-native microservices
“Full-stack engineer focused on enterprise, cloud-native microservices, building Spring Boot backends and React/Angular front ends with strong security (OAuth/JWT), AWS infrastructure (RDS/S3), and containerized deployments (Docker/Kubernetes). Has delivered data-heavy order/account/transaction platforms and healthcare solutions including EHR integrations for secure patient data exchange, with emphasis on testing, performance tuning, and reliability, including load testing.”
Mid-level Full-Stack Software Engineer specializing in cloud-native microservices and AI integrations
“Backend engineer who has delivered large, measurable performance wins (10x throughput, 67% latency reduction) by combining Flask microservices, Redis caching, and AWS autoscaling/observability. Has hands-on depth in SQLAlchemy/Postgres optimization and production scaling pitfalls (cache consistency, connection exhaustion), plus experience deploying real-time ML inference (XGBoost) on AWS Lambda and building secure multi-tenant Kubernetes isolation.”
Mid-level Data Analyst specializing in AI/ML and advanced analytics
“Accenture data/ML practitioner who deployed a retail churn prediction and BERT-based sentiment analysis system to production, integrating behavioral + feedback data and operationalizing it with ETL automation, orchestration, and CI/CD. Experienced managing 2TB+ multi-source data, monitoring drift in Databricks, and translating results into Power BI dashboards for marketing teams (including K-means customer segmentation).”
Mid-level Data Engineer specializing in cloud data pipelines and financial services warehousing
“Data engineer (Charles Schwab) who took ownership of an unstable, ambiguous nightly financial data pipeline and rebuilt it into a reliable, incremental AWS Glue/Airflow/Redshift system feeding Power BI. Created a custom Python data-quality framework with hard-stop gating and schema drift detection, improving data integrity to 99.9%, cutting runtime by ~20%, and reducing support load (35% fewer schema-related dashboard incidents; 30% fewer investigations).”
Mid-level Data Engineer specializing in real-time pipelines and cloud data platforms
“Backend engineer with hands-on experience building secure Python/Flask services (sessions, JWT, RBAC) and optimizing PostgreSQL/SQLAlchemy performance, including custom SQL using CTEs/window functions profiled via EXPLAIN ANALYZE. Also integrates LLM features via OpenAI/Azure into backend systems and improves scalability with RabbitMQ-driven async processing, caching, and multi-tenant data isolation patterns.”
Senior Machine Learning Engineer specializing in MLOps and NLP/GenAI
“Built a production LLM-agent framework for a startup that performs daily financial/trading analysis by combining live market data with internal tools, including a centralized memory module to prevent context drift and reduce hallucinations. Also implemented an Airflow-orchestrated retail price forecasting pipeline deployed to AWS endpoints, scaling parallel workloads via Kubernetes Executor and validating systems with rigorous functional + LLM-specific metrics and cross-team collaboration.”
Mid-level Data Engineer specializing in capital markets post-trade data platforms
“Data/streaming engineer in capital markets who led an end-to-end trade settlement data product (Kafka→MongoDB→data lake) with rigorous data-quality logic and ~$175K first-year operational impact. Also built a low-latency Go-based CME market data engine feeding SOFR curve generation, using MSK on EKS with performance tuning (idempotency, compression, partitioning) to achieve sub-100ms delivery.”
Senior Data Analyst specializing in healthcare and financial analytics
“Healthcare analytics candidate with hands-on experience turning messy claims data in Redshift and S3 into validated reporting tables, plus automating KPI workflows in Python. Has owned end-to-end operational analytics projects, including a claims delay analysis that improved processing efficiency by about 20%, and has driven stakeholder adoption of standardized metrics across dashboards.”
Junior Data Analyst specializing in financial and operational analytics
“Analytics professional with experience at KPMG turning messy operational and financial data from SQL Server and AWS S3 into clean reporting datasets and automated Python workflows. Combines SQL, Python, Power BI, and experimentation methods to deliver stakeholder-aligned KPI dashboards and marketing performance insights with a strong focus on data integrity and reproducibility.”
Senior Data Scientist and AI/ML Engineer specializing in GenAI and cloud ML
“ML/AI engineer with hands-on experience owning systems from experimentation through deployment and monitoring, including a Bank of Montreal project that improved timely interventions by 12%. Also brings GenAI/RAG experience with evaluation and safety guardrails, plus clinical NLP pipeline work extracting medication data from notes for patient risk prediction.”
Mid-level Data Engineer specializing in cloud data platforms, Spark, and streaming pipelines
“Data/MLOps engineer (Cognizant background) who owned an AWS/Airflow/Snowflake healthcare transactions pipeline processing ~8–10M records/day and cut pipeline and data-quality incidents by ~33%. Also built and deployed a production FastAPI model-inference service on Kubernetes (Docker, HPA) with strong observability (Prometheus/Grafana), versioned endpoints, and resilient backfill and idempotent external-data ingestion patterns.”
Mid-level Data Analyst specializing in healthcare and financial analytics
“Built and productionized an LLM-powered clinical documentation and insights pipeline at Cardinal Health using LangChain + GPT-4 with RAG to summarize long clinical notes, extract medication/dosage entities, and generate structured SQL-ready outputs for downstream analytics. Emphasizes clinical reliability via labeled benchmarking (precision/recall/F1), shadow deployments, clinician human-in-the-loop review, and ongoing monitoring/orchestration with Airflow, Lambda, S3, Postgres, and Power BI.”
Staff Software Engineer / Technical Architect specializing in cloud data platforms and GenAI agents
“Core member of the small team building Promethium’s ‘Mantra’ next-gen agentic text-to-SQL engine, using vector DB + LangGraph tooling and SQL validation/evaluation to improve query accuracy. Experienced in diagnosing production LLM workflow failures via LangSmith traces and in running hands-on developer workshops and pre-sales POCs with live debugging and real customer data.”
Mid-level Java Full-Stack Developer specializing in banking and telecom platforms
“Frontend-focused engineer with experience at T-Mobile and U.S. Bank who maintained a TypeScript utility library (types, tests, build pipeline, and docs) adopted by multiple teams, and improved React workflow performance by refactoring components and optimizing data fetching. Known for pragmatic cross-team support—reproducing issues quickly, shipping well-tested fixes, and managing changes carefully to avoid breaking downstream apps.”
Mid-level Data Scientist & AI Engineer specializing in RAG, agentic AI, and production ML
“AI/data engineer who built a production LLM-powered schema drift detection system (LangChain/LangGraph) to catch semantic data changes before they break downstream analytics/ML. Deployed on AWS with Docker/S3 and implemented an LLM-as-a-judge evaluation framework to improve trust, reduce hallucinations, and control false positives/alert fatigue. Collaborated with non-technical risk/business analytics stakeholders at EY by delivering human-readable drift explanations that improved confidence in financial analytics dashboards.”
Mid-level Cloud Data Engineer specializing in Azure/AWS pipelines and medallion architecture
“Data engineer focused on reliability and data quality, owning end-to-end pipelines processing ~100k–300k records/day. Implemented robust validation and monitoring that cut reporting issues by ~30%, and built stable external data collection with anti-bot measures, backfills, and schema-change detection while maintaining backward-compatible internal data services.”
Junior Full-Stack Software Engineer specializing in AI, FinTech, and e-commerce
“Built both traditional internal tooling and LLM-powered systems during an internship, including a React/Python/AWS calculator onboarding platform and a production-style ROS2 RAG assistant over 10K+ documents. Stands out for combining full-stack delivery, stakeholder coordination, and practical AI reliability work like retrieval tuning, source-grounded answers, and low-confidence fallbacks.”
Mid-level AI/ML Software Engineer specializing in cloud-native MLOps and FinTech
“Software engineer with JPMorgan Chase experience delivering end-to-end fintech features (Next.js/React/Node/Postgres on AWS) and measurable performance gains. Built and productionized an AI-native credit decisioning workflow combining LLMs, vector retrieval, and a rules engine with strong governance (bias checks, auditability, human-in-the-loop review), improving precision and cutting underwriting turnaround time by 40%.”
Senior Software Engineer specializing in AI-driven marketing and data platforms
“Backend/data engineer who builds production FastAPI microservices and AWS serverless/Glue pipelines for SMS analytics and marketing segmentation. Led the modernization of a legacy batch system into modular services (FastAPI + Glue/Athena + ClickHouse) using shadow-mode parity checks, feature flags, and incremental rollout. Demonstrated measurable performance wins (SQL queries from 12s to sub-second; ~40% CPU reduction) and strong incident ownership with proactive schema-drift prevention.”
Mid-level Data Engineer specializing in scalable ETL, streaming analytics, and cloud data platforms
“At Dreamline AI, built and productionized an AWS-based incentive intelligence platform that uses Llama-2/GPT-4 to extract eligibility rules from unstructured state policy documents into structured JSON, then processes them with Glue/PySpark and serves results via Lambda/SageMaker/API Gateway. Designed state-specific ingestion connectors plus schema validation and automated checks/alerts to handle frequent policy and format changes without breaking the pipeline. Partnered with business and analytics stakeholders to deliver interpretable eligibility decisions via explanations and dashboards.”
Mid-level AI/ML Engineer specializing in NLP, Generative AI, and MLOps in Financial Services
“ML/LLM engineer at Charles Schwab who built a production loan-advisor chatbot integrated with internal knowledge and loan-calculator APIs, adding strict numeric validation to prevent rate hallucinations and optimizing context to control costs. Also runs ~40 Airflow DAGs orchestrating retraining/ETL/drift monitoring with an automated Snowflake→SageMaker→auto-deploy pipeline, and uses rigorous testing plus canary rollouts tied to business metrics and compliance constraints.”
Mid-level AI/ML Engineer specializing in GenAI agents, RAG pipelines, and MLOps
“AI/ML engineer who built a production RAG-based internal document intelligence assistant (LangChain + Pinecone) to let employees query enterprise reports in natural language. Demonstrated hands-on pipeline orchestration with Apache Airflow and tackled real production issues like retrieval grounding and latency using tuning, caching, and token optimization, while partnering closely with non-technical business stakeholders through iterative demos.”
Senior Data Engineer specializing in cloud-native data platforms for finance and healthcare
“Data engineer/backend data services practitioner with Bank of America experience building real-time and batch transaction-monitoring pipelines and APIs (Kafka + databases, REST/GraphQL). Highlights include a reported 45% response-time improvement through performance optimizations and use of Delta Lake schema evolution plus CI/CD (GitHub Actions/Jenkins) and operational reliability patterns like CloudWatch monitoring and dead-letter queues.”
Senior Data Engineer specializing in cloud data platforms and big data pipelines
“Data engineer focused on building reliable, production-grade pipelines and external data collection systems on AWS (S3/Lambda/SQS/Glue/EMR) using PySpark/SQL, serving curated datasets to Snowflake/Redshift for finance and fraud teams. Has operated a large-scale crawler ingesting millions of records/day with anti-bot tactics, schema versioning/quarantine, and CloudWatch/Datadog monitoring, and also shipped a versioned REST API with caching and query optimization.”