Himanshu already has a relationship with Reval, so a warm intro from us gets a much better response than cold outreach.
About
Backend/ML engineer who has built both enterprise data pipelines and real-time AI products: modular Python (Flask/FastAPI) services integrating automation scripts and low-latency ML inference (MediaPipe, PyTorch), plus OpenAI-powered feedback. Demonstrated measurable performance wins (~30% faster HR workflows; ~40% faster AWS pipelines across 100+ Oscar Health feeds) and strong multi-tenant/data-isolation patterns (schema-based isolation, RBAC, microservices).
Experience
Software Engineer Co-op, EPRI
Research Assistant, University of North Carolina at Charlotte
Software Engineer, Persistent Systems
Software Engineer Intern, Eastro Control Systems
Software Engineer, Rebecca Everlene Trust Company
Education
University of North Carolina at Charlotte, Master's in Computer Science (2025)
Key Strengths
Designed modular Python/Flask backend with stable endpoints and fallback/cached responses for unreliable automation scripts
Improved internal HR workflow backend performance by ~30% via WAL mode, batched writes, and retry logic
SQLAlchemy/PostgreSQL performance tuning using access-pattern-driven schema design, composite indexes, and materialized views
Built real-time ML backends (FastAPI) with low-latency inference using multithreading, batching, and model warmup
Integrated GPT/OpenAI API layer to generate personalized feedback from ML outputs
Implemented schema-based data isolation and RBAC/scoped queries for multi-tenant-style systems
Optimized high-throughput background pipelines on AWS (100+ production feeds) with parallel modular tasks, caching, and retries; ~40% faster processing
Designed and evolved a backend data processing system supporting 100+ production data feeds for US healthcare clients
Restructured monolithic workflows into modular microservices (Spring Boot/Python) on AWS and GCP
Led production migration from GraphQL services to optimized SQL-driven microservices with zero downtime via parallel run and feature-flag rollout
Improved API response time by ~30% through query and architecture changes
Increased distributed pipeline throughput by ~40% via better orchestration and monitoring
Enforced strong data integrity and risk controls (checksum validation, automated reconciliation, schema validation)
Built reliable low-latency FastAPI services using Pydantic schemas, async endpoints, and predictable error handling
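The "fallback/cached responses for unreliable automation scripts" strength could be implemented roughly like the sketch below; the wrapper class and TTL value are invented for illustration, not taken from the candidate's actual code:

```python
import time


class CachedFallback:
    """Wrap an unreliable callable; on failure, serve the last good result."""

    def __init__(self, fn, ttl=60.0):
        self.fn = fn
        self.ttl = ttl          # how long a stale result stays acceptable
        self._value = None
        self._stamp = None      # monotonic time of the last success

    def __call__(self, *args, **kwargs):
        try:
            self._value = self.fn(*args, **kwargs)
            self._stamp = time.monotonic()
            return self._value
        except Exception:
            # Fall back to the cached value if it is still fresh enough.
            if self._stamp is not None and time.monotonic() - self._stamp < self.ttl:
                return self._value
            raise
```

A Flask endpoint would call the wrapped script instead of the raw one, so a flaky downstream script degrades to a slightly stale response rather than a 500.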
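The SQLite-style tuning mentioned in the HR-workflow bullet (WAL mode, batched writes, retry logic) can be sketched as follows; the table name, batch size, and backoff schedule are assumptions for the example:

```python
import sqlite3
import time


def connect(path=":memory:"):
    # WAL mode lets readers proceed while a writer holds the database.
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    return conn


def batched_insert(conn, rows, batch_size=500, retries=3):
    """Insert rows in batches, one transaction per batch, retrying on lock errors."""
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        for attempt in range(retries):
            try:
                with conn:  # commits the whole batch atomically
                    conn.executemany("INSERT INTO events(name) VALUES (?)", batch)
                break
            except sqlite3.OperationalError:
                if attempt == retries - 1:
                    raise
                time.sleep(0.1 * 2 ** attempt)  # exponential backoff
```

Batching amortizes transaction overhead, which is typically where the bulk of a write-heavy workflow's latency goes.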
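The access-pattern-driven indexing bullet amounts to: put a composite index on exactly the columns the hot query filters by. A minimal, self-contained illustration (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feeds(client TEXT, status TEXT, created_at TEXT)")

# Access pattern: the hot query always filters by client, then status.
# A composite index on (client, status) lets the engine satisfy the
# WHERE clause without scanning the table.
conn.execute("CREATE INDEX idx_feeds_client_status ON feeds(client, status)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM feeds WHERE client=? AND status=?",
    ("acme", "active"),
).fetchall()
```

Inspecting the query plan confirms the index is actually used; the same habit applies to PostgreSQL's `EXPLAIN` when tuning SQLAlchemy-backed services.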
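The low-latency inference bullet (multithreading plus batching) is commonly implemented as a micro-batcher: requests queue up briefly so the model runs once per batch instead of once per request. A stdlib-only sketch, with the model function and timing parameters as placeholders:

```python
import queue
import threading


class MicroBatcher:
    """Collect requests for a few milliseconds, then run the model once per batch."""

    def __init__(self, model_fn, max_batch=8, wait_s=0.005):
        self.model_fn = model_fn      # takes a list of inputs, returns a list of outputs
        self.max_batch = max_batch
        self.wait_s = wait_s          # how long to wait for the batch to fill
        self.inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def predict(self, x):
        slot = {"x": x, "done": threading.Event()}
        self.inbox.put(slot)
        slot["done"].wait()
        return slot["y"]

    def _loop(self):
        while True:
            batch = [self.inbox.get()]          # block until one request arrives
            while len(batch) < self.max_batch:
                try:
                    batch.append(self.inbox.get(timeout=self.wait_s))
                except queue.Empty:
                    break
            ys = self.model_fn([s["x"] for s in batch])  # one model call per batch
            for slot, y in zip(batch, ys):
                slot["y"] = y
                slot["done"].set()
```

Model warmup (a dummy call at startup so the first real request doesn't pay lazy-initialization cost) is omitted here but would be one extra `model_fn` call in `__init__`.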
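The RBAC/scoped-queries bullet means the caller's role decides what the SQL itself can see, rather than filtering in application code. A minimal sketch; the role map and table are invented for the example:

```python
import sqlite3

# Assumed role-to-scope mapping; the real system's roles are not documented here.
ROLE_SCOPES = {"admin": None, "member": "owner_id = :user_id"}


def scoped_documents(conn, role, user_id):
    """Append the caller's scope to the query so isolation is enforced in SQL."""
    scope = ROLE_SCOPES[role]
    sql = "SELECT id, owner_id FROM documents"
    if scope is None:           # admin: unrestricted
        return conn.execute(sql).fetchall()
    sql += " WHERE " + scope    # member: rows they own only
    return conn.execute(sql, {"user_id": user_id}).fetchall()
```

Pushing the restriction into the query (or, in PostgreSQL, into per-tenant schemas and row-level security) means a forgotten filter in one endpoint can't leak another tenant's rows.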