Pre-screened and vetted.
Senior Frontend Developer and MERN Stack specialist
“Frontend engineer experienced in leading client-facing portals (including WebGL integrations) and admin panels, with a strong focus on scalable architecture (monorepo, shared design system/atomic design) and performance. Has built complex React+TypeScript workflows such as multi-step user creation with dynamic forms, and ships major features (e.g., video recording) using feature-flagged, trunk-based delivery and E2E testing.”
Junior Machine Learning & Data Science professional specializing in AI agents and applied ML
“IT Analyst/research background with hands-on experience deploying and hardening a multi-agent AI support/triage system (ticket ingestion + knowledge-base retrieval), with a strong emphasis on reliability and observability. Has debugged real production issues spanning backend services and network latency (sync failures, partial writes) and is comfortable in Linux environments; also has academic exposure to robotics simulation and ROS2.”
Junior AI/ML & Mobile Engineer specializing in LLMs, synthetic data, and React Native
“Currently at Uplift AI shipping production LLM features that generate personalized growth insights from user reflections using BERT + embeddings + RAG, with strong safety/guardrail practices for sensitive contexts. Also built an end-to-end React Native UGC challenge submission/moderation system that improved repeat submissions and 7-day retention, and has applied rigorous clinical-style evaluation methods on a dental X-ray disease detection project to reduce false negatives.”
Senior Backend/Cloud Developer specializing in AWS serverless and legacy modernization
“AWS-focused backend/data engineer with hands-on production experience building serverless APIs (Lambda/API Gateway) secured with Cognito/JWT, deploying via Terraform + CI/CD, and managing secrets with Secrets Manager/Parameter Store. Also built AWS Glue ETL from S3 to RDS with schema evolution and data-quality controls, modernized a monolith into microservices using parallel testing, and delivered major SQL performance gains (minutes to seconds) while owning incident response for batch pipelines.”
Mid-level Data Engineer / Software Engineer specializing in streaming and cloud data platforms
“Backend engineer with deep Kafka/FastAPI microservices experience who redesigned a notification pipeline to cut end-to-end latency from ~5s to ~3s (including custom partition assignment and consumer tuning). Led a high-stakes ClickUp-to-Oracle migration of 1M+ records using idempotent ETL, reconciliation, and shadow deployment to achieve >99% integrity with zero downtime, and has hands-on production security implementation with Django/DRF (JWT + RBAC).”
Mid-level Machine Learning Engineer specializing in multimodal and time-series AI systems
“Backend engineer who rebuilt and refactored high-traffic systems at Phenom using Java/Spring Boot/Play and also designs Python/FastAPI services. Focused on measurable reliability and performance gains through DB/query optimization, async processing, and strong observability, with disciplined rollout practices (feature flags, parallel runs, rollback) and security patterns including token auth and row-level security.”
Junior Data Science and AI professional specializing in Python, machine learning, and analytics
“Built AI-EDU, an AI/LLM-powered learning platform for a Technology Entrepreneurship class that predicts student engagement and generates personalized learning insights. Emphasizes strong data preprocessing/feature engineering on noisy student data, and has experience operationalizing workflows with basic Airflow/Prefect plus reliability practices (edge-case testing, metrics, logging, guardrails) and stakeholder-friendly dashboards/summaries.”
Mid-level AI Engineer specializing in RAG, conversational AI, and agentic systems
“Built and deployed a production RAG-based clinical decision support assistant at MedLib, focused on fast, trustworthy answers from large medical documents. Demonstrates deep practical experience improving retrieval accuracy (semantic chunking + metadata-aware search), controlling hallucinations with grounded generation and thresholds, and adding clinician-requested citations using chunk metadata, with evaluation driven by healthcare professional review.”
Intern Software Engineer specializing in Python data pipelines and backend systems
“Software engineering intern at the Florida Department of Transportation who built validation/anomaly-detection logic for a live operational telemetry + system log processing pipeline. Emphasizes fault-tolerant, state-driven system design (degraded modes, data freshness tracking, safe fallbacks) and debugs time-sensitive behavior via logging/latency analysis and replay-based testing; these skills translate well to robotics-style architectures despite no direct ROS/robot experience.”
Intern AI & Machine Learning Engineer specializing in computer vision and edge deployment
“Built and shipped a real-time AI robotic inspection system, using a synthetic data generation pipeline to address rare edge cases, cutting data collection costs by ~60% and boosting hard-scenario accuracy by ~20%. Experienced in productionizing ML on constrained Jetson hardware and orchestrating end-to-end ML workflows with Airflow/Docker/Kubernetes, with a metrics-driven approach to reliability, evaluation, and stakeholder communication.”
Junior Backend/Platform Engineer specializing in cloud-native APIs and data systems
“Startup-style full-stack/backend engineer with hands-on AWS architecture experience who shipped an LLM-driven assessment-question automation feature (Python microservice calling AWS Bedrock via SQS, deployed on Lambda) with strong validation/guardrails and retry strategies. Also improved production scalability by moving a CPU/IO-heavy file upload path out of a Go API into a queue/Lambda design monitored with CloudWatch, and has React+TypeScript experience optimizing analytics dashboards.”
Junior Machine Learning Engineer specializing in real-time ML systems and computer vision
“Built and shipped multiple production-grade LLM/agent systems, including a Slack-based DevOps orchestration agent that conversationally triggers CI/CD with human approvals and observability (cutting manual DevOps work ~60%), and a multi-agent fact verification pipeline processing ~50k news articles/day with schema-validated outputs and confidence thresholds (reducing manual fact-checking by over 90%). Also implemented rigorous offline/online evaluation and monitoring loops (Grafana, NDCG/precision/recall) and improved recommender performance through targeted data and model tuning.”
Mid-level Full-Stack & AI Engineer specializing in LLM applications
“Full-stack engineer who has shipped and operated generative-AI chat/QA features end-to-end, including a RAG-based pipeline with guardrails and cost/latency monitoring in production. Experienced with React/TypeScript + Node/Postgres architectures, Dockerized deployments to AWS (EC2) via GitHub Actions CI/CD, and building reliable ingestion/ETL systems with idempotency, backfills, and reconciliation.”
Junior Data/AI Engineer specializing in MLOps, real-time pipelines, and LLM applications
“Built an LLM-driven MLOps agent at SBD Technologies that automated an EV-charging prediction workflow end-to-end, integrating with real-time Kafka/FastAPI systems supporting 120K+ chargers at 99.99% event delivery. Addressed frequent schema drift by implementing SQLAlchemy/Flyway validation (a 60% reduction in drift issues) and deployed as Kubernetes microservices with GitHub Actions CI/CD; also has Airflow-based ingestion/crawling experience into Snowflake and stakeholder-facing delivery via a Fleetcharge PWA.”
Mid-level Backend Engineer specializing in Python APIs and cloud-native services
“Data engineer with experience at Morgan Stanley and Star Health owning production-grade lakehouse pipelines for credit risk and healthcare datasets. Built Azure/Databricks/Delta/Snowflake-based platforms processing millions of records per day with strong data quality, observability (Monte Carlo/Azure Monitor), and reliability practices, plus experience delivering curated data services with performance tuning and backward-compatible versioning.”
Senior Full-Stack Engineer specializing in React and Python
“Backend/data engineer focused on production AWS systems: builds multi-tenant FastAPI services on ECS behind API Gateway/ALB with serverless orchestration (Lambda, SQS, Step Functions) and strong reliability practices (JWT/JWKS auth, idempotency, backoff retries, structured logging). Also delivers AWS Glue/PySpark ETL pipelines with schema/data-quality controls and has modernized legacy analytics logic into Python with parity validation; improved a key dashboard SQL query from ~12–25s to ~2–3s.”
Senior Full-Stack AI/ML Engineer specializing in MLOps and GenAI
“Senior backend/data engineer who has built and maintained HIPAA-compliant, real-time clinical FastAPI services on AWS, orchestrating ML/LLM and vector DB calls with strong reliability patterns (auth, timeouts/retries, graceful degradation, idempotency). Also delivered AWS IaC/CI-CD (Terraform/Helm/GitHub Actions) across EKS/Lambda/SageMaker and built Glue/Spark ETL with schema evolution and data quality controls, plus demonstrated large SQL performance wins (15 min to <9 sec) and hands-on incident ownership.”
Mid-level XR Software Engineer specializing in real-time AR/VR digital twins
“Built and owned an end-to-end real-time IoT telemetry backend that powers a digital twin experience on a Meta Quest headset, integrating Cisco LoRaWAN sensors and external REST data sources. Migrated from Azure Functions to a FastAPI service to overcome firewall constraints, add caching/fallback reliability, and significantly reduce operating cost while improving performance and evolvability.”
Mid-level AI Engineer specializing in GenAI, agentic workflows, and RAG systems
“Built a production multi-agent RAG assistant using LangChain/LangGraph with OpenAI embeddings and FAISS, focusing on retrieval quality and latency (Redis caching, parallel retrieval, precomputed embeddings). Experienced orchestrating ETL/ML pipelines with Airflow and Databricks Workflows, and has delivered an AI assistant for business ops to extract insights from policy/compliance documents through close non-technical stakeholder collaboration.”
Junior AI/ML Engineer specializing in Generative and Agentic AI
“Built and deployed a production-grade LLM agent for credit management and accounts receivable automation, integrating ERP/MySQL data via a RAG pipeline and exposing services through FastAPI with Pydantic-validated outputs on AWS Bedrock. Emphasizes reliability and compliance for financial operations using schema validation and human-in-the-loop review, reporting a ~32% reduction in manual work and a ~41% improvement in response time/reliability.”
Junior Full-Stack Software Engineer specializing in GenAI and web platforms
“AI/software engineer with hands-on experience deploying an LLM-powered quiz generation platform for students, integrating Python services with Gemini APIs along with frontend and database components. Emphasizes production-grade reliability through observability, schema validation, async processing, and performance tuning under high concurrency, and has collaborated with product/operators (e.g., at Colombo AI) to translate real-world constraints into scalable technical solutions.”
Mid-level Backend Engineer specializing in high-scale systems and LLM pipelines
“Open-source-focused TypeScript/JavaScript engineer who built a lightweight Node.js utility library to standardize LLM-agent message formatting, tool invocation, and safe schema-validated JSON outputs. Emphasizes composable abstractions, real-world performance profiling/benchmarks, and strong community feedback loops (GitHub issues, structured errors, logging hooks). Also conducted research at Syracuse University on converting natural language into structured JSON with validation layers.”
Junior Software Engineer specializing in distributed systems and cloud platforms
“Software engineer at Lance Soft Engineering who built a Java/gRPC real-time request tracking system supporting ~20K simultaneous requests, using Kafka event streaming and PostgreSQL to improve transparency and cut support requests by 35%. Demonstrates strong production operations skills: resolved live latency spikes with Kafka async messaging (+48% throughput) and executed safe migrations using parallel runs, staging validation, and blue-green deployments.”