ESHWANTH D. G
Mid-level Robotics Software Engineer specializing in autonomous perception and sensor fusion
Honeywell · University at Buffalo · CA, USA · 4 Years Experience · Mid Level · Works On-Site
About
Robotics engineer with experience at Honeywell and Tata Motors deploying ROS/ROS2 autonomous mobile robot fleets into live factory environments, integrating sensors, safety PLCs, and on-prem services. Known for solving end-to-end latency and stability issues (including network spikes under load) using gRPC, Docker, and improved diagnostics, cutting diagnosis time from hours to minutes and achieving sub-150 ms control response.
Certifications
Artificial Intelligence Fundamentals – IBM SkillsBuild
AI for Autonomous Vehicles and Robotics – University of Michigan (Coursera)
Optimizing TensorFlow Models for Deployment with TensorRT – NVIDIA Deep Learning Institute
Visual Perception for Self-Driving Cars – University of Toronto
State Estimation and Localization for Self-Driving Cars – University of Toronto
Mid-level Robotics Software Engineer specializing in teleoperation, simulation, and autonomy
San Francisco, CA · 5y exp
Meta · Northeastern University
“Robotics engineer who helped bootstrap Meta’s humanoid robotics effort, building simulation training and deployment infrastructure for vision-language-action (VLA) models. Evaluated multiple physics backends (Bullet, MuJoCo, Isaac, internal) to minimize the sim-to-real gap and addressed control-loop frequency mismatches via sequence optimization/MPC-like approaches and trajectory-output modifications. Published research that contributed a new feature to ROS 2 and has built ROS2 node stacks spanning control, perception, teleop, tactile sensing, and imaging.”
Senior Robotics & Embodied AI Engineer specializing in closed-loop perception-to-action systems
Santa Clara, CA · 9y exp
Amazon · University of Denver
“Robotics software engineer who built the behavior-tree orchestrator for the Vulcan Stow robotic system, migrating from a state machine to significantly improve testability. Experienced with ROS 1 and Baidu Apollo workflows (rosbag, LiDAR/image extraction) from self-driving simulation work at LG Silicon Valley Lab, and currently focused on stable Docker/docker-compose-based deployments with disciplined QA and hotfix processes.”
Junior Robotics & AI Engineer specializing in ROS2 autonomy and real-time computer vision
Dallas, US · 3y exp
ComputerVisionaries.ai · Northwestern University
“Robotics software engineer from Stanley Black & Decker’s autonomous team who built and deployed a ROS2-based model predictive control system for a commercial autonomous lawn mower, integrating real-time localization, Nav2 planning, and custom control under real-time constraints. Has hands-on field debugging experience (Foxglove, TF timing, covariance/noise tuning) to resolve issues that only appeared outside simulation, plus containerized deployment and CI/CD experience.”
Mid-level Robotics & Computer Vision Engineer specializing in autonomous systems and edge AI
College Station, TX · 6y exp
Mitsubishi Electric Research Laboratories · Texas A&M University
“Robotics/perception researcher (MVOS Lab, South Dakota State University) who built an end-to-end multimodal RGB-D + LiDAR pipeline for autonomous greenhouse harvesting and 3D plant phenotyping. Demonstrated strong production ownership by diagnosing motion blur with ROS-bag + OpenCV metrics and shipping an edge-deployed, scan-quality-aware workflow that boosted barcode read rate to 98% and supported ~70% autonomous pepper detection/harvesting accuracy.”