Pre-screened and vetted.
Intern Machine Learning Engineer specializing in vision-language models and robotics
“Robotics software engineer with hands-on experience building a vision-guided grasping pipeline on a 7-DOF Franka arm, implementing gradient-based IK with null-space optimization and RRT* motion planning in ROS1. Strong in sim-to-real deployment and real-world debugging: addressed frame misalignment via hand-eye calibration and a centralized TF configuration, and reduced replanning and jitter by tuning a weighted pose filter validated with rosbag replay and variance/grasp-time metrics. Also built an ESP32-based mobile robot architecture combining embedded decision-tree control with high-level commands over Wi-Fi/web.”
Mid-level Robotics Engineer specializing in autonomous mobile robots and computer vision
“Robotics software engineer with extensive ROS2 academic project experience (UMDCP), including a drone-based 3D object reconstruction system using Mast3r: built ROS2 nodes for autonomous image capture, containerized the ROS2/OpenCV stack for hardware deployment, and automated AWS uploads with compute-triggered reconstruction. Demonstrated strong sim-to-real debugging using ROS bags and PlotJuggler to correct yaw/trajectory offsets, and built a multi-node TurtleBot navigation stack using visual cues (horizon, stop-signal, and obstacle detection) feeding a cmd_vel controller.”
Junior Robotics Engineer specializing in semantic navigation and computer vision
Junior Robotics & AI Engineer specializing in ROS2 autonomy and real-time computer vision
“Robotics software engineer from Stanley Black & Decker’s autonomous team who built and deployed a ROS2-based model predictive control system for a commercial autonomous lawn mower, integrating real-time localization, Nav2 planning, and custom control under real-time constraints. Has hands-on field debugging experience (Foxglove, TF timing, covariance/noise tuning) to resolve issues that only appeared outside simulation, plus containerized deployment and CI/CD experience.”
Intern Robotics Engineer specializing in robotics testing, controls, and automation
“Robotics engineering intern and mechanical engineering master’s student who bridges hardware testing and ML/ROS2 software: built a PyTorch model to map motor test data across motor types using electrical specs (Kv/Kt/R/L) and validated it against new motors to meet strict torque/thermal accuracy targets. Also integrated CNN-based perception into ROS2 for real-time navigation and implemented MPC with time-synchronized multi-topic messaging to avoid stale-data control issues.”
Junior AI & Software Engineer specializing in robotics and ML infrastructure
“Robotics engineer from UIUC’s Intelligent Motion Lab who led the perception stack for a humanoid robotic nurse, fusing camera/LiDAR/IMU on NVIDIA Jetson Orin for real-time localization and scene understanding across six robots. Deep expertise in ROS 2 and edge ML optimization (TensorRT, CUDA, zero-copy), delivering major latency/throughput gains (10 FPS to 22+ FPS) and building fault-tolerant pipelines with gRPC offloading and real-time reliability practices.”
Entry-level Aerospace/ADCS Researcher specializing in spacecraft controls and simulation
“Robotics/control-focused candidate with hands-on ROS2 + Gazebo experience implementing MPC with online state identification on a Crazyflie drone, including fixes to camera-based position estimation. Also worked on multi-agent spacecraft formation control and constellation optimization, debugging numerical drift and redesigning leader-follower control laws to handle delayed or outdated updates; uses Docker to ensure reproducible simulation results across machines.”
Junior Controls & Motion Planning Engineer specializing in MPC, RL, and autonomous systems
“Robotics researcher focused on learning-based navigation: builds sub-goal generation and cost-to-go models (Bayesian network-based) integrated with motion planning and MPC/NMPC control. Has hands-on ROS 2 package development across vehicles, drones, and manipulators, and uses a broad simulation stack (Isaac Sim, Gazebo, MuJoCo, PyBullet, PX4) to test and integrate systems.”
Intern Robotics Software Engineer specializing in motion planning and robot perception
“Robotics software engineer with Amazon Robotics internship experience who built a visual-servoing architecture from scratch, working through multiple simulator pivots to deliver a closed-loop motion-planning and execution prototype. Currently working with ROS 2 on a medical assistive feeding robot using the Kinova Kortex platform (MoveIt2, ros2_control, Gazebo/RViz), and has demonstrated strong real-time debugging and distributed-system synchronization using Carbon and Docker.”
Intern Robotics Engineer specializing in ROS, motion planning, and embedded systems
“Robotics software engineer who delivered the Lunar ROADSTER—an autonomous bulldozing rover for lunar terrain manipulation—building the control system, path planning, and perception in ROS 2. Implemented crater detection using a YOLO model fused with ZED stereo depth to recover crater geometry, and structured autonomy around ROS 2 actions integrated into an FSM with CI/CD-backed system testing. Also has industrial robotics experience controlling a Fanuc arm for additive manufacturing and building ROS interfaces for PLC I/O.”
Staff Embedded/Automotive Systems Architect specializing in IVI, ADAS, and digital cockpit platforms
“Robotics-adjacent software engineer with hands-on ROS 2 experience building and integrating sensor nodes (IMU, GNSS, wheel encoder) and working with distributed pub/sub concepts via ROS IDL and DDS. Also has Gazebo exposure through Udacity coursework and uses Docker as a core development/deployment tool, with related experience in automotive camera-based solutions.”
Intern Machine Learning/Robotics Engineer specializing in computer vision and 3D simulation
Junior Robotics Engineer specializing in perception, control, and mechatronic prototyping
Senior Robotics & Embedded Systems Engineer specializing in ROS2 navigation and perception
Intern Machine Vision & Robotics Engineer specializing in computer vision and reinforcement learning
Senior Robotics Systems Engineer specializing in manipulation, motion planning, and real-time control
Intern Robotics & Autonomy Simulation Engineer specializing in digital twins and GNC validation
Intern Robotics Engineer specializing in autonomy, SLAM, and multi-sensor perception
Mid-level Robotics & Machine Learning Engineer specializing in vision-language-action policies
Junior Robotics Engineer specializing in ROS2 autonomy and embedded systems
Junior Robotics Software Engineer specializing in ROS 2 perception, planning, and control
Junior Robotics & AI Engineer specializing in autonomous systems and machine learning
Senior Robotics Researcher specializing in SLAM and 3D computer vision
“Robotics software engineer (10+ years ROS/ROS 2) currently leading the perception stack for Omron’s AMR fleet, including a scalable factory SLAM system that combines vision with laser SLAM to handle corridor aliasing. Strong in real-time embedded optimization on NVIDIA Jetson (CUDA + profiling) and fleet-scale validation via multi-robot Isaac Sim scenarios (USD-to-ROS 2 bridging, Nav2 in crowded scenes). Also contributed to a cloud-native reality-capture/3D reconstruction pipeline at Hilti using Docker and Kubernetes.”
Mid-level Software Engineer specializing in robotics autonomy and safety-critical systems
“Robotics software engineer working on an electric seaglider autonomy/perception stack on NVIDIA Orin, tackling multi-modal operating constraints (5–10 knots in float mode, up to ~100 knots in flight). Previously built a ROS-based multi-robot search-and-rescue system, including navigation integrated with SLAM, task allocation, and perception, and improved real-world performance by switching to a 2D planner with a velocity-obstacles controller to handle slip and timing uncertainty.”