Pre-screened and vetted.
Intern Computer Vision/Perception Engineer specializing in LiDAR and autonomous systems
“Robotics/AV-focused engineer who built an end-to-end gesture controller for a GEM e2 autonomous vehicle using YOLOv8 pose and ROS, including model training, ROS perception nodes, and a safety-oriented state machine (stop override + hold-to-register). Also has internship experience at Intramotev integrating LiDAR object detection via Redis pub/sub and performing sensor-frame calibration (roll/pitch correction using ground-plane normals), plus Dockerized deployments and Gazebo-based testing.”
Mid-level Robotics & Controls Engineer specializing in safe autonomy and perception-aware motion planning
“Robotics software engineer who built an open-source, real-time Cartesian controller for Universal Robots UR5/UR5e, targeting sub-mm accuracy at 500 Hz within ROS2/ros2_control. Demonstrates strong real-time debugging skills (timing profiling, singularity handling with Tikhonov regularization) and sim-to-real iteration using Gazebo/Isaac Sim plus physical hardware tuning; also has ROS1 experience building URDF/xacro and EKF configs for an underwater vehicle and has developed drone/robot packages.”
Intern Robotics/ADAS Engineer specializing in perception, sensor fusion, and state estimation
“Robotics software engineer who built a multi-agent dense warehouse mapping system in ROS 2, including LiDAR-camera fusion SLAM, timestamp-based synchronization, and DDS-based inter-robot pose/keyframe exchange under bandwidth constraints. Also applied Gaussian Splatting for selective photorealistic dense reconstruction and optimized real-time performance with node composition, bounded queues, and QoS tuning; experienced with Gazebo/CARLA/Unity simulation and Dockerized ROS 2 deployments.”
Mid-level Robotics Planning & Control Engineer specializing in UAV autonomy
“Robotics software engineer focused on autonomy for fixed-wing and quadrotor UAVs, with deep experience in planning and advanced control (geometric control, trajectory optimization, nonlinear MPC). Recently designed an energy-aware NMPC for an autonomous glider, building a custom simulation/visualization framework to tune reward formulations. Has hands-on field deployment experience integrating ROS with PX4, optimizing node architecture for zero-copy performance, and building heterogeneous robot comms using Zenoh.”
Mid-level Robotics Software Engineer specializing in ROS2 autonomy and manipulation
“Robotics software engineer with hands-on experience building ROS2-based distributed multi-robot systems, including task allocation (distance cost matrix) and navigation using Nav2/DWA. Previously on Vicarious’s grasping team, implementing real-world box-picking with vacuum suction and force/torque sensing plus recovery behaviors. Also brings strong engineering hygiene with Gazebo simulation, Dockerized deployments across offices, and GitHub Actions CI/CD with unit/integration testing.”
Mid-level Software Engineer specializing in embedded AI and full-stack systems
“Robotics software engineer who built and owned core navigation components for a TurtleBot in ROS/ROS2 and Gazebo, including an RRT-based planner, waypoint-to-velocity motion planning, and PID trajectory tracking. Demonstrates strong real-time debugging skills (control-loop timing under CPU load), costmap/occupancy-grid tuning, and distributed ROS2 communication design using DDS/QoS, plus Docker and CI/CD automation experience from Keysight.”
Intern Software Engineer specializing in C++ systems and performance optimization
“Robotics software intern who worked on a customized ROS1-based middleware, building ROS node orchestration and a ROS topic monitoring system. Improved intra-machine ROS topic performance by using shared memory and circular buffers instead of socket-based IPC, and integrated nightly Jenkins CI with Groovy/Python to run tests and produce code coverage reports.”
Executive Automotive Software Leader specializing in SDV, OTA, and embedded-cloud-AI platforms
“Automotive software and OTA/infotainment platform leader who has repeatedly built new lines of business as an intrapreneur, most recently taking an infotainment app marketplace from concept to production in under 7 months with $3M seed funding and delivering ~$200M ROI while scaling the team from 0 to 90. Deep hands-on experience solving OTA fragmentation across ECUs/telematics units and multiple OS/backend combinations, with 18 patent applications submitted; currently exploring an AI-driven platform to automate OTA software qualification and cut release cycles from 9–18 months to ~2 weeks.”
Mid-level Robotics Software Engineer specializing in multi-robot control and automation
“Robotics software engineer with ~7 years of ROS/ROS2 experience spanning dual-arm metal additive manufacturing and prior work on the DARPA Subterranean Challenge. Developed in-house multi-arm collision/trajectory planning and achieved a major calibration improvement (from ~6 cm error to ~0.5 mm) via ICP point-cloud registration, with strong simulation/digital-twin, SLAM, and deployment (Docker/CI/CD) exposure.”
Mid-level Robotics Researcher specializing in kinodynamic motion planning
“Robotics software engineer focused on real-time estimation/control and motion replanning, currently integrating a factor-graph-based estimation/control stack with sampling-based replanning in a ROS environment validated on both MuSHR hardware and MuJoCo simulation. Strong in distributed-system debugging (rosbags/logging, controlled test scenarios) and ROS performance patterns (nodelets, TF/TF2), with prior multi-robot experience from SSL RoboCup using custom UDP protocols.”
Intern Mechatronics/Robotics Software Engineer specializing in ADAS and ROS2
“Robotics software engineer with experience spanning embedded C++ control on microcontrollers and ROS/ROS2 production systems in automotive and marine robotics contexts (Harbinger Motors, Impossible Metals). Has deep hands-on experience debugging real-time image pipelines (DDS/QoS tuning, HIL stress testing) and building large automated test suites (1200+ tests) plus CI/CD (Dockerized Playwright tests on Jenkins).”
Mid-level Robotics & Computer Vision Engineer specializing in ADAS and real-time perception
“Robotics/ADAS engineer who built an assistive feeding robot with reliable 3D mouth tracking (RealSense + MediaPipe) and ROS 2 integration to a WidowX250s arm, solving depth-noise, timing, and workspace/singularity issues for stable low-latency behavior. Also optimized a real-time lane-keeping controller at Hyundai using signal logging/replay, filtering (LPF/Kalman), and feedforward+PI tuning, with experience across SIL/HIL and CAN-based ECU integration.”
Junior Software Engineer specializing in full-stack systems, ML, and robotics perception
“Robotics software engineer with autonomous driving lab experience at UCSD, building and optimizing ROS2 perception and control pipelines (camera-based real-time object detection) with a strong focus on low-latency performance and robust message interfaces. Also brings production deployment experience from Hewlett Packard Enterprise, using Docker and Kubernetes for containerized environments and deployment pipelines.”
Junior Robotics & ML Engineer specializing in robot learning and simulation
“Robotics engineer with a 2024 internship building an end-to-end software stack for an autonomous humanoid robot that follows natural-language audio commands to make coffee and deliver snacks, including perception (OpenCV), mapping, and ROS Navigation. Also contributing to a robotics foundation model effort by building data preprocessing pipelines using GroundingDINO and SAM2, and has multi-robot coordination experience with algorithms designed to handle real-world communication drops.”
Mid-level Robotics Engineer specializing in autonomous navigation, SLAM, and MPC control
“Autonomous marine surface algorithms engineer at CURLY contributing across the full autonomy stack in ROS 2 (C++/Python), from GNSS-IMU InEKF localization (100 Hz) and GTSAM object-level SLAM to semantic mapping and A*/Lie-group MPC planning/control. Strong focus on real-time optimization for constrained embedded hardware, with disciplined debugging/validation using ros2_tracing, rosbag2 replay, and Gazebo, and reproducible deployment via Docker/CI.”
Mid-level Robotics Software Engineer specializing in real-time control and perception
“Robotics software engineer focused on controls and motion planning for autonomous flight systems using ROS 2 (rclcpp), Gazebo/RViz, and BehaviorTree.CPP. Has hands-on real-time control experience (1 ms loop rate) and has improved system performance by tracing latency issues and refactoring vision components (singleton camera init). Also built low-latency Ethernet/TCP comms on top of the IgH Ethernet stack and uses digital-twin simulation (Gazebo, MuJoCo; beginner Isaac Sim) to validate algorithms.”
Entry-level Robotics & Automation Engineer specializing in robot learning and manufacturing automation
“Robotics software engineer focused on real-time teleoperation and high-quality robot-learning data pipelines, including synchronized multimodal sensing (RGB-D, tactile, joint states) for Diffusion Policy training on a bimanual ALOHA robot. Strong ROS practitioner who debugs real-time control issues with ROS tooling and builds simulation environments in Isaac Lab and PyBullet; also packages data-collection stacks with Docker.”
Mid-level Robotics & Software Engineer specializing in robot learning and simulation
“Robotics software engineer/researcher with hands-on real-to-sim experience for deformable manipulation: led real-world data collection and diffusion policy deployment on an ALOHA robot, then built a MuJoCo + Gaussian-splat digital twin with point-cloud alignment. Also brings 3 years of production software engineering experience, including Docker/CI/CD and a zero-downtime blue-green upgrade of a core API router, plus ROS/ROS2 work spanning autonomous vehicles and UR20 pick-and-place with MoveIt2.”
Mid-level AI/ML Engineer specializing in robotics perception and AR/VR systems
“AI engineer with robotics perception experience at Forterra, building and deploying moving-object/obstacle detection models into real-time robot pipelines. Addressed training crashes/latency via sub-batch training and optimizer tuning, and improved debugging using ROS/ROS2 tooling with 3D voxel visualization and color-coded validation.”
Mid-level Robotics & Control Researcher specializing in safe control for UAVs and manipulators
“Robotics software engineer who led an end-to-end learning-based UAV controller project, addressing oscillation issues through simulation, gain tuning, and a shift to geometric control. Has ROS experience spanning UAV mocap-based perception and an autonomous driving stack (LiDAR, mapping, AMCL, controller), plus real-world distributed ROS communication over WiFi with performance troubleshooting.”
Junior Robotics Software Engineer specializing in ROS, embedded control, and SLAM
“UCLA RoMeLa research assistant (since Oct 2025) building an embedded control and sensor-data platform for multi-robot coordination in a simulated warehouse. Deep hands-on experience with ROS on NVIDIA Jetson under RTOS constraints, secure MQTT/TLS telemetry, and SLAM performance optimization (including ORB-SLAM3) validated in Gazebo and deployed via Docker/Kubernetes and CI/CD.”
Senior Software Engineer specializing in mapping and localization for robotics/autonomous vehicles
“Robotics software engineer with hands-on GPU/CUDA vision work (solo-built a 4-fisheye panorama stitcher using camera intrinsics/extrinsics) and mapping/localization expertise, including radar-driven pose-graph mapping optimized with Ceres. Strong ROS background (Cartographer, AMCL, TEB) and demonstrated localization improvements by biasing AMCL with Cartographer to reduce drift; experience shipping modules deployed across large robot/vehicle fleets (e.g., retail scanning robots and automotive).”
Junior Data Scientist and ML Researcher specializing in Transformers, multimodal AI, and autonomy
“Autonomous robotics student who built an end-to-end ROS2 semantic goal navigation system as a solo course project, integrating CLIP-based vision-language understanding with SLAM Toolbox and Nav2 to execute natural-language commands in Gazebo/RViz. Also implemented and tuned an RRT planner from scratch in Python and uses Docker plus GitHub workflows for reproducible, tested robotics codebases.”
Junior Robotics & Computer Vision Engineer specializing in simulation and embedded systems
“Robotics software contributor with hands-on experience building a Mars rover simulation in Gazebo and ROS 2, integrating LiDAR and image segmentation for autonomous navigation and SLAM (Nav2). Comfortable debugging low-level sim/model integration issues (URDF/XML) and building sensor-data pipelines, and has also shipped a real-world telemetry setup streaming vibration data over UDP with packet-loss mitigation.”