Pre-screened and vetted.
Junior Robotics Engineer specializing in perception, SLAM, and reinforcement learning
“Robotics software engineer with hands-on ROS 2 experience across drones, mobile robots, and manipulators. Built an end-to-end visual SLAM + navigation stack on a real robot using RTAB-Map, and implemented ROS 2-based coordination between a mobile robot and manipulator for camera-triggered object pickup. Optimizes real-time behavior by moving performance-critical code to C++ and deploying TensorRT-compressed models.”
Mid-level Software Engineer specializing in backend, cloud, and event-driven systems
“Robotics software engineer focused on backend and distributed systems for real-time robot operations, including sensor ingestion, robot state management, and robot-to-cloud communication. Hands-on with ROS/ROS2 integration and real-time navigation debugging, plus production-grade monitoring, CI/CD, and containerized deployments (Docker/Kubernetes) to improve stability and performance.”
Mid-level Autonomous Robotics Engineer specializing in ROS2, SLAM, and perception
“Robotics software engineer with deep ROS2 experience who built a modular autonomous robotics stack (perception/sensor fusion, localization+mapping, and planning). Led development of a LiDAR+camera fusion and multi-object tracking pipeline (PCL + YOLO + Kalman filtering) and debugged real-time SLAM/localization issues via QoS/timestamp synchronization, EKF tuning, and SLAM Toolbox parameter optimization using Gazebo/RViz and rosbag replay.”
Senior Robotics Software Engineer specializing in ROS 2 autonomy and distributed systems
“Robotics software engineer with 2.5 years at the Army Research Lab building production tools and cloud infrastructure for large-scale ROS/Unity simulation on AWS. Created a Python GUI to streamline analysis of massive (100 GB) ROS bag/MCAP datasets and has deep ROS2/Nav2 performance debugging experience (executor/QoS/TF tracing). Also built an in-house ROS perception pipeline for an assembly-line use case, reaching 92% accuracy.”
Intern Data Scientist specializing in robotics localization and SLAM
“Robotics/embodied-AI practitioner who built a TurtleBot3 LiDAR-fingerprint localization pipeline end-to-end (autonomous data collection + multi-head NN), achieving ~30 cm error in a 10×10 m space. Also has industry experience at Infineon building large-scale production data/AI pipelines and rapidly fixing a deployed recommendation system by correcting upstream data normalization, improving accuracy by 20%+.”
Mid-level Design Engineer transitioning to Robotics & Reinforcement Learning
“Robotics software engineer with hands-on depth across simulation (Isaac Sim, Gazebo, Webots), ROS/ROS2 integration, and real-time embedded control. Led an end-to-end quadruped (12-motor) Isaac Sim build, from Fusion 360 CAD-to-URDF conversion through physics tuning, to achieve a stable walking gait, and optimized a 5-servo arm by cutting IK compute time by 60%+ using lookup tables to eliminate jitter.”
Intern Robotics Engineer specializing in autonomous navigation and perception (ROS2)
“Recent UC Riverside master’s graduate focused on uncertainty-aware imitation learning for indoor robot navigation, building a full ROS 2 Humble stack (perception, learned policy, uncertainty estimation) with adaptive speed control. Demonstrated strong real-time robotics debugging and systems skills, achieving 92% autonomous navigation success across 100 trials and improving reliability through uncertainty calibration and SLAM/loop-closure optimization.”
Junior Robotics & AI Engineer specializing in autonomous systems and 3D perception
“Robotics software engineer who led system design for an Autonomous Trash Collecting ASV presented at the IEEE ICRA 2025 “Robots in the Wild” workshop, integrating YOLOv8-based perception with ROS autonomy logic to detour for trash while preserving a scientific survey mission. Also built ROS2 UAV capabilities combining ArUco detection, RTAB-Map SLAM, and PX4 integration, with strong simulation (Gazebo/VTD/MSC Adams) and CI/CD QA automation experience.”
Mid-level Robotics Controls Engineer specializing in ROS2 real-time motion control
“Robotics software engineer at Earthwise building a full ROS2 Humble warehouse AMR stack for bin picking—owning perception (Livox/Orbbec/RPLidar fusion + calibration), Nav2 navigation with custom planners/behavior trees, and application-layer nodes (barcode scanning, safety monitoring, web HMI). Demonstrated strong real-world debugging and performance tuning (sub-cm AprilTag docking; ~80% reduction in localization failures) plus solid simulation/CI practices (Gazebo + Docker + GitHub Actions).”
Mid-level Robotics & Computer Vision Engineer specializing in perception and industrial automation
“Robotics software/vision engineer with hands-on experience building motion-tracking systems that fuse camera-based 3D tracking with IMU orientation to reproduce tool motion for automated spray painting. Has implemented ROS nodes/packages for Orbbec camera streaming and SAM3-based segmentation, plus CAN bus coordination between robots and Dockerized deployment for a pick-and-place robotic cell.”
Junior Robotics Software Engineer specializing in fleet management and multi-robot coordination
“Robotics software engineer (2 years) at a startup building a universal fleet management system, owning core integrations and real-time data pipelines for heterogeneous AMR/AGV fleets. Implemented Kalman-filter-based collision prediction integrating RTLS for human-driven forklifts, built MQTT microservices aligned with VDA5050, and is now architecting a PostGIS-backed path-planning service for dynamic, traffic-aware routing with future ML optimization.”
Intern Robotics & Autonomous Driving Engineer specializing in ROS and computer vision
“Robotics software engineer with multi-robot perception and ROS integration experience, including work on CoLoc-Net, improving global visual descriptors (DINOv2-SALAD style) and training a metric head for scale-aware 3D pose/odometry with a UKF backend. Built a ROS node/GUI to synchronize monocular vision and radar outputs at ITRI, and independently created a custom camera driver to enable reliable image sharing across AgileX Limo robots under real hardware constraints.”
Mid-level Machine Learning Engineer specializing in LLM platforms and robotic perception
“Built and shipped a production multi-agent personal financial assistant at AlphevaAI on AWS ECS, combining FastAPI microservices, Redis/SQS orchestration, and Pinecone-based hybrid RAG (semantic + BM25) to ground financial guidance. Improved routing accuracy with an embedding-based SetFit + logistic regression intent classifier feeding an LLM router, and optimized UX with live streaming plus cost controls via model tiering and caching.”
Junior Robotics & AI Engineer specializing in perception, planning, and manipulation
“Robotics software engineer who led the full perception/manipulation/planning stack for an autonomous watermelon-harvesting robot, including ripe-vs-unripe instance segmentation deployed on Jetson AGX Orin with TensorRT and quantization. Deep ROS 2 experience (custom ZEDx mask driver, LiDAR+stereo fusion, MoveIt 2/Nav2/ros2_control) and proven real-time optimization—cut latency ~40% and achieved consistent 7-second pick cycles in outdoor field conditions.”
Junior Data Scientist and Robotics Perception Engineer specializing in GenAI and autonomous systems
“Robotics software architect who built an automated pick-and-place palletizing prototype at BLACK-I-ROBOTICS, spanning perception (multi-RealSense fusion, segmentation, 6D pose, ICP), GPU-accelerated motion planning (MoveIt 2 + NVIDIA CuRobo), grasp generation, and safety (human detection + safe mode). Also brings cloud/CI/CD depth from VERIDIX AI (AWS Cognito/Lambda/ECS and CodePipeline stack) and demonstrated strong debugging chops by reducing outdoor rover EKF drift to ~5 cm via Allan variance-based IMU tuning.”
Mid-level Robotics Software Engineer specializing in perception, sensor fusion, and motion planning
“Robotics/Perception Software Engineer at Berkshire Grey who built and hardened a production ROS-based perception + supervision stack for autonomous trailer-unloading robots (RGB-D + LiDAR), including grasp/geometry estimation and segmentation. Diagnosed real-time behavior issues by instrumenting ROS pipelines, then implemented runtime RANSAC-based compensation for LiDAR yaw bias and TF-window validation; also supports containerized deployment on Kubernetes and is actively porting the system from ROS1 to ROS2.”
Intern Robotics Software Engineer specializing in SLAM and edge deployment
“Robotics software engineer who built a full LiDAR SLAM pipeline from scratch in C++ (ICP, pose graph optimization, loop closures) and validated it quantitatively against ground-truth datasets. Extensive ROS2 experience from academic projects and an internship building a localization system, plus practical deployment work using Docker across x64 and ARM edge devices; also trained RL policies for TurtleBots in Gazebo.”
Mid-level Robotics Software Engineer specializing in perception, localization, and autonomous navigation
“Robotics software engineer with hands-on ROS2 experience building perception-driven navigation for AMRs, integrating YOLO11 + Depth Anything V2 and multi-sensor fusion (LiDAR/RGB-D/IMU) to boost pose accuracy by 30%. Strong in real-time debugging and edge deployment on NVIDIA Jetson (ONNX/CUDA), plus cloud-enabled telemetry (Azure) and simulation-driven testing (Isaac Sim) that cut physical test cycles by 25%.”
Junior Robotics & AI/ML Engineer specializing in multi-agent reinforcement learning and computer vision
“Robotics software candidate whose thesis focused on multi-robot warehouse coordination using MAPPO reinforcement learning, trained in simulation (LBF environment, Isaac Sim/RViz) and deployed onto three physical robots running in real time. Built custom ROS 2 Humble nodes for multi-robot control with namespaces, TF broadcasting, and an RL pipeline integrating LiDAR odometry and camera observations.”
Intern Robotics Software Engineer specializing in SLAM, perception, and motion planning
“Robotics software engineer with hands-on experience building Visual-Inertial SLAM and ROS2 sensor-fusion pipelines for autonomous warehouse forklifts (ArcBest), including rigorous calibration (AprilTags, Allan variance, temporal sync) and recovery features like pose injection. Also implemented RL-based local planning at RollNDrive using Isaac Sim with domain randomization to bridge sim-to-real, restoring real-world navigation success to ~90% after initial deployment.”
Mid-level Robotics Software Engineer specializing in autonomous systems and perception
“Robotics software engineer with a Master’s in Robotics who built a digital twin of an excavator by creating a high-fidelity URDF (kinematics, joint limits, inertial properties) to stress-test controllers near saturation/limit conditions using ROS2 + MoveIt. Has hands-on ROS/ROS2 experience building perception (AprilTag/OpenCV) and sensor interface nodes (IMU/encoders/CAN), plus data-driven debugging and SLAM tuning for GPS-denied navigation using ROS bags and loop-closure validation.”
Mid-level Full-Stack Software Engineer specializing in cloud microservices and data engineering
“Software engineer with robotics and data-platform experience from CVS Health, spanning Java/Spring Boot microservices, secure APIs, React dashboards, and Snowflake/SSIS ETL optimization. Hands-on ROS 2 developer who built real-time LiDAR obstacle-detection nodes, improved SLAM performance, and coordinated multi-robot communication using DDS, with simulation/testing via Gazebo and CI/CD deployments using Docker and Jenkins.”
Mid-level Software Engineer specializing in Machine Learning and LLMs
“Software engineer with robotics and ML background (BS Software Engineering w/ Robotics minor; MS CS w/ ML minor) who built autonomy-focused student robotics projects combining RFID + camera sensing, path planning (Dijkstra), and fuzzy logic, and experimented with neural-network approaches. Also brings production-grade software practices from a Dell software analyst role, emphasizing maintainability, documentation, and testing for real-time systems.”
Senior AI/ML & Robotics Research Engineer specializing in SLAM and multi-modal perception
“Robotics engineer who built a smart campus tour robot on a Kobuki Turtlebot using ROS 1, implementing a full navigation stack (semantic world model, A* planner, tour executor, path follower) and integrating SLAM (gmapping) plus a hybrid reactive safety controller. Experienced taking systems from Gazebo simulation to real hardware, including extensive real-world debugging and Docker-based development to handle ROS/Ubuntu version constraints; planning a move to ROS 2 on Turtlebot 4.”