Pre-screened and vetted.
Entry-Level Robotics Engineer specializing in ROS 2 autonomy and simulation (Isaac Sim)
“Robotics software engineer (PhD background) who owned an end-to-end autonomy stack for a 2025 GTC demo, integrating ROS2/MoveIt2 with a high-fidelity NVIDIA Isaac Sim environment for regression testing and sim-to-real validation. Has hands-on experience optimizing MoveIt2 planning (parallel pipelines + evaluation metrics) and building outdoor Nav2 localization using dual EKF with GNSS and LiDAR/IMU sensor fusion; currently building simulation environments at Richtech Robotics.”
Mid-level Robotics & ML Engineer specializing in perception, control, and scalable systems
“Robotics software engineer/researcher focused on perception, SLAM, and sensor fusion, with hands-on experience taking systems from simulation to embedded/real-time deployment. Led transparent-surface (glass) detection using GDNet and achieved a major real-time speedup (~7–9 FPS to ~30 FPS) while preserving >90% recall, and has built ROS-based EKF GPS/IMU fusion plus profiled/optimized visual SLAM for performance and memory stability. Also brings production-style deployment skills via Docker/Kubernetes orchestration of ML inference services with autoscaling and model update rollouts.”
Junior Robotics Software Engineer specializing in ROS 2, controls, and applied AI
“Robotics software engineer with 2+ years across ROS1/ROS2 projects spanning humanoid behavior engines and agricultural robots. Built an LLM-driven, ROS2-lifecycle-based decision system plus micro-ROS firmware on Teensy for modular sensors/motors, adding health monitoring that improved reliability 10x. Strong simulation/testing and deployment discipline (Gazebo, 95% test coverage, Docker + AWS Greengrass/ECR, CI/CD) and demonstrated localization expertise with EKF sensor fusion achieving <0.5% error.”
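Several of these profiles cite EKF-based GPS/IMU fusion for localization. As a minimal illustrative sketch of the predict/correct loop they describe (a 1-D toy model; all matrices, noise values, and numbers here are hypothetical, not any candidate's code):

```python
import numpy as np

# Minimal 1-D EKF fusing IMU acceleration (prediction) with GPS position
# (correction). Purely illustrative: motion model, Q, and R are made up.

def ekf_step(x, P, accel, z_gps, dt, q=0.1, r=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    B = np.array([0.5 * dt**2, dt])          # control input: IMU acceleration
    Q = q * np.eye(2)                        # process noise covariance
    H = np.array([[1.0, 0.0]])               # GPS observes position only
    R = np.array([[r]])                      # GPS measurement noise
    # Predict with the IMU, then correct with the GPS fix.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    y = z_gps - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for t in range(100):
    # Truth: robot at rest at 5 m; GPS fixes wobble around it.
    x, P = ekf_step(x, P, accel=0.0,
                    z_gps=np.array([5.0 + 0.3 * np.sin(t)]), dt=0.1)
```

The filter smooths the noisy GPS fixes toward the true position while the IMU drives the prediction between fixes; a full GPS/IMU stack extends the same loop to 2-D/3-D pose states.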
Mid-level Software Engineer specializing in systems, cloud, and applied machine learning
“Robotics software engineer focused on ROS 2 localization/SLAM: built a particle-filter (Monte Carlo) localization system in Python with likelihood-field modeling to handle noisy LiDAR and dynamic environments. Strong in debugging ROS 2 integration issues (tf2 frame sync, DDS/QoS message reliability) and in profiling/optimizing pipelines to reach real-time performance (~10 Hz) using precomputation and KD-trees.”
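The likelihood-field weighting with KD-tree precomputation this profile describes can be sketched as follows (an illustrative toy, not the candidate's code; map, sigma, and beam values are made up):

```python
import numpy as np
from scipy.spatial import cKDTree

# Likelihood-field scoring for Monte Carlo localization: map obstacle
# points are indexed once in a KD-tree, so each particle's weight comes
# from nearest-obstacle lookups instead of per-beam ray casting.

def likelihood_field_weights(particles, scan_ranges, scan_angles,
                             obstacle_tree, sigma=0.2):
    """Weight each particle [x, y, theta] by how well the scan matches
    the map under a Gaussian beam-endpoint sensor model."""
    weights = np.empty(len(particles))
    for i, (x, y, theta) in enumerate(particles):
        # Project scan endpoints into the map frame for this particle pose.
        ex = x + scan_ranges * np.cos(theta + scan_angles)
        ey = y + scan_ranges * np.sin(theta + scan_angles)
        # KD-tree query is the precomputation that makes ~10 Hz feasible.
        d, _ = obstacle_tree.query(np.column_stack([ex, ey]))
        weights[i] = np.prod(np.exp(-d**2 / (2.0 * sigma**2)))
    return weights / weights.sum()

# Toy map: a straight wall along y = 1.
wall = np.column_stack([np.linspace(-2, 2, 200), np.ones(200)])
tree = cKDTree(wall)
angles = np.array([np.pi / 2])   # one beam, straight ahead of the robot
ranges = np.array([1.0])         # wall seen 1 m away
particles = np.array([[0.0, 0.0, 0.0],    # correct pose: beam hits the wall
                      [0.0, -1.0, 0.0]])  # 1 m too low: beam misses
w = likelihood_field_weights(particles, ranges, angles, tree)
```

After normalization nearly all weight lands on the correctly placed particle, which is exactly what makes the resampling step converge.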
Junior Machine Learning Engineer specializing in computer vision and LLM applications
“Built and led an autonomous driving software effort for Formula Student, owning the full autonomy stack (perception, planning, control) orchestrated in ROS. Implemented stereo depth + YOLO object detection, RRT/RRT* planning, and a robust SLAM pipeline (Kalman filter, submapping) while leveraging Gazebo simulation and modern deployment tooling (Docker/Kubernetes, AWS, GitHub Actions CI/CD).”
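The RRT planning mentioned in this profile can be illustrated with a tiny 2-D version in free space (fixed step size, simple goal bias; a sketch only, not the team's implementation):

```python
import math
import random

# Minimal 2-D RRT: repeatedly sample a point, extend the nearest tree
# node one step toward it, and stop once a node lands near the goal.
# Obstacle-free workspace; all parameters are illustrative.

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=5000, bounds=(0.0, 10.0)):
    random.seed(0)                       # deterministic for the example
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # Sample uniformly, with a 10% bias toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        # Extend one fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk parent links back to the start to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

RRT* adds a rewiring step on top of this loop so path cost keeps improving as samples accumulate.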
Mid-level Robotics Software Engineer specializing in autonomous perception and sensor fusion
“Robotics engineer with Honeywell and Tata Motors experience deploying ROS/ROS2 autonomous mobile robot fleets into live factory environments, integrating sensors, safety PLCs, and on-prem services. Known for solving end-to-end latency and stability issues (including network spikes under load) using gRPC, Docker, and improved diagnostics, cutting diagnosis time from hours to minutes and achieving sub-150 ms control response.”
Junior Controls & Autonomy Engineer specializing in robotics and trajectory optimization
“MS thesis work in the University of Washington Autonomous Controls Lab building a full quadrotor guidance/navigation/control stack, including high-fidelity dynamics modeling and an SCP trajectory optimizer made robust to wind via trust regions and MPC-style replanning. Also built an autonomous RC car using ROS on Jetson Xavier with ZED stereo/VIO, implementing perception (point cloud filtering/clustering) and state estimation while addressing real-time synchronization and latency challenges.”
Mid-level Applied AI Engineer specializing in knowledge graphs, GraphRAG, and urban mobility
“ML/NLP practitioner focused on knowledge-graph-based retrieval for LLM question answering, including an urban/autonomous-vehicle decision-making use case. Built a hierarchical GraphRAG + vector database system and an entity-resolution pipeline that blends spatial and semantic similarity, validated using LLM-generated synthetic datasets; uses Python tooling like RDFLib, GraphDB, OpenAI APIs, and LangChain.”
Junior Robotics Engineer specializing in UAV control, MPC, and SLAM
“Master’s robotics candidate at Northeastern (Silicon Synapse Lab) who built and tuned an NMPC for the M4 multi-modal morphobot to achieve aggressive high-speed (>10 m/s) flight maneuvers and hover under a full rotor failure, using MATLAB/CasADi/Simulink/Simscape with IPOPT. Also has ROS/ROS 2 experience spanning SLAM/navigation on a UGV and GPS/IMU sensor fusion plus dead reckoning with custom ROS 2 nodes/messages, with a strong simulation-first and real-time debugging approach.”
Mid-level AI Engineer specializing in multi-agent LLM systems and multimodal tutoring
“LLM/agentic systems builder who has deployed multi-agent educational chatbots using LangChain + LangGraph, with LangFuse-based tracing and FastAPI hosting. Focused on production reliability and performance (latency reduction via agent decomposition and caching) and on evaluation/testing (routing test scenarios, LLM-as-judge). Partnered with the product team to add image understanding, parsing and storing images in S3 and expanding chatbot coverage to 30+ books with images.”
Senior Robotics Software Engineer specializing in autonomous navigation and robotic manipulation
“Robotics software engineer with deep ROS/ROS 2 autonomy experience across warehouse fleets (Knapp delivery robots and quadrupeds), spanning SLAM, EKF-based sensor fusion localization, Nav2, and behavior-tree mission orchestration. Built a simulation-first testing approach using Isaac Sim Replicator with Dockerized, statistically analyzed repeat runs to catch nondeterminism, and personally owned real-world validation. Also developed a custom UR10 singularity-check ROS node based on manipulability.”
Junior Robotics Researcher specializing in vision-based manipulation and learning-based control
“Robotics software candidate with experience spanning simulation (MuJoCo, Gazebo, Webots) and ROS1/ROS2 development, including hardware-oriented work on a hexapod and a Mecademic Meca500 R3 arm. Built a visually guided interactive indoor robot system using a CV pipeline plus POMDP + imitation learning with PPO-based residual RL, and has practical debugging experience improving LiDAR SLAM stability and migrating sensor interfaces from ROS1 to ROS2.”
Mid-level Automotive & Robotics Test Engineer specializing in ADAS validation and ROS
“Robotics software developer with hands-on ROS experience building a timer-driven closed-loop controller for a differential-drive robot in Gazebo, including square/figure-8 trajectory planning and RViz/rqt_graph-based debugging. Currently extending the system with LiDAR-based obstacle detection, safety overrides, and reactive velocity arbitration for collision-free motion.”
Mid-level Software & Robotics Engineer specializing in AGVs, perception, and motion planning
“Robotics software engineer with real customer deployment impact at Dematic, improving AGV front-guided steering, localization sensor fusion, and control-loop performance while integrating with Beckhoff PLC safety systems. Also built a multi-robot ROS milling cell in graduate work, combining URDF/Gazebo simulation, MoveIt/OMPL planning, ROS performance profiling, and CNN-based defect detection to drive coordinated robotic milling.”
Junior Machine Learning Engineer specializing in GPU-accelerated computer vision
“Robotics software lead from Texas A&M Aggie Robotics who built WoopLib, a SLAM-based vision/navigation library using PID-based pure pursuit control. Has hands-on ROS/ROS2 and Jetson Nano experience integrating Intel RealSense (T265/D435i) with wheel odometry for accurate state estimation, including compiling deprecated sensor support from source and optimizing by moving to Python with C++ bindings and serial streaming to a microcontroller.”
Entry-Level Robotics Researcher specializing in autonomous vehicles, SLAM, and motion planning
“Robotics/AV engineer with strong ROS2 and autonomy stack integration experience, including bringing Autoware Universe up on a real Lexus autonomous vehicle platform. Also built a hierarchical reinforcement learning proof-of-concept for Boston Dynamics Spot (navigation + manipulation) and tackled sim-to-real challenges by implementing PD torque conversion for Jetson-based hardware; improved localization accuracy via GNSS+EKF fusion with a reported 28% drift reduction.”
Senior Robotics Researcher specializing in neurosymbolic robot learning and manipulation
“Robotics software researcher who led a Boston Dynamics SPOT project on non-prehensile manipulation of heavy boxes, combining MuJoCo-based RL, ViT-based perception, and SPOT SDK control; the work is under review for ICRA 2026. Also built a ROS planning-and-learning stack on a LoCoBot using PDDL task planning, RTAB-Map SLAM, MoveIt motion planning, and RL to recover from execution failures.”
Mid-level Robotics Software Engineer specializing in ROS/ROS2 systems
“Robotics software engineer focused on production-deployed industrial automation, owning robot behavior end-to-end across integration and production support. Has hands-on experience coordinating multiple robots with PLC safety, conveyors, and vision, using state-machine orchestration, deep debugging (logging/I-O tracing), and performance tuning to achieve stable run-at-rate operation. Also builds ROS/ROS 2 distributed systems in C++/Python and tunes DDS/QoS for reliable multi-machine communication.”
Junior Machine Learning & Edge AI Engineer specializing in IoT and robotics
“Robotics/ROS2-focused early-career engineer who built a stereo visual-odometry SLAM system for autonomous navigation and optimized it to run reliably in real time on Raspberry Pi. Strong in sensor fusion (camera+IMU), ROS2 debugging/profiling, and distributed robotics/IoT pipelines (ROS2 + MQTT + cloud), with added experience extracting WiFi CSI for sensing/localization and shipping via Docker + GitHub Actions CI/CD.”
Junior Mechatronics Engineer specializing in robotics, embedded systems, and safety-critical automation
“Robotics software engineer who worked on NYU’s Medi Assist robot, owning navigation sensor bring-up (LiDAR/radar/IMU) and SLAM stability, plus delivering a safety-critical braking system. Built a YOLOv8 perception pipeline on Jetson Nano and wrote STM32 firmware to actuate brakes, achieving ~50 ms reaction time, and implemented diagnostics/health checks and reliable inter-board comms (ROS2 + UART with checksums/heartbeats).”
Junior Robotics Engineer specializing in computer vision and SLAM
“Robotics software engineer focused on ROS2 autonomy, with hands-on work building a monocular visual odometry system on KITTI (including GPS-based scale correction and RViz trajectory visualization) and an end-to-end Gazebo simulation integrating URDF, slam_toolbox, and Nav2. Demonstrates strong practical debugging skills around TF frames, lifecycle nodes, and Gazebo plugin/version compatibility.”
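The GPS-based scale correction this profile mentions addresses monocular VO's inherent scale ambiguity: the estimated translation is rescaled by the ratio of GPS displacement to estimated displacement between frames. A minimal sketch (function and variable names are illustrative, not from the candidate's KITTI pipeline):

```python
import numpy as np

# Monocular VO recovers translation only up to an unknown scale, so each
# per-frame translation is rescaled using consecutive GPS fixes.

def scale_corrected_step(prev_pose, vo_translation, gps_prev, gps_curr):
    """Rescale an up-to-scale VO translation with metric GPS displacement."""
    gps_step = np.linalg.norm(gps_curr - gps_prev)   # metric distance moved
    vo_step = np.linalg.norm(vo_translation)         # up-to-scale distance
    scale = gps_step / vo_step if vo_step > 1e-9 else 0.0
    return prev_pose + scale * vo_translation

pose = np.zeros(3)
# VO says "moved one unit along x"; GPS says the vehicle moved 2.5 m.
pose = scale_corrected_step(pose, np.array([1.0, 0.0, 0.0]),
                            gps_prev=np.array([0.0, 0.0, 0.0]),
                            gps_curr=np.array([2.5, 0.0, 0.0]))
```

The corrected step preserves the VO direction while adopting the GPS magnitude, which keeps the accumulated trajectory in metric units.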
Junior Robotics Engineer specializing in perception, controls, and industrial automation
“Robotics software engineer who led development of a vision-based end-effector stability/vibration analysis tool using phase-based motion magnification and frequency-domain analysis (FFT/Bode) to uncover resonances missed by motor-only diagnostics. Experienced with ROS 2 C++ perception/navigation (ArUco + PnP) and real-time industrial integration, including optimizing a 1 kHz EtherCAT/Beckhoff PLC/Modbus TCP diagnostic pipeline and designing deterministic interfaces across heterogeneous subsystems.”
Mid-level Software Engineer specializing in Robotics, AI/ML, and XR
“Candidate states they have worked on many robotics software system projects and have overcome many technical challenges, but declined to provide any project details during the screening and ended the interview early.”