Pre-screened and vetted.
Junior Robotics Engineer specializing in computer vision and sensor fusion
“Robotics software engineer with ~3 years of ROS experience spanning drone autonomy and perception. Recently improved drone barcode scanning by shifting to a segmentation-based approach, deploying an optimized instance-segmentation model to edge hardware (FP16 quantization, convex-hull masks) and building ROS drivers/parameters for field-tunable behavior. Has hands-on experience integrating LeGO-LOAM and calibration/TF systems, including creating RViz visualization tools to validate transforms and debugging real-world drift caused by lighting/glare.”
Junior Robotics Engineer specializing in controls, simulation, and production debugging
“Robotics software engineer who helped build a startup "robo-chef" system end-to-end, including pick-and-place simulation using ArUco-marked stations and smooth motion planning. Hands-on ROS 2 integrator across LiDAR/IMU/camera perception-to-navigation stacks (Nav2, SLAM Toolbox, ros2_control), with demonstrated ability to debug real-time timing drift and improve repeatable placement through calibration and motion blending. Uses Gazebo simulation plus Docker/CI pipelines to validate and deploy robotics software reliably.”
Junior Software Engineer specializing in cloud microservices and full-stack development
“Robotics software engineer with hands-on ROS (ROS 1) experience building sensor-processing and state-based control pipelines in Python/C++. Demonstrated measurable reliability and performance gains in autonomous navigation (cut runtime failures by 30%, reduced replanning by 35%, and improved debugging efficiency by 40%) using timing-aware state machines, message/interface discipline, and simulation/testing with Gazebo, rosbag, Docker, and CI/CD.”
Junior Robotics Engineer specializing in autonomous navigation and computer vision for agriculture
“Robotics software engineer who led an autonomous nursery management robot project at Auburn University, spanning RGB-D/IMU sensor fusion, SLAM navigation, and real-time ML for plant detection/quality assessment. Strong ROS1/ROS2 background (C++/Python) with deployment on NVIDIA Jetson, including profiling-driven optimization of YOLO segmentation for real-time behavior and multi-robot (UGV/UAV) communication using ROS2.”
Mid-level Robotics Software Engineer specializing in ROS, C++ and embedded Linux
“Robotics software lead at Icor who grew from intern to owning the end-to-end software lifecycle for a mobile manipulator platform deployed to 300+ customers globally. Deep hands-on ROS2/MoveIt2 and navigation-stack integration (URDF/TF, sensors, behavior engine) plus production infrastructure (CI/CD, OTA, field OS upgrades) and real-world performance tuning for motion planning in EOD multi-robot environments.”
Mid-level Systems Integration & Test Engineer specializing in embedded robotics and automation
“Senior engineering student leading a robotics capstone using a Jetson Nano + Yahboom DOFBOT to play whiteboard games (Tic-Tac-Toe, Hangman) via computer vision and ML. Owns the inverse kinematics and OpenCV pipeline, uses Gazebo/URDF for simulation, and is planning C++/multithreading/Pybind11 optimizations to meet real-time constraints on limited embedded hardware.”
Mid-level Robotics Engineer specializing in ROS 2, control systems, and manipulation
“Robotics software engineer with hands-on ROS2 experience across manipulation, SLAM/localization, and sensor fusion. Recently built an end-to-end hybrid force-position control system for a UFactory xArm 7 with a 6-axis force/torque sensor to enable compliant, force-guided shaft insertion, including real-time Jacobian computation, TF pipeline, and MoveIt2 trajectory execution validated on hardware.”
Entry-Level Robotics Software Engineer specializing in ROS 2 autonomy and multi-robot systems
“Robotics software engineer focused on ROS 2 multi-robot coordination, having built task allocation and reliable inter-robot communication for simulated TurtleBot3 fleets. Demonstrates strong integration/debugging skills across Nav2 + gmapping (SLAM drift, TF tree, odometry/sensor fusion) and pairs it with production-minded tooling—Docker/Kubernetes deployments and CI/CD simulation testing via GitHub Actions.”
Entry-level Robotics Research Assistant specializing in multi-agent autonomy and reinforcement learning
“ROS2/Python robotics engineer who led a 4-person team building a simulated multi-robot warehouse system (SLAM + Nav2 + centralized task allocation) in Gazebo Ignition, including a distance/priority-based controller that reduced task completion time by ~30%. Also has hands-on real-time debugging/tuning experience for both mobile robots and a MyCobot 600 Pro manipulator, plus simulation work in CARLA using RL (TD3) and Social-LSTM for pedestrian behavior modeling.”
Junior Machine Learning Engineer specializing in computer vision and robotics
“Research assistant who single-handedly built and integrated an indoor autonomous wheelchair system using NVIDIA Jetson Nano, LiDAR, and a stereo camera. Implemented a multi-sensor perception pipeline (OpenCV/PCL) with ROS-based modular nodes, TF frame management, and robust debugging via RViz/rosbag, plus simulation testing in Gazebo and Dockerized environments for portability.”
Junior Robotics/Mechatronics Engineer specializing in SLAM, motion planning, and autonomy
“Robotics software engineer focused on autonomy stacks for high-payload AMRs using ROS2/Nav2, with hands-on expertise in SLAM/localization and sensor fusion (RTK GPS, IMU, wheel odom, ZED2) to eliminate drift and stabilize real-time behavior on deployed hardware. Also built multi-robot coordination in ROS2/Gazebo and uses Docker + Git/CI-style testing to create reproducible simulation-to-hardware pipelines.”
Junior Robotics Engineer specializing in ROS2 perception and multi-sensor calibration
“Entry-level robotics software engineer/team lead with hands-on experience spanning multi-robot UAV simulation (Gazebo + PX4 SITL) and autonomous vehicle stack integration (ROS2 Humble + Autoware Universe). Has tackled real-time perception optimization (OpenCV + custom deep learning) and built robust cross-protocol communication interfaces to connect ROS2 systems with embedded ESP32 devices.”
Mid-level Robotics Software Engineer specializing in ROS, motion planning, and perception
“Robotics software engineer who built a ROS/C++ workcell stack to automate coating wooden panels with a 6-DOF arm, including trajectory generation, MoveIt/OMPL planning, and a single launch/config setup that runs in both Gazebo and on real hardware. Strong in debugging real-world planning failures (e.g., intermittent aborted/no-plan regions) through logging, planner swaps, and collision/kinematics tuning, and in designing modular ROS/ROS2 systems with versioned interfaces and translation layers for heterogeneous robots.”
Mid-level Robotics Engineer specializing in ROS2 autonomy, perception, and manipulation
“Deployment engineer at a robotics startup who owned end-to-end field deployments in greenhouse environments, including integrating humanoid robots (xArm 6), tuning perception stacks for real-world lighting shifts, and coordinating rapid fixes with hardware/software teams. Experienced debugging complex robotics integrations (LiDAR + NVIDIA Jetson + ROS2 + networking) and hardening solutions by automating configuration at boot, while also working directly with customers and training operators for ongoing support.”
Junior Robotics/Software Engineer specializing in autonomous navigation and embedded systems
“Robotics simulation/localization engineer who built a lunar crater navigation stack in ROS/ROS2 and Gazebo, including custom localization/perception/planning packages. Demonstrated strong debugging skills by using tf2 frame analysis to fix camera-to-base_link alignment, cutting heading error from 75° to 0.48°, and handled large NASA lunar imagery (~4GB) by converting/downsampling data for Gazebo.”
Intern Software & AI Engineer specializing in distributed systems and LLM applications
“Stony Brook Fall 2024 capstone contributor who built a ROS2-based warehouse mobile robot prototype, owning perception and SLAM integration end-to-end. Strong in real-time robotics optimization on Jetson Orin (TensorRT/CUDA, ROS2 tracing/Nsight) and in distributed ROS2 communications (DDS discovery/QoS, MAVLink-to-ROS2 bridging), with a full simulation/testing/deployment toolchain (Gazebo, CI tests, Docker/K3s).”
Entry-level Robotics Engineer specializing in SLAM, sensor fusion, and embedded avionics
“Robotics software engineer focused on perception/SLAM and systems integration who recently built a quasi-dynamic mapping pipeline to track and reconstruct articulated objects (e.g., drawers) from RGB video using SAM2, COLMAP SfM, and 3D Gaussian Splatting. Also has strong ROS2 sensor-pipeline experience (custom messages, MCAP rosbag deserialization, tf2) and demonstrated real-time performance tuning by accelerating an ICP-based LiDAR SLAM component ~30x (from ~3s to <100ms per frame).”
Entry-level Robotics Engineer specializing in autonomous systems and computer vision
“Robotics software engineer with ~4 years of ROS experience who implemented a real-time diffusion-policy control loop entirely in Gazebo, focusing on inference-latency reduction (warm-start + truncated denoising) for stable closed-loop execution. Has hands-on experience building custom ROS control nodes, optimizing AMR navigation (SLAM + RRT) with sensor-fusion for dynamic obstacles, and designing deterministic multi-robot coordination; also uses Dockerized ROS environments and automated simulation/benchmark pipelines.”
Mid-level Autonomy Engineer specializing in drone robotics and LiDAR SLAM
“Autonomy Engineer at Joulea Inc (Atlanta) with ~3 years building a drone autonomy stack end-to-end, spanning controls, swarm path planning, SLAM/LIO, and multi-sensor fusion (LiDAR/IMU/RTK GPS/camera). Notable work includes LiDAR degeneracy detection using Hessian-based constraints in an EKF and fusing visual odometry to reduce drift, plus ongoing LiDAR-camera synchronization and calibration.”
Mid-level Robotics/Software Engineer specializing in autonomous navigation and ROS2
“Early-career robotics software engineer with a couple years of ROS/ROS2 experience focused on agricultural mobile robots. Led integration of a Livox MID-360 3D LiDAR on the Farm-ng Amiga platform, patching ROS2 drivers/QoS and building a 3D-to-2D mapping pipeline so Nav2 could run reliable SLAM/navigation in GPS-denied greenhouse/hop-field environments, enabling stable autonomous row-following.”
Junior Robotics Engineer specializing in AI, perception, and autonomous navigation
“Robotics software engineer with 2+ years of ROS/ROS2 experience who built a mobile robot stack from scratch (Fusion 360 → URDF → ROS) and integrated teleop, SLAM, and navigation. Worked in an ASU lab applying deep learning for person tracking on a TurtleBot setup, and solved real deployment issues like Raspberry Pi video-stream latency via compression and on-board processing. Also reports experience with CI/CD tooling (Jenkins) and Kubernetes.”
Senior UAV/Robotics Engineer specializing in perception, sensor fusion, and localization
“Robotics software engineer working on GNSS-denied UAV localization using 5G PRS (OpenAirInterface + USRP B210) and multi-sensor fusion, with a published AIAA SciTech 2026 result achieving 1.5m RMSE on low-cost hardware. Also integrating a Vision-Language-Action model (SmolVLA) onto the Stretch 2 platform for language-assisted manipulation, leveraging ROS 2, imitation learning data collection with RGB-D, and simulation in Gazebo/MuJoCo for sim-to-real deployment.”
Entry-level AI Engineer specializing in LLM agents, RAG, and computer vision
“Robotics/AV-focused candidate who contributed to an F1TENTH autonomous vehicle college project, building key autonomy components from raw sensor data to driving commands. Strong in perception and state estimation (visual odometry, particle-filter localization), plus mapping (occupancy grids) and planning/control (RRT, Gap Follow, PID), with hands-on ROS tooling, simulation validation in Gazebo/RViz, and Docker-containerized ROS environments.”
Mid-level Aerospace & Robotics Engineer specializing in UAVs and autonomous systems
“Robotics/ROS engineer who led development of ROS 2 nodes for supervising and making safety/mission decisions for autonomous fixed-wing UAVs using PX4 and Gazebo, including handling sensor/battery failures, wind, and obstacle conditions. Has hands-on experience debugging ROS 2 multi-node communication (QoS, publish rates) and navigating sim-to-real deployment from SITL to real flight hardware.”