Pre-screened and vetted.
Mid-level Robotics Software Engineer specializing in SLAM and 3D computer vision
“Robotics software engineer focused on outdoor mobile robot localization and navigation, building ROS1/ROS2 systems with NavSat+EKF sensor fusion and custom Nav2/Costmap2D extensions for 3D obstacle clearance. Demonstrates strong real-world troubleshooting by tracing localization drift to a failing IMU connector, repairing it, and then creating sensor-health monitoring tooling; experienced taking features from Gazebo simulation through field testing to Docker/Kubernetes deployment with CI via GitHub Actions.”
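The NavSat+EKF fusion mentioned above reduces, at its core, to a Kalman predict/update cycle per state dimension. A minimal 1-D sketch of that cycle (function name, gains, and noise values are illustrative, not from the candidate's actual stack):

```python
def kf_step(x, P, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : prior state estimate and its variance
    u    : odometry-style motion increment (predict step)
    z    : GPS-like absolute position measurement (update step)
    q, r : process and measurement noise variances (illustrative values)
    """
    # Predict: apply motion, grow uncertainty by process noise.
    x_pred = x + u
    P_pred = P + q
    # Update: blend prediction and measurement by the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, K, P_new
```

A full robot_localization-style setup fuses many state dimensions with nonlinear models, but the same gain-weighted blend between dead reckoning and absolute fixes is what bounds the drift described above.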
Mid-level Haptics & VR/AR Researcher specializing in immersive interactive systems
“Built and owned an end-to-end VR surfing simulator in Unity/C#, integrating Arduino inputs, external tracking, and a motion platform with multisensory feedback. Deep focus on VR comfort and control stability (PD control, quaternion-based orientation, latency/jitter mitigation) validated via IMU measurements and 200+ user studies, plus experience with Unreal’s gameplay framework and replication/prediction.”
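The quaternion-based PD control named above typically means computing an orientation error quaternion, converting it to an axis-angle error, and feeding that through proportional-derivative gains. A minimal sketch under that assumption (gains and function names are illustrative):

```python
def quat_conj(q):
    """Conjugate of a (w, x, y, z) quaternion."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def pd_torque(q_target, q_current, omega, kp=8.0, kd=1.5):
    """PD attitude control: error quaternion -> small-angle axis error -> torque."""
    w, x, y, z = quat_mul(q_target, quat_conj(q_current))
    if w < 0.0:  # flip sign to rotate the short way around
        w, x, y, z = -w, -x, -y, -z
    err = (2.0 * x, 2.0 * y, 2.0 * z)  # small-angle axis-angle approximation
    # Proportional on orientation error, derivative on angular velocity.
    return tuple(kp * e - kd * o for e, o in zip(err, omega))
```

At the target orientation the controller outputs pure damping, which is exactly the jitter-suppression behavior a motion platform needs.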
Intern Controls Software Engineer specializing in robotics and autonomous vehicles
Junior Robotics & AI/ML Engineer specializing in autonomous systems and computer vision
Senior Software Engineer specializing in cloud platforms, Kubernetes, and real-time streaming
Mid-level Machine Learning & Data Engineer specializing in MLOps and cloud data platforms
Mid-level Control Systems Engineer specializing in automation, embedded systems, and hydrogen energy
Mid-level Robotics/Mechatronics Engineer specializing in autonomous systems and robot manipulation
Intern Robotics Engineer specializing in autonomous navigation and robot integration
Intern Aerospace/Robotics Engineer specializing in GNC, autonomy, and sensor fusion
“University robotics researcher graduating May 2026 who integrated an Intel RealSense D435i onto a TurtleBot3 (Jetson Nano) and built a ROS 2 node + OpenCV pipeline to feed color-based cues into navigation/path planning for RL grid-world experiments. Has hands-on ROS 2 experience spanning Gazebo simulation, Nav2, ros2_control, multi-robot namespacing, and ROS1-to-ROS2 bridging, plus CI/CD exposure (GitLab CI, Jenkins) from internships including aircraft navigation work.”
Mid-level Robotics Engineer specializing in SLAM, perception, and state estimation
“Robotics software lead with 4+ years of ROS/ROS2 experience spanning a startup (Inductive Robotics) and General Motors, building autonomous mobile manipulation and AMR material-handling stacks. Has hands-on depth in SLAM/navigation (Cartographer/Nav2), perception, and simulation, and has directly modified Cartographer to handle real-world sensor dropouts. Currently working on fleet-scale mapping capabilities (map merging/editing, trajectory pruning) for multi-robot deployments.”
Mid-level Software & Robotics Engineer specializing in autonomous systems and ROS 2
“Robotics software engineer focused on production-grade autonomy in GPS-denied environments, building full navigation stacks (perception, EKF/UKF sensor fusion, planning, control) in ROS2. Integrated YOLOv8/semantic segmentation/RL policies into real-time Nav2 pipelines via a custom perception-aware costmap layer, with emphasis on deterministic control loops, embedded GPU performance, and robust system observability/fault tolerance.”
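A perception-aware costmap layer like the one described above essentially stamps inflated cost regions around detections onto the planner's occupancy grid. A self-contained grid sketch of that idea (real Nav2 layers are C++ plugins against Costmap2D; the function, radius, and cost values here are illustrative):

```python
def apply_detection_costs(grid, detections, radius=2, cost=100):
    """Stamp a disk of inflated cost around each detected obstacle cell.

    grid       : 2-D list of ints (row-major occupancy/cost grid)
    detections : iterable of (row, col) cells flagged by perception
    radius     : inflation radius in cells; cost: value to stamp
    """
    rows, cols = len(grid), len(grid[0])
    for dr, dc in detections:
        for r in range(max(0, dr - radius), min(rows, dr + radius + 1)):
            for c in range(max(0, dc - radius), min(cols, dc + radius + 1)):
                # Keep the max so existing higher costs are never lowered.
                if (r - dr) ** 2 + (c - dc) ** 2 <= radius ** 2:
                    grid[r][c] = max(grid[r][c], cost)
    return grid
```

Taking the max against existing costs is the standard convention, so the learned-detection layer composes cleanly with static and inflation layers underneath it.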
Mid-level Software Engineer specializing in cloud-native systems, automation, and LLM-enabled robotics
“React-focused engineer who built a full-stack analytics/test-metrics dashboard (React frontend + Python backend) and turned common UI pieces (data tables, filter panels, chart wrappers) into a reusable internal component library with docs, examples, and basic tests. Strong on profiling-driven performance optimization (React Profiler, memoization) and on owning ambiguous internal-tool projects end-to-end; now planning to package internal patterns into public open-source components.”
Junior Robotics Perception Engineer specializing in autonomous navigation and robot learning
“Robotics software/perception engineer with production AMR experience at Symbotic, building a real-time SKU case re-identification pipeline used in high-volume Walmart/Target warehouse operations. Strong in ROS2 + Docker deployments on Jetson (TensorRT quantization) and system-level performance debugging, including cutting inference latency from ~13s to ~2s through architecture changes. Also has lab experience integrating SLAM/MPPI/behavior trees for rule-compliant navigation and distributed perception-to-UR5e manipulation systems (MoveIt/ros_control) with multi-camera sensing and 3D reconstruction.”
Junior Data Scientist / Software Engineer specializing in LLM analytics and robotics
“Robotics/ML engineer who implemented TD3 and PPO in PyTorch to solve the challenging Gymnasium Humanoid-v5 MuJoCo task, including custom networks, rollout logic, and training scripts. Also has hands-on robotics coursework experience with ROS-based RRT motion planning on a real robotic arm, plus practical CI/CD and containerization experience (Docker, Jenkins, GitHub Actions). Currently exploring world models (VAE + sequence generator) using Euro Truck Simulator data.”
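The custom rollout logic mentioned above centers on turning a trajectory of per-step rewards into learning targets. A minimal sketch of discounted-return computation with episode-boundary masking, a core piece of any PPO/TD3 training loop (function name and gamma are illustrative):

```python
def discounted_returns(rewards, dones, gamma=0.99):
    """Backward pass over a rollout: G_t = r_t + gamma * G_{t+1} * (1 - done_t).

    rewards : per-step rewards collected during the rollout
    dones   : 1 where an episode terminated at that step, else 0
    """
    G, out = 0.0, []
    for r, d in zip(reversed(rewards), reversed(dones)):
        # The (1 - done) mask zeroes bootstrapping across episode boundaries.
        G = r + gamma * G * (1.0 - d)
        out.append(G)
    return out[::-1]
```

PPO implementations usually extend this to GAE with a value baseline, but the boundary-masked backward recursion is the same.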
Junior Embedded Systems & Wireless Software Engineer specializing in BLE/Wi-Fi performance
“Master’s capstone contributor on an autonomous rover navigation project, serving as an embedded/robotics software designer. Built low-level wheel control and odometry from encoders, integrated RealSense and RPLidar via ROS, and solved sensor-fusion/coordinate-frame issues by creating custom TF transforms. Used Gazebo to debug sim-to-real behavior and improved reliability on rough terrain by moving to dual-channel encoders when IMU data proved unreliable.”
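The coordinate-frame fixes described above come down to composing rigid-body transforms between sensor and robot frames. A planar sketch of what a custom TF transform encodes, i.e. rotate a sensor-frame point by the mount yaw and then translate by the mount offset (function name and parameters are illustrative; ROS handles this via tf2 in 3-D):

```python
import math

def transform_point(px, py, tx, ty, yaw):
    """Map a point from the sensor frame into the robot base frame.

    (px, py) : point in the sensor frame
    (tx, ty) : sensor mount offset in the base frame
    yaw      : sensor mount rotation about the vertical axis, in radians
    """
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotation first, then translation: standard rigid-body composition.
    return (tx + c * px - s * py, ty + s * px + c * py)
```

Getting this rotate-then-translate order (and the frame conventions) wrong is precisely the class of sensor-fusion bug the blurb describes solving.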
Mid-level Robotics Software Engineer specializing in real-time distributed autonomous systems
“Robotics software engineer at Tesla who led end-to-end development of a distributed real-time control and orchestration platform for autonomous systems. Deep production ROS 2 experience (nav2, slam_toolbox), with demonstrated wins reducing end-to-end latency 25–30%+ via profiling, multithreaded executors, and QoS tuning, plus simulation and deployment at scale using Gazebo/Webots, Docker/Kubernetes, and CI/CD.”
Intern Full-Stack/AI Software Engineer specializing in GenAI and cloud microservices
“Backend engineer who owned the AI/data pipeline layer for an EV-charging management platform (Ampure Intelligence), ingesting real-time charger telemetry via OCPP and serving FastAPI APIs to web/mobile clients. Strong in production reliability for asynchronous systems (state reconciliation, idempotency), Kubernetes GitOps (ArgoCD), Kafka streaming, and zero-downtime cloud-to-on-prem migrations; also improved LSTM-based forecasting through targeted preprocessing.”
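The idempotency discipline highlighted above is usually implemented by keying each incoming event and skipping keys already processed, so telemetry retries and redeliveries cannot double-apply. A minimal sketch of that pattern (function and field names are illustrative; production systems persist the seen-set in a database or cache):

```python
def process_once(events, handler, seen=None):
    """Apply handler to each event at most once, keyed by its idempotency key.

    events  : iterable of dicts, each carrying a unique "id" key
    handler : callable applied to each not-yet-seen event
    seen    : optional persistent set of already-processed keys
    """
    seen = set() if seen is None else seen
    results = []
    for ev in events:
        key = ev["id"]
        if key in seen:
            continue  # duplicate delivery: safe to drop
        seen.add(key)
        results.append(handler(ev))
    return results
```

Passing the same `seen` set across calls models state reconciliation after a reconnect: replayed charger telemetry is recognized and ignored.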
Intern Robotics/Software Engineer specializing in autonomy, computer vision, and controls
“Robotics software engineer with a master’s degree in the field who integrated a multi-sensor fusion laser system for robotics (fault detection, PLC comms, PyTorch-based CV diagnostics, and an engineer-facing status front end) under NDA. Has ROS experience from the University of Michigan Autonomous Robotic Vehicle team using Nav2/SLAM Toolbox/Gmapping with RViz and ROS bag-driven debugging, plus Gazebo simulation work and upcoming drone path-planning optimization research.”
Junior Robotics & Computer Vision Engineer specializing in SLAM and 3D perception
“Robotics software engineer with Samsung Research America internship experience, serving as primary developer of a real-time dense mapping system producing point clouds and of a monocular depth-estimation framework using positional data. Hands-on ROS 2 and CAN integration from a University of Michigan autonomous shuttle project, and practical SLAM/motion-planning experience including handling the kidnapped robot problem and Dockerizing ORB-SLAM3 environments.”