Pre-screened and vetted.
Executive AI/Robotics Tech Leader specializing in LLM products and automation
Senior Robotics Software Engineer specializing in manipulation, perception, and multi-sensor fusion
Junior Robotics Researcher specializing in multi-robot planning, control, and perception
Junior Robotics Software Engineer specializing in computer vision and ROS
Mid-level Manufacturing Equipment Engineer specializing in automation and production tooling
Junior Multimodal AI & Systems Engineer specializing in robotics and cloud infrastructure
Mid-level AI/ML Engineer and Developer Educator specializing in GenAI, RAG, and AI community building
Junior Robotics Software Engineer specializing in embedded radar and ROS 2 autonomy
“Robotics software engineer who has built full ROS 2 stacks for both a semi-automated robotic ultrasound system (UR5e + depth camera) and a quadrotor planning/MPC pipeline in Gazebo. Strong in integrating major ROS 2 frameworks (MoveIt/Nav2/RTAB-Map), writing custom packages (URDF, ACADOS-based MPC, laser landmark detection), and optimizing real-time behavior via GPU parallelization and distributed multi-threaded ROS 2 architectures; also contributes to ROS 2 core (structured parameters).”
Junior Robotics Engineer specializing in motion planning and control
“Robotics software engineer who built a ROS 2-based ping-pong ball interception system on a 7-DOF Sawyer arm, spanning real-time vision, trajectory prediction, and an MPC joint-velocity controller to hit a flying ball within ~1 second. Demonstrated strong real-time debugging and systems integration skills (timestamp-based latency analysis, event-based redesign, ROS 2 QoS tuning); currently working with Isaac Sim in Docker and GitHub-based CI/CD for assembly-task simulation.”
Mid-level Robotics Software Engineer specializing in autonomy, ROS 2, and SLAM
“Robotics software engineer leading an autonomy stack migration from ROS 1 to ROS 2, including a custom-built global parameter server to preserve existing infrastructure while shipping continuous production releases. Hands-on across navigation/safety/monitoring packages, control (ROS 2 PID for steering/speed), and localization performance work (particle filter optimization), with strong ownership of CI-driven test strategy and release quality.”
Intern Robotics & Reinforcement Learning Engineer specializing in ROS 2 manipulation and SLAM
Senior Full-Stack Java Developer specializing in microservices, cloud, and real-time systems
Junior Machine Learning Engineer specializing in generative modeling and computer vision
Junior Robotics Engineer specializing in humanoid RL, teleoperation, and sim-to-real deployment
Junior Robotics & Controls Engineer specializing in state estimation, simulation, and ROS 2
Junior Robotics Engineer specializing in computer vision and autonomous systems
Mid-level Mechatronics & Robotics Engineer specializing in controls, ROS 2, and autonomous systems
Mid-level Full-Stack Engineer specializing in Python, FastAPI, and cloud-native systems
Entry-level Robotics Engineer specializing in autonomous navigation and computer vision
“Robotics/IoT engineer who deployed a fog-enabled real-time monitoring system (edge Raspberry Pi + MQTT + cloud logging) and validated it in an IEEE-indexed publication. Strong in autonomous navigation with ROS/Gazebo, SLAM/localization, and cross-layer debugging using timing/transform-delay correlation. Extends Python computer vision pipelines (YOLO + OpenCV/Albumentations) to custom datasets and weather-specific conditions.”
Mid-level Robotics Software Engineer specializing in ROS 2 and medical microrobotics
“Robotics software engineer with ~7 years of ROS/ROS 2 experience spanning Mars rover simulation/navigation, robot arm integration (URDF/MoveIt/ros2_control), and medical magnetic actuation systems using RGB-D feedback. Built a 5-DOF CNC-like magnetic capsule navigation prototype end-to-end and has deep hands-on skill in debugging real-time control issues (CAN, encoder timing, controller tuning) plus PLC/Modbus-to-ROS 2 integration.”
Intern Computer Vision/Perception Engineer specializing in LiDAR and autonomous systems
“Robotics/AV-focused engineer who built an end-to-end gesture controller for a GEM e2 autonomous vehicle using YOLOv8 pose and ROS, including model training, ROS perception nodes, and a safety-oriented state machine (stop override + hold-to-register). Also has internship experience at Intramotev integrating LiDAR object detection via Redis pub/sub and performing sensor-frame calibration (roll/pitch correction using ground-plane normals), plus Dockerized deployments and Gazebo-based testing.”
Mid-level Robotics & Computer Vision Engineer specializing in ADAS and real-time perception
“Robotics/ADAS engineer who built an assistive feeding robot with reliable 3D mouth tracking (RealSense + MediaPipe) and ROS 2 integration to a WidowX250s arm, solving depth-noise, timing, and workspace/singularity issues for stable low-latency behavior. Also optimized a real-time lane-keeping controller at Hyundai using signal logging/replay, filtering (LPF/Kalman), and feedforward+PI tuning, with experience across SIL/HIL and CAN-based ECU integration.”