Pre-screened and vetted.
Junior Computer Vision Researcher specializing in deep learning and object detection
“Robotics engineer who built and scaled a distributed perception stack on a Unitree Go1 quadruped, coordinating 5 Jetson Nanos and a Raspberry Pi to capture, aggregate, and stream multi-camera video in real time via UDP/GStreamer and custom ROS nodes. Also implemented a YOLOv9-based detection pipeline enhanced with Grad-CAM-driven selective image enhancement (e.g., MIRNet/UFormer) to improve real-time detections and robot reactions to visual stimuli.”
Intern Test Engineer specializing in embedded systems, robotics, and data automation
“Robotics software contributor on an SJSU Robotics Mars rover hub, where they built a C++ camera gimbal driver using the Libhal open-source library and implemented/tuned PI/PID control to achieve stable servo behavior.”
Mid-level Audiovisual & Network Systems Engineer specializing in live sound and streaming
“Audio designer/editor with a producer background who edits music and SFX as a compositional tool—shaping narrative tension in projects like a stop-motion horror short film and optimizing pacing for YouTube retention. Runs an end-to-end, template-driven post workflow (dialogue cleanup, bussing, automation, loudness targets, and stem deliverables) designed for fast, high-volume iteration.”
Junior Robotics Engineer specializing in autonomous navigation and computer vision for agriculture
“Robotics software engineer who led an autonomous nursery management robot project at Auburn University, spanning RGB-D/IMU sensor fusion, SLAM navigation, and real-time ML for plant detection/quality assessment. Strong ROS1/ROS2 background (C++/Python) with deployment on NVIDIA Jetson, including profiling-driven optimization of YOLO segmentation for real-time behavior and multi-robot (UGV/UAV) communication using ROS2.”
Junior Machine Learning Engineer specializing in NLP, data pipelines, and LLM workflows
“Built and shipped a production LLM-powered decision system that replaced a slow, inconsistent manual review process by turning messy text into structured, auditable outputs behind an API. Demonstrates strong end-to-end ownership of reliability and operations (schema validation, retries/fallbacks, latency/cost controls, monitoring for drift) and a disciplined approach to evaluation and regression testing. Experienced collaborating with non-technical reviewers to define success criteria and deliver interpretable outputs that get adopted.”
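The reliability pattern this blurb describes (schema validation plus retries and a fallback model) can be pictured with a small, stdlib-only sketch. The `REQUIRED_KEYS` schema and the `call_primary`/`call_fallback` callables below are hypothetical stand-ins for real LLM API calls, not the candidate's actual system.

```python
import json

# Hypothetical output schema for a structured decision.
REQUIRED_KEYS = {"decision", "confidence", "rationale"}

def validate(raw: str) -> dict:
    """Parse model output as JSON and enforce the expected schema."""
    data = json.loads(raw)  # raises JSONDecodeError on malformed output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

def decide(call_primary, call_fallback, retries: int = 2) -> dict:
    """Try the primary model with retries, then fall back to a backup model.

    Each call is validated; any malformed or schema-violating output
    triggers the next attempt instead of reaching downstream consumers.
    """
    attempts = [call_primary] * (retries + 1) + [call_fallback]
    for model in attempts:
        try:
            return validate(model())
        except ValueError:  # JSONDecodeError subclasses ValueError
            continue
    raise RuntimeError("all models failed schema validation")
```

In practice the fallback might be a cheaper or more deterministic model, and each failed attempt would be logged for the drift monitoring the blurb mentions.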
Senior Computer Vision Engineer specializing in AI/ML for scientific imaging
“Computer-vision engineer with hands-on experience designing UAV-based production imaging systems for object detection/tracking, including camera selection and resolution/zoom tradeoffs. Improved segmentation/measurement accuracy by implementing orthorectification using ground points plus intrinsic/extrinsic calibration to correct perspective distortion, and has built Python/OpenCV pipelines (including barcode-focused grayscale processing and multithreaded execution).”
Mid-level AR/VR & Unity Developer specializing in mobile XR and real-time 3D
“Game/VR simulation developer who built and shipped multiple VR training levels (e.g., nursing and scientific method) at VXR Labs, owning level implementation and backend logic. Experienced in Unreal Engine 5 Blueprints prototyping (including a horde mode) and in designing tightly gated, step-based educational experiences while collaborating closely with educators/subject-matter experts to balance realism with VR feasibility.”
Senior AI/ML Engineer specializing in LLMs, RAG, and VR/XR multimodal systems
“PhD researcher (University of Utah) who built a production RAG-powered Virtual Reality Research Assistant to answer lab research questions with concrete citations. Implemented an end-to-end LangChain pipeline using PyPDFLoader, chunking strategies, OpenAI embeddings, and ChromaDB, with emphasis on grounding to reduce hallucinations and ensure research-grade accuracy. Collaborated closely with a non-technical PhD advisor to scope requirements, manage cost constraints, and demo iterative progress.”
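The chunking step is the hinge of a pipeline like the one above. Here is a minimal, stdlib-only sketch of overlapping fixed-size chunking; the 500/100 sizes are illustrative defaults, not the candidate's settings, and the loader, embedding, and ChromaDB stages are elided.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks with overlap, so content cut at a
    chunk boundary still appears whole in the neighboring chunk — this
    preserves retrieval recall for passages that straddle boundaries."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # max(..., 1) guarantees at least one chunk for short inputs.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Production splitters (e.g., LangChain's recursive character splitter) refine this by preferring paragraph and sentence boundaries over raw character offsets.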
Mid-level Robotics Engineer specializing in ROS 2, control systems, and manipulation
“Robotics software engineer with hands-on ROS2 experience across manipulation, SLAM/localization, and sensor fusion. Recently built an end-to-end hybrid force-position control system for a Ufactory xArm7 with a 6-axis force/torque sensor to enable compliant, force-guided shaft insertion, including real-time Jacobian computation, TF pipeline, and MoveIt2 trajectory execution validated on hardware.”
Mid-level Robotics Software Engineer specializing in ROS, C++ and embedded Linux
“Robotics software lead at Icor who grew from intern to owning the end-to-end software lifecycle for a mobile manipulator platform deployed to 300+ customers globally. Deep hands-on ROS2/MoveIt2 and navigation-stack integration (URDF/TF, sensors, behavior engine) plus production infrastructure (CI/CD, OTA, field OS upgrades) and real-world performance tuning for motion planning in EOD multi-robot environments.”
Mid-level Systems Integration & Test Engineer specializing in embedded robotics and automation
“Senior engineering student leading a robotics capstone using a Jetson Nano + Yahboom DOFBOT to play whiteboard games (Tic-Tac-Toe, Hangman) via computer vision and ML. Owns the inverse kinematics and OpenCV pipeline, uses Gazebo/URDF for simulation, and is planning C++/multithreading/Pybind11 optimizations to meet real-time constraints on limited embedded hardware.”
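The inverse-kinematics work mentioned above is easiest to picture on a 2-link planar arm. This stdlib-only sketch uses the generic closed-form two-joint solution (one elbow branch), not the DOFBOT's actual kinematics, and verifies itself against forward kinematics.

```python
import math

def ik_2link(x: float, y: float, l1: float, l2: float) -> tuple[float, float]:
    """Closed-form inverse kinematics for a planar 2-link arm.
    Returns (shoulder, elbow) joint angles in radians for target (x, y)."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle; |c2| > 1 means unreachable.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def fk_2link(t1: float, t2: float, l1: float, l2: float) -> tuple[float, float]:
    """Forward kinematics, used to verify an IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

Round-tripping a target through `ik_2link` and `fk_2link` is the standard sanity check before moving to a full multi-DOF solver.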
Junior Robotics Engineer specializing in controls, simulation, and production debugging
“Robotics software engineer who helped build a startup ‘robo-chef’ system end-to-end, including pick-and-place simulation using ArUco-marked stations and smooth motion planning. Hands-on ROS 2 integrator across LiDAR/IMU/camera perception-to-navigation stacks (Nav2, SLAM Toolbox, ros2_control), with demonstrated ability to debug real-time timing drift and improve repeatable placement through calibration and motion blending. Uses Gazebo simulation plus Docker/CI pipelines to validate and deploy robotics software reliably.”
Senior AI/ML Engineer specializing in Generative AI and healthcare analytics
“ML/AI engineer with strong healthcare insurance domain depth who has owned fraud detection and LLM claims products end-to-end in production. Stands out for combining modern MLOps and RAG architecture with measurable business impact, including millions in fraud savings, 40% faster analysis, and reusable platform tooling that accelerated multiple teams.”
Junior Machine Learning Engineer specializing in computer vision and robotics
“Research assistant who single-handedly built and integrated an indoor autonomous wheelchair system using NVIDIA Jetson Nano, LiDAR, and a stereo camera. Implemented a multi-sensor perception pipeline (OpenCV/PCL) with ROS-based modular nodes, TF frame management, and robust debugging via RViz/rosbag, plus simulation testing in Gazebo and Dockerized environments for portability.”
Entry-level Robotics Research Assistant specializing in multi-agent autonomy and reinforcement learning
“ROS2/Python robotics engineer who led a 4-person team building a simulated multi-robot warehouse system (SLAM + Nav2 + centralized task allocation) in Gazebo Ignition, including a distance/priority-based controller that reduced task completion time by ~30%. Also has hands-on real-time debugging/tuning experience for both mobile robots and a MyCobot 600 Pro manipulator, plus simulation work in CARLA using RL (TD3) and Social-LSTM for pedestrian behavior modeling.”
Mid-level Robotics Software Engineer specializing in ROS, motion planning, and perception
“Robotics software engineer who built a ROS/C++ workcell stack to automate coating wooden panels with a 6-DOF arm, including trajectory generation, MoveIt/OMPL planning, and a single launch/config setup that runs in both Gazebo and on real hardware. Strong in debugging real-world planning failures (e.g., intermittent aborted/no-plan regions) through logging, planner swaps, and collision/kinematics tuning, and in designing modular ROS/ROS2 systems with versioned interfaces and translation layers for heterogeneous robots.”
Entry-Level Software Engineer specializing in AWS data pipelines and AI automation
“AI research engineer who has built and tested LLM agents end-to-end, including a Telegram real-time voice-to-typing assistant integrated with calendar scheduling. Emphasizes production concerns (security via mic-triggered activation, multi-model fallbacks, monitoring) and agent predictability using a GPT-3.5-based critic plus structured outputs (Pydantic) and ReAct-style orchestration.”
Junior Robotics Engineer specializing in ROS, perception, and robotic manipulation
“Robotics software engineer focused on ROS2 autonomy stacks, with hands-on work spanning semantic 3D SLAM, sensor fusion, and controller customization. Built an indoor GPS-denied semantic SLAM system (>95% accuracy) and extended Nav2’s MPPI controller with a custom C++ critic to keep an agricultural rover centered in crop rows, boosting CO2 laser weeding effectiveness by 40%. Strong in simulation-to-real workflows (Isaac Sim, Gazebo Ignition) and deployment automation (Docker on Jetson Orin NX, GitHub Actions CI/CD).”
Junior Robotics Engineer specializing in ROS2 perception and multi-sensor calibration
“Entry-level robotics software engineer/team lead with hands-on experience spanning multi-robot UAV simulation (Gazebo + PX4 SITL) and autonomous vehicle stack integration (ROS2 Humble + Autoware Universe). Has tackled real-time perception optimization (OpenCV + custom deep learning) and built robust cross-protocol communication interfaces to connect ROS2 systems with embedded ESP32 devices.”
Mid-level Embedded Software Engineer specializing in real-time control and automated testing
“Master’s thesis researcher building an intelligent fault diagnosis and predictive maintenance stack for autonomous quadcopters—covering simulation-based fault injection, signal processing (Id/Iq), ML fault classification, and real-time edge deployment on Raspberry Pi with Hailo-8 acceleration. Previously delivered production C++ middleware/microservices at Accolite and has hands-on experience with constrained networking via a LoRaWAN IoT communication stack.”
Junior Robotics/Mechatronics Engineer specializing in SLAM, motion planning, and autonomy
“Robotics software engineer focused on autonomy stacks for high-payload AMRs using ROS2/Nav2, with hands-on expertise in SLAM/localization and sensor fusion (RTK GPS, IMU, wheel odometry, ZED2) to eliminate drift and stabilize real-time behavior on deployed hardware. Also built multi-robot coordination in ROS2/Gazebo and uses Docker + Git/CI-style testing to create reproducible simulation-to-hardware pipelines.”
Mid-level Machine Learning & AI Engineer specializing in Generative AI, NLP, and MLOps
“Built and deployed production LLM systems for summarizing sensitive legal and financial documents, emphasizing GDPR-aligned privacy controls and scalable hybrid cloud architecture. Experienced with Kubernetes/Airflow orchestration and rigorous testing/monitoring practices, and has delivered measurable business impact (18% conversion lift) by translating AI outputs for non-technical marketing stakeholders.”
Mid-level AI/ML Engineer specializing in data engineering, LLM/RAG pipelines, and recommender systems
“Research assistant at St. Louis University who built and deployed a production document-intelligence RAG system (Python/TensorFlow, vector DB, FastAPI) on AWS, focusing on grounding to reduce hallucinations and latency optimization via caching/async/batching. Also developed a personalized recommendation system for the Frenzy social platform and partnered closely with product/UX to define metrics and iterate on hybrid recommenders and cold-start handling.”
Junior Software Engineer specializing in AI/ML and cybersecurity
“Salesforce-focused engineer with hands-on depth across Sales Cloud, Service Cloud, Apex, LWC, and Aura. Stands out for owning end-to-end automation features, making thoughtful async architecture decisions to balance performance and reliability, and designing responsive Lightning interfaces that hold up under large data volumes.”