Pre-screened and vetted.
Senior Python AI/ML Engineer specializing in MLOps, data engineering, and LLM applications
Junior Machine Learning Engineer specializing in NLP and computer vision
Junior Robotics Software Engineer specializing in GNSS localization, perception, and controls
Junior Robotics & AI/ML Engineer specializing in autonomous systems and computer vision
Mid-level AI & Machine Learning Engineer specializing in computer vision and MLOps
Mid-level AI/ML Engineer specializing in production ML, NLP, and computer vision
Mid-level Machine Learning Research Engineer specializing in foundation models and GenAI
Staff Machine Learning Engineer specializing in Generative AI, MLOps, and Computer Vision
Intern Robotics Engineer specializing in autonomous navigation and robot integration
Junior Software Engineer specializing in data engineering and machine learning
Intern Aerospace/Robotics Engineer specializing in GNC, autonomy, and sensor fusion
“University robotics researcher graduating May 2026 who integrated an Intel RealSense D435i onto a TurtleBot3 (Jetson Nano) and built a ROS 2 node + OpenCV pipeline to feed color-based cues into navigation/path planning for RL grid-world experiments. Has hands-on ROS 2 experience spanning Gazebo simulation, Nav2, ros2_control, multi-robot namespacing, and ROS1-to-ROS2 bridging, plus CI/CD exposure (GitLab CI, Jenkins) from internships including aircraft navigation work.”
Mid-level Software & Robotics Engineer specializing in autonomous systems and ROS 2
“Robotics software engineer focused on production-grade autonomy in GPS-denied environments, building full navigation stacks (perception, EKF/UKF sensor fusion, planning, control) in ROS 2. Integrated YOLOv8/semantic segmentation/RL policies into real-time Nav2 pipelines via a custom perception-aware costmap layer, with emphasis on deterministic control loops, embedded GPU performance, and robust system observability/fault tolerance.”
Mid-level Software Engineer specializing in AWS, full-stack development, and AI data systems
“Backend engineer who built a Python-based data profiling/statistics platform processing up to 50M rows and ~300 metrics, using a DAG execution model, multithreading, and smart caching to cut processing time by up to 70%. Also improved PostgreSQL query performance from 12s to 2s via indexing/query rewrites, integrated an LLM (LangChain + OpenAI) for explainable “chat with the pipeline” functionality, and designed an AWS EC2+SQS architecture for scalable, isolated per-user processing.”
Senior Autonomous Driving Software/ML Engineer specializing in localization, mapping, and V2X
“Autonomous driving perception/mapping engineer who designed a vectorized local mapping module end-to-end across labeling, training, and evaluation for multi-modal inputs (vision/LiDAR/nav maps). Implemented automated evaluation/regression testing and improved occlusion/long-range precision by ~5% using temporal features and transformer-based reference points, while optimizing models via pruning/EfficientNet to fit system resource constraints.”
Mid-level Robotics Engineer specializing in SLAM, perception, and state estimation
“Robotics software lead with 4+ years of ROS/ROS 2 experience spanning a startup (Inductive Robotics) and General Motors, building autonomous mobile manipulation and AMR material-handling stacks. Has hands-on depth in SLAM/navigation (Cartographer/Nav2), perception, and simulation, and has directly modified Cartographer to handle real-world sensor dropouts. Currently working on fleet-scale mapping capabilities (map merging/editing, trajectory pruning) for multi-robot deployments.”
Mid-level Full-Stack Software Engineer specializing in cloud and data platforms
“Full-stack engineer with experience spanning Amazon IMDb and Northeastern’s NeuroJSON portal, combining consumer product work with complex scientific data applications. Built IMDb’s streaming providers feature—described as the company’s most impactful feature of 2023—and has hands-on experience with React/Angular, GraphQL, AWS, Python services, and production monitoring.”
Mid-level Data Scientist specializing in business intelligence and machine learning
“Data science intern who built a production LLM-powered podcast operations agent that automated lead intake (HubSpot), guest research, scheduling (Calendly), meeting-summary evaluation (Gemini), and human approval via a Slack bot, while retaining rejected candidates for future outreach. Also contributed to ideation of a multi-agent orchestration framework with parsing and task routing, and emphasized reliability via structured prompts, HITL feedback, and prompt-based test sets.”
Intern Robotics & Computer Vision Engineer specializing in surgical robotics
“Robotics software engineer who built and owned an autonomous laparoscope tracking system on a UR3e with an eye-in-hand RealSense camera, integrating YOLO-based tool detection with velocity control under a strict RCM constraint and deploying successfully in a hospital setting. Deep ROS 2/MoveIt 2 experience (architecture, QoS, custom nodes) plus autonomy stack work across SLAM, planning, and real-time latency/control debugging.”
Senior Machine Learning Engineer specializing in LLMs, RAG, and computer vision
“Machine learning engineer who built an “AskMyVideo” system that turns YouTube videos into queryable knowledge graphs by transcribing audio (Whisper), chunking and embedding content, and enabling traceable answers back to exact timestamps. Strong in entity resolution (rules + fuzzy matching + TF-IDF/cosine with PR-curve thresholding) and modern retrieval stacks (FAISS, hybrid dense/sparse, domain fine-tuning with ~12% precision gain), with a production mindset using Airflow/Prefect, Docker/FastAPI, and LangSmith/Prometheus/Grafana observability.”
Junior Embedded Systems & Wireless Software Engineer specializing in BLE/Wi-Fi performance
“Master’s capstone contributor on an autonomous rover navigation project, serving as an embedded/robotics software designer. Built low-level wheel control and odometry from encoders, integrated RealSense and RPLidar via ROS, and solved sensor-fusion/coordinate-frame issues by creating custom TF transforms. Used Gazebo to debug sim-to-real behavior and improved reliability on rough terrain by moving to dual-channel encoders when IMU data proved unreliable.”