Pre-screened and vetted.
Mid-level AI/ML Engineer specializing in GenAI, computer vision, and real-time ML pipelines
Senior Computer Vision & Sensor Algorithms Engineer specializing in imaging systems
“Robotics/remote-sensing software engineer who built and validated multisensor image-processing and spectral chemical-detection pipelines (RX anomaly detection, ACE), including calibration protocols with a motorized shutter and rigorous data QC. Uses white-box NumPy simulators to debug SLAM/registration issues before translating logic to C++, and has partnered with hardware teams to solve temperature-driven signal variation through combined software calibration and improved thermal management.”
Intern Robotics Engineer specializing in autonomous navigation and SLAM
“Robotics software engineer with deep ROS2 Humble/Nav2 experience who built an SDF-based navigation system (RRT* global planning + gradient-based local avoidance) and implemented scan-matching localization. Proven real-time performance debugging and optimization on hardware (Unitree B1), including halving compute-cycle latency and resolving ROS2 jitter/message-drop issues through explicit QoS and executor/callback-group design.”
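The explicit-QoS work described above can be illustrated declaratively. A minimal sketch using ROS 2's `qos_overrides` parameter mechanism (node and topic names here are hypothetical, not from the candidate's project):

```yaml
# Hypothetical parameter file: request a best-effort, shallow-queue
# subscription on /scan so stale messages are dropped instead of
# backing up under network jitter.
/scan_relay:
  ros__parameters:
    qos_overrides:
      /scan:
        subscription:
          reliability: best_effort
          history: keep_last
          depth: 5
```

Note that these overrides only take effect for subscriptions whose node code opts in to QoS overriding; otherwise the QoS profile must be set explicitly when the subscription is created.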
Mid-level AI/ML Engineer specializing in MLOps and production ML systems
“Backend/ML engineer who has shipped high-scale real-time systems across e-commerce and healthcare: built a PharmEasy real-time recommendation engine for ~2M monthly users (cut feature latency 5 min→30 sec; +15% cross-sell) and architected a HIPAA-compliant multimodal clinical diagnostic workflow (DICOM+EHR) with XAI, MLOps (MLflow/Airflow/K8s), and drift/monitoring guardrails supporting 10k+ daily predictions.”
Entry Robotics Engineer specializing in ROS 2 autonomy and simulation (Isaac Sim)
“Robotics software engineer (PhD background) who owned an end-to-end autonomy stack for a 2025 GTC demo, integrating ROS2/MoveIt2 with a high-fidelity NVIDIA Isaac Sim environment for regression testing and sim-to-real validation. Has hands-on experience optimizing MoveIt2 planning (parallel pipelines + evaluation metrics) and building outdoor Nav2 localization using dual EKF with GNSS and LiDAR/IMU sensor fusion; currently building simulation environments at Richtech Robotics.”
Mid-level Machine Learning Engineer specializing in computer vision and generative AI
“Built and deployed an LLM/RAG system that uses differential privacy and distributional similarity checks to transform private data into a non-sensitive knowledge base while preserving utility. Also has experience demonstrating adversarial ML concepts (FGSM) to non-technical audiences by focusing on observable model behavior rather than implementation details.”
Senior AI/ML Engineer specializing in Generative AI and RAG
“ML/NLP practitioner at Morf Health focused on unifying fragmented healthcare data by linking structured patient/encounter records with unstructured clinical notes. Has hands-on experience with transformer embeddings, vector databases, and domain fine-tuning, plus rigorous evaluation (precision/recall) and human-in-the-loop validation with clinical SMEs to make pipelines production-grade.”
Mid-level AI/Robotics Engineer specializing in autonomous systems and perception
“Robotics software engineer in an Autonomous Vehicle Lab building an end-to-end ROS 2 autonomous golf cart stack (sensor integration, SLAM, planning, and camera+LiDAR perception). Demonstrated strong systems-level debugging by fixing a FastLIO2 LiDAR timestamp/IMU-window issue that restored mapping quality, and stabilized real-time GigE camera perception by diagnosing backpressure and tuning ROS 2 QoS plus compressed transport.”
Junior Machine Learning Engineer specializing in computer vision and LLM applications
“Built and led the Formula Student autonomous driving software effort, owning the full autonomy stack (perception, planning, control) orchestrated in ROS. Implemented stereo depth + YOLO object detection, RRT/RRT* planning, and a robust SLAM pipeline (Kalman filtering, submapping) while leveraging Gazebo simulation and modern deployment tooling (Docker/Kubernetes, AWS, GitHub Actions CI/CD).”
Mid-level Robotics Software Engineer specializing in autonomous perception and sensor fusion
“Robotics engineer with Honeywell and Tata Motors experience deploying ROS/ROS2 autonomous mobile robot fleets into live factory environments, integrating sensors, safety PLCs, and on-prem services. Known for solving end-to-end latency and stability issues (including network spikes under load) using gRPC, Docker, and improved diagnostics—cutting diagnosis time from hours to minutes and achieving sub-150 ms control response.”
Senior Research Scientist specializing in AI for autonomous driving and semiconductors
“Robotics perception engineer focused on autonomous driving 3D detection, integrating PETR embeddings into BEVFormer and tackling hard orientation/temporal alignment issues in multi-camera BEV pipelines. Uses Gazebo with custom sensor plugins to validate calibration, timing, and transforms, and blends synthetic labels with real imagery for scalable 3D box generation.”
Mid-level Software Development Engineer specializing in C++ and EDA toolchains
“Worked at AMD on Vivado tool releases, adding and integrating new IP/SoC functionality into the Vivado flow and validating its behavior through testing. Comfortable engaging with customers and open to travel for hands-on, customer-facing work.”
Junior Cloud & AI/ML Engineer specializing in AWS GovCloud and MLOps
“Robotics software engineer with hands-on ROS 2 autonomy experience on an obstacle-avoiding quadrotor (ROS 2 + Gazebo + PX4 + Nav2/SLAM), including custom work to extend Nav2 into a 3D aerial domain and output PX4 trajectory setpoints. Also built cost-saving ML infrastructure (PostgreSQL + AWS data-cleaning pipeline) and improved object detection accuracy by 40% using CUDA/PyTorch, with strong containerization and CI/CD practices (Docker + Kubernetes, aggressive version pinning) to prevent environment drift.”
Mid-level AI/ML Engineer specializing in GenAI, LLMs, and computer vision
“Built and productionized a multi-agent, LLM-powered document understanding system to replace manual review of long documents, using LangGraph orchestration plus RAG to reduce hallucinations. Implemented layered reliability controls (structured templates, checker agent, and human-in-the-loop feedback) and reported ~40% speed improvement after orchestration; also has hands-on Airflow experience for scheduled data pipelines.”
Mid-level AI/ML Engineer specializing in NLP, computer vision, and Generative AI
“Built and deployed a production LLM-powered clinical insights/summarization assistant for healthcare teams, including a Spark+Airflow pipeline, fine-tuned transformer models, and a FastAPI Docker service on AWS. Demonstrates strong MLOps/LLMOps depth (Airflow on Kubernetes, custom AWS operators/IAM, MLflow, CloudWatch) and practical reliability work like hallucination mitigation, confidence scoring, and retrieval-backed evaluation with shadow deployments.”
Executive Technology Leader (CTO) specializing in IoT sensing, AI/ML, and RF/embedded systems
“Currently a startup CTO who thrives on building new technology stacks and rapidly turning technical ideas into products. Interested in partnering with a CEO/business team to commercialize embedded/edge concepts such as multi-sensor drone localization (video/audio/RF with SDR), low-cost solar+battery power nodes networked via LoRa, and an Amazon Sidewalk/LoRa connectivity device with cloud management.”
Junior Full-Stack & AI/ML Engineer specializing in LLMs and multimodal document processing
“Built a production RAG-based NBA player scouting assistant that embeds player profiles into FAISS, orchestrates retrieval and LLM recommendations with LangChain, and surfaces results via embedded Tableau dashboards. Demonstrates strong focus on evaluation/monitoring (batch tests, LLM-as-judge, latency/failure/token metrics) and has experience translating non-technical founder goals into DAPT + fine-tuning plans on curated data.”
Junior Robotics Researcher specializing in vision-based manipulation and learning-based control
“Robotics software candidate with experience spanning simulation (MuJoCo, Gazebo, Webots) and ROS1/ROS2 development, including hardware-oriented work on a hexapod and a Mecademic Meca500 R3 arm. Built a visually guided interactive indoor robot system using a CV pipeline plus POMDP + imitation learning with PPO-based residual RL, and has practical debugging experience improving LiDAR SLAM stability and migrating sensor interfaces from ROS1 to ROS2.”
Entry-level Computer Vision/Autonomy Engineer specializing in perception and object detection
“Robotics software engineer with hands-on ROS2 + Autoware perception experience, focused on building benchmarking infrastructure for object detection models inside a real-time autonomous driving stack. Strong in evaluation rigor (synchronization, deterministic playback, format standardization) and practical ROS2 debugging/validation workflows using RViz and Gazebo.”
Mid-level Generative AI & Machine Learning Engineer specializing in agentic LLM systems
“Built and deployed a production agentic LLM knowledge assistant that answers complex questions over internal documents, APIs, and databases using a RAG architecture (FAISS/Pinecone) and LangChain/LangGraph orchestration. Emphasizes production-grade reliability and hallucination control through grounding, confidence thresholds, validation, retries/fallbacks, and full observability (logging/metrics/traces) with continuous evaluation and feedback loops.”
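The confidence-threshold and fallback pattern this profile describes can be sketched in a few lines. A minimal illustration (names, threshold value, and fallback message are hypothetical, not the candidate's actual code):

```python
from dataclasses import dataclass

@dataclass
class Retrieval:
    """One retrieved chunk with its similarity score from the vector store."""
    text: str
    score: float  # assumed normalized to [0, 1]

def answer_with_fallback(retrievals, threshold=0.75):
    """Ground the answer only in chunks above the confidence threshold;
    return an explicit fallback instead of guessing when nothing qualifies."""
    grounded = [r for r in retrievals if r.score >= threshold]
    if not grounded:
        return "I don't have enough information to answer that."
    # In a real system the grounded chunks would be passed to the LLM as
    # context; here we simply join them to stand in for the answer.
    return " ".join(r.text for r in grounded)
```

The key design point is that low-confidence retrievals are excluded before generation, so the model is never handed weakly related context it might confabulate from.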
Intern AI/ML Researcher specializing in computer vision and data engineering
“Built a production-oriented multimodal RAG ‘Fix Assistant’ with FastAPI, Tavily search, BM25 + cross-encoder reranking, and a local Phi-3.5 model, emphasizing strict grounding and fallback/verification modes to prevent hallucinations. Also has hands-on federated learning experience using STADLE to orchestrate edge-node training and aggregation for EV telemetry data, plus experience communicating AI results to non-technical stakeholders (traffic RL/congestion outcomes).”
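For readers unfamiliar with the BM25 stage mentioned above, here is a simplified Okapi BM25 scorer (an illustrative sketch over pre-tokenized documents, not the candidate's implementation, which would typically use a library such as `rank_bm25`):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25.

    docs is a list of token lists; higher scores mean better lexical matches,
    with term-frequency saturation (k1) and length normalization (b).
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency of each query term across the corpus.
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # term appears nowhere; contributes nothing
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores
```

In a two-stage retrieval setup like the one described, BM25 produces a cheap lexical shortlist that the heavier cross-encoder then reranks.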
Junior Robotics Engineer specializing in ROS 2, perception, and motion planning
“Robotics software engineer/researcher (master’s work) who built a human-aware motion planning stack for a UR16/UR16e arm: RGB-D 3D skeleton perception in ROS2, deep-learning-based human motion prediction, and MoveIt2-integrated real-time planning with a Gazebo digital twin. Demonstrated strong real-time optimization (profiling + GPU offload with CuPy/TensorRT) and practical systems skills spanning safety validation, visualization, and low-level comms (CAN/SocketCAN) on embedded deployments (Jetson, Docker, Autoware/Ouster).”
Mid-level Machine Learning Engineer specializing in LLM agents, RAG, and MLOps
“Built production LLM systems including a real-time customer feedback analysis and workflow automation platform using RAG and multi-agent orchestration with confidence-based human escalation, addressing privacy and legacy integration challenges. Also automated ML operations with Airflow/Kubernetes (e.g., daily churn model retraining) cutting retraining time to under 30 minutes, and demonstrates a rigorous testing/monitoring approach plus strong non-technical stakeholder collaboration.”