Member of Technical Staff | Robotics (Computer Vision / VLA / ML Infrastructure)

2026-04-02 | DeepReach.ai | Santa Rosa, CA
Description:

About the Company

DeepReach is building the next-generation data infrastructure for robotics. We help bridge the gap between promising robot models and real-world deployment by building the systems, data pipelines, and learning loops needed to make robots improve in production. We believe robotics progress will be driven not just by better models, but by better data engines: how data is collected, filtered, evaluated, and turned into measurable gains on real tasks. Our team works across robot deployment, teleoperation, data generation, model training, and evaluation, with a strong bias toward hands-on execution and fast iteration.

About the Role

This combined opening covers three core technical tracks for our robotics team:

  • Computer Vision (Perception)
  • VLA (Vision-Language-Action) & Robot Learning
  • ML Infrastructure

You will be matched to the track that best aligns with your background after you apply. All tracks focus on applied research, deployment on physical robots, closing the loop between data, models, and production performance, and fast iteration in a startup environment.

Responsibilities

Track 1: Member of Technical Staff, Computer Vision

  • Build and improve computer vision pipelines for robotics, including perception for manipulation, scene understanding, tracking, and multi-camera systems
  • Work on camera calibration, synchronization, sensor integration, and data quality improvement for real-world robot setups
  • Develop tools and pipelines for generating, filtering, curating, and validating robotics vision datasets
  • Support data collection and annotation workflows by improving visual quality, consistency, and task relevance
  • Design experiments to measure how perception improvements affect downstream robotic performance
  • Debug perception failures in real environments, including issues caused by lighting, motion blur, occlusion, calibration drift, or sensor noise
  • Read and implement recent vision research, reproduce promising methods, and adapt them to production robotics workflows

Track 2: Member of Technical Staff, VLA

  • Train and fine-tune VLA, diffusion policy, or related robot learning models for real-world tasks
  • Build data and training pipelines that turn deployment and teleoperation data into better policy performance
  • Design experiments to identify what actually improves task success rates on real robots
  • Collaborate with data and deployment teams to close the loop between model failures and data collection strategy
  • Deploy and debug learned policies on physical robot systems, including robot arms, grippers, and multi-camera setups
  • Define internal evaluation frameworks tied to real operational tasks rather than benchmark-only performance
  • Read and implement recent papers, reproduce promising results, and adapt them for our stack and constraints

Track 3: Member of Technical Staff, ML Infrastructure

  • Build and maintain the infrastructure for robotics data processing, model training, evaluation, and experiment management
  • Develop scalable pipelines for ingesting, filtering, curating, versioning, and serving robotics datasets
  • Improve internal tooling for training runs, distributed jobs, checkpointing, dataset management, and metrics tracking
  • Build systems that connect deployment data, teleoperation data, and model evaluation into a fast iteration loop
  • Collaborate with research and deployment teammates to remove bottlenecks in training and evaluation workflows
  • Design internal benchmarks and experiment infrastructure that make model progress measurable and reproducible
  • Read and adapt ideas from research and open-source tooling to improve our internal platform

Qualifications

  • Bachelor's degree or equivalent practical experience in Computer Science, Robotics, Electrical Engineering, or a related field

Required Skills (Computer Vision track)

  • Strong background in computer vision for real-world systems
  • Experience with one or more: multi-view geometry, calibration, visual tracking, segmentation, detection, 3D vision, point clouds, depth sensing, video understanding
  • Strong coding skills in Python and solid experience with PyTorch or related vision tooling
  • Ability to move from research ideas to robust working systems
  • Real-world experience working with physical camera and sensor systems in robotics
  • High ownership mindset and comfort in a fast-paced startup

Preferred Skills (VLA track)

  • Strong hands-on background in robot learning: imitation learning, RL, diffusion policies, VLA, visuomotor policies
  • Strong PyTorch skills and experience with modern model training workflows
  • Ability to move from paper to implementation quickly and independently
  • Strong systems intuition across perception, policy, and control
  • Real-world experience deploying or debugging policies on physical robots
  • High ownership mindset and comfort in an early-stage, fast-changing environment

Pay range and compensation package

Salary reference: $130K–$180K per year

Application Process

We regularly screen resumes submitted through both application channels. If your profile passes the initial review, our team will email you an interview invitation with further details. After shortlisting, we will match your background to the most suitable track (Computer Vision, VLA, or ML Infrastructure) and share the full role details accordingly.

How to Apply

You may apply in either of two ways:

  • Submit your resume directly by applying to this position on the current job platform;
  • Visit our official website talex.ai to find and apply for the corresponding combined role.

