The Pragmatic Engineer

Skills useful to learn for robotics engineering

Helpful software engineering, AI engineering, and robotics fundamentals to know for getting into robotics. Also: advice about studying this exciting discipline at university
Gergely Orosz and Sandor Felber

Jul 08, 2025

Robotics is a very hot industry, and today, the hottest place within it is humanoid robotics. We previously published two deepdives on this topic with Sandor Felber, who’s been a Robot Learning Researcher at MIT, and a Robotics R&D Engineer at Tesla in California, among other roles. The articles cover relevant topics at the intersection of AI, robotics, and software engineering. Since the last deepdive, Sandor has cofounded Nyro Humanoids, an early-stage startup headquartered in San Francisco that aims to build and deploy humanoid robots in the real world.

In the third and final deepdive in this series, we take a close look at skills useful for joining this field, covering:

  1. Software engineering skills

  2. AI skills

  3. Robotics fundamentals

  4. Advice for studying a Master’s in Robotics

  5. Influential perspectives

  6. Why get into robotics?

Previous issues cover:

Robotics basics for software engineers (part 1):

  • Getting into robotics

  • Robotics industry overview

  • Planning a robotics project

  • Development phase

  • Tech stack and tools (and how Python, C, C++, and Rust are popular)

  • Testing, demoing, and shipping

  • Day-to-day as a robot builder

Robotics for software engineers: humanoid robots (part 2):

  • Why the humanoid form?

  • Hardware challenges

  • Software engineering challenges

  • Show me the code! Real-time robotics optimization

  • Real-world optimization techniques

  • How AI is changing robotic optimization

With this, it’s over to Sandor:


We're standing on the threshold of a robotics revolution. Just as OpenAI's ChatGPT “moment” transformed how we think about artificial intelligence (AI), the robotics industry is approaching its own breakthrough. It looks less like a step change and more like a gradual transformation – one that will fundamentally change how we approach physical AI.

At Nyro Humanoids, we're creating the intelligence that powers humanoid systems capable of operating where humans cannot – or should not – go. From disaster response missions to save lives, to potentially dangerous construction sites, and toxic industrial environments that require hazardous activities which can put health at risk, our autonomous humanoid robots represent the cutting edge of what we call ‘physical AI’.

Our mission is to deploy intelligent humanoid robots in high-risk environments to protect human life and expand the boundaries of what's possible. Whether it's navigating collapsed buildings during search and rescue operations, handling hazardous materials, or operating in challenging conditions, we are developing the cognitive capabilities that enable robots to think, adapt, and act autonomously when every second counts.

The same breakthroughs that have revolutionized language models are now being applied to physically embodied intelligence. There are computers with arms and legs – robots! – which can understand their environment, make complex decisions, and execute precise physical actions in real time.

What follows is a comprehensive guide to the skills, technologies, and mindset that I’ve developed on my journey. Whether you're a software engineer looking to make the leap into robotics, a student considering the next move, or you’re simply curious about this rapidly-evolving field, this deepdive is a roadmap for becoming part of the robotics revolution.

The future isn't just about smarter software, it's about intelligence that can move, manipulate, and operate in the physical world. At Nyro Humanoids, we are building it one training run at a time – and we’re also hiring.

Building robots is a multidisciplinary endeavour that blends pragmatic software engineering, AI expertise, and a deep understanding of robotics fundamentals. What follows is a breakdown of the key skills that have proven invaluable to me every day in engineering robotics software and hardware.

1. Software engineering skills

Software, electrical, and mechanical engineering are the backbone of robotics. Let’s consider software engineering, where skills that prioritize performance, scalability, and reliability are critical for building robots that succeed in real-world applications. Depending on the kind of robotics you get into, some areas of interest might be:

Communication protocols, such as:

  • CAN Bus (Controller Area Network Bus)

  • The TCP/IP networking stack, and peer-to-peer or multi-node connections, including the graph theory behind them. Both are important for designing robust communication between single- and multi-robot systems, and between their hardware and software components.

  • ROS 2 (Robot Operating System 2) middleware: acts as a meta-operating system for a robot and its controller, whether onboard or remote.

Performance optimization: writing algorithms that are efficient in power consumption by minimizing CPU, GPU, and/or memory usage. One example of why this matters is batteries. With overly resource-intensive software, the CPU may require thermal throttling or else overheat. If your code isn’t efficient and the CPU draws lots of power, then the robot’s battery won’t last as long, and additional cooling heat sinks might need to be installed. Basically, high-performance, efficient code is a must-have, not a nice-to-have, for real-world robotics.
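To make the communication-protocol side concrete, here’s a minimal sketch of packing and unpacking a classic CAN 2.0A-style frame in Python. The helper names and exact byte layout are illustrative assumptions; a real robot would use a CAN library (such as python-can over SocketCAN) rather than hand-rolled structs:

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack an 11-bit CAN ID and up to 8 payload bytes into a fixed-size frame."""
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("standard CAN IDs are 11-bit")
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    # 2-byte ID, 1-byte DLC (data length code), payload padded to 8 bytes
    return struct.pack("<HB8s", can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame: bytes) -> tuple[int, bytes]:
    """Recover the CAN ID and the original payload from a packed frame."""
    can_id, dlc, payload = struct.unpack("<HB8s", frame)
    return can_id, payload[:dlc]

frame = pack_can_frame(0x123, b"\x01\x02\x03")
can_id, data = unpack_can_frame(frame)
```

The fixed, tiny frame size is the point: CAN trades flexibility for predictable latency and overhead, which is why it remains the workhorse bus for actuators and low-level sensors.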

Multithreading and multiprocessing: managing parallel processes in C/C++, Python, or Rust for robotics systems is crucial. Often, you may want to stream data from several sensors at once, or handle tasks with high latency sensitivity.
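A small sketch of that pattern with Python’s standard library: one thread streams (simulated) sensor readings while the main thread consumes them. The `read_imu` helper and the sample count are made up for illustration; real code would read from a sensor driver and feed a control loop:

```python
import queue
import threading

def read_imu(sample_id: int) -> float:
    """Stand-in for a sensor driver call; returns a fake pitch angle."""
    return 0.01 * sample_id

def sensor_loop(out: queue.Queue, n_samples: int) -> None:
    """Producer thread: push readings into a thread-safe queue."""
    for i in range(n_samples):
        out.put(read_imu(i))

samples: queue.Queue = queue.Queue()
t = threading.Thread(target=sensor_loop, args=(samples, 100))
t.start()
t.join()

# Consumer side: drain the queue, keeping only the freshest reading,
# which is what a latency-sensitive control loop usually wants.
latest = None
while not samples.empty():
    latest = samples.get_nowait()
```

The thread-safe queue is doing the real work here: it decouples the sensor’s timing from the consumer’s, which is the core reason to reach for threads in robot software.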

Vectorization: leveraging the parallelism of modern CPU/GPU architectures, such as NVIDIA’s RTX 5090 graphics card, to speed up computationally heavy tasks. Some pragmatic examples:

  • Using PyTorch or NumPy libraries to parallelize computations for more efficient resource usage

  • Significantly accelerating training and inference processes

  • Visualizing training runs in real time to inspect the behavior of half-baked neural networks (often referred to as “policies” in robot learning)
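Here’s what the first bullet looks like in practice: rotating a (made-up) point cloud with a Python loop versus one vectorized NumPy matrix multiply. Both produce identical results, but the vectorized form dispatches the whole batch to optimized native code:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((10_000, 3))  # fake 3-D lidar points

# Rotation of 90 degrees about the z-axis
theta = np.pi / 2
rot_z = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Loop version: one tiny matmul per point, with Python overhead each time
looped = np.array([rot_z @ p for p in points])

# Vectorized version: one big matmul for all points at once
vectorized = points @ rot_z.T
```

The same idiom carries over directly to PyTorch tensors, where it additionally lets the work run on a GPU.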

CUDA and cuDNN: CUDA is NVIDIA’s parallel computing platform and API. cuDNN stands for CUDA Deep Neural Network. These frameworks allow for:

  • Using NVIDIA GPUs (the current market leader in the kind of workloads required for robot learning) to accelerate deep learning use cases.

  • Making SLAM (Simultaneous Localization and Mapping) more efficient. It involves constructing and updating a map of an unknown environment, which is traditionally part of the stack for mobile robots.

  • Real-time robotics using parallel processing, tensor cores, and optimized inference, which is the process of using a trained model to make predictions or decisions based on new, unseen data – which you could think of as generating the next output tokens.

Here’s a plain-English cheat sheet for speeding up robot ML and onboarding:

Rules of thumb:

  1. Start with ONNX (Open Neural Network Exchange) if you want portability

  2. Stick to TorchScript if you’re working fully using PyTorch

  3. Use Apache TVM for weird chips

  4. Use micro stacks like TensorFlow Lite for Microcontrollers (TFL-Micro), microTVM or uTensor for coin-cell robots (coin-cell robots are miniature robots powered by a flat, round battery cell called a coin cell)

Complexity analysis for resource-constrained devices: it’s necessary to ensure that your algorithms can scale efficiently as a system’s complexity expands to multiple tasks, or sets of tasks.

For example, if a model-based or learned controller (the latter controlling the robot with some sort of neural network) requires 50ms to execute a small subset of potential tasks, it will probably be hard to scale it to process many other tasks while maintaining a sufficiently high control frequency for agile ones. Control frequency is how often a robot’s control system updates, i.e. executes its control loop. Being able to maintain control frequency while processing additional tasks often underpins robustness, agility, and speed-related metrics.
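The arithmetic behind that example is worth internalizing, since it drives so many robotics design decisions. The specific numbers below are illustrative:

```python
def max_control_frequency_hz(step_time_ms: float) -> float:
    """Highest loop rate a controller can sustain given its per-step cost."""
    return 1000.0 / step_time_ms

def time_budget_ms(target_hz: float) -> float:
    """Per-step compute budget required to hit a target control frequency."""
    return 1000.0 / target_hz

slow = max_control_frequency_hz(50.0)  # a 50ms controller caps out at 20 Hz
budget = time_budget_ms(500.0)         # a 500 Hz loop leaves only 2ms per step
```

In other words, a controller that spends 50ms per step can never exceed 20 Hz, while agile whole-body control often wants hundreds of hertz – leaving a budget of just a few milliseconds for everything: perception, planning, and actuation.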

2. AI skills

As mentioned above, robotics increasingly intersects with AI, and this is especially true of tasks that require autonomy, perception, and decision making. I personally found online resources from Andrej Karpathy, Pieter Abbeel, and some other greats of robotics to be more useful than many books which quickly become obsolete in this rapidly transforming field – no pun intended. Areas it’s good to be proficient in:

Machine Learning (ML) basics: Core principles for training models and extracting insights from data. For more, check out our deepdive, The machine learning toolset.

Data science and probability theory: both are used to understand uncertainty, and how to calculate and make probabilistic decisions. Much of robotics runs on uncertainty that must be tamed.

Decision-making systems and cognitive science: modelling behaviour, navigation, and task planning. Cognitive science is the study of the mind and its processes, which can be highly relevant, especially when constructing humanoid robots.

Deep learning and representational learning: useful for developing perception systems for vision or audio. Deep learning is a subset of machine learning utilizing neural networks for tasks like classification and regression. Representational learning is the process of extracting meaningful patterns from raw data, which allows robots to develop useful abstractions of their environments and tasks. A book I enjoyed reading on multi-agent reinforcement learning is “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches”.

Reinforcement learning (RL) and imitation learning: used to teach robots to learn optimal actions through trial and error and via human demonstrations. A good resource on this is Spinning Up by OpenAI.
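As a minimal illustration of the trial-and-error idea (this is my own toy sketch, not code from Spinning Up), here’s tabular Q-learning on a made-up one-dimensional track: an agent starts at cell 0 and learns that walking right reaches the goal at cell 4:

```python
import random

N_STATES = 5
ACTIONS = (-1, 1)                       # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def greedy(s: int) -> int:
    """Pick the best-known action, breaking ties randomly."""
    best = max(q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):                    # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore
        a = rng.choice(ACTIONS) if rng.random() < EPSILON else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy walks right from every non-goal state
policy = {s: greedy(s) for s in range(N_STATES - 1)}
```

Real robot learning swaps the lookup table for a neural network and the toy track for a physics simulator, but the reward-driven update loop is the same idea.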

Diffusion models and multi-agent systems: Leveraging cutting-edge approaches for multi-robot collaboration and planning for more efficient routing and trajectories.

Quantization and pruning: reducing model size and inference latency by lowering numeric precision (e.g., INT8 quantization) and removing redundant weights, for efficient deployment on edge devices. The two complement each other: prune redundant weights, then store the survivors in INT8 to slash model size and latency. Train with quantization-aware training, where every forward and backward pass mimics 8-bit math, so the network learns weight values and activation ranges that hold up after real quantization. The result is a compact, edge-friendly model with almost zero accuracy loss.
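A NumPy sketch of the prune-then-quantize recipe, on a single made-up weight matrix; production systems would use a framework’s quantization toolkit rather than hand-rolled arithmetic, but the underlying math is just this:

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.standard_normal((64, 64)).astype(np.float32)  # fake layer weights

# 1) Prune: zero out the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(w), 0.5)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0).astype(np.float32)

# 2) Quantize: map surviving float32 weights to INT8 with a single scale.
scale = np.abs(w_pruned).max() / 127.0
w_int8 = np.clip(np.round(w_pruned / scale), -127, 127).astype(np.int8)

# Dequantize to measure reconstruction error: at most half a quantization step.
w_restored = w_int8.astype(np.float32) * scale
max_err = np.abs(w_restored - w_pruned).max()
```

The INT8 copy is a quarter the size of the float32 original, and the error bound shows why accuracy often barely moves: each weight lands within half a quantization step of its pruned value.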

Note from Gergely: There are plenty of books and online resources on these topics, while search engines like Perplexity or Kagi can provide recommendations. For example, for the query:

“What are books on Diffusion models and multi-agent systems?”

The search engine returns several suggestions that can be good starting points, if books are your thing. Search by the format you want. Full subscribers to Pragmatic Engineer get a year of Perplexity Pro, and 3 months of Kagi Ultimate for free.

3. Robotics fundamentals

A solid grounding in mathematics, physics, and hands-on engineering is non-negotiable for designing, building, and deploying robots:

  • Advanced mathematics and physics: Kinematics, dynamics, thermodynamics, mechanisms, electromechanics, energy systems, sensors, biomechanics, structural mechanics, and power systems.

  • Realistic simulators: Proficiency in tools like MuJoCo, Isaac Sim, or PyBullet, to iterate before real-world deployment.

  • Signal processing and real-time filtering: Ensuring accurate sensor data acquisition, filtering, transmission, processing, and interpretation.

  • Systems engineering: Designing and integrating complex hardware-software architectures in a scalable way. Many projects fall victim to improper project management and lack of compartmentalization, which can make the debugger’s life very hard when hardware’s also in play.

  • Human-robot interaction: Building systems that operate effectively alongside humans, with appreciation for – and careful consideration of – how a robot is actually going to be used at deployment, how humans will use and potentially misuse it, and ensuring it’s foolproof.
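The signal processing bullet above can be made concrete with the simplest real-time filter: a first-order low-pass, i.e. an exponential moving average, sitting between a noisy sensor and a controller. The sample values here are made up; `alpha` closer to 1 trusts new readings more, closer to 0 smooths harder:

```python
def ema_filter(samples, alpha=0.2):
    """First-order low-pass filter: blend each new sample into a running state."""
    filtered, state = [], None
    for x in samples:
        state = x if state is None else alpha * x + (1 - alpha) * state
        filtered.append(state)
    return filtered

# A step input (sensor jumps from 0 to 1) settles smoothly toward 1.0
# instead of passing raw jitter straight into the control loop.
smoothed = ema_filter([0.0] * 5 + [1.0] * 20, alpha=0.3)
```

Real stacks graduate from this to Kalman filters and friends, but the trade-off is the same: more smoothing means less noise and more lag, and the control loop has to live with both.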

These varied skills combine to overcome the inherent complexity of robotics. Each contributes to the ultimate goal of creating functional, scalable, and reliable robots that perform effectively in the real world.

4. Advice for studying a Master’s in Robotics

Pursuing a postgraduate degree in robotics is a strategic move for mastering interdisciplinary skills, preparing for this rapidly-evolving field, and unlocking opportunities in academia, industry, and entrepreneurial ventures. Opting for university could be a worthwhile investment if you’re serious about getting involved, regardless of age. If that sounds appealing, I have some tips for making the most of it.

A guest post by Sandor Felber, MIT CSAIL

© 2025 Gergely Orosz