Robotics for software engineers
What does it take to build and program robots? A look into the exciting, increasingly popular field of robotics. Guest post by humanoid robot expert, Sandor Felber
Today, there’s an ever-growing number of startups, scaleups, and established companies like Boston Dynamics producing increasingly capable robots, including ones that look humanoid. Tech giants like Tesla have been building humanoid robots, and Meta is expected to invest billions in the technology. These and other factors point to the possibility that, after artificial intelligence, robotics could be the “next big thing” in tech.
But what’s the process of building intelligent robots, and what’s the day-to-day job like for the developers who build them? It’s not always easy to find out, because robotics startups are famously shrouded in secrecy – which is unsurprising for such cutting-edge technology.
Academia is often a bit more open, so I turned to Sandor Felber, a humanoid robot learning researcher at MIT in Cambridge, Massachusetts, who previously led a team that built a self-driving race car. Sandor has also been a robotics intern at Tesla in Palo Alto, California, and a robotics R&D engineer at the Edinburgh Centre for Robotics.
Today, he takes us through:
Getting into robotics. From personal interest at high school in electric motors, through studying electrical and mechanical engineering at university, building a driverless race car, interning at Tesla, and researching humanoid robots at MIT.
Robotics industry overview. Industrial robots are becoming more widespread, academia focuses on smaller “long-shot” bets, and industry (the commercial sector) executes on proven concepts.
Planning a robotics project. This is similar to most projects: start with a vision, make a plan, and break it down into steps. It’s always useful to be clear on “critical” vs “nice-to-have” features.
Development phase. Control strategies (model-based control vs learned controllers), simulation and hardware deployment differences, and techniques to make simulations more realistic.
Tech stack and tools. Python, C, C++, and Rust are popular programming languages. A list of tools for experiment tracking and simulation.
Testing, demoing, and shipping. Common reasons why robot deployments fail, an example of an industrial robot deployment, and why ongoing customer support is a baseline expectation.
Day-to-day as a robot builder. There’s a big difference between academia and industry, and Sandor shares an overview of both.
This topic is intriguing because it combines software, hardware, and cutting-edge tech.

This is a guest post. If you’re interested in writing a deepdive for The Pragmatic Engineer – collaborating with Gergely on it – check here for more details.
With that, it’s over to Sandor. You can follow Sandor on LinkedIn, and learn more about his work on his website.
1. Getting into robotics, a personal account
My interest in robotics began in high school, where I wrote the junior equivalent of a dissertation on characterization methods for how electric motors behave under different conditions. This led me to pursue a degree in electrical and mechanical engineering at the University of Edinburgh, home to numerous renowned robotics and AI researchers.
Building a driverless electric race car was one of my bigger projects. At uni, I joined the Edinburgh University Formula Student (EUFS) team, where we designed, built, and raced exactly that. From a roboticist’s perspective, such a car is considered a wheeled mobile robot.

We designed and built several versions of the car. Here’s a later model:
I started working in the electric powertrain team, where I was responsible for designing and implementing the systems that generate and deliver power. This included:
Developing high-voltage battery systems
Integrating traction motors and encoders
Designing power electronics for charging, converting between voltage levels, etc.
Creating cooling systems for batteries, motors, and inverters
Here’s the high-voltage battery pack I had a hand in designing:

I eventually became the powertrain team’s lead. Upon returning from Tesla the following year, I moved on to direct all operations related to the driverless vehicle’s hardware design, and later served as president of the roughly 140-member team.

The students in the team worked on the project part-time, and everyone went above and beyond their academic requirements to get hands-on experience. It was a student-led project – and an especially cool one; we built race cars that drove on actual Formula 1 tracks!

Along the way, I discovered a passion for control theory. Control systems engineers tend to concern themselves with crafting control strategies that ensure optimal performance, from spacecraft trajectories to insulin delivery in diabetics.
In robotics, you could think of control theory as the invisible puppeteer of a robotic arm, except that instead of pulling strings, it uses mathematics to orchestrate every joint’s motion in real time. When a robotic arm needs to smoothly pick up an egg without crushing it, control theory provides the mathematical “muscle memory” that turns crude motor commands into precise, graceful movements. It does this through constant sensor feedback and adjustment, with the behavior depending on how sensitively the control system reacts to the various feedback signals.
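To make that feedback loop concrete, here is a minimal sketch of a PID (proportional–integral–derivative) controller driving a single robot joint toward a target angle. The gains, time step, and toy joint model are made-up values chosen purely for illustration, not something tuned for a real robot:

```python
# Minimal PID control loop for one robot joint (illustrative values only).
# Each cycle, the controller reads the joint angle (sensor feedback),
# compares it to the target, and adjusts the motor command accordingly.

def pid_step(target, measured, state, kp=8.0, ki=0.5, kd=6.0, dt=0.01):
    """One control cycle: returns a motor command and the updated controller state."""
    error = target - measured
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, {"integral": integral, "prev_error": error}


# Toy joint model: the motor command is treated as angular acceleration.
angle, velocity = 0.0, 0.0                      # rad, rad/s
target_angle = 1.0                              # where we want the joint to end up
dt = 0.01                                       # 100 Hz control loop
state = {"integral": 0.0, "prev_error": target_angle - angle}  # avoid a derivative kick

for _ in range(500):                            # 5 seconds of simulated time
    command, state = pid_step(target_angle, angle, state, dt=dt)
    velocity += command * dt                    # integrate acceleration -> velocity
    angle += velocity * dt                      # integrate velocity -> position

print(f"final joint angle: {angle:.3f} rad (target {target_angle} rad)")
```

Tuning those three gains – how strongly to react to the current error, the accumulated error, and the rate of change of the error – is exactly the kind of trade-off control engineers spend their time on.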
My interest in control theory and background in batteries, powertrains (vehicle motors), and electrical control systems from previous internships landed me one at Tesla's robotics department. Since then, I’ve worked with academic and industrial stakeholders on projects ranging from quadruped (quad- as in four, and -ped as in legged) dog-like robots, to humanoid systems.
I’m currently at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), focusing on teaching humanoid robots to perform complex, real-world tasks using learned controllers, powered by neural networks. I’m particularly interested in embedding intelligence into robots by leveraging various types of learning, including supervised and unsupervised learning, offline and online deep reinforcement learning, and imitation learning.
I use these approaches for humanoid robot locomotion, manipulation, whole-body control, and teleoperation (remotely controlling the robot).
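To make “learned controllers” a bit less abstract: in many of these setups, a small neural network maps the robot’s sensor readings to joint position targets, which lower-level controllers (like the PID loop above) then track. The sketch below uses NumPy with made-up dimensions and random, untrained weights, purely to show the shape of the control loop; a real policy would be trained with reinforcement or imitation learning and run at a fixed frequency on the robot:

```python
import numpy as np

# Illustrative dimensions for a humanoid; these are assumptions, not a real robot's spec.
NUM_JOINTS = 23                      # actuated joints
OBS_DIM = 3 * NUM_JOINTS + 6         # joint positions, velocities, previous targets + base IMU

rng = np.random.default_rng(0)

# A tiny two-layer "policy" with random weights, standing in for a trained network.
W1, b1 = rng.normal(0, 0.1, (128, OBS_DIM)), np.zeros(128)
W2, b2 = rng.normal(0, 0.1, (NUM_JOINTS, 128)), np.zeros(NUM_JOINTS)

def policy(observation: np.ndarray) -> np.ndarray:
    """Map one observation vector to joint position targets."""
    hidden = np.tanh(W1 @ observation + b1)
    return np.tanh(W2 @ hidden + b2)             # squashed to [-1, 1], scaled below

def read_sensors() -> np.ndarray:
    """Placeholder for reading joint encoders and the IMU; returns fake data here."""
    return rng.normal(0.0, 0.05, OBS_DIM)

# The policy might run at ~50 Hz, while low-level controllers track its targets at a higher rate.
for _ in range(3):
    obs = read_sensors()
    joint_targets = 0.5 * policy(obs)            # scale to a modest joint range (rad)
    print(joint_targets[:4])                     # in reality: send to the joint controllers
```

Whether those weights come from offline reinforcement learning, online fine-tuning, or imitation of teleoperated demonstrations is exactly what the research mentioned above explores.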
2. Robotics industry overview
Robotics companies are raising more money than ever before. In the first eight months of 2024, global robotics companies attracted a record $10.8 billion in funding – roughly $1.35 billion per month! What’s fuelling this surge of capital?
In public debate, there’s no shortage of focus on labor shortages and stagnating productivity, which reduce growth and competitiveness. According to a recent study, the US economy is estimated to be missing up to 4.6 million workers annually who are needed to maintain current levels of supply and demand; that number equates to 2% of the current US population. The same study suggests Germany needs to find an additional 1.6 million workers to maintain current economic levels: 3% of its population.
This is one driver of industrial robots’ increased adoption, which shows no sign of slowing down. Sustaining economic growth amid shrinking labor pools is a path for robotics that’s getting a lot of attention, as evidenced by the uptake of industrial robots: global installations reached nearly 600,000 units in 2024, as per the World Robotics Report. That figure surpassed previous benchmarks, and suggests that industries struggling with labor shortages can return to growth.

If robots become ubiquitous, what will humans do? This is the most common concern I hear when I talk about my work. The question is an old one: on the New York subway, I stumbled upon a Holiday Nostalgia Ride, which offers trips in old subway cars dating from the 1930s to the 1970s. My coach looked like it was from the 1960s, and inside was an advert about upskilling for “tomorrow’s jobs”:
The world has changed massively since then, thanks in part to automation, without which our standards of comfort wouldn’t be possible or affordable. Think of the dishwasher: it automates the washing up and helps canteens offer a cheaper menu than if they had to hire extra people to wash dishes and pay for excess water (new dishwashers are very efficient).
Acquiring new skills in a quickly-changing world is at least as necessary today as it was back then, as illustrated by that subway ad from the mid-20th century.
Industry vs academia
Approaches to research and development have always differed between industry and academia. I’ve worked in both environments, and here’s how I see them comparing in robotics:
Academia: smaller “long-shot” bets, developed on a budget. Many projects take years to mature, due to the limited effort that a couple of post-docs, PhD students, and undergrads can dedicate.
Industry: execute on proven concepts with substantial backing. Industry prioritizes execution on feasible concepts: once a concept is shown to work – usually verified by a proof of concept (POC), also known as a “minimum viable product” in startup lingo – industry players can raise vast amounts of money to build it.
Modern robotics may have reached an inflection point, where enough academic “long-shot” bets are delivering results, with feasible paths to building practical robots, including humanoid ones. Examples of bets made on robotic hardware:
Tesla’s Optimus humanoid prototype
Boston Dynamics bidding farewell to hydraulics and welcoming electric actuators on its all-new humanoid platform, Atlas
1X building a humanoid robot, NEO
3. Planning a robotics project
Here’s how I’ve seen robotics projects get done.
Vision for a demo: many robotics projects start with the question: “What should the robot achieve when completed?” Apple co-founder Steve Wozniak’s “coffee test” is one such vision, now frequently referred to as the “New Turing Test.” His definition:
“A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons."
In practice, on most projects I’ve seen, the vision gets watered down or descoped as deadlines approach and material or human resources become limiting factors. However, a demo that passes the test above could well serve as a benchmark for artificial general intelligence (AGI): a physically-embodied machine demonstrating intelligence by entering an unfamiliar home and successfully making a cup of coffee with the tools available in a previously unseen kitchen.
After the vision is set, the next steps are:
Planning: goals, demoable outcomes, and target applications or target environments. The requirements of a good demo in a target environment can be very high!
Break things down. Translate deliverables into functional requirements. For each functional requirement, identify one or more features that demonstrate it has been fulfilled. When breaking things down, features are often separated into at least two subsets:
Critical: features that must be present in the final product; if one is missing, the project is considered a failure. An example is a 50kg humanoid robot deployed in warehouse automation that needs to carry a 10kg payload per arm. If the electric motors moving the arms are too weak, or the ankles suddenly overheat when operating at the full 70kg of combined weight, then a client won’t be happy with the product or service (RaaS, or Robot-as-a-Service, is a thing now). Such a feature therefore likely requires robust engineering all the way back to the very left side of the V-model. Identifying these interdependencies and making engineering design calls based on intuition is challenging, and is part of what makes for great technical leaders.
Nice-to-have: features that a final product could ship without, such as a higher control frequency to achieve smoother, less jittery motion. These cannot be fully neglected, but they can be treated as soft constraints, in contrast to critical features, which tend to be hard constraints. A robot that gets the job done, even if not as smoothly, could still be counted as a success with room for improvement.
Critical vs nice-to-have thresholds can change over time. Hard constraints (“critical”) and soft ones (“nice-to-have”) are often linked, and the difference frequently comes down to where a somewhat arbitrary threshold of quantity or quality is set.
Being clear about the “minimum needed” (critical) and “nice-to-have” thresholds is good practice in robotics projects. These can shift as the project develops, turning nice-to-haves into critical features. As long as everyone on the project is notified and on board when such changes happen, the project should progress without hiccups.
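One lightweight way to keep those thresholds explicit is to record them next to measured results, so that promoting a nice-to-have to a critical requirement is a visible, one-line change. The sketch below is hypothetical; the metric names, thresholds, and measurements are made up, loosely based on the warehouse example above:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    threshold: float      # minimum acceptable value
    critical: bool        # True = hard constraint, False = nice-to-have (soft constraint)

# Hypothetical targets for the 50kg warehouse humanoid discussed above.
requirements = [
    Requirement("payload_per_arm_kg", 10.0, critical=True),
    Requirement("continuous_runtime_min", 60.0, critical=True),
    Requirement("control_frequency_hz", 500.0, critical=False),   # smoother, less jittery motion
]

# Made-up measurements from a test campaign.
measured = {
    "payload_per_arm_kg": 11.2,
    "continuous_runtime_min": 48.0,
    "control_frequency_hz": 400.0,
}

critical_misses = [r.name for r in requirements if r.critical and measured[r.name] < r.threshold]
soft_misses = [r.name for r in requirements if not r.critical and measured[r.name] < r.threshold]

print("FAIL" if critical_misses else "PASS",
      f"- critical misses: {critical_misses}, nice-to-have misses: {soft_misses}")
```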
4. Development phase
During development, robotics engineers need to balance proof-of-concept experiments with the scaling of a project, including the supporting infrastructure. For example, one proof-of-concept experiment could be verifying that a parallel jaw gripper (the robot’s hand) can grasp a mug in specific setups.
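As a toy illustration of what such a proof-of-concept check might look like before spending any simulator or hardware time, here is a back-of-the-envelope grasp feasibility check for a parallel jaw gripper and a mug. Every number in it – gripper opening, grip force, friction coefficient, mug mass – is an assumption made for the example:

```python
# Back-of-the-envelope check: can a parallel jaw gripper hold a mug against gravity?
# Friction from both jaws must at least support the mug's weight: 2 * mu * F_grip >= m * g
G = 9.81  # gravitational acceleration, m/s^2

def can_grasp(mug_diameter_m, mug_mass_kg, max_opening_m, grip_force_n, friction_coeff):
    fits = mug_diameter_m <= max_opening_m                        # jaws open wide enough
    holds = 2 * friction_coeff * grip_force_n >= mug_mass_kg * G  # friction beats gravity
    return fits and holds

# Assumed numbers: 8cm-wide mug weighing 0.35kg, gripper opens to 10cm,
# squeezes at 20N, with a rubber-on-ceramic friction coefficient of roughly 0.5.
print(can_grasp(0.08, 0.35, 0.10, 20.0, 0.5))   # True: 2 * 0.5 * 20N = 20N >= ~3.4N
```

A real proof of concept would of course go further – checking grasp poses in simulation and then on hardware – but even a crude model like this helps decide whether a chosen gripper is worth prototyping with.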
Scaling can be hard because there are lots of problems in the real world to solve. Scaling the above example of holding a mug can be seen as: