
How Does a Robot Actually Move?

A robot that can see the world is useless if it cannot act on what it sees. Movement is where perception becomes physical — and the engineering required to make it happen is far more interesting than it looks.

Humanoid robot in motion
A humanoid robot during a free-running demonstration. Every movement you see — the push-off, the hip rotation, the landing correction — is the product of electric actuators responding to sensor feedback hundreds of times per second.
The Problem

Moving a human body is something the brain handles automatically, without conscious thought, from early childhood. Reaching for an object, catching a fall, stepping over uneven ground. Each of these involves hundreds of muscles working across dozens of joints, continuously adjusting in real time. No one programs this. It develops through years of physical experience.

Replicating it in a machine is one of the deepest problems in engineering. Physical control turns out to be at least as hard as any software challenge in the field, and in many ways harder, because it cannot be fixed with a patch. A mistake in motion control breaks hardware, or breaks whatever the robot is touching. It has taken engineers decades to get even close.

Making a robot move is not a software problem. It is a physics problem, a materials problem, an energy problem, and a control problem all at once.

The Actuator

Every movement a robot makes starts with an actuator. An actuator is any device that converts energy into physical movement. In robotics, that energy is almost always electrical.

The word covers a wide range of devices, from tiny vibrating motors in a phone to the massive cylinders that lift a car off the ground. In robotics, three types dominate, and choosing between them is one of the most consequential decisions in any robot's design.

Electric motors

The vast majority of robots today are driven by electric motors. An electric motor spins a shaft when current flows through it. That spinning motion passes through a gearbox to a joint, converting fast rotation into slower, more powerful movement, much the way a bicycle's gears convert fast pedaling into useful forward motion.

Electric motors are clean, controllable, and relatively cheap. Their speed and torque can be adjusted thousands of times per second, and they produce precise, repeatable motion without wearing out. Modern brushless electric motors, the kind found in performance drones and electric cars, are efficient and deliver strong force for their size.

Their limitation is peak force. An electric motor of a given size can only produce so much torque. Getting more power requires either a larger motor or a higher gear ratio. Higher gear ratios introduce their own problem: they make the joint rigid, which makes the robot feel stiff and unsafe to work around.
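The torque-for-speed trade a gearbox makes can be sketched in a few lines. The numbers below are illustrative, not taken from any particular motor's datasheet:

```python
def geared_output(motor_torque_nm, motor_speed_rpm, gear_ratio, efficiency=0.9):
    """A gearbox multiplies torque by its ratio and divides speed by it,
    minus some efficiency loss. Returns (joint torque, joint speed)."""
    joint_torque = motor_torque_nm * gear_ratio * efficiency
    joint_speed = motor_speed_rpm / gear_ratio
    return joint_torque, joint_speed

# The same small motor with two different gearing choices:
low = geared_output(0.5, 6000, gear_ratio=10)    # compliant, fast joint
high = geared_output(0.5, 6000, gear_ratio=100)  # strong, stiff joint

print(low)   # (4.5, 600.0)  -> ~4.5 N·m at 600 rpm
print(high)  # (45.0, 60.0)  -> ~45 N·m at 60 rpm
```

Ten times the gearing buys ten times the torque, but the joint now turns ten times slower and, as the next section explains, resists being back-driven.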

Key concept

Why joint stiffness matters

Push on a joint with a low gear ratio and it gives way smoothly, like a human arm being gently moved. Push on a joint with a high gear ratio and it resists, stiff and unyielding. More gearing gives you more force but locks the joint up in the process. A robot that cannot give way is dangerous to work around and looks unnatural when it moves. A robot that can absorb a shove and keep going is far safer and far more capable in environments where unexpected contact is possible.
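One common way to give a geared electric joint some "give" is to control it as a virtual spring and damper, an approach usually called impedance control. A minimal sketch, with made-up gains:

```python
def compliant_torque(q, q_des, q_dot, stiffness, damping):
    """Virtual spring-damper law: torque pulls the joint toward its
    setpoint. Low stiffness yields under a push; high stiffness fights it."""
    return -stiffness * (q - q_des) - damping * q_dot

# An external push displaces the joint 0.2 rad from its setpoint:
soft  = compliant_torque(0.2, 0.0, 0.0, stiffness=5.0,   damping=0.5)
stiff = compliant_torque(0.2, 0.0, 0.0, stiffness=500.0, damping=5.0)
print(soft, stiff)  # -1.0 -100.0: the stiff joint pushes back 100x harder
```

Tuning that stiffness number is, in miniature, the safety trade-off described above: a soft joint absorbs a shove, a stiff one transmits it.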

Hydraulic actuators

Hydraulic actuators use pressurized fluid to generate force. A pump pushes oil through a circuit of tubes into a cylinder, and the pressure of that oil against a piston creates movement. It is the same principle you find in a digger's arm or a car's braking system.

Hydraulics can generate extraordinary force from a compact package. Early generations of advanced humanoid robots ran on hydraulics specifically because nothing else could match the power needed for the dynamic movement their teams were attempting.

The cost is complexity. Hydraulic systems need a pump, a reservoir, a network of high-pressure lines, valves, and seals. They leak. They are heavier than electric systems of similar capability. And they are hard to control precisely at low forces, which matters whenever the robot needs to handle anything fragile.

For humanoid robots intended for real-world deployment, hydraulics are largely a dead end. The maintenance overhead alone makes them impractical at scale. They will persist in heavy industrial machinery and some specialized applications where nothing else delivers the required force. For robots that need to operate in human environments, the direction is electric and has been for some time.

Electric vs hydraulic actuator comparison
Left: a brushless electric motor with integrated gearbox from a modern humanoid robot joint. Right: a hydraulic linear actuator of similar force output. The electric unit is lighter and more controllable; the hydraulic unit generates higher peak force from a smaller cylinder.
From actuator to joint

Degrees of Freedom

A single actuator produces a single motion. A useful robot requires many motions, coordinated across many joints. Engineers describe the range of independent movements available to a system using the term degrees of freedom, abbreviated to DOF. Each degree of freedom is one axis of independent motion. One direction in which something can move or rotate on its own.

Your shoulder joint has three degrees of freedom. You can raise your arm forward, raise it sideways, and rotate it. Your elbow has one. By the time you get to your fingertips, the human arm has somewhere around 27 to 30 degrees of freedom, which is why human hands are so capable and so difficult to replicate.

A humanoid robot may have 40 to 50 degrees of freedom in total. Each one requires its own actuator, its own sensors, and its own contribution to the control system. That is why building a humanoid robot is not just a mechanical challenge. It is a systems challenge at every level simultaneously.
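Counting degrees of freedom is simple bookkeeping, but the tally drives the whole design, since each axis needs its own actuator and sensing. A toy count for one arm, with illustrative joint numbers rather than any specific robot's layout:

```python
# Hypothetical joint layout for one humanoid arm; counts are illustrative.
arm_dof = {
    "shoulder": 3,   # pitch, roll, yaw
    "elbow": 1,      # flexion only
    "wrist": 2,      # pitch and roll
    "fingers": 15,   # e.g. 5 fingers x 3 actuated joints each
}

total = sum(arm_dof.values())
print(total)  # 21 independent axes -> 21 actuators, 21 sensor channels
```

Double it for two arms, add legs, torso, and neck, and the 40-to-50 figure for a full humanoid follows quickly.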

The Control Problem

Having actuators and joints is the hardware side of the problem. The harder side is the software: deciding, hundreds of times per second, exactly how much force to apply to each actuator to produce the desired movement without the robot falling over or breaking something.

Position control tells each joint where to be. Move joint three to 45 degrees. Hold it there. This is how most industrial robots work, following pre-programmed paths with sub-millimeter accuracy. It works well in structured environments where nothing unexpected happens.

Force control tells each joint how hard to push rather than where to go. This is more like how humans move. We specify intentions, and our muscles apply whatever forces are needed. Force control makes robots adaptive. A force-controlled arm pushing down on an uneven surface adjusts continuously to maintain consistent contact. A position-controlled arm doing the same thing will either stall or push too hard.
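The two strategies can be written as two small control laws. The gains and signal values here are hypothetical, chosen only to show the contrast:

```python
def position_control(q, q_des, q_dot, kp=200.0, kd=10.0):
    """PD position control: torque proportional to how far the joint
    is from its target angle, damped by joint velocity."""
    return kp * (q_des - q) - kd * q_dot

def force_control(f_measured, f_des, kf=0.5):
    """Force control: adjust the command to track a target contact
    force, regardless of where the joint ends up."""
    return kf * (f_des - f_measured)

# Pressing on a surface that sits 0.05 rad short of the position target:
tau = position_control(q=0.45, q_des=0.50, q_dot=0.0)
print(round(tau, 3))  # 10.0 -> keeps pushing harder the further from target

# The same contact under force control, aiming for 5 N but measuring 7 N:
cmd = force_control(f_measured=7.0, f_des=5.0)
print(cmd)  # -1.0 -> back off slightly to restore the desired contact force
```

The position controller's output depends only on the angle error, so an unexpected obstacle just means more torque; the force controller's output depends on what it actually feels, so it adapts.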

The most capable robots today use a combination of both. Machine learning increasingly works alongside the traditional control software, not replacing it but handling higher-level decisions: what sequence of movements to attempt, how to adapt to unexpected situations, how to carry a skill from one context to another. Training on simulation data lets a robot learn movement strategies through repeated trial and feedback rather than explicit rules.

The Walking Problem

Of all the things a robot can be asked to do, walking on two legs turns out to be one of the hardest. Not because legs are mechanically complex, but because walking on two legs requires continuous, active balance. You are always falling. Every step is a controlled fall, caught just in time by the next footfall.

A bicycle is stable when moving but falls over when stationary. A two-legged robot is in a similar position. The moment it stops actively adjusting, the moment its control loop fails to catch an unexpected nudge in time, it falls. This is why early bipedal robots moved so slowly and carefully, pausing between steps to let their systems stabilize. They were not walking. They were standing, lifting a foot, placing it, and standing again.

What changed this was a shift in approach. Rather than pre-calculating a stable path and following it precisely, newer systems continuously predict what will happen over the next fraction of a second given the robot's current state, and adjust the plan as the ground shifts beneath it. The robot is always looking a few steps ahead.

Combine this with fast joint sensing, actuators that absorb unexpected impacts rather than rigidly resisting them, and control loops running at hundreds of times per second, and you get something that for the first time actually looks like walking rather than carefully managed falling.
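One widely used idea behind this look-ahead style of balance is the "capture point" of a linearized inverted pendulum model: the spot where the next foot must land to bring the falling body to rest. A minimal sketch with illustrative numbers:

```python
import math

# Linear inverted pendulum model of a walking robot's center of mass.
g, h = 9.81, 0.9          # gravity (m/s^2), center-of-mass height (m)
omega = math.sqrt(g / h)  # natural frequency of the falling pendulum

def capture_point(com_pos, com_vel):
    """Foot placement that arrests the current fall: current position
    plus velocity scaled by the pendulum's time constant."""
    return com_pos + com_vel / omega

# The robot is nudged: center of mass at 0.0 m, moving forward at 0.5 m/s.
print(round(capture_point(0.0, 0.5), 3))  # 0.151 -> step ~15 cm ahead
```

The faster the body is moving, the further ahead the foot must land, which is exactly the continuous re-planning described above, computed fresh every control cycle.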

When a humanoid robot backflips, it is not following a pre-programmed script. It is running a policy trained across millions of simulated attempts, which taught the system how to modulate force, momentum, and balance well enough to execute that one task. Each new movement still needs to be learned separately. But the fact that it can be learned at all is what marks a real change from where the field was ten years ago.

Why This Is Getting Better So Fast

The actuators in robots today are not dramatically better than they were fifteen years ago. The motors are more efficient, the gearboxes more refined. But the physics have not changed. What has changed is everything around them.

Sensors are cheaper and smaller. The chips that tell a robot its precise orientation in space now fit on something smaller than a thumbnail. That used to be a significant hardware cost. Now it is a line item.

The bigger shift is in how movement strategies are developed. Writing control software by hand, specifying rules for every situation a robot might encounter, does not scale. A factory arm doing one task can be hand-coded. A robot that needs to handle unpredictable environments cannot. Machine learning changed this by making it possible to train movement policies in simulation across millions of scenarios, then transfer the result to physical hardware. It does not always work cleanly on the first attempt. Real hardware has friction, flex, and contact dynamics that simulation gets wrong in small but consequential ways. Closing that gap is still an active problem. But the approach works well enough that it has moved from research curiosity to standard practice in a short amount of time.
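The trial-and-feedback loop can be caricatured in a few lines: perturb a controller parameter, score it in a cheap simulator, keep whatever improves. The "simulator" below is a stand-in reward function, not a physics engine, and the whole setup is purely illustrative:

```python
import random

def simulated_reward(gain):
    """Stand-in for a physics simulation scoring one rollout.
    Reward is highest when the controller gain is 3.0."""
    return -(gain - 3.0) ** 2

random.seed(0)
gain = 0.0
best = simulated_reward(gain)

for _ in range(2000):                      # thousands of cheap simulated trials
    candidate = gain + random.gauss(0, 0.1)
    reward = simulated_reward(candidate)
    if reward > best:                      # keep only changes that score better
        gain, best = candidate, reward

print(round(gain, 2))  # ends near 3.0 without anyone specifying a rule
```

Real training uses far richer policies and simulators, but the shape is the same: no hand-written rule ever says what the right gain is; it emerges from repeated trial and feedback.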

The range of tasks a physically capable robot can handle keeps growing. That is not a prediction. It is already visible in what is being deployed.
