
How Does a Robot Actually Feel?

Vision tells a robot what is out there. Touch tells it what is happening right now, at the point of contact — and without it, even the simplest physical task becomes surprisingly difficult.

Robotic hand with tactile sensors
Tactile sensor arrays on a robotic fingertip. Each small cell responds to pressure independently, giving the robot a rough map of force across the contact surface — not unlike the nerve endings in a human fingertip.
The Problem

Most robots are blind at the point of contact. They can see an object, calculate its position, and plan a path toward it. The moment their fingers touch it, that picture goes dark. They are working from a camera view now blocked by their own hand, pressing down with whatever grip force they calculated in advance, hoping it was right.

For rigid objects on flat surfaces, this works well enough. A robot bolting car panels to a frame does not need to feel much. The geometry is fixed, the forces are predictable, and the task was designed around those limitations. Outside that narrow domain, the absence of touch causes real problems. Gripping a soft food item, holding a phone without cracking the screen, passing a component to a human hand. All of these require continuous feedback about what is happening at the contact surface. Without it, the robot either grips too hard or too gently, and it has no way of knowing which.

Touch matters in robotics not as an optional upgrade but as a prerequisite for a large class of physical tasks.

Vision tells you where the object is. Touch tells you whether you actually have it.

What Touch Actually Measures

Human touch is not a single sense. The skin contains different types of nerve endings that each respond to something different. Light pressure, deep pressure, vibration, temperature, stretch. The brain receives all of these at once and assembles them into what we experience as touch. Alongside this, the body has a separate sense for its own position in space. It knows where the hand is without looking, and it knows how hard the muscles are working. Together, those two systems give a complete picture of physical interaction that no camera can replicate.

Robotic tactile systems try to capture at least some of this. The most important measurements are how hard the grip is squeezing, whether the object is starting to slide, and where exactly contact is being made across the sensor surface.

Normal force is the direct squeeze: the force pressing straight into the sensor. It tells the robot whether the grip is firm enough to hold or so firm it risks damage.

Shear force is the sideways component. When an object starts to slip, the contact point moves sideways before the object actually falls. Catching that movement early is what makes fast grip correction possible.

Contact distribution is a map of where force is being applied across the whole sensor surface. A flat object produces an even pattern. An edge produces a sharp ridge. A cylinder produces a gradient. From this map alone, a robot can work out a surprising amount about object shape and orientation without any visual input.
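As a toy illustration, here is how a controller might reduce such a pressure map to two useful numbers: the total load and the contact centroid. The grid values and cell pitch below are invented for the sketch, not taken from any real sensor.

```python
def contact_summary(grid, pitch_mm=1.0):
    """Reduce a tactile pressure map to total load and contact centroid.

    grid: 2D list of per-cell pressure readings (arbitrary units).
    Returns (total, centroid_row_mm, centroid_col_mm).
    """
    total = sum(sum(row) for row in grid)
    if total == 0:
        return 0.0, None, None  # nothing is touching the sensor
    # Pressure-weighted average of cell positions, scaled by cell spacing.
    cy = sum(i * cell for i, row in enumerate(grid) for cell in row)
    cx = sum(j * cell for row in grid for j, cell in enumerate(row))
    return total, cy / total * pitch_mm, cx / total * pitch_mm
```

An edge pressed across the middle row of a 3x3 array, for example, puts the centroid squarely on that row regardless of overall force.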

Key concept

Slip detection

Many tactile sensors are designed specifically to catch the first sign of a slip. When an object starts to fall, it moves sideways at the contact point a fraction of a second before it actually goes. A sensor that catches this movement within milliseconds can trigger a grip correction before the object has slipped far enough to matter. Without it, a robot has to choose between gripping too hard and risking damage, or gripping lightly and risking drops.
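A minimal sketch of that logic, assuming a simple Coulomb friction model: the grip is considered at risk once the shear-to-normal force ratio approaches the static friction coefficient. The coefficient and safety margin here are placeholder values, not measurements.

```python
MU_ESTIMATE = 0.6     # assumed static friction coefficient (placeholder)
SAFETY_MARGIN = 0.8   # act well before the theoretical slip limit

def slip_imminent(shear_n, normal_n, mu=MU_ESTIMATE, margin=SAFETY_MARGIN):
    """True when the shear-to-normal ratio nears the friction limit."""
    if normal_n <= 0:
        return True  # no grip force at all: treat as already slipping
    return shear_n / normal_n > margin * mu

def corrected_grip(shear_n, normal_n, mu=MU_ESTIMATE, margin=SAFETY_MARGIN):
    """Smallest normal force that restores the safety margin."""
    return max(normal_n, shear_n / (margin * mu))
```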

How Tactile Sensors Work

There is no single dominant approach to robotic touch. Several different physical principles are used, each with different tradeoffs in sensitivity, durability, and how easily the sensor can be shaped to fit a gripper.

The most common approach is resistive sensing. A material that changes its electrical resistance when compressed sits between two conductive layers. Press harder, resistance drops, current increases. Arrange many of these cells in a grid and the result is a pressure map across the sensor surface. These sensors are inexpensive and simple to build. They struggle with very light touches and wear out faster than some alternatives.
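In its simplest form, each cell is read through a voltage divider: a fixed resistor in series with the pressure-sensitive element. The supply voltage, resistor value, and wiring below are illustrative assumptions, not from any datasheet.

```python
V_SUPPLY = 3.3       # volts (assumed)
R_FIXED = 10_000.0   # ohms, fixed divider resistor (assumed)

def cell_resistance(v_measured):
    """Infer the sensing cell's resistance from the divider voltage.

    The voltage is measured across the fixed resistor, so a harder press
    (lower cell resistance) produces a higher reading.
    """
    if v_measured <= 0:
        return float("inf")  # open circuit: no contact at all
    if v_measured >= V_SUPPLY:
        return 0.0           # cell fully conducting
    return R_FIXED * (V_SUPPLY - v_measured) / v_measured
```

Scanning a grid of such cells row by row yields the pressure map described above.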

Capacitive sensors work on a different principle. Two conductive surfaces face each other with a small gap between them. Pressing the surfaces closer together changes how much electrical charge the gap can hold, and that change is measured. These sensors can be made very thin and flexible, which matters when fitting them to curved surfaces like robotic fingertips. They pick up lighter touches than resistive sensors and last longer.

A third approach uses soft structures filled with air or fluid. When the surface is pressed, internal pressure rises and small sensors inside pick up the change. Because the whole structure is soft and deformable, it can wrap around irregular object shapes in a way that a rigid sensor cannot.

Tactile sensor cell types comparison diagram
Three sensor types side by side: resistive (left), capacitive (center), and a soft fluid-based design (right). Each trades off sensitivity, durability, and flexibility differently.

One of the more inventive recent designs places a small camera inside a soft, gel-filled fingertip. The gel surface is marked with a dot pattern. When the fingertip contacts an object, the gel deforms and the dots shift. The camera watches those shifts and works out the shape and forces of the contact from the movement pattern. This produces far richer data than a pressure grid. The tradeoff is complexity. Fitting a camera, lighting, and a soft gel structure into a fingertip-sized package is harder to build and maintain than a simple array of pressure cells.
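The image processing behind such a sensor is involved, but the core idea, tracking how the dots move between frames, can be caricatured in a few lines. The dot coordinates here are invented pixel positions; a real system would track hundreds of dots and fit a full deformation field.

```python
def mean_dot_shift(before, after):
    """Average (dx, dy) displacement of tracked dots between two frames.

    A roughly uniform shift across all dots suggests shear; dots moving
    apart suggest pressure spreading the gel. Both lists hold (x, y)
    pixel positions for the same dots, in the same order.
    """
    n = len(before)
    dx = sum(a[0] - b[0] for b, a in zip(before, after)) / n
    dy = sum(a[1] - b[1] for b, a in zip(before, after)) / n
    return dx, dy
```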

From sensing to acting

Feeling at the Wrist

Fingertip sensors tell a robot what is happening at the contact surface. A different type of sensor, mounted at the wrist, tells it something broader. How hard is the arm pushing overall, and in which direction.

A force-torque sensor at the wrist measures every push, pull, and twist the hand is experiencing across all six possible directions at once. Three directions of straight-line force and three directions of rotation. A single reading like "four newtons" tells you almost nothing useful. Six readings tell you whether the robot is pushing straight, pushing at an angle, or unknowingly stressing a joint it should not be.
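One way to picture this is as a six-number reading arriving every control cycle. A sketch, with a made-up tolerance, of how a controller might check whether a push along the tool's z axis is staying straight:

```python
from dataclasses import dataclass

@dataclass
class Wrench:
    """One force-torque sensor reading: three forces, three torques."""
    fx: float  # newtons
    fy: float
    fz: float
    tx: float  # newton-metres
    ty: float
    tz: float

def lateral_load(w):
    """Force magnitude perpendicular to the pushing axis (z, here)."""
    return (w.fx ** 2 + w.fy ** 2) ** 0.5

def pushing_straight(w, tol_n=0.5):
    """True if sideways loading stays under a (made-up) tolerance."""
    return lateral_load(w) < tol_n
```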

For assembly tasks this matters enormously. When a robot inserts a peg into a hole, it does not need to see the hole to know whether the peg is aligned. If the peg is off-center, the sensor detects an uneven load. The robot shifts its position until the forces even out, at which point the peg drops cleanly in. This approach, called compliance control, handles precise assembly even when the robot's initial positioning is imperfect. It is one of the more underappreciated capabilities in modern industrial robotics, and it works entirely through feel.
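The control loop behind this can be surprisingly small. A hedged sketch, assuming the hole's edge pushes the peg laterally toward the opening so the controller simply moves with the sensed lateral force; the gain and threshold are placeholders, not tuned values:

```python
GAIN_MM_PER_N = 0.5         # assumed compliance gain (placeholder)
ALIGNED_THRESHOLD_N = 0.05  # lateral force below this counts as aligned

def compliance_step(x_mm, y_mm, fx_n, fy_n):
    """One control cycle: shift the peg in the direction of the sensed
    lateral reaction force, letting the hole guide it toward center."""
    return x_mm + GAIN_MM_PER_N * fx_n, y_mm + GAIN_MM_PER_N * fy_n

def is_aligned(fx_n, fy_n):
    """Lateral forces have evened out: the peg can be pushed straight in."""
    return (fx_n ** 2 + fy_n ** 2) ** 0.5 < ALIGNED_THRESHOLD_N
```

Repeating the step while the forces stay uneven walks the peg toward the opening without the robot ever seeing the hole.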

Wrist force sensing is also central to robot safety. A robot that detects unexpected resistance can stop or back off before damage occurs. The collaborative robots deployed alongside factory workers rely on this, alongside mechanical compliance, joint torque limits, and certified safety systems that cap speed and force. Force sensing is not the whole picture. But without it, none of the rest is enough.

The Gap to Human Touch

A human fingertip has thousands of nerve endings per square centimeter. It can feel a surface feature finer than a fingerprint ridge. It processes signals from all five fingers at once, knows where each one is without looking, and adjusts grip continuously. All of this happens faster than conscious thought.

No robotic tactile system is close to this. The best current sensors achieve a few hundred sensing cells per square centimeter. That is enough for gripping, slip detection, and basic texture work. It is not enough for threading a needle, handling wet paper, or feeling for a hairline crack in a surface. Those tasks remain out of reach, and that limits where robots can actually be useful.

This is changing. Tactile sensing spent years as a research topic that never quite translated into hardware robust enough to deploy. Sensor arrays are now appearing in commercial grippers for food handling, electronics assembly, and surgical assistance. The hardware is getting smaller and cheaper. The software that interprets the sensor output is getting better at turning raw pressure readings into useful decisions quickly. The field is moving.

Touch is the last of the classical senses to arrive in robotics in any meaningful form. When it does arrive fully, not as a lab demo but as something reliable enough to trust in the field, it will quietly expand what robots can do more than any camera improvement has managed.
