
Is the Invisible Hand an Agent?

Published on February 18, 2026 4:26 PM GMT

This is a full repost of my Hidden Agent Substack post.

Adam Smith’s Invisible Hand is usually treated as a metaphor. A poetic way of saying “markets work,” or a historical curiosity from a time before equilibrium proofs and welfare theorems. Serious people nod at it politely and move on.

Yet the metaphor refuses to die.

We use it when markets do something uncomfortable: when they resist control, when they adapt to suppression, when outcomes reappear in new forms after we thought we had eliminated the mechanism. We say “the market reacted,” or “prices found another way,” and insist that this is a mindless economic process.

The market refuses to die.

We don’t think of the market as a living being. It is just a mechanism, like gravity or traffic. It has no body and no mind. And yet it seems to push back. So:

Is the Invisible Hand an agent?

Not rhetorically. Not philosophically. Can we answer this? Do we understand the concept of agency well enough to come back with a clear Yes or No or Mu?


Pushback without a face

A price cap is introduced. Prices stop moving, as intended. But shortages appear. Queues form. Quality degrades. Access becomes conditional. Side payments emerge. The price, supposedly removed, reappears measured in time, risk, or connections.

Or take trade bans. Exchange does not vanish. It reroutes through willing intermediaries. Informal markets appear. Enforcement costs rise. The visible surface changes; the underlying allocation pressure does not.

A currency in a failed state collapses. Money becomes unstable. Exchange continues anyway, now denominated in goods, foreign currency, or favors. The unit dies; the function persists.

Across centuries, regimes, and ideologies, the same pattern repeats. When constrained in one dimension, allocation shifts into another. When suppressed in one form, it reappears in another. This does not look like passive failure. It looks like a response.

The market pushes back.
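To make the price-cap case concrete, here is a toy sketch. The numbers and the linear supply and demand curves are my own invention, not data; the point is only that capping the money price does not remove the cost of scarcity, it converts part of it into waiting time.

```python
# Toy model (invented numbers): a binding price cap turns part of the price
# into queueing time instead of removing it.

def demand(price):   # units buyers want at a given money price
    return max(0.0, 100 - 2 * price)

def supply(price):   # units sellers offer at a given money price
    return max(0.0, 3 * price - 20)

# Unconstrained market: the lowest price at which supply covers demand.
clearing_price = next(p / 10 for p in range(0, 1000)
                      if supply(p / 10) >= demand(p / 10))

# A binding cap below the clearing price.
cap = 0.5 * clearing_price
shortage = demand(cap) - supply(cap)   # excess demand shows up as queues

# Buyers queue until the time cost closes the gap:
# demand at (cap + time_cost) falls to what sellers actually supply at the cap.
time_cost = next(t / 10 for t in range(0, 1000)
                 if demand(cap + t / 10) <= supply(cap))

print(f"clearing price ~ {clearing_price:.1f}")
print(f"capped price   = {cap:.1f}, shortage = {shortage:.1f} units")
print(f"implied queueing cost ~ {time_cost:.1f} per unit: the price reappears as time")
```

With these made-up curves the cap halves the money price, but the marginal buyer ends up paying more than the old clearing price once waiting is counted. The suppressed dimension reappears elsewhere.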


But is this agency?

At the same time, calling this an agent feels wrong.

There is no body. No headquarters. No moment of decision. No statement of intent.

What we observe instead is millions of local actions, each justified by local reasons. Buyers respond to shortages. Sellers respond to margins. Intermediaries respond to incentives. Everyone can explain themselves without invoking anything global.

From the perspective of each market participant, it looks like individual choice. From the outside, it looks coordinated. Calling this “equilibrium” names the pattern, but does not explain why it survives so many different attempts to suppress it.

Does the market act?

If there is agency here, it is not visible at the level where we usually look for it.


Designed control

I have worked with and designed complex systems that reliably control outcomes: ad revenue, provider traffic, project allocation, underwriting volumes. Much of what keeps these outcomes stable cannot be found in the org chart. In distributed infrastructure, “no one is in charge” does not mean there is no control. The system is designed to be stable, which means it has control loops: instances are started as needed, traffic is balanced, resources are allocated based on demand, budget targets are met by cutting expenses, and so on.
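Here is a minimal sketch of the kind of loop I mean, a toy autoscaler. The thresholds and the latency model are invented for illustration; the point is that nobody decides anything, yet the outcome is controlled.

```python
import random

def autoscale(instances, observed_latency, target_latency=0.2):
    """One step of a feedback controller: no decision-maker, just a reaction."""
    if observed_latency > 1.2 * target_latency:
        return instances + 1   # under-provisioned: start another instance
    if observed_latency < 0.8 * target_latency and instances > 1:
        return instances - 1   # over-provisioned: cut the expense
    return instances

instances = 1
for tick in range(20):
    load = random.uniform(0.5, 3.0)      # external demand, outside the loop's control
    latency = 0.2 * load / instances     # toy model: latency grows with load per instance
    instances = autoscale(instances, latency)
    print(f"tick {tick:2d}: load={load:.2f}  instances={instances}  latency={latency:.2f}")
```

No single line of this code “wants” low latency, but latency tends to hover near the target anyway.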

My experience makes me suspicious of assumptions of human control. It also makes me suspicious of conversation stoppers: “it’s just the environment.”

So when I see systems that persist, adapt, and reconstitute themselves after disruption, I want to know what kind of dynamic is at work.


Sharpening agency

When we think about agents, such as humans, we think about pursuing goals and holding beliefs. Markets do not think. They do not form explicit goals that they deliberately pursue. Yet we routinely attribute agency to animals based on avoidance, adaptation, and persistence, not on explicit goals.

So explicit goal-pursuit is the wrong criterion. The relevant question is not whether markets think, but whether they satisfy other, more fundamental criteria for agency: criteria that can be observed, and that we can also test in artificial systems such as LLMs. Does the system persist under disruption? Does it adapt when constrained? Does it carry its state forward in time?

None of these require consciousness. None require centralization. All can be observed methodically.

Before reading on, ask yourself how markets score on these.

Is the market alive?


Where is the agent hiding?

If the Invisible Hand is an agent, it is hidden in ways that defeat our intuitions.

It does not act through discrete choices, but through invariants. What persists is not a particular price, but a relationship between scarcity and allocation.

It does not store memory internally, but externally. Inventories, contracts, balance sheets, and expectations all carry state forward in time.

It does not choose actors, but selects among behaviors. Strategies that align with constraints persist. Others exit.

It does not maintain a fixed shape. When blocked in one dimension, it changes direction. Price becomes time. Money becomes risk. Exchange becomes access.

This kind of agency is easy to miss because it is not located where we expect it. It lives in constraints, not commands. In selection, not intention. In persistence, not visibility.

The market is shapeless.
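A tiny illustration of selection without a chooser, entirely my own toy model rather than a claim about any real market: strategies that rely on the suppressed channel exit, strategies that route around it persist, and the population shifts with nobody intending anything.

```python
import random

# Each "strategy" is a number in [0, 1]: how much it relies on the official posted price.
population = [random.uniform(0, 1) for _ in range(1000)]

def survives(strategy, cap_active):
    # Under a binding cap, strategies that rely on posted prices tend to exit;
    # strategies that shift into time, connections, or barter persist more often.
    exit_prob = strategy if cap_active else 0.2 * (1 - strategy)
    return random.random() > exit_prob

for generation in range(10):
    population = [s for s in population if survives(s, cap_active=True)]
    # Survivors are copied (with a little noise) back up to the original size.
    population += [min(1.0, max(0.0, random.choice(population) + random.gauss(0, 0.05)))
                   for _ in range(1000 - len(population))]
    mean = sum(population) / len(population)
    print(f"generation {generation}: mean reliance on posted prices = {mean:.2f}")
```

The mean drifts downward generation after generation. No strategy changed its mind; the mix changed because some behaviors stopped surviving.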


Why start here?

I am not trying to grant personhood to the Invisible Hand, nor to smuggle ideology in through metaphor. I am using it as a stress test.

If our concept of agency cannot handle this case, it is probably too narrow. If it accepts it too quickly, it is probably too loose.

The Invisible Hand is a borderline case where a theory of agency has to prove its precision.


Hidden Agent

The Substack, Hidden Agent, is about agents and agency in complex systems. Systems where we do not know a priori where the agents are. Where agents cooperate and form larger agents, or a hierarchy of agents. Where agents have no physically locatable body, but live virtually in a computer, such as a bot in a botnet. Or where they are distributed across multiple systems.

And about properties of these agents. Where the incentives of individual agents do not aggregate nicely into the incentives of the overall system (as we see with markets). Why do agents try to obscure that they exist or limit information about themselves? When do agents appear, and when do they fall apart?

The market is one case. Bureaucracies, software systems, and other artificial systems are others I want to look into.

Is the Invisible Hand an agent? I do not have a confident answer, but I plan to come back to it - with sharper tools.

When the stakes are high, you should know if something could be an agent - it could try to evade or outsmart you.


