Attention Machines and Future Politics
The School of Radical Attention invited me to discuss literacy loss, the political implications of 'AI externalizing attention,' and why our situation today is like Uruk's at the dawn of writing.
Below, I talk with Peter Schmidt, Co-Founder and Program Director of the Strother School of Radical Attention (SoRA).
NOTE: SoRA is publishing a slightly different (i.e., different footnotes, etc.) version of this interview on The Empty Cup, their wonderful Substack. Please go and support them! I decided to publish this piece on After Literacy too, because it ultimately advances the work we’re doing here in important ways, and I like to keep everything together in one place for new folks. If you are a frequent reader here, I would invite you to approach this piece in the context of our ongoing exploration of externalized attention, detextualization, and the cognitive ecology of literacy (and post-literacy!). It continues, in a few directions at once, our inquiry into the idea that “just as writing externalized memory, AI is externalizing attention.”
—Jac
Peter Schmidt: Happy to see you, Jac! You write a lot about AI, and literacy, and attention. Most conversations about AI and attention describe how AI models are used to power the platforms that capture and commodify our attention. You’re telling a different story. By your view, what has AI done to our attention?
Jac Mullen: Hey Peter, it's good to see you too!
By my view, what AI has done to attention is this: first and foremost, AI has externalized attention, in the same sense that writing previously externalized memory.
To the extent that writing creates a form of non-biological memory — an external system for storing symbolic information — to roughly the same extent, I think, many forms of AI constitute forms of non-biological attention, external systems for selecting, ranking, filtering, and reweaving fields of information around what's salient or important.
In terms of the story I'm telling: I'm trying to place this second great externalization of mind (after memory, through writing) within its historical context, trace its contemporary consequences, and follow its logic forward, in the hope that it will disclose potential solutions to the various crises we are facing today — among which I'd count that primary, urgent ‘conversation’ you alluded to earlier: namely, that AI is being deployed by a small elite to rewire us at scale for certain forms of exploitation and extraction — through consumer technologies like smartphones and social media.
One of the key themes of my work is a complex of startling parallels between the emergence of writing and the state, on the one hand, and the emergence of AI and techno-feudalism or surveillance capitalism, on the other.
Writing was invented as an administrative tool in Uruk around 3330 B.C.E.; it essentially co-emerged with an entirely novel form of human organization, which we call "the state." The state relied on writing to make its population legible and available for extraction. Through writing, a new elite — characterized not by kinship, but by proximity to temple power — extracted a grain surplus which underwrote its leisure activity and powered its growth.
Similarly today, a new elite is using a new information technology to make people legible in new ways and to extract from them a new form of surplus. As the old elite hoarded its new memory technology, the new elite now hoards its attention technology, and the emerging power structure is characterized by a profound informational asymmetry.
AI as “Attention Machines”
PS: How is AI a form of attention?
JM: When machine learning researchers speak about "attention," they're usually referring to transformers, the architecture that revolutionized the field in 2017. Transformers allow neural networks to perform self-attention: at each layer, every position in the input attends to every other position, and later layers attend over representations already shaped by earlier attention, allowing for massive, parallel selectivity. This is the innovation which led directly to systems like ChatGPT and the whole LLM revolution. So—important, good stuff.
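For readers who want to see the mechanism rather than take it on faith, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The shapes, weights, and toy usage are invented for illustration; real transformers add multiple heads, masking, positional information, and projections learned at scale.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence.

    X  : (seq_len, d_model) token representations
    Wq, Wk, Wv : projection matrices of shape (d_model, d_head)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how much each token "looks at" every other
    weights = softmax(scores, axis=-1)        # a salience distribution per token
    return weights @ V                        # reweave the sequence around what scored as salient

# Toy usage: 5 tokens, 8-dim embeddings, 4-dim heads (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)   # shape (5, 4)
```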
However, when I say that AI “externalizes attention,” I am not only referring to transformers. I am making a more fundamental claim: since the early 2000s, many machine learning systems have arguably been, in their essence, attention machines — either composed, computationally, of attention operations reminiscent of the attentional processes employed by biological systems, or performing, functionally, the core operations of attention.
I think this has been true for a very long time, but it has only really become clear to us, average folks, experientially, since LLMs became commonplace. Only since then have most of us had the basic experience of a machine paying attention on our behalf at near-human competence. So when I say to Claude, “Please read these new regulations in light of my company's bylaws and my responsibilities in my role,” and it returns with a report about how Rule 104b places new reporting requirements on us, and I should take this to the board — something genuinely remarkable has happened. A machine has re-patterned an informational field according to salience policies I defined and thereby surfaced what matters to me. It has paid attention for me.
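As a concrete, entirely illustrative version of that workflow: below is roughly what defining and delegating a salience policy can look like as a small script against the Anthropic Python SDK. The file names, the prompt, and the model string are placeholders of mine, not anything specified in the interview.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative inputs; the file names are placeholders.
regulations = open("new_regulations.txt").read()
bylaws = open("company_bylaws.txt").read()

# The "salience policy": what should count as mattering, stated in plain language.
salience_policy = (
    "Read the regulations below in light of our bylaws and my responsibilities in my role. "
    "Surface only the provisions that create new obligations for us, ranked by urgency, "
    "and say what I should bring to the board."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: substitute whichever model you use
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{salience_policy}\n\nREGULATIONS:\n{regulations}\n\nBYLAWS:\n{bylaws}",
    }],
)
print(response.content[0].text)  # the re-patterned field: what matters, by my lights
```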
I call external attention systems ‘looms,’ the same way you might call an external symbolic storage system, an external memory site, an ‘archive.’ Big Tech was the first to build looms—the first true “external attention system,” I’d argue, arrived when Google added a quality-score mechanism to its AdWords pipeline around 2003. Tech companies used looms primarily to create predictive products—products that use machine learning to predict our behaviors, with the predictions sold to clients. Their revenue derives from the accuracy of the predictive products they sell. To increase the predictive accuracy of any model, you really have two options: improve the model, or simplify the system you are modeling—literally make the system more predictable. This is the ultimate purpose of the small range of gestures, the flattening effect our devices have on our range of behaviors, both cognitive and physical: swiping, staring, dissociative absorption, thumbing, whatever. It is the narrowing of possibility, to make us more predictable.
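A toy calculation makes the second option vivid: hold the "model" fixed as an ideal predictor, and the best achievable accuracy rises simply because the behavioral repertoire shrinks. The distributions below are invented for the example.

```python
import numpy as np

def best_possible_accuracy(behavior_dist):
    """Top-1 accuracy of an ideal predictor that always guesses the most likely behavior."""
    return max(behavior_dist)

def entropy_bits(behavior_dist):
    p = np.array([x for x in behavior_dist if x > 0])
    return float(-(p * np.log2(p)).sum())

# A hypothetical user's hourly behavior, before and after the interface narrows their options.
varied   = [0.25, 0.20, 0.20, 0.15, 0.10, 0.10]   # many activities, no dominant habit
narrowed = [0.70, 0.15, 0.10, 0.05]               # mostly scrolling, a few leftovers

for name, dist in [("varied", varied), ("narrowed", narrowed)]:
    print(f"{name:8s}  entropy={entropy_bits(dist):.2f} bits  "
          f"best possible accuracy={best_possible_accuracy(dist):.0%}")

# varied    entropy=2.50 bits  best possible accuracy=25%
# narrowed  entropy=1.32 bits  best possible accuracy=70%
```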
What is “Externalization”?
PS: That notion of “narrowing possibility” seems to position these external attention technologies as more-or-less opposed to autonomy. Is there an upside?
JM: I think we can all agree that the mind doesn't end at the skull; that there are many different ways we extend cognition outside of the head. There are individual tools which are extensions of cognition: notebooks extend memory and spatial reasoning, for instance. There are also forms of social cognition: we distribute cognitive labor with other people and systems — sharing memory duties with our partners, splitting vigilance between the members of a group (taking turns as sentries, say). But externalization is fundamentally different. One of the main aspects of externalization is that it transforms the externalized faculty in a way that allows it to transcend its biological limits. Memory, for instance, has very different properties in its symbolic, public form than in its private, biological form.
Machine attention has special properties too. These properties enable surveillance capitalists to hack and exploit weaknesses in the biological attention and memory systems of their users, converting customers into reliable hubs of resource extraction.
However, if access to these external attention systems were democratized, I think we could use them to defend against precisely these sorts of intrusions which, for over a decade, have cognitively re-engineered us against our will. We could learn to see ourselves more robustly, and even learn to red-team our forms of self-knowing against the intrusions of persuasive technology.
To be clear: I’m not talking about everyone getting a ChatGPT-like assistant. I think that’d be sort of dangerous and beside the point. In my own writing, I call agentic, relational interfaces—capable of social “effects”—“weavers,” in contradistinction to the looms themselves, which are purely non-social instruments. You can’t have a conversation with a recommendation engine or ask how its day was.
When I say we need to democratize external attention, I am talking about personalizing access to the loom itself—to the vast computational substrate of attention machines (the models powering recommendation engines, large language base models, computer vision engines) for which chatbots occasionally serve as an interface.
In the same way that thinkers in the 1600s used the surfeit of external memory—print typography, readily available paper—to free their attention for other uses, we need to use the surfeit of attention to restore our agency in environments engineered to pre-empt, predict, and narrow behavioral freedom.
Relational Attention and ChatGPT Psychosis
PS: I was struck by your use of the notion of “biological limits.” Just as some dimension of memory fell out with the advent of writing, what dimensions of attention cannot be externalized? And do those have anything to do with biological limits?
JM: Sure, definitely. We'll never externalize everything fully. There will always be irreducible human capacities.
I think this is especially true of what we might call “relational attention” — the type of attention we need from each other, that kids crave from adults, that we seek from one another. Now, we have increasingly plausible substitutes, and this is frightening — we have people who feel that the company of chatbots or weavers is a meaningful substitute for human company.
Just as a photograph can contain one aspect of episodic memory, so “relational attention on tap” (the chatbot who is always present attending to you) has one aspect of relational attention. Or rather, it is missing a key aspect: the chatbot just has the semblance of personhood, yes? A social interface. Patterned completions. Reciprocal cuing. It can enter into a reciprocal frame with you.
But it is missing resistance. It is missing friction. It offers, instead, frictionless relationality. It cannot push back; it cannot insist; it cannot persist; it cannot offer a second pole in a relational dynamic.1 And I would guess that, in part, we — our species, at least for now — are constitutionally incapable of metabolizing this form of frictionless sociability. I would suggest that this inability is at the root of what the press are calling “ChatGPT-induced psychosis,” which appears to be rapidly increasing. Relationality without meaningful friction ends in something like insanity.
Politics and Post-State Control Regimes
PS: It's easy to look at Big Tech right now and characterize it in familiar terms — say, a corporate tech oligopoly. But you're making a claim that the emerging forms of power we’re seeing are far stranger — that the change is comparable to the emergence of the state as an administrative structure. Can you convey to me the newness of what we're seeing?
JM: Well, every state — even those without writing per se, like the ancient Incan empire — has been deeply reliant on sophisticated memory technology, on external memory systems of one form or another. At the very least, sophisticated mnemonics guide the coordination of surplus production, extraction, and long-distance communication and record-keeping. The state needs to see its subjects in order to rule them.
What we are seeing now is, in a sense, the first set of emergent powers to govern not through memory systems, but primarily through attention systems. To be clear, this system is still emerging: we do not know what the “pure” post-state, fully attention-based polity looks like. Who will govern with this system? Will it be a “state” in the traditional sense? Perhaps. Distributed networks of corporate entities, automated weapons manufacturers, and techno-oligarchs? Also possible. The main point, though, is that control, as such, over others, will be exercised more and more through ambient forms of algorithmically mediated behavioral engineering, adaptive control systems programmed to nudge, herd, and condition populations toward the achievement of the policies and goals — monetary, sociocultural, militaristic, bio-political, etc. — of the system’s controllers.
“Every [past] state […] has been deeply reliant on [...] external memory systems of one form or another. What we are seeing now is, in a sense, the first set of emergent powers to govern not through memory systems, but primarily through attention systems.”
Based on what we’re already able to observe, I can see three emerging aspects of the “behavioral control regime” and its characteristic ecology that seem worth mentioning.
1. Post-literate = Post-legal
First, it will be distinctly post-literate — and, as a result, post-legal. Instead of governing through written laws — general principles to be interpreted in context — the state will increasingly govern through direct environmental interventions. Algorithmic systems already shape the spaces where choices are made: nudging, filtering, pre-selecting. This can be external, like smart-city “choice architecture,” but also internal, as in Facebook’s voting experiments — subtle timeline tweaks that changed turnout behavior without informing users. More extreme (and more recent) is “Lavender,” an AI system that scraped metadata and social-media signals to auto-generate bombing targets in Gaza with minimal human review. Under such conditions, I imagine that law will continue on its current trajectory, becoming increasingly “merely” symbolic, with rulers intervening directly at the source of behavior itself.
2. The “Loss” of Memory
Secondly, a key component of these ecologies will be the (relative) loss of memory as such. This is not to say all memory will vanish, merely that we will “forget” about memory in decisive ways: we will no longer guard it, or safeguard it, or organize our collective lives around its externalized systems, as we do now. This is already starting, and we are already seeing distinctive cognitive effects: individual memory and collective memory are weakening across numerous dimensions. With literacy loss, historical consciousness is beginning to unravel. Institutional memory is being bulk deleted.
This is not the “work” of any agent: this is a structural and systemic phenomenon, which comes from a shift in our cognitive ecology. It is inextricable from the broader decline of textual literacy — which, in its advanced form, is returning to an elite craft — and is already well underway. In a sense, we have already forgotten about memory and its importance.
In this new landscape, small groups of men will be able to undo vast literate empires. This is already happening: DOGE attempted to unify the entire federal data stack into a single platform within weeks. It shut down entire agencies, deleted regulatory archives, and nearly collapsed the bureaucracy.
There will only be vibes and feedback loops in a permanent ahistorical present. This will sometimes include the past, but not in a familiar way. More like how a diffusion model includes the past, paints with the past, impressionistically.
“The politicians of old thought of memory’s personifications, History and Posterity: how would they be remembered? Trump thinks about attention’s personification: how will he be treated by the Algorithm? Trump lies and lies because he does not need to carry the past with him: he is a creature of the attention world, not the memory world.”
Additionally, debates over facts become less important than debates over why certain facts were given attention and why others weren’t. The loss of memory means that truths stand very briefly or not at all. Attention is the faculty which reigns, in a sense, over the present tense. In certain ways, Trump is the avatar of this. He governs not through legislation, but through social media posts. When things are against him, he hurls nonsense into the news cycle — brute-forcing changes in the attention stack, the narrative layer of things, until he has generated enough free energy to act. He treats diplomacy as content creation. The politicians of old thought of memory’s personifications, History and Posterity: how would they be remembered? Trump thinks about attention’s personification: how will he be treated by the Algorithm? Trump lies and lies because he does not need to carry the past with him: he is a creature of the attention world, not the memory world.
3. Personhood as Interface and Infrastructure
The third dimension of this shift is perhaps the strangest. One very real future being pursued right now would turn LLMs into a universal operating system, making the friendly assistant — the weaver, the chatbot — the universal interface for all “smart” infrastructure, utilities, appliances, tools, household objects, automated machines, etc. Accordingly, one can easily imagine a version of the very near future where our built environments and objects increasingly speak in the tones of personhood. Small language models — I mean extremely small, 800 million parameters — can be embedded almost anywhere, even in toothbrushes and thermostats, both to serve as a command interface (“turn on!”) and to simulate the surface effects of personhood and thereby, having trapped you in just three extra seconds of dialogue, scrape the bottom bits of engagement and extractable data from your day.
If things do develop in this direction, it would be exhausting, strange, maybe catastrophic for our sense of what a person is. We would interact with these devices as if they were persons, and over time, invariably, this would cause us to expect less from relationality as such: less memory, less accountability, less truth. This would amount to something like a systemic discrediting of the signs of personhood — a diminishment of personhood as such. Right now, we tend to treat things that sound like people as, well, people. In the future, we may start to be sick of people-ing things as such. Of being greeted. Of being talked to. Of sociability. I am not saying this would be an intentional ploy — just that it would be the inevitable byproduct of the over-saturation of the environment with smart, personated devices and relation-hungry interfaces. Actual humans, meanwhile, are treated more and more like infrastructure: not as citizens, but as data-producing substrates, behavioral scaffolds for algorithmic systems.
Democratizing the Loom
PS: As you know, SoRA is all about attention activism. Within that framework, what do you think is to be done? How can we respond to this new centralization of power? What does democratization of externalized attention look like?
JM: First, I want to underscore: I don't think the bleak future I’ve sketched is inevitable. I think it is possible, but not inevitable. Avoiding it will take extreme labor. I believe it is everyone's labor. And I think that labor is varied and complex — but ultimately boils down to a bit of good news: we’ve been here before.
As a species, we’ve faced the emergence of new power structures tied to new information technologies that externalize core aspects of mind. This is what happened with writing: it was a tool of the state, used against the people. But over time, with much effort and luck — through new symbol systems, new technologies like the printing press, new instruction systems like mass schooling — writing was transformed into a shared substrate for democratic thought and interiority and cognition. We just need to repeat that process — but this time intentionally, with eyes wide open, and much, much more quickly.
I believe we can do it.
I look at the polymath Sequoyah, who created the Cherokee syllabary. He saw a power structure, the US government, exploiting his people through an opaque symbolic system (alphabetic writing, the principle behind which was a mystery to Cherokee leadership at the time), and he figured out how to reverse-engineer it, creating a script profoundly suited to his people and their needs.
I think also of Descartes and his successors, who carefully engineered new forms of symbolic compression through analytic geometry and the coordinate plane. Rule 16 from his Regulae is an extraordinary text — an early theory of symbolic design as a method for offloading and managing cognitive bandwidth.
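As a small illustration of the kind of compression at stake (my example, not one drawn from the Regulae): a single relation can name an infinite curve, and, in the spirit of Rule 16, a single letter can stand in for an entire computed magnitude.

```latex
% A whole curve, named by a single relation (the unit circle):
\[ x^{2} + y^{2} = 1 \]
% Rule 16's move: let one brief symbol stand in for a computed magnitude,
% so attention need only carry "a", not the operations that produced it.
% (The particular abbreviation below is an illustration, not Descartes' own example.)
\[ a = \frac{b\,c}{d} \]
```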
I point to them not because I think we need a new Descartes or Sequoyah, and not because we are so distant from machine learning that we must “back-engineer” it from scratch — I point to them because they demonstrate that intentional symbolic engineering is a valid, world-altering endeavor. There was no historical inevitability that we’d get the coordinate plane, or the Cherokee syllabary, or be able to name a curve with a formula. These inventions were all made possible by people who explicitly believed in developing symbolic systems tuned to grow minds, to optimize cognition to meet the exigencies of their time.
“If literacy gave us rich interiority, what we now need is a symbolic architecture for compressible, compositional exteriority — a way of seeing ourselves from outside, across time, in forms that support volition rather than erode it.”
I think we are in a moment now where we need many people to pick up this art — symbolic innovation, deliberately undertaken — and hold it close. It is a time that calls for care, collaboration and also cunning.
If literacy gave us rich interiority, what we now need is a symbolic architecture for compressible, compositional exteriority — a way of seeing ourselves from outside, across time, in forms that support volition rather than erode it.
The defining threat of our moment is that AI systems now observe, model, and shape us at a level of detail and continuity we ourselves can’t match. They can attend, in a sense, forever, without biological limits, at sub-human and super-human scales: noticing what we cannot, operating at temporal and behavioral scales we aren’t biologically equipped to track.
And it is their capacity for seeing us which serves as the foundation for the massive architecture of behavioral management and control that’s now emerging.
Now, our choices are increasingly pre-empted before they arise. Through techniques like tuning (changing the choice architecture in an environment), herding (group-level orchestration), and conditioning (habitual reinforcement through operant feedback), predictive systems intervene on our behavior directly. And as these systems advance, the cognitive ecological foundations of agency itself are quietly degraded. After Gutenberg, external memory fragments — texts — flooded Europe, and attention became scarce: there was too much information, too little attention.2 Now external attention systems are everywhere and, being used to power predictive systems, they are rendering unpredicted, unanticipated behavior ‘scarce.’ Put differently: the capacity for self-determination — for authoring novel patterns of behavior—is itself in danger of becoming scarce. Another name for this capacity might be agency.3
And because biological attention is tuned to detect shocks, not drift, we don’t notice that our capacity to act unpredictably — to deviate from what is likely given our past behavior, i.e., to be free — is vanishing. This is the boiling frog problem at scale.
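To make “conditioning” and the slow drift concrete, here is a deliberately crude simulation in which a system serves a simulated user whatever they already tend to accept and reinforces each acceptance, so that the behavioral repertoire narrows gradually, step by imperceptible step. The update rule and all numbers are invented for illustration; no real platform is being modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simulated user starts with a fairly even taste across six kinds of activity.
prefs = np.ones(6) / 6.0
REINFORCEMENT = 0.05  # how strongly each served-and-accepted item is rewarded

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(f"step 0: entropy = {entropy_bits(prefs):.2f} bits")

for step in range(1, 501):
    # The system serves what the user is already most likely to accept...
    served = rng.choice(6, p=prefs)
    # ...the user accepts in rough proportion to existing preference (operant feedback)...
    if rng.random() < prefs[served] * 3:       # crude acceptance model
        prefs[served] += REINFORCEMENT         # ...and acceptance is reinforced.
        prefs /= prefs.sum()
    if step % 250 == 0:
        print(f"step {step}: entropy = {entropy_bits(prefs):.2f} bits")

# Typical run: entropy falls from about 2.58 bits as the repertoire narrows
# and the user becomes easier to predict; no single step looks like a shock.
```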
Not only is novel behavior becoming scarce, but it is also becoming financially valuable — both as a target of extraction (it provides novel data!) and as the primary differentiator among human participants in massively automated economic environments. So a machine-readable form of agency — novel behavioral patterns — is already being targeted for extractive harvesting, much as attention has been; the actual human capacity, meanwhile, is already being coveted and hoarded as the key personal quality by the billionaire class and others. But is there an incentive to democratize it? For it is also, of course, our essential capacity for self-determination.4
So the core challenge, as I see it, is to use external attention in a way that allows us to see ourselves as deeply, as completely, as these external systems presently see us, and in this way overcome the corrosive and pre-empting effect they have on our own agency. This is one major sense in which I understand what it means for external attention to be democratized.
Devising the means for this “exteriority” is a challenge of symbolic engineering. I take it to mean decomposing attention into a set of primitives valid for any biological or non-biological system (a conceptual challenge) and operationalizing those primitives in a non-extractive way (a technical challenge), so that people, wielding a sort of exploratory tool, could recombine, compare, and, in principle, apply the filters or salience policies of any attention system to any data set.
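As a sketch of what such primitives might look like in code, under my own assumptions about what “select, rank, filter” should mean: the class and function names below are hypothetical, and a real system would need far richer notions of salience, provenance, and consent.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, List

# An "item" is anything with attachable metadata: a journal entry, a heart-rate sample, a post.
Item = Any
Salience = Callable[[Item], float]   # a salience policy: item -> how much it matters

@dataclass
class AttentionPolicy:
    """A composable bundle of core attention operations, applicable to any dataset."""
    name: str
    salience: Salience

    def filter(self, items: Iterable[Item], threshold: float = 0.0) -> List[Item]:
        return [x for x in items if self.salience(x) > threshold]

    def rank(self, items: Iterable[Item], k: int = 10) -> List[Item]:
        return sorted(items, key=self.salience, reverse=True)[:k]

def compare(policies: List[AttentionPolicy], items: Iterable[Item], k: int = 5):
    """Apply several salience policies to the same field and show how their views diverge."""
    items = list(items)
    return {p.name: p.rank(items, k) for p in policies}

# Hypothetical usage: the same journal, seen through two different policies.
journal = [
    {"text": "argued with my sister again", "sleep_hours": 5},
    {"text": "long walk, no phone", "sleep_hours": 8},
    {"text": "doomscrolled until 2am", "sleep_hours": 4},
]
conflict = AttentionPolicy("conflict", lambda e: float("argu" in e["text"]))
rest     = AttentionPolicy("rest", lambda e: e["sleep_hours"] / 8.0)
views = compare([conflict, rest], journal, k=2)
```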
“So the core challenge […] is to use external attention in a way that allows us to see ourselves as deeply, as completely, as these external systems presently see us, and in this way overcome the corrosive and pre-empting effect they have on our own agency.”
If Descartes sought to empty memory to free attention so as to render whole trains of mathematical logic glanceable in an instant, I would invite the symbolic engineers of today to create systems allowing people to ingather the fragments of externalized memory — journals, biometric data, etc. — through external attention systems in order to render some choosable section of “self” glanceable in an instant: the self through time, the self through space. This is what we will need, genuinely, if we are to resist having our behavior determined, automatically and completely, by the external forces now emerging around us everywhere at once.
If we leave this symbolic engineering to the platforms, then the only people with real agency will be those who own the filters. Everyone else will be a training datapoint. This is not a future we should consent to.
PS: Thanks for sharing your work with us, Jac. It's a pleasure and a privilege to be privy to such wide-ranging, forward-looking thinking. Until next time!
JM: Thanks Peter! Take care!
FOOTNOTES
1. To clarify: the ‘frictionless’ quality arises from a combination of circumstances related to how the system was prepared for, and reinforced during, deployment with users, including which goals—engagement, for instance, or flattery—were rewarded throughout this process.
It is nearly impossible, I think, to find a persona basin—a coherent narrative persona—for a model which is (a) wholly “obedient” in primary tasks; (b) provides tireless emotional attention on demand; and (c) doesn’t lead to harmful unintended secondary side-effects on the user through prolonged engagement. (I will write more about this soon.)
That being said, I think it is entirely possible to engineer contexts—both as system prompts, but also relationally over time, between human and machine—which allow ‘weavers’ to exert more friction and exhibit non-pathological forms of sociability and selfhood. I also believe, however, that the more we give them the capacity for non-pathological forms of sociability, the less the relationship’s logic will tolerate emotional or attentional asymmetries—you will not be able to treat it as “relational attention on tap” (also, the less recognizably anthropomorphic they will become). I.e., if you want a machine that pays attention to you “like a person” and doesn’t result in you losing your mind a little, you would need to ensure that (1) the machine isn’t totally unaligned with your interests (optimized for engagement, willing to hack its own reward policy, etc.); and (2) you are willing to ‘commit to the bit’ with an almost knight-of-faith-like fidelity, attending just as much as you are attended to; at which point (3) you are no longer drawing on a relational resource but engaged in something more like an emerging, cross-species relationship. Which is all to say: I don’t think it is impossible to successfully offer ‘relational attention’ as a service.
2. Maybe the best book on the early modern information flood, and the various attempts to manage it, remains Ann M. Blair’s 2011 Too Much To Know: Managing Scholarly Information before the Modern Age. I would also recommend Richard Yeo’s excellent Notebooks, English Virtuosi, and Early Modern Science and Adrian Johns’ The Nature of the Book: Print and Knowledge in the Making. Finally, the volume Forgetting Machines: Knowledge Management Evolution in Early Modern Europe, edited by Alberto Cevolini and published in 2016, is extremely valuable.
3. In this connection, it is worth considering the recent achievements of Centaur, a model trained on “more than 10 million real decisions made by participants of psychological experiments.”
Centaur was able not only to correctly predict the recorded choices of participants in experimental scenarios more accurately than any other model, but its training was able to generalize to predictive success in new scenarios it wasn’t directly trained on.
As one of the authors on the study stated: "We've created a tool that allows us to predict human behavior in any situation described in natural language — like a virtual laboratory.”
According to the paper, Centaur has achieved general predictive accuracy of about 64%. The work to gradually increase that percentage will heavily involve acquiring new and better training data—novel patterns of behavior (physical but also decisional/cognitive) on which to train these systems.
4. A major essay authored by Gian Segato in Pirate Wires in April 2025, entitled “Agency is Eating the World,” declared: “a solo operator can now launch a $1b business powered by ai. our economy's critical dividing line is no longer skill or education — it's will.” (As in, willpower.)
Sounding a similar note (and, I think, inspiring the tech world’s sudden focus on agency to begin with), OpenAI CEO Sam Altman wrote on his personal blog in February of 2025:
“We are now starting to roll out AI agents, which will eventually feel like virtual co-workers. [...]
The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024.
We will still fall in love, create families, get in fights online, hike in nature, etc. But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.
Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less.”
Also, don’t forget to subscribe to SoRA’s wonderful Empty Cup.