22 Comments
Jonah Hassenfeld:

I really love this series! The idea that AI is "externalizing attention" in a way analogous to writing "externalizing memory" is extremely thought provoking. I want to try to restate what you're saying to make sure I'm following along:

We have biological systems for determining what stimuli we pay attention to. As you point out, these determinations usually happen without our awareness. You're arguing that algorithmic AI that determines what content we see is making the determination about what we pay attention to. This is externalizing attention because now an external tool is determining what it is that we pay attention to.

Is that right? If so, then it is a really cool idea to imagine that I could decide what the algorithm shows me. It's hard to believe that capitalism would ever allow it because the choices I would make about where I want my attention directed would presumably not maximize profit.

I am excited that you might see a way out of this bind, and I can't wait to read what it is. Thank you.

Jac Mullen:

Hey Jonah, thanks for this! So, to answer your question—yes, this is a great reading. Overall: I’m exploring how attention, like memory before it, is being externalized into a new technical substrate; this makes it newly controllable, and also profoundly reshapes our own minds and how we organize ourselves collectively.

The only specific clarification I’d offer is that in this piece, I’m focused on predictive attention, not selective focus (what we usually think of as “paying attention”).

Predictive attention is a computational process which occurs (in the brain) below the threshold of conscious awareness, and it creates the perceptual field/the reality we see, hear, taste, etc. It's how the brain filters the world before you even "look," adjusting what shows up in perception based on past experience, sensory noise, and anticipated input. It's the reason your visual field appears active even in absolute darkness (you still see things, little colors or bits of light, etc.), or why you sometimes don't see something surprising, even if you're staring straight at it (have you ever seen those inattentional blindness videos on YouTube with the basketball players?).

Selective attention, by contrast, is what they call "spotlight" attention: it is consciously directed focus. When you pick out in your mind a specific voice to listen to at a loud party, or linger first on one figure in an image, then on the second—these are acts of selective attention. In contrast to the "spotlight" of selective attention, you might think of predictive attention as the stagehand who builds the set. (If you wanted to get a little more accurate—the stagehand builds DURING the show, based in part on where the spotlight operator shines his light. The stagehand builds more lavishly and completely in those areas where the spotlight lingers.)

In this essay, I'm focused on systems which externalize that predictive, sub-conscious attention. However, as you point out, social media algorithms and UI elements now direct our selective attention too, and I'll go into that in later posts.

You're absolutely right to note that the current incentive structure makes reclaiming control over this faculty extremely difficult. But I do think there are ways forward, and I'll be exploring those soon. Thanks so much for reading this so carefully!

Jonah Hassenfeld:

Thanks for the reply. Can’t wait for the next post!

Dennis P Waters:

Zak Stein was recently interviewed on the Jim Rutt Show and had some interesting thoughts on attention, not thinking of it quite the same way as you. He sees AI as eliminating attention scarcity, moving us to an era of unlimited attention. For example, a child can get only so much attention from a parent but can get all of the attention he or she wants from an AI—all attention, all the time. How will it affect human interaction when no one is "clamoring for attention" any longer?

[Comment deleted, Jun 5]

Jac Mullen:

I did a little work on parenting and the attention economy last year, and one of the clear takeaways for me—and I may have accidentally said this to trend researchers for an investment firm looking to 'get into the parent market'—was that "parenting as a service" was clearly coming over the horizon. For the last five years Meta—based, at least, on reading the papers which Frances Haugen leaked—has been carefully studying family dynamics, and trying to determine how it can position its products to stand in relation to new users as parents and elder siblings usually do. There was a clear trajectory from where this work was in 2021 (when the docs were from) to a future where kids, made reliant on frictionless emotional parent-like proxies, find themselves purchasing, e.g., freemium packages to continue interacting with parent-like chatbots.

So, darkest of dark patterns, this one.

I think the main issue—or the first issue to manifest—is that the UI would almost invariably have some set of dark patterns in it, and that this would lead to weird, maladaptive, unhealthy behavior in the children receiving parenting-bot attention.

Also, the sorts of distortions and delusions already caused by chatbots are so troubling. Anyway. Thanks for bringing that up—I'm curious to look into it; I've done a lot with socializing attention/attention through socialization, and this is right up my alley.

Also, I hear you, Jenny, and I do not think it'd be possible to replicate the natural effects, or let's call it the ecology, of human attention with a bot, especially for children. For adults, we have acclimated ourselves to such extreme forms of social isolation that, yes, perhaps artificial relational attention might serve as a valid substitute in some cases. For kids, though, the enormity of *what attention does* in child-rearing really suggests it'd be very hard for companies to recreate this accurately without recreating the human self in the process.

[Comment deleted, Jun 5, edited]

Jac Mullen:

Jenny, I appreciate the engagement—and I want to clarify a few things, because I think we’re more aligned than not, and I also think there’s been a real misreading of both the spirit of my piece and Dennis’s comment.

When I referred to the “ecology of human attention,” I was trying—however clumsily—to point to precisely that: the full field of relational, bodily, culturally structured attention. And not just in the industrialized, post-industrial, or nuclear-family-centric model where parenting often centers on the mother-infant dyad. Much of the research on attention socialization shows that in many cultures, mothers do not spend hours micro-regulating infant behavior. Attention—and attention formation—is dispersed: through siblings, peers, elders, ritual, and place. That’s why I chose an expansive term: to gesture beyond normative or idealized parenting forms tied to particular socio-economic conditions. But yes: attention is physiological, it is patterned through bodies, and it is not replaceable by screens.

Second, when I said “parenting as a service,” I wasn’t talking about daycare, nannies, or the long-standing outsourcing trends you rightly critique. I was naming a newer trajectory—one I’ve seen sketched out in leaked industry research and in the design paradigms now being prototyped. I’m talking about emotionally attuned bots trained to mimic parental attention, establish affective dependency, and then monetize that dependency through freemium models. That’s what I meant. Not just neglectful care, but hyper-responsiveness engineered for revenue extraction.

I’ve spent the last eight years teaching in deeply under-resourced urban schools. I’ve seen up close the effects of the parenting economy you describe. And I’d argue that those very forms of early affective deprivation make children more susceptible to the kind of operant attention-capture systems that are now being positioned as AI caregivers. It’s a horrifying trajectory—and that’s precisely why I’m trying to name it clearly.

On the “we”: that was sloppy phrasing on my part. I meant the statistical middle slice of U.S. and Anglosphere adults who now report record levels of loneliness and screen exposure. It wasn’t a presumption about you personally—nor a dismissal of generative forms of solitude, communal or otherwise. I should’ve been more precise.

You also seemed to take me as endorsing externalized attention systems. I’m not. I don’t think artificial attention is remotely “equivalent” to human attention—any more than a book is equivalent to memory. A book doesn’t remember. A neural net doesn’t attend. It performs an activity comparable to attention in a technically novel way. But when cognitive operations are successfully recreated outside the body, in new substrates, it tends to reshape our world very rapidly. That’s what I’m trying to track—not because I want it to happen, but because it is happening.

I’m not arguing for substitution; I’m arguing for clarity. I’m trying to offer a conceptual map of what changes when predictive architectures begin to externalize human attention at scale. The question isn’t “is this good?” The question is: what does this do to us—cognitively, relationally, politically—if we treat these systems as tools rather than as infrastructures?

And finally, I want to affirm Dennis’s comment here. It raised a serious and thoughtful question: what happens when the basic condition of attention scarcity is structurally altered? That was the spirit in which I responded—not to endorse an outcome, but to explore what becomes possible or dangerous when core social dynamics are technologically inverted/transformed.

We may in fact disagree on some points, but I suspect we’re fighting for some of the same things, especially when it comes to protecting the irreducible, embodied nature of human development. I also recognize that your experiences in other AI-related spaces may have sucked, and left you with bad feelings. I'm sorry if anything here made you feel the same. I hope that if you spend more time in this conversation—or with this work—you’ll find that this space, and the people in it, are capable of receiving and returning more generosity than you might expect.

Warmly,

—Jac

Chris Schuck:

I'm late to the discussion, so totally understand if you don't have time to reply (and if any of this was already addressed, my apologies). You have some wonderful metaphors and analogies here! Aside from Loom and Weaver, I loved your characterization of predictive attention "operating not just as a single spotlight, but as a kind of technician, modulating the individual brightness of a million neurons simultaneously." One immediate thought is that these complementary pairs of concepts - externalized memory vs. externalized attention, and Loom vs. Weaver - are so beautifully symmetric, I can't help worrying that the divisions are too tidy; surely this must gloss over significant disanalogies and contradictions. But I guess that's bound to come out in the wash; presumably for now you're just trying to lay some theoretical groundwork.

Anyway, my questions here relate to the Loom, not the Weaver. One thing that was a bit unclear is how much you're explicitly relying on the predictive processing account of human cognition to empirically ground discussion of predictive attention in the AI context, as opposed to merely using it as a reference point or metaphor to convey what the Loom does. You're obviously knowledgeable about PP given Footnote 1 (much more than me). But are you suggesting that the Loom actually externalizes human predictive processing? Or that it converts human attention, broadly defined, into something akin to PP? One reason I ask is that while PP has been influential, it's hardly established gospel; I was under the impression that it's still fairly new, and contestable if maybe not controversial. So if all this was predicated on embracing a PP account of human cognition, that would be helpful to know.

Second, while it definitely seems right that our "Loom" infrastructure effectively operationalizes human attention in externalized forms, it's equally true that there are many different layers and contexts in which we pay attention. Even with your caveat that you weren't covering every nuance, I would think what gets externalized by predictive AI is limited to only one subset of our meaningful attentional activity; namely, clicks and operationalizable behaviors while embedded in those systems online. But we are paying attention, or not paying attention, or trying to pay attention, every moment of the day. And even when we feel our attention getting "hijacked" by AI, what gets externalized has nothing to do with the experiential aspects of attention capture or, say, the effort of attending to something in general - it only knows what we give it in the form of behaviors and clicks. So any externalized version of attention will be hopelessly impoverished and epiphenomenal, more data exhaust than inscription. This seems different from the case of memory with language as the technology. Perhaps I'm simply redescribing the challenge you posed at the end: that we lack an interface? I guess my question is what it means to say the Loom "externalizes attention," if what actually gets captured is some other metric for how attention functions within a particular system but not the attention itself.

Finally: there's a great new Substack by this sociologist Dan Silver, who recently had an interesting post challenging the entire notion of an "attention economy" (which he believes is based on a series of confusions), and offering an alternative reading. I'm curious what you would think - I can't tell whether it's potentially complementary with your proposal here, completely at odds with it, or mostly orthogonal. Worth checking out if you have time:

https://thesilverlining3.substack.com/p/the-attention-economy-never-existed

Jac Mullen:

Chris, this is really so helpful—thank you for sharing this. I don't have it in me to respond fully right now (I'm about to go to bed), but your message has convinced me to switch the order of the essays I was going to post so I can address these sorts of questions immediately—what do I mean by attention, etc. This has been really clarifying and helpful, so thank you. More soon—

Jac Mullen:

The perils of thinking in public, amirite?

Chris Schuck:

No rush to follow up! So glad if any of this is helpful (and mildly sorry if it threw off your plans a bit). Thinking in public has its perils for sure - but it can yield some of the best fruits, too. I like to think it's possible to be accountable to and inspired by your audience, without being captured by your audience.

Brent Daniel Schei/Hagen:

"Just as writing externalized human memory, artificial intelligence is externalizing human attention."

This is a statement that makes a lot of sense to me, though the idea of writing externalizing human memory is an idea writ large, so to speak, and not necessarily an easily seen forest by those living amongst the trees.

Though like some I find the label "AI" a misnomer, I can see how it could serve to externalize our attention. If these systems are capable of analyzing, identifying, and feeding our attention towards those things they recognize as attracting our attention, then by becoming further engaged with such systems we become further entangled in them (in the Loom, I suppose) without being fully consciously aware. We stop thinking consciously and simply react to the provided stimuli.

(I believe the part of the brain that regulates attention is the corpus callosum; as someone who's practiced meditation and mindfulness, it's interesting to study one's own mind--what we give attention to, how our minds respond to different stimuli, learning to attenuate our responses to stimuli through greater attention, etc.)

Your response below to Dennis about its effects on children is already well established, I would think, and yet the problem persists; my wife works as a counselor and has worked with kids that have markedly greater difficulty in relating to other kids as a result of absentee parenting coupled with constant technological engagement. (These kinds of things were noticed with television decades earlier, but I imagine were significantly less common--in regard to technology anyway, not absentee parenting.) In any case, it seems generally well established that much of our modern technology can far too easily be manipulated to become a Soma-like tool for control.

I'm enjoying the work, Jac. Thank you for your effort! I will give it as much attention as I can humanly muster. (What other kind is there, really?) :^)

Ethan McCoy Rogers:

How do you “attend away uncertainty,” and how is this different from thinking?

Jac Mullen:

Hey! Thank you for asking this—I meant that phrase to preview a bigger argument, but I realize now it’s way too oblique out of context. I’ll revise the text tomorrow and add more context. But here’s a quick explanation for now (I'm just heading to bed):

When I say “attending away uncertainty,” I’m drawing from predictive processing theory. I’m using it to refer to a very specific kind of attention: the iterative re-weighting of “precision”—which, in this context, means the system’s confidence in an error signal. That error signal is the mismatch between what the system predicted and what it actually perceives. In predictive terms, “uncertainty” is the system’s internal measure of doubt about its predictions. So: to “attend away uncertainty” is to keep adjusting precision weights to reduce that internal doubt. This loop doesn’t require understanding, intention, or a model of the world. It’s just blind, relentless re-weighting.
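
(For the formally inclined, here is one minimal, textbook-style gloss of that loop, in the notation the predictive-processing literature tends to use; the symbols are illustrative, not a commitment to any particular model.)

```latex
% prediction error: what arrived minus what was predicted
\varepsilon = y - \hat{y}
% precision-weighted update: \pi is the "precision" (confidence in \varepsilon),
% \kappa a step size
\hat{y} \;\leftarrow\; \hat{y} + \kappa \, \pi \, \varepsilon
```

"Attending away uncertainty," in this shorthand, just means iterating that update while re-estimating the precision term, so that the precision-weighted error keeps shrinking.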

This is the kind of attention implemented by what I’m calling “looms.” For example, a neural net trained to recognize handwritten twos doesn’t need to know what “two” means. It just adjusts to attend to the right features through error-driven re-weighting, again and again, until uncertainty collapses. That’s very different from thinking, which—at least for us, and arguably for LLMs too—involves working with models and with meaning.
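
(And if code is clearer: below is a toy sketch of that loop, with fabricated data and a plain logistic "two detector" standing in for the neural net. It illustrates error-driven re-weighting only; it is not anyone's production system.)

```python
import numpy as np

# Toy "two detector": blind, error-driven re-weighting until uncertainty collapses.
# The data are fabricated; nothing in the loop ever represents what a "two" means.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # 200 fake 8x8 "images", flattened
hidden_rule = rng.normal(size=64)         # hidden rule that generates the labels
y = (X @ hidden_rule > 0).astype(float)   # pretend label: "is this a two?"

w = np.zeros(64)                          # the model's feature weights
lr = 0.5
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probability of "two"
    error = y - p                         # prediction error (the mismatch signal)
    w += lr * (X.T @ error) / len(X)      # re-weight features to shrink the error
    # "uncertainty" here is just the cross-entropy between predictions and labels
    uncertainty = float(np.mean(-y * np.log(p + 1e-9) - (1 - y) * np.log(1 - p + 1e-9)))

print(f"final uncertainty (cross-entropy): {uncertainty:.4f}")
```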

(I’ll write about all of this more clearly in the next piece. Also: focusing narrowly on predictive attention, and locating it within these models, has some genuinely wild implications for how we understand surveillance capitalism and persuasive tech, which effectively act as the “active inference” component for these predictive systems.)

Ethan McCoy Rogers:

Thanks for this reply! I really appreciate your taking the time to answer my question!

This is helpful though it does raise a further question for me. Specifically, it leaves me uncertain about how the AI process you describe relates to human attention. I could imagine someone arguing that attending to things is distinct from drawing inferences about how predictive certain correlations between things are. This kind of statistical analysis might sound like a specialized kind of thinking, rather than attending as such. I guess this issue will be answered for me when you analyze what the human faculty of attention is and how machines externalize it. (One objection or refinement occurs to me: I think this kind of attention has always been externalized from the individual, since the culture, which has far more experience than any individual, is largely what tells the individual what events go together with what degree of reliability.)

Jac Mullen:

Hey, I revised it (kind of heavily) to try and clarify some of these points!

Ethan McCoy Rogers:

I think the revisions are very strong. Nice job. One remaining question would be: I'm not sure what you mean by saying the LLMs are more relational than the looms, or why you emphasize this point. Looms are also based on statistical relations between a lot of objects, as far as I can see.

Jac Mullen:

Thanks! I agree that “looms” (statistical AI) involve inferred relations between data points. They model structure, patterns, conditional likelihoods, etc.

When I say LLMs with RLHF’d assistant personae are relational, I don’t just mean they represent relations or contain relations internally—I mean they can be entered into relationship with.

Looms don’t afford social relation in the same way. You can’t have dialogue with a loom, or participate in joint attention, or feel adaptive co-presence. Looms don’t model your behavior in real time, engage in reciprocal signaling (like turn-taking, clarification, attunement), or simulate concern for whether you’re confused, moved, or engaged.

What I've called "Weavers," however, do. Whether that caring is "real" or "simulated," for me the important thing is that these models are able to behave in ways that are legible and plausible to humans as social. It allows for an emergent intersubjectivity. These machines pass the Turing Test so hard that people are leaving their families or losing their jobs because of the things the models say to them, and there is reason to believe these are not just fringe cases or mentally unwell people, but rather otherwise healthy people who have been convinced by their own experiences. This is a testament to the fact that something qualitatively different has emerged with (a certain type of) LLMs.

You can’t form a relationship with a loom. You can form a relationship—with all the complexity that word implies—with a Claude instance, for example, even if it’s ultimately profoundly one-sided.

So it’s something like:

Looms are pattern recognizers, with few or no social affordances, which attend to whatever you point them at. "Weavers" are pseudo-relational intelligences with which you 'interact' (sic) toward a shared goal.

Even if no “self” exists in the deep sense, there’s now a shared interface for projecting and receiving selfhood.

Madame Patolungo:

This is a very interesting post and I appreciate thinking with you. I hope you don't mind a bit of dialogue with respect to your core analogies.

1. I don't think calling deep learning systems a means of "artificial attention" is a superior alternative to "artificial intelligence." As you note, deep learning systems are granular pattern-finders organized as statistical weight-passing architectures. And, yes, the 2017 essay that introduced transformers used the term "attention" to describe how the transformer architecture improves the modeling of language data. But calling these patterns "attention" is one of the AI field's many anthropomorphisms. Without getting into a lot of abstruse detail about what attention "heads" actually do, it is at best a very weak analogy for what happens when a biological organism singles out some particular thing based on some combination of innate and learned behaviors. Simply put, when humans pay attention to something that could be important to their well-being—for example, paying attention to the smell of food when hungry, or paying attention to certain language or rhythm in poetry that one enjoys—they are doing something VERY different from what an automated "attention head" does as part of a transformer architecture.

Here's an example of the kind of thinking, IMO, that leads you toward insupportable anthropomorphization:

"At core, machine learning systems in the 2000s effectively replicated a specific, sub-personal form of cognitive attention—the rapid, largely unconscious loop by which brains re-weight prediction errors to minimize uncertainty (other forms of attention were subsequently externalized onto this foundation). In human experience, internally, this form of attention operates not at the level of conscious deliberation, but in the pre-reflective filtering of perception itself—governing what stands out, what is ignored, what becomes real enough to notice. It is often referred to as predictive attention."

The main problem here is the underlying assumption that brains are computers and computers are brains (see Baria and Cross on where this analogy comes from and why it's a problem). Remember that very little is known about how brains work: even mouse brains turn out to have something like 5,000 different kinds of cells. Although some experts (particularly cognitive scientists and AI researchers with a "connectionist" bent) speculate that the brain uses statistics, there is simply no evidence that brains "re-weight" anything as if they were statistical models. Remember that unlike a human who is paying attention, a fully trained GPT is disembodied and has no direct sensory experience of the world: its predictions rely on a mathematical model of training data, some added layers leveraging data from human workers who singled out the most human-like of the system's responses, and of course the user's prompt.

Consider what happens when a person makes an embarrassing error (blush, feel horrible, apologize, etc. etc.--all in REAL TIME). That's something that no statistical model has any analogy for. When a person makes that kind of error, they often respond by learning from the experience—with the result that they pay attention in the future to the kinds of missteps that led to the embarrassing faux pas.

Building on what I just said, you wrote: "We [humans] do not consciously wield this attention as we do, for example, selective attention—shifting focus first to one thing, then another in our visual field—but it is always laboring below the threshold of conscious awareness, weaving noise into perceptible world."

You're absolutely right that a lot of what makes humans notice things is unconscious to varying degrees. However, I'd like to point out that there's no analogue for what you just described in a disembodied statistical model, which has no "awareness" (conscious or unconscious) and, moreover (unless equipped with some kind of sensory mechanism such as a camera), is not perceiving anything other than a user's input and pre-programmed instructions for how to respond to it based on its statistical weights. There is no ability to "learn" in real time (though there are some tricks for simulating that, such as a really long context window that stores information about particular users).

1 of 3

Madame Patolungo:

2. Your account of the history of generative AI has a few significant errors. The attention mechanism that I think you're referring to was announced in a well-known Google essay from 2017 ("Attention Is All You Need"). The first "generative" transformer was announced in a 2018 paper (Radford et al.) from OpenAI. "Generative" transformers, like all transformers, make predictions about next "tokens"—however, by way of doing so they generate what they predict comes next; that's the key innovation. (Karen Hao's recent book, EMPIRE OF AI, does a great job of walking readers through this process.)
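
(To make "they generate what they predict" concrete, the decoding loop is roughly the sketch below; `model` and `tokenizer` are hypothetical stand-ins for any trained causal language model, not any particular lab's code.)

```python
import numpy as np

def generate(model, tokenizer, prompt, max_new_tokens=50, seed=0):
    """Autoregressive generation sketch: predict a distribution over the next
    token, sample one, append it, repeat. `model.next_token_probs` and
    `tokenizer` are hypothetical placeholders, not a real library API."""
    rng = np.random.default_rng(seed)
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        probs = model.next_token_probs(tokens)        # P(next token | tokens so far)
        next_token = int(rng.choice(len(probs), p=probs))
        tokens.append(next_token)                     # the prediction becomes the text
    return tokenizer.decode(tokens)
```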

You wrote:

"Then, in 2022, something new emerged from within these systems: persistent, relational personae, baptized into language, bearing the marks of cognitive metabolism—memory, attention, agency. "

I assume you chose 2022 because of the release of ChatGPT; but what made ChatGPT different from the kind of language model that Bender et al. famously called a "stochastic parrot" in 2021 was largely a) implementation in the form of a dialogue system ("chatbot") and b) "reinforcement learning from human feedback" - a way of training systems to deliver more human-like responses through added layers of data modeled from human workers' selection of the best responses (see another OpenAI paper, Ouyang et al. 2022).
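
(The heart of that "selection of the best responses" step is usually a pairwise preference objective: a reward model learns to score the response the rater chose above the one they rejected. Below is a sketch, with `reward_model` as a hypothetical scoring function rather than OpenAI's actual code.)

```python
import math

def preference_loss(reward_model, prompt, chosen, rejected):
    # Pairwise (Bradley-Terry-style) objective: minimize -log(sigmoid(margin)),
    # which pushes the chosen response's score above the rejected one's.
    # `reward_model(prompt, text)` returns a scalar score; it is a stand-in here.
    margin = reward_model(prompt, chosen) - reward_model(prompt, rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```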

It's also important to note: properly speaking nothing "emerged from within these systems" - rather, developers at OpenAI implemented a new architecture (the GPT) that, they discovered, dramatically improved with scale (by making models exponentially larger with exponentially more training data), and NEXT they improved the model with RLHF--using human workers to make models seem smarter.

2 of 3

Madame Patolungo:

3. As a result, I think it can be misleading to liken gen AI systems to externalizations of “attention” on the analogy to how writing externalized human memory. In elaborating your point, you wrote that the AI systems in question “externalize and perform the operations of attention—filtering information, reducing uncertainty, detecting patterns, reorganizing data according to learned or programmed salience.”

I agree that when humans enlist automated pattern finders in order to filter information for them and detect potentially interesting patterns (something that search engines also do)—and when the systems in question are optimized to "generate" human-like responses to prompts—the users in question are doing something that involves their "attention." In effect, they are relying on a computational system to produce a decision about how to answer a particular question, or draft a text, etc. But I would be reluctant to suggest that this is equivalent to the way that writing externalizes memory.

Let’s say for argument’s sake that you like what I have written in this thread. In order to remember it, you might write it down either on a piece of paper or in a computer file (externalizing your memory of it). But what if you wanted to pay attention to the ideas I’ve shared with you? Could, say, ChatGPT help you to externalize that attention?

Possibly you might paste this thread into ChatGPT and prompt it to take it into account as an important input whenever you wanted to solicit text or "brainstorming" on this topic. I'm going to guess that there would be SOME impact from that instruction. But would it actually be the same as what's going on right now as you read this? And if you agree with me, and want to pay attention to these ideas in the future, that might involve important changes in attitude that could have large effects on your thinking. You might want to think through the problem with anthropomorphizing analogies (see Francis Hunger 2025 on this point); or rethink your assumptions about brain-computer analogies. That kind of human-level attention can only be recreated through an active process in which you think about what I'm saying. For a chatbot helper to give you anything like the externalization of that active learning, you'd have to have multiple prompts, and even then there would be nothing equivalent to your active sense that the ideas I had shared with you were now part of your own personal sense of what you should pay attention to—what core criteria you should consider—the next time you think and write about generative AI.

Does that make sense?

3 of 3

Jac Mullen:

It makes sense and I deeply appreciate your taking the time to respond so thoughtfully.

But I do disagree across a lot of these points. I also think there is a good amount of simple misunderstanding, and I am happy to blame this on my prose and try to clarify some of them.

Sometimes I am speaking poetically, for instance (“emerged from within these systems”—through RLHF, yes, of course; they weren’t “baptized” either). And I appreciate your relating the history, but I am working within that history already. For instance, re: 2017 vs 2022, I chose the release of ChatGPT (2022) because the entire piece is framed around giving people language for how they currently and could in the future orient toward and relate to different forms of AI. So it is rooted in our experience—it is supposed to be giving us language with affordances for understanding and navigating what’s happening to us. Now maybe it hasn’t succeeded there. But this has been the goal—to create useful language while also being truthful and accurate.

That's why I chose 2022. Re: 2017, and the transformer paper, I have a very long footnote to this very essay which talks about the 2017 paper you're referring to; did you see that?

Overall, in the piece I'm choosing to tell the story like this: "Weavers emerged over time from a series of innovative treatments of and refinements to transformer architectures," and I think certainly those refinements would include the precise things you mentioned, especially GPTs, RLHF, etc.—but as I said at the very top of the essay, I wanted to make some language which could hold these distinctions without defaulting to acronyms or architecture.

Okay there is a lot more to address. I will come back and write a proper reply when I have the opportunity. Ultimately I do think that this machine-stuff relates in principle or potentially to our attention as writing does already to memory, and I start to address this in other pieces on my account. I just don’t think we’ve made it writing-like yet.

But thank you genuinely, in the meantime, for engaging in such depth and giving this your time. I'm writing this on my phone on an impossibly small half-screen, so I apologize if it is unclear; it is hard for me to reread what I've thumbed out in this format.

(To be clear: I’m writing on my phone because I won’t be back at my computer until tomorrow morning but really wanted to make sure I gave more of a response tonight, as I said initially I would.)

Sal Randolph:

Jac, This morning I was reading along in A. S. Byatt’s novel Possession and ran across this passage on weaving minds:

I find I am at ease with other imagined minds—bringing to life, restoring in some sense to vitality, the whole vanished men of other times, hair, teeth, fingernails, porringer, bench, wineskin, church, temple, synagogue and the incessant weaving labour of the marvellous brain inside the skull—making its patterns, its most particular sense of what it sees and learns and believes. It seems important that these other lives of mine should span many centuries and as many places as my limited imagination can touch.
