Discussion about this post

R McC:

I am very intrigued by your proposal to create “systems allowing people to ingather the fragments of externalized memory — journals, biometric data, etc. — through external attention systems in order to render some choosable section of “self” glanceable in an instant”.

As someone who gave up social media several years ago out of fear of the behavior modification you describe (I was then influenced especially by Jaron Lanier’s arguments), I am eager to retain human agency against the processes you discuss in this series.

My question: have you begun to map out practical ways to start doing this? I am spending a lot of time cautiously getting to know the capabilities of the weavers (mainly via ChatGPT, Claude, and recently Kimi K2). Are there ways of using these tools, along with other things such as notes apps (e.g., Obsidian), biometric data, etc., even if imperfectly, that you have found or begun to explore?

In other words: how can we start *doing* what you propose?
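One way to start experimenting, offered only as a sketch rather than anything the post itself prescribes, is a small script that ingathers recent fragments from a notes vault into a single digest you can then hand to whichever model you prefer. The sketch below assumes a local Obsidian vault of Markdown journal entries; the vault path, the seven-day window, and the output filename are illustrative assumptions, not details from the original proposal.

```python
# Minimal sketch of "ingathering": collect recent journal fragments from a
# local Obsidian vault (hypothetical path) into one glanceable digest file.
from pathlib import Path
from datetime import datetime, timedelta

VAULT = Path.home() / "Obsidian" / "Journal"  # assumed vault location
WINDOW = timedelta(days=7)                    # how far back to "glance"
cutoff = datetime.now() - WINDOW

fragments = []
for note in sorted(VAULT.rglob("*.md")):
    modified = datetime.fromtimestamp(note.stat().st_mtime)
    if modified >= cutoff:
        fragments.append(
            f"## {note.stem} ({modified:%Y-%m-%d})\n"
            f"{note.read_text(encoding='utf-8')}"
        )

# One combined file that a model (ChatGPT, Claude, Kimi K2) can summarize
# into a quick view of what you were paying attention to this week.
Path("weekly_digest.md").write_text("\n\n".join(fragments), encoding="utf-8")
print(f"Gathered {len(fragments)} fragments into weekly_digest.md")
```

Pasting the resulting digest into a model with a prompt along the lines of "summarize what I was paying attention to this week" is one crude approximation of rendering a chosen slice of "self" glanceable in an instant.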

Argo:

I'm not sure we are on a path toward anything different from what we have experienced for most of human history. The interview seems to suggest that if we are not careful, we will lose our agency to AI, or that we have already likely lost it to the attention algorithms driving social media. But this lack of agency seems to have existed long before social media. Don't many people desire a risk-averse path, something that others have done before, so they can live their lives in a manner that is "good" simply because everyone else is doing it? We can call it herd mentality, but it is a strategy likely to result in survival. Do these people consciously choose a herd mentality, or do they follow it because they fear agency, or lack confidence in their ability to outperform the herd?

If it's not a conscious decision, do these people still have agency? I suppose I'm asking whether everyone has free will, or whether anyone does. The point I'm trying to make is that people have responded to forces other than their own "will" for hundreds, or potentially thousands, of years. And still, some men have acted as outliers: those with a strong will and high agency.

It seems to me that men with agency will forge their own path, and if they fail, we will likely never learn of them; for those who succeed, however, other early adopters and risk-takers will follow, and eventually the herd. So perhaps the lack of agency, or will, in some people becomes more visible, but I'm not sure we can say AI changes a dynamic that has always existed: some men impose their will on their environment, and others simply react to it.
