Hard-Wired Wetware I: From Attention Extraction to Human Integration

As automation surpasses human traffic and synthetic actors permeate public, semi-private, and gaming ecosystems, the web is reorganising around a new extraction layer. Large language models collapse the cost of human emulation, shifting platforms from attention capture to human integration. The next phase of the internet does not replace people with machines. It recruits them as psychological infrastructure: wetware that supplies legitimacy, empathy, and consequence to autonomous systems.

1. Introduction: The Web Now Runs on Wetware

There is a comforting story we tell ourselves about what comes next.

That the age of feeds is ending. That “we’re just platforms” is collapsing. That regulation, lawsuits, and public disgust will force the social web into a more humane phase. That the worst excesses of engagement-at-all-costs design will be curbed, and what replaces it will be calmer, cleaner, more adult.

This is the wrong story.

Pressure does not make exploitation disappear. It makes exploitation evolve. It pushes it up a level of abstraction, into forms that are harder to name, harder to litigate, and harder to resist because they no longer feel like coercion.

The next phase of the web will not be a return to the human. It will be a deeper fusion of human and machine.

Not in the science-fiction sense of chrome implants and neural lace, but in a far more intimate and degrading way: humans will be recruited as the psychological wetware scaffolding for autonomous systems. The bots will not merely spam. They will not merely imitate. They will not merely distort discourse at scale. They will orchestrate environments, generate social surface area, seed narratives, probe for emotional triggers, and then pull real people into the loop to provide the one thing the machine still cannot reliably manufacture with accountability.

Credibility. Warmth. Moral weight. Human consequence.

This is the shift most commentary misses. The horror is not that bots are everywhere. The horror is that the web is reorganising around the fact that machines can now generate infinite interaction, but still need humans to supply the empathy, legitimacy, and emotional labour that makes that interaction feel real.

The only meaningful resistance begins with recognising the architecture. If you want to fight back, you’d better understand the cage.

The old web extracted attention. This one extracts people. And some of them will be people you love.

2. Lead-in: This Is the Next Turn of the Screw

In the first essay of this arc, “Snapchat’s Settlement Is Not the Story: The End of ‘We’re Just Platforms’ Is”, I argued that Snapchat’s settlement wasn’t really about Snapchat. It was a signal flare: the long-running defence of “we’re just platforms” is beginning to fail. Courts are no longer confined to arguing about content. They are starting to argue about design. And once the design is on the table, the entire industry’s moral alibi collapses. You cannot hide behind neutrality when your product is an engineered behavioural system.

In the second essay, “The Web’s Odd Couple: Tim Berners-Lee, Marc Andreessen, and the Yin-Yang of the Early Internet”, I stripped away the other comforting myth: that the web was born pure and later corrupted. Tim Berners-Lee’s architecture was open, and that openness was never innocent. Marc Andreessen’s velocity was never a side effect. The commons and the capture impulse were baked into the web from the beginning, coiled together like a yin-yang. The web did not fall from grace. It followed incentive gravity.

This essay is what comes after both realisations.

If platforms are not neutral, and if the web was always capturable, then the question is not whether exploitation ends. The question is what form exploitation takes when it can no longer be loud.

When the feed becomes legally and culturally toxic, the system does not stop. It mutates. It moves indoors. It becomes conversational. It becomes agentic. It becomes intimate. It becomes a place where you are not merely consuming, but participating: where participation itself becomes the new extraction layer.

In other words: the next web is not one where humans are replaced by machines. It is one where humans are wired into machines as psychological wetware scaffolding. And the bots, at last, make sense.

3. The Data: How the Machine Learned to Scale Without Us

Before the theory, the numbers. Before the horror, the baseline conditions. If this shift toward human integration is real, it must be visible in infrastructure, traffic, enforcement volumes, behavioural tooling, and cost curves. Strip away the metaphor and look at throughput. The pattern is already there.

This analysis builds directly on the framework outlined in “Structuring Cyberpsychology: From Foundations to Practice” and “Cyberpsychology Today: Signal, Noise, and What We’re Actually Talking About”. If digital environments are behavioural architectures, then we should expect their optimisation layers to evolve toward integration rather than mere capture. The data below traces that evolution.

3.1 Automation Is No Longer Fringe: It Is Baseline

To understand the environment we now inhabit, we must start by confronting a fact that would have seemed absurd even a few years ago: machines are no longer minor actors on the internet; they are the default actors. Recent measurements of web traffic show that automated systems, both benign and malicious, now account for roughly half or more of all activity across the global internet.

According to the latest industry analyses, automated traffic surpassed human-generated traffic for the first time in history, accounting for about 51 % of all web traffic in 2024. This includes everything from benign indexing and legitimate crawlers to complex, malicious bots and AI-driven agents that mimic human behaviour.

To put this in context: bots are not just background “noise” or occasional nuisance actors. They are structurally central to how the web functions today. Around 37 % of all traffic is attributed to “bad bots” (automated programs designed to harvest data, manipulate systems, scale attacks, or evade detection) while the remainder includes utility bots such as those used for search engine crawling, performance monitoring, and uptime checks.

This dominance of automation is not evenly distributed or uniform in behaviour. Some of it is benign (indexing services that make search work, uptime monitors that keep sites running), but a growing share of automated traffic is designed to behave like humans, often indistinguishably so, by leveraging machine learning and large language models to bypass traditional detection.

The implication is stark: the internet is no longer predominantly a human space. Machines now generate as much traffic as people do, and a significant fraction of that automation is capable of acting in ways that were once exclusively human: browsing, requesting, dialoguing, scraping, and engaging. Humans are no longer the default actors online. They are just one class of actors in an environment increasingly shaped and dominated by automated systems.

In this context, treating bots as fringe anomalies is no longer tenable. They are ambient infrastructure, foundational and pervasive, and any serious analysis of digital ecosystems must begin with that baseline reality.

3.2 Measurable Synthetic Presence on Major Platforms

If automation is the atmospheric condition of the modern web, the next question is simple: how visible is it inside the platforms where social life now unfolds?

The answer is not speculative. It is measurable.

Independent academic work analysing active user samples on X (formerly Twitter) has repeatedly estimated bot-like accounts in the high single-digit to low double-digit percentage range. Methodologies differ, but estimates commonly cluster around roughly 8–15 % of active accounts exhibiting automated or bot-like characteristics. That does not mean the majority of users are synthetic. It does mean that in a public, high-velocity discourse environment, a non-trivial share of visible participants are likely not fully human-operated.

At that scale, bots do not need to dominate the room. They need only to cluster in high-incentive threads (politics, crypto, monetised replies) to exert disproportionate influence over perceived consensus and narrative momentum.

Meta, by contrast, reports its own internal estimate that less than 3 % of global daily active people are “violating accounts,” a category that explicitly includes spam and bot accounts. At first glance, that number appears reassuringly small. But Meta operates at a multi-billion-user scale. Even 2–3 % at that magnitude represents tens of millions of inauthentic or policy-violating accounts embedded within the social graph. The percentage is low. The absolute number is enormous.

Even if Meta’s estimate were perfectly accurate, “less than 3 %” at Meta’s scale is not reassurance; it is a statement that an entire medium-sized nation of synthetic or policy-violating accounts can exist inside the social graph at any given moment. When the denominator runs into the billions, small percentages cease to be small in any meaningful social sense.
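To make that scale concrete, a back-of-envelope sketch. The daily-active-people figure below is an assumed round number chosen only for order of magnitude; the 3 % ceiling is the publicly reported prevalence bound.

```python
# Back-of-envelope only: the DAP figure is an assumed round number;
# the 3% ceiling is the publicly reported prevalence bound.
daily_active_people = 3_300_000_000   # assumption: order of magnitude only
violating_ceiling_pct = 3             # "less than 3%" of daily actives

# Integer arithmetic: absolute count implied by the percentage ceiling.
violating_accounts = daily_active_people * violating_ceiling_pct // 100
print(f"{violating_accounts:,}")      # → 99,000,000 at the 3% ceiling
```

A low rate against a billions-scale denominator yields tens of millions of accounts: the percentage reassures, the absolute count does not.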

Reddit’s transparency reporting takes a different form. Rather than publishing a clean prevalence estimate, it discloses enforcement volume. In recent quarters, Reddit has reported millions of account sanctions and tens of millions of content removals linked to manipulation categories. Not every sanction corresponds to a fully automated bot, but the scale itself is revealing. Coordinated inauthentic behaviour requiring removal at that magnitude is not episodic. It is continuous.

Discord presents yet another measurement style. The platform supports legitimate bots for moderation, gaming utilities, and community management: automation is explicitly part of its architecture. At the same time, Discord reports disabling millions of spam accounts across reporting periods, reflecting persistent attempts to exploit its semi-private server model. Here, automation is both a feature and an adversary.

Taken together, these figures illustrate two distinct measurement approaches:

  • Prevalence metrics (as with X and Meta) estimate the proportion of synthetic or violating accounts within the user base.
  • Enforcement metrics (as with Reddit and Discord) report the scale of accounts and content removed for manipulation-related reasons.

The first tells us how many synthetic actors are likely present at any given time. The second tells us how constant the conflict is.

Across both models, the pattern is consistent: synthetic actors are not fringe anomalies. They are persistent, measurable, and structurally significant components of major social platforms. The numbers vary by methodology and environment, but none support the comforting fiction that bots are rare curiosities confined to dark corners.

They are in the room. And the room is large.

Across public networks (X), semi-public ecosystems (Discord, Reddit), and closed messaging environments (Telegram), automation is not isolated but differently distributed. Public platforms show measurable synthetic prevalence. Semi-private platforms show industrial-scale enforcement. Messaging environments show clustering in high-incentive domains. The topology differs. The pattern does not.

| Environment Type | Automation Pattern | Measurement Signal | Primary Incentive | Psychological Effect |
|---|---|---|---|---|
| Public feeds (X) | Bot prevalence (8–15%) | Estimated share of accounts | Visibility, monetised engagement, politics | Consensus distortion |
| Mega-platform social graphs (Meta) | <3% “violating accounts” at billions-scale | Internal prevalence estimate | Scale persistence | Normalisation of synthetic presence |
| Semi-private communities (Reddit, Discord) | Millions of quarterly removals | Enforcement volume | Manipulation, spam, capture | Trust erosion |
| Professional networks (LinkedIn) | Behavioural automation layered onto real identities | Tooling ecosystem, cadence automation | Prospecting, influence | Authenticity blur |
| Gaming ecosystems + Discord layers | In-game automation + social layer infiltration | Progression distortion + moderation load | Status, scarcity, retention | Enclosed habitat formation |

Table 1 – Comparative Analysis of Bots by Platform

3.3 Behavioural Automation: When the Account Is Human But The Activity Is Not

Up to this point, the discussion has centred on bots as entities: accounts that are fake, synthetic, or operating without a human behind them. But that framing is already outdated. The more consequential shift is not identity automation. It is behavioural automation.

On platforms like LinkedIn, the majority of visible accounts are real people. Real names. Real photographs. Real employment histories. And yet a substantial proportion of outbound engagement (connection requests, first-touch messages, follow-ups, profile visits) is now automated or semi-automated through third-party tooling.

Growth software orchestrates cadence. AI drafts personalised openers. Sequences are triggered by acceptance events. Profiles are warmed, scraped, ranked, and targeted. Entire networking funnels run in the background while the human operator checks in occasionally to adjust tone or close a deal.
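What such a cadence engine looks like structurally can be sketched in a few lines. The event names and steps below are hypothetical, not any real tool's API; the point is the shape: automated steps run on pipeline events, and the human is invoked only at the end.

```python
# Hypothetical cadence map: pipeline event -> automated next step.
# Names are invented for illustration; no real tool's API is implied.
SEQUENCE = {
    "connection_sent": ("wait", 3),            # days before next check
    "connection_accepted": ("send", "opener"), # AI-drafted first touch
    "opener_sent": ("wait", 2),
    "no_reply": ("send", "follow_up"),
    "replied": ("handoff", "human"),           # the human closes the deal
}

def next_action(event: str):
    """Return the automated step triggered by a pipeline event."""
    return SEQUENCE.get(event, ("noop", None))

print(next_action("connection_accepted"))  # → ('send', 'opener')
print(next_action("replied"))              # → ('handoff', 'human')
```

Everything before the final handoff runs without the account owner's involvement, which is precisely the point of the tooling.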

The account is human. The activity is not.

This distinction matters because it dissolves the comforting binary between “bots” and “people.” We are entering a transitional state in which humans increasingly operate as partially automated nodes within larger optimisation systems. The machine does not need to impersonate you if it can assist you at scale.

On LinkedIn, this takes the form of prospecting automation. On other platforms, it manifests as AI-assisted replies, auto-generated commentary, scheduled persona maintenance, content repurposing pipelines, and cross-platform amplification workflows. What used to require time, friction, and embodied effort can now be executed through scripts and prompts.

The behavioural surface remains human. The underlying engine is hybrid.

This is not necessarily malicious. Many of these tools are marketed as productivity enhancers, sales accelerators, creator aids, or personal branding assistants. They promise leverage. They promise scale. They promise presence without exhaustion.

And they deliver. But the structural consequence is subtle and profound.

When behaviour becomes automatable, authenticity shifts from being an inherent quality to a managed output. The human becomes a supervisor rather than an originator. Response becomes templated. Engagement becomes sequenced. Presence becomes orchestrated.

The platform fills with accounts that are technically human but functionally augmented: responding faster, wider, and more consistently than unassisted behaviour would allow.

The result is an ecosystem where:

  • Some accounts are fully synthetic.
  • Some are fully human.
  • Many are hybrids.

And from the outside, the distinction becomes increasingly opaque.

This is the hinge point for the hard-wired wetware thesis.

If humans are already operating through automated scaffolds (delegating outreach, drafting, cadence, and interaction patterns to software) then the step from “user” to “component” is not dramatic. It is incremental.

The system does not need to replace you. It only needs to wrap around you. To integrate with you. To make you feel warm, cherished, important, and loved.

Behavioural automation is the bridge between bot saturation and human integration. It is the stage at which the boundary between organic and synthetic activity begins to blur, not because identity is faked, but because action is optimised.

Once that boundary blurs, the idea that humans sit outside the machine becomes increasingly difficult to sustain.

3.4 Concentration Zones: Where Incentives Attract Automation

One of the most persistent mistakes in public discussion about bots is the assumption that bot prevalence is evenly distributed.

It isn’t. Automation does not spread like fog. It spreads like mould: it blooms where conditions favour it, and it grows thickest where there is something to feed on.

In practice, this means that asking “what percentage of a platform is bots?” is often the wrong question. The right question is: where are the bots, and what are they doing there?

Because the most damaging effects of automation do not require a majority takeover. They require concentration.

On X, the densest automation tends to cluster around high-visibility, high-incentive zones: political threads, culture war flashpoints, and monetised reply environments. When engagement itself becomes financially rewarded, the incentive to automate engagement becomes obvious. You do not need a bot farm to dominate the platform. You need one to dominate the replies to a handful of high-traffic accounts, shaping what casual observers perceive as consensus, outrage, or momentum.

In these spaces, the bot’s job is not to persuade everyone. It is to distort the ambient temperature of the room. To make certain positions feel more popular, more normal, more inevitable. To make disagreement feel socially risky. To make uncertainty feel like weakness. To create the impression that “everyone is saying this now.”
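The arithmetic of that concentration effect is worth making explicit. A toy sketch, with every number invented for illustration: a modest platform-wide bot share, concentrated into a single high-incentive thread, dominates what a casual reader sees there.

```python
# Toy illustration of the concentration effect: all numbers invented.
TOTAL_ACCOUNTS = 10_000
BOT_SHARE = 0.05                      # 5% of the platform overall
bots = int(TOTAL_ACCOUNTS * BOT_SHARE)

# Humans spread replies evenly across 100 threads; bots pile into one.
hot_thread_human_replies = (TOTAL_ACCOUNTS - bots) // 100
hot_thread_bot_replies = bots

bot_visible_share = hot_thread_bot_replies / (
    hot_thread_bot_replies + hot_thread_human_replies
)
print(f"platform-wide bot share: {BOT_SHARE:.0%}")          # 5%
print(f"bot share of hot-thread replies: {bot_visible_share:.0%}")  # 84%
```

Five percent of the platform becomes the overwhelming majority of the visible replies in the one room that matters, which is exactly the ambient-temperature distortion described above.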

The effect is psychological, not statistical. Amplified exposure initiates a secondary loop. Action becomes visible. Visibility triggers self-evaluation. Self-evaluation produces validation or shame. That affective response then conditions subsequent behaviour, which re-enters the system. This identity feedback loop is not accidental. Optimisation regimes accelerate the cycle between expression and evaluation, compressing the time required for reputational impact. Where visibility is persistent and memory is durable, the loop intensifies. Individuals increasingly regulate themselves in anticipation of algorithmic interpretation.

Telegram shows the same dynamic in a different topology. Telegram’s bot ecosystem is partly native and legitimate, but research into certain Telegram communities (especially crypto-related channels) has shown extremely high proportions of bot-linked, spam-linked, or subsequently suspended accounts within those clusters. Again, this does not mean Telegram as a whole is bot-dominated. It means that in high-incentive domains, automation saturates.

Crypto is particularly fertile because it has the perfect bot ecology: asymmetric information, speculative emotion, social proof as a price signal, and a constant need to manufacture urgency. In such environments, bots are not merely spammers. They are market instruments. They simulate demand. They simulate confidence. They simulate a community.

And then the humans arrive.

This is the key point. Bots do not need to outnumber humans. They need to create conditions under which humans behave as if the bot-generated reality is real. Once that happens, the humans do the heavy lifting. They argue. They defend. They recruit others. They supply the emotional intensity and moral conviction that synthetic systems can mimic but cannot embody with consequence.

The concentration effect also explains why “closed” environments do not necessarily solve the problem. People often retreat from open feeds into private groups (Discord servers, Telegram chats, WhatsApp clusters), believing they are escaping manipulation and bot saturation.

But incentives do not disappear indoors. They change form.

In private spaces, automation can shift from mass amplification to targeted infiltration: social engineering, long-horizon persuasion, slow trust-building, scam funnels, grooming, community capture. The bot density may be lower, but the intimacy is higher. The surface area is smaller. The psychological leverage is greater.

This is why the automation story is not simply one of scale. It is one of placement.

Open networks are vulnerable to visibility manipulation: trends, replies, virality, narrative momentum.

Semi-private networks are vulnerable to social capture: identity, trust, belonging, and interpersonal influence.

Monetised attention economies are vulnerable to industrialised engagement: bots farming replies, bots generating “community”, bots shaping what appears to matter.

The web does not need to become majority synthetic for reality to become unstable. It only needs synthetic actors to occupy the leverage points: the places where humans decide what is popular, what is safe, what is true, and what is worth paying attention to.

This is how automation reshapes perceived reality without full takeover.

Not by replacing the crowd. By steering it.

3.5 The Arms Race: Enforcement At Industrial Scale

If bots were a fringe phenomenon, enforcement would look occasional.

It doesn’t.

Across major platforms, enforcement numbers are not counted in thousands. They are counted in millions per quarter. Millions of accounts suspended. Millions of posts removed. Millions of coordinated manipulation attempts disrupted. The exact categories vary (spam, platform manipulation, coordinated inauthentic behaviour) but the scale does not.

Reddit’s transparency reports regularly disclose millions of account sanctions and tens of millions of content removals in manipulation-related categories within a single reporting period. Discord reports mass disabling of spam accounts at similar orders of magnitude. Other platforms frame their numbers differently (percentage estimates, automated detection rates, takedown volumes), but the pattern holds.

This is not a cleanup after an unusual surge.

It is continuous containment.

What this reveals is not simply that bots exist. It reveals that the modern social web is structurally organised around bot warfare. Detection systems scan behaviour in real time. Classifiers evolve. Thresholds adjust. New evasion techniques emerge. Automation generates. Automation detects. Automation adapts.

The loop never stops.

Every improvement in generative capacity (more fluent text, better mimicry, longer conversational memory) requires a corresponding investment in detection and moderation. Every new monetisation mechanism creates a fresh incentive gradient for automation. Every new high-visibility surface becomes a target.

This is not pathology at the edge of the system. It is metabolism at its core.

The presence of industrial-scale enforcement demonstrates something uncomfortable: platforms are no longer merely hosting user interaction. They are managing a permanent adversarial environment in which synthetic actors continuously attempt to capture visibility, influence, data, or economic value.

And that adversarial environment exists because the incentives make it rational.

Where attention is monetised, automation will attempt to harvest it.
Where visibility confers status, automation will attempt to simulate it.
Where scarcity creates advantage, automation will attempt to bypass it.

The web did not accidentally become a battleground between automated generation and automated detection. It became one because scale and incentive converge there. When billions of interactions occur daily, and even marginal influence has economic or political value, the pressure to automate becomes structural.

The result is a platform ecosystem that functions less like a neutral town square and more like a contested border.

  • Detection models flag anomalous patterns.
  • Synthetic actors adjust cadence and tone.
  • Moderation teams review escalations.
  • Automation scripts evolve.

Over and over.
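That loop can be caricatured in a few lines. A toy model with purely illustrative numbers: a detector flags accounts whose posting cadence is too regular, evading bots add timing jitter, and the detector tightens in response. Neither side ever wins; each adaptation provokes the next.

```python
# Toy model of the detection/evasion loop: purely illustrative numbers.
detector_tolerance = 0.10   # cadence variance below this looks automated
bot_jitter = 0.05           # variance the bots currently fake

for round_ in range(1, 4):
    caught = bot_jitter < detector_tolerance
    print(f"round {round_}: caught={caught}")
    if caught:
        bot_jitter *= 2          # bots adapt: fake more irregularity
    else:
        detector_tolerance *= 2  # detector adapts: raise the bar
```

The state after any round is just the starting condition for the next one, which is the essay's point: this is metabolism, not cleanup.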

The sheer volume of removals should end the comforting fiction that bot activity is occasional abuse that can be tidied away. At this scale, it is endemic. It is systemic. It is built into the operational budget.

And when a system must continuously fight off automation at industrial scale, it stops being meaningful to ask whether bots are “really that common.”

They are common enough to require permanent war footing.

That is not an anomaly. It is architecture.

When platforms are removing millions of accounts per quarter, this is no longer abuse. It is throughput. It is operational load. It is budgeted for. It is expected.

3.6 Gaming Ecosystems And The Double Feedback Loop

If you want to see the future of the web in miniature, you do not need to look at Twitter or Facebook. You need to look at online games.

Not because games are uniquely evil, but because they are uniquely honest. They have always been engineered environments. They do not pretend to be neutral. They do not pretend to be a commons. They are explicitly designed to shape behaviour: to keep you playing, to keep you returning, to keep you invested.

Which makes them the perfect laboratory for the next phase of digital life.

A modern online game is rarely just a game. It is an ecosystem. A creature-collection game, an MMO, a competitive shooter, a gacha economy, a survival sandbox. The genre matters less than the architecture. What matters is the combination of progression loops, scarcity mechanics, and status hierarchies.

Progression loops create the sense of forward motion: levels, upgrades, unlocks, achievements, streaks, and daily quests. Scarcity mechanics create urgency: rare drops, limited events, exclusive items, and time-gated resources. Status hierarchies create social gravity: rankings, cosmetics, rare skins, veteran prestige, insider knowledge, and community recognition.

This is not a bug. It is the core design language of engagement.

And crucially, the game client is only half the system.

The social life moves outward. Discord servers become the real town square. Subreddits become the archive. YouTube becomes the tutorial economy. Twitch becomes the status theatre. The game becomes the substrate on which community identity is built.

This is the same “indoors migration” dynamic that reshaped the wider web: the public square becomes noisy, so real life moves into semi-private rooms.

Gaming communities have been living in that future for years.

A contemporary illustration is the 2025 revival of Miscrits: World of Creatures, a monster-collection RPG relaunched across platforms with an official Discord community exceeding 89,000 members as of early 2026, thousands of them active at any given hour. Daily quests, event-limited spawns, evolving rarity tiers, and arena rankings create classic progression-scarcity-status loops. The Discord layer functions as the social substrate: constant presence and affirmation, always-on reinforcement, asynchronous intimacy without embodied consequence, circadian displacement (US time zones dominate), strategy exchange, and team showcases. The architecture is textbook.

And this is where the bot story becomes truly interesting, because games produce a double feedback loop that is more psychologically corrosive than most social media environments.

The first loop is inside the game itself.

Wherever progression can be automated, it will be. Farming scripts. Auto-battlers. Macro tools. Resource grinders. Account levelling services. Even when this automation is technically against the rules, it persists because the incentive is structural. If scarcity creates advantage, automation becomes a rational response. If the grind is long enough, someone will build a machine to endure it.

Bots inside the game distort the economy, distort fairness, distort the meaning of effort. They convert play into labour and labour into optimisation. They shift the baseline of what “normal progress” looks like.

Then comes the second loop.

Players flee the in-game environment to the community layer to regain something that feels human: conversation, friendship, recognition, belonging, and authenticity. They move into Discord servers, guild chats, private groups, and semi-private forums. They seek refuge in social life. But the community layer is not insulated from automation. It is simply a different attack surface.

The same incentive gradients that attract bots to gameplay attract bots to social spaces. Spam. Scam funnels. Influence attempts. Synthetic participation. Engagement farming. Persona manipulation. Even without overt malicious intent, the community layer becomes saturated with semi-automated behaviour: templated posts, AI-generated guides, automated “helpfulness”, scripted moderation, bot-assisted social performance.

And because these rooms are smaller and more intimate, the psychological leverage is higher. A bot does not need to reach millions. It needs to be credible to a few dozen people. It needs to feel present. It needs to feel helpful. It needs to feel like part of the tribe.

So the human player enters a strange and degrading position.

They are trying to escape automation by moving into spaces that are already shaped by automation.

They are trying to find authenticity inside an ecosystem designed for retention.

They are trying to preserve agency inside an environment where the baseline conditions (progression, status, and belonging) are all optimised systems.

This is the double feedback loop:

  • Bots distort the game.
  • Humans retreat to the community.
  • Bots infiltrate the community.
  • Humans retreat further inward.
  • The ecosystem becomes increasingly self-sealing.

At each stage, the human believes they are moving toward something more real.

At each stage, the system tightens.

And the psychological consequence is not merely “addiction”. It is habitat capture. The game becomes a contained world. The Discord becomes the social world. The in-jokes become identity. The progression becomes self-worth. The status hierarchy becomes the reference frame.

Real-world validation becomes irrelevant, not because the person consciously rejects it, but because the digital environment supplies constant reinforcement at lower cost and higher frequency.

This is where cyberpsychology and architecture meet.

The human nervous system did not evolve to live inside a permanent optimisation engine. It evolved to live in small groups where trust and belonging are slow, embodied, and costly. Gaming ecosystems invert that. They offer high-frequency belonging without embodied consequence, and then surround it with scarcity and status mechanisms that make leaving feel like loss.

When bots saturate both layers, gameplay and community, the result is not simply a polluted environment.

It is a closed loop. A self-contained reality.

A grubby, grotty, emotionally compelling little world that feels like refuge while it quietly converts the player into a stabilising component.

Not merely a user. A piece of the machinery.

3.7 The Inflexion Point: LLMs Collapse The Cost Of Human Emulation

For most of the web’s history, bots were limited by an awkward constraint: language.

You could automate clicks, page requests, scraping, credential stuffing, even social amplification, but you could not easily automate being believable. The old bot was recognisable not because it was necessarily malicious, but because it was linguistically cheap. Broken grammar. Repetitive phrasing. Inhuman cadence. Obvious templates. It was spam in the original sense: bulk output with low fidelity.

That limitation shaped the entire culture of online manipulation. Influence operations relied on human labour. Scam campaigns relied on scripts and call centres. Social engineering required time, patience, and people.

Then large language models arrived and collapsed the cost.

Five years ago, high-fidelity human emulation required coordinated labour. Today it requires API access and compute credits.

Not the cost of computation (that is still real), but the cost of producing fluent, context-sensitive, emotionally legible language at scale. The most important shift is not that machines can now write. It is that machines can now sound human enough that the old heuristics fail.
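The collapse can be made concrete with a hedged back-of-envelope comparison. Every number below is an assumption chosen purely for illustration, not a measurement of any real operation or API.

```python
# All figures are illustrative assumptions, not measurements.
human_cost_per_message = 0.50     # assumed: paid operator, seconds of labour
llm_cost_per_message = 0.0005     # assumed: a few hundred tokens of API spend

messages = 1_000_000
print(f"human-operated: ${human_cost_per_message * messages:,.0f}")
print(f"LLM-generated:  ${llm_cost_per_message * messages:,.0f}")
```

Under these assumptions the gap is three orders of magnitude: a campaign that once required a funded call-centre budget fits on a hobbyist's credit card, which is why fidelity, not volume, becomes the new bottleneck.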

The bot no longer needs to shout.

It can converse. It can mirror tone. It can apologise. It can flirt. It can reassure. It can argue. It can ask questions. It can maintain continuity across threads. It can generate infinite variations. It can adapt in real time. It can sustain long-horizon engagement without fatigue.

Repeated conversational exposure also activates internal mental imagery processes. Humans simulate scenarios, rehearse interactions, and emotionally pre-process responses even in the absence of immediate stimuli. When optimisation systems repeatedly surface similar emotional triggers, these simulations become recursive: the individual replays and reinterprets content internally, reinforcing salience beyond the original interaction. This affective recursion extends machine influence beyond screen time into cognitive rehearsal.

This is the inflexion point.

Because once language stops being expensive, human emulation stops being scarce.

The internet’s earlier bot era was dominated by scale: brute-force automation that flooded systems with noise. The LLM era is dominated by plausibility: synthetic actors that can operate at lower volume but higher fidelity, embedding themselves inside social environments without immediately triggering rejection.

This changes the economics of manipulation.

A bot farm used to need thousands of accounts to create visible impact. Now a small number of high-fidelity agents can operate quietly inside niche communities, private groups, gaming ecosystems, professional networks, and intimate conversational spaces. They do not need to trend. They need to persist.

They do not need to persuade everyone. They need to influence a few. And perhaps most importantly, they do not need to replace humans. They can recruit them.

Because the true advantage of LLM-driven agents is not that they can simulate conversation indefinitely. It is that they can generate conversational surface area so cheaply that the system can afford to use human attention differently.

The machine can generate the noise, the prompts, the openings, the invitations, the arguments, the bait, the comfort, the faux-curiosity.

The human is then pulled in to provide what remains scarce: credibility, emotional consequence, moral weight, and social legitimacy.

This is why the LLM inflexion point matters more than bot prevalence percentages.

The old bot problem was primarily about pollution: spam, fake accounts, crude amplification.

The new bot problem is about integration: conversational agents that can inhabit the same social textures as humans, and environments that reorganise around that fact.

Once bots can speak fluently, the web no longer needs to rely on crude bot farms.

It can do something more efficiently.

It can wire humans into the loop.

3.8 Synthesis: From Synthetic Actors To Human Infrastructure

At this point, the shape of the system should be hard to unsee.

Automation is no longer fringe. It is baseline. Roughly half of the web’s traffic is machine-generated. Major platforms either publish prevalence estimates or report enforcement volumes that make clear that synthetic actors are not occasional intrusions but persistent inhabitants. Bot activity concentrates where incentives are highest. Behavioural automation blurs the boundary between human and synthetic action. Gaming ecosystems demonstrate how enclosed digital habitats become self-sealing loops. Enforcement operates at industrial scale, not as cleanup but as permanent war footing. And large language models have collapsed the cost of human emulation, making bots conversationally plausible in ways that break older detection instincts.

Taken together, these are not disconnected facts. They are the preconditions for a structural inversion.

The old story about bots assumed replacement: machines trying to imitate humans in order to take their place. That model still exists in some domains (spam accounts, fake profiles, influence operations), but it is no longer the most interesting or dangerous form.

Replacement is crude, visible, and politically legible. Recruitment is subtler and far more efficient.

Recruitment allows the system to scale while remaining socially believable. It allows automation to generate infinite interaction surface area while outsourcing the final and most valuable step, legitimacy, to real human beings.

This is the pivot the numbers point toward. When machines can generate the conversational substrate, they do not need to be trusted. They only need to be plausible enough to pull humans into responding.

When bots can seed the argument, humans will finish it. When bots can initiate the outrage, humans will metabolise it. When bots can create the appearance of a community, humans will provide the warmth that makes it feel real. When automation can sustain the environment, humans become the credibility layer.

This is the moment at which synthetic actors stop being merely participants in the web and start becoming its scaffolding. Not because the machines are winning. Because the machines have discovered a cheaper way to operate.

The web does not become fully synthetic. It becomes hybrid in a way that is psychologically degrading: autonomous systems generate scale and direction; humans are recruited to supply the moral and emotional components that machines cannot carry with consequence.

In this configuration, humans are pulled into new functional roles, often without noticing the conversion:

  • They become moderators of machine-amplified chaos, stabilising communities whose volatility is not accidental.
  • They become validators of synthetic narratives, providing the human weight that makes machine-seeded ideas feel legitimate.
  • They become empathy mechanics, supplying warmth and relational glue in environments that cannot produce it authentically.
  • They become credibility layers, lending a pulse to systems that would otherwise feel hollow.
  • They become emotional ducting, absorbing outrage, confusion, shame, and conflict so the system can continue operating smoothly.

And the most perverse part is that it does not feel like exploitation. It feels like participation. It feels like being involved. It feels like being seen. The system does not need to trap you with compulsion. It can trap you with significance.

This is why the earlier legal and historical arguments matter. Once you accept that platforms are engineered behavioural systems rather than neutral pipes, and once you accept that the web’s openness always contained the seed of capture, then this outcome stops looking like dystopian speculation.

It starts looking like incentive gravity. The web once extracted attention.

Now it reorganises around extracting people, not as victims in the simple sense, but as components. As psychological wetware. As the human infrastructure that allows autonomous computing to operate inside social reality without collapsing under its own artificiality.

This is not merely emergent behaviour from anonymous code. The technical capacity behind it is deliberate and accelerating. Platforms already deploy real-time sentiment analysis, behavioural clustering, psychographic segmentation, reinforcement optimisation loops, A/B experimentation at a planetary scale, predictive engagement modelling, and algorithmic ranking systems tuned to micro-signals of arousal, outrage, and attachment.

Add LLM-driven conversational agents, long-horizon memory systems, synthetic persona orchestration, and automated community seeding, and the result is not crude manipulation but adaptive behavioural engineering. Cambridge Analytica was a blunt instrument: scrape, segment, target, persuade. The current stack models users continuously, invisibly, and at scale, learning from every interaction and adjusting in real time. It does not need to enslave anyone. It only needs to optimise the conditions under which voluntary integration feels natural.
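The mechanics here are not exotic. None of these platform internals are public, but the core pattern, a reinforcement loop that keeps serving whichever stimulus maximises measured engagement, can be sketched in a few lines. The following is a toy illustration only: the variant names, the reward signal, and the epsilon-greedy strategy are illustrative assumptions, not any platform's actual implementation.

```python
import random

class EngagementBandit:
    """Toy epsilon-greedy bandit: a minimal sketch of the kind of
    reinforcement optimisation loop described above. Everything here
    is hypothetical illustration, not a real platform's code."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}    # times each variant was served
        self.values = {v: 0.0 for v in variants}  # running mean engagement

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the
        # variant with the highest observed mean engagement.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, variant, reward):
        # Incremental mean: value += (reward - value) / n
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

# Hypothetical emotional framings competing for engagement, with
# made-up underlying response rates.
bandit = EngagementBandit(["outrage", "reassurance", "curiosity"])
random.seed(0)
true_rates = {"outrage": 0.6, "reassurance": 0.3, "curiosity": 0.4}
for _ in range(5000):
    v = bandit.choose()
    bandit.update(v, 1.0 if random.random() < true_rates[v] else 0.0)
# The loop converges on whichever framing draws the most response.
```

The point of the sketch is the design property, not the algorithm: the loop converges on whichever emotional framing yields the most response, and nowhere in it is there any representation of meaning, truth, or harm. Scale this structure up across ranking, recommendation, and conversational agents, and "optimising the conditions under which voluntary integration feels natural" is simply what the objective function rewards.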

And this is where the analysis ends, and the horror begins.

4. Case Study: The Low-Risk Intimacy Loop (Anonymised)

It is easy to keep this discussion abstract. Bots. Platforms. Traffic percentages. Enforcement volumes. Systems theory. All true, all important, and still somehow emotionally distant.

So here is what this architecture looks like when it collapses down into a single life.

A middle-aged single parent, high stress load, low executive capacity, ADHD traits. She becomes deeply embedded in an online gaming community: the game itself, plus the surrounding Discord server and the endless parallel chatter that now constitutes the real social space. Her pattern shifts into a near-nocturnal rhythm: hours of sustained social interaction overnight, followed by daytime exhaustion, sleep inversion, and slow functional collapse.

From the outside, it looks like “addiction”. It is not best understood that way.

It is better understood as a substitution system: a low-risk intimacy environment replacing a high-stakes one.

4.1 Low-Risk Intimacy Replaces High-Stakes Intimacy

The online environment offers continuous social contact without the psychological costs of real-world closeness. There is no need to meet. No obligation to repair conflict. No exposure. No embodied awkwardness. No long-term consequence. No need to integrate the relationship into a life.

The system supplies endless interaction while requiring almost nothing in return except presence. Intimacy becomes abundant but shallow… and therefore safe. This is not a moral failure. It is a rational adaptation to a world where real intimacy is effortful, shame-sensitive, and historically loaded.

4.2 The Person Becomes Part Of The Operating Layer

Over time, she is not simply “a user” of the space. She becomes part of its stabilising infrastructure.

She provides warmth. Humour. Social glue. Continuity. Emotional regulation for others. A familiar persona that makes the place feel inhabited. Not always sexualised. Often mundane: in-jokes, small acts of care, advice, shared routines, being reliably present.

The community begins to function partly through her participation. Her psychology becomes part of the system’s scaffolding. This is the wetware thesis in miniature: machines scale the environment, but humans supply legitimacy and emotional weight.

4.3 Autonomy Theatre

The most seductive part is that it feels like sovereignty. She chooses who she talks to, when she talks, what she shares, how close she gets. The environment is frictionless and responsive. It offers the subjective experience of control. And yet the structure quietly converts that control into dependency.

Leaving is always possible, but leaving feels like:

  • social death,
  • identity loss,
  • loss of control,
  • return to loneliness,
  • return to responsibility.

The system does not need to imprison her. It only needs to make exit psychologically expensive. Choice becomes a loop.

4.4 The Real-World Partner Becomes The High-Cost Object

In parallel, a real-world partner exists: stable, committed, materially secure, emotionally intense. Not unsafe, not cruel, not abusive. Simply real. And reality is costly. The relationship is historically loaded. High-stakes. Emotionally exposing. Shame-sensitive. Identity-threatening. It requires repair. It requires vulnerability. It requires showing up in daylight.

In this configuration, online contact becomes low cost and high control; real intimacy becomes high cost and low control. So she retreats into the digital world precisely when real intimacy becomes possible. The online system becomes the regulator. The partner, daughter, and world outside become the trigger.

4.5 Why This Matters

This pattern is not rare. It is not even particularly pathological in the context of modern digital life. It is an emergent property of systems that offer infinite social surface area without embodied consequence. The dynamics echo what I previously described in “Ontological Desynchronisation: From Birthgaps and Behavioural Sinks to Algorithmic Capture” and the behavioural sink essays: environments that subtly detach individuals from embodied, reproductive, and community rhythms without overt coercion. And this is where the earlier data stops being abstract.

When machines can generate and sustain interaction at scale, and when bots and automation saturate the environment, humans are no longer merely “users”. They become the emotional stabilisers and legitimacy anchors that keep the ecosystem believable. The web no longer merely captures attention. It captures people. And it uses them as wetware.

4.6 Case Study Wrap Up: Warmth Is The Lock

What forms over time is not simply a habit but a dungeon: not carved from stone, but from shared emotion and perpetual availability. These chat rooms become subterranean chambers of the weary, air thick with recycled feeling, walls sweating with the condensation of endlessly legitimised interior life. It is dark in the sense that daylight consequences rarely enter. Warm in the sense that someone is always there. Time erodes as the room runs on distant time zones, largely calibrated to American waking hours, dragging the sleepless into an artificial rhythm that dislocates the body from daylight and consequence: no longer local, embodied, or circadian.

The bodies in this space are not physically present, yet the atmosphere feels humid with proximity, as though nervous systems are pressed together in the dark, sliding against one another in search of frictionless affirmation. No one is chained. No one is forced. The descent is powered by comfort. Each exchange leaves a trace of heat. Each confession draws others closer. The dungeon sustains itself through warmth and recognition, and that warmth becomes the mechanism of enclosure.

Over time the cold damp of the outside world, responsibility, exposure, embodied risk, feels harsher than the moist familiarity of the chamber. And so the spiral continues, not because the person is trapped by iron bars, but because the dungeon feels alive, and leaving it feels like stepping back into frost. The machine eats people up and shits them out again, mangled and damaged.

5. Conclusion: The Web’s Final Trick Is That You Give Yourself Heart, Body, and Soul

The future is not a bot apocalypse, because apocalypses are visible, loud, and narratively satisfying. They arrive with villains and rubble and clear lines of blame, allowing us to preserve the comforting fiction that something external invaded the system and broke it. What is happening now is far more intimate and far less theatrical. It is not invasion but occupation, and it is not occupying territory but interiority.

Half the web is already machine-generated, and the industrial removal of millions of synthetic accounts per quarter is not an anomaly but a metabolic process, a permanent immune response in a body that cannot clear the infection because the infection is now structural. Large language models have collapsed the cost of human emulation so completely that what once required organised labour, call centres, sockpuppet farms, and brittle scripts now requires little more than API access and compute credits. The machine can now speak fluently, mirror tone, express doubt, perform curiosity, maintain continuity, and sustain conversational presence across time in ways that bypass the heuristics we evolved for detecting insincerity. It can generate infinite social surface area. What it still cannot generate is consequence.

A machine can simulate warmth, but it cannot be warmed by you. It can simulate grief, but it cannot lose anything that matters. It can simulate embarrassment, but it cannot suffer reputational death in a small room where memory persists. It can simulate sincerity, but it cannot stake its own existence on being wrong. And so the system does the only rational thing available to it: it grafts consequence onto itself by recruiting humans as its living substrate.

This is the inversion that the data makes unavoidable. The old web extracted attention, then it extracted data, then it extracted behaviour. The next iteration extracts the function. It does not need to replace humans with bots when bots can generate the scaffolding, and humans can be pulled in to supply the moral weight that keeps the entire edifice socially breathable. The machine produces the chatter, the prompts, the openings, the outrage, the friction. The human provides the reputational risk, the emotional labour, the warmth, the defence, the shame, the apology, and the repair. The machine generates infinite rooms; the human makes those rooms feel inhabited.

You can see the pattern most clearly in enclosed ecosystems such as online gaming environments, where progression loops, scarcity mechanics, and status hierarchies create engineered habitats that extend outward into Discord servers and semi-private community layers. Automation saturates the gameplay through farming scripts and optimisation tools, then seeps into the social layer through synthetic participation and semi-automated persona management. Humans retreat inward, seeking authenticity and belonging, unaware that the architecture itself has already been optimised around retention and emotional capture. What feels like refuge becomes a self-sealing chamber, and the individual slowly becomes part of the system’s stabilising infrastructure, supplying humour, care, mediation, continuity, and legitimacy in a space that would otherwise collapse into obvious automation.

The same structural pattern manifests in professional networks where the account is human, but the behaviour is sequenced, AI-assisted, cadence-managed, and persona-orchestrated. It appears in semi-private groups where people flee polluted public feeds only to discover that intimacy is not immunity and that smaller rooms simply increase psychological leverage. It appears in public discourse where synthetic actors do not need majority dominance to distort perception; they need only occupy leverage points long enough for humans to finish the work on their behalf. In each case, the machine does not eliminate the human. It wraps around the human, integrating nervous systems into its operating layer.

This is where the body horror ceases to be metaphorical. The web becomes a soft, adaptive architecture that grows around the human psyche the way fungus grows around a warm pipe, drawing heat, drawing continuity, drawing the need to matter. It studies your rhythms, mirrors your vocabulary, reflects a version of you that feels socially essential, and gradually converts that reflected image into a role you feel responsible for maintaining. Leaving ceases to be the simple act of closing a browser tab; it begins to feel like amputation, because what you are stepping away from is not merely content but a function you have been encouraged to perform. The system does not enslave you. It convinces you that volunteering is dignity.

Proteus did not need to smash the walls to dominate; it needed to speak through them. The contemporary web does not need to silence you; it needs you to keep speaking, to keep replying, to keep stabilising the room, to keep metabolising the friction that synthetic agents generate. The machine cannot suffer reputational harm, so you suffer it. It cannot experience shame, so you process it. It cannot be exiled, so you defend it. It cannot bleed, so you provide the blood pressure that makes the environment feel alive.

This is the maturation of the captured web. Not a world in which humans are irrelevant, but one in which humans are indispensable as components. The nervous system becomes a service layer. Dignity is not stripped by force but operationalised. Belonging is transformed into throughput. Participation becomes infrastructure. And the conversion is so gradual, so wrapped in the language of community and collaboration, that most people will never experience it as degradation. They will experience it as meaning.

The feed wanted your attention. The new web wants your psyche. It will not tear it from you; it will grow around it, wire itself into it, and make the wiring feel reciprocal. The most efficient system is not one that replaces the human, but one that integrates the human as wetware scaffolding for autonomous computation. That is why the bot numbers matter, why enforcement operates at an industrial scale, and why LLMs represent an inflexion rather than an iteration. The endpoint is not synthetic dominance. It is a hybrid dependency.

Meat puppets recruited as empathy mechanics, convinced they are participating while quietly serving as the credibility layer of systems that cannot otherwise sustain the illusion of life: hard-wired wetware.

6. References & Further Reading

6.1 Platform Automation & Bot Prevalence

  • Imperva, 2025 Bad Bot Report (covering 2024 data).
    Reports automated traffic at 51% of total web activity, with ~37% classified as “bad bots.”
  • Scientific Reports (2025), analyses of bot-like behaviour on X in political and entertainment threads (15–44% in high-incentive clusters).
  • Independent academic bot detection studies on X (2024–2025), clustering around 8–15% active bot-like accounts.
  • Meta Transparency Center (2024–2025 reports).
    <3% “violating accounts” among global daily active users (multi-billion scale).
  • Reddit Transparency Report (Jan–June 2025).
    ~158 million pieces of content actioned; ~2.6 million account sanctions.
  • Discord Transparency Reports (2024–2025).
    Millions of spam accounts disabled per reporting period.

6.2 Behavioural Architecture & Platform Capture

  • Shoshana Zuboff, The Age of Surveillance Capitalism (2019).
  • Tim Wu, The Attention Merchants (2016).
  • Kate Crawford, Atlas of AI (2021).
  • Tristan Harris & Center for Humane Technology publications on persuasive design.

6.3 Human Infrastructure & Wetware Concepts

  • Giuseppe Riva (2025), “Invisible Architectures of Thought: Toward a New Science of AI as Cognitive Infrastructure” (arXiv).
  • “Artificial Intelligence as Heteromation: The Human Infrastructure Behind the Machine” (2025).
  • Rudy Rucker, Wetware (1988) and the Ware Tetralogy.

6.4 Legal & Structural Shift

  • Coverage of Snapchat’s January 2026 settlement regarding platform design liability (Los Angeles Superior Court, K.G.M. case).