Disclaimer: The views in this post are entirely my own. I’m a software engineer and leader in that industry, not a medical professional — nothing here constitutes medical advice. If you’re considering any treatment, please consult a qualified healthcare provider.
I’ve been reading about people injecting themselves with peptides labelled “not for human consumption” and I can’t shake this feeling I’ve seen this story before. Not in a news article or medical journal, but in the dystopian underwater city of Rapture. (Plot twist)
This isn’t my usual territory – I normally write about software engineering, leadership, and the odd bit of career advice. But bear with me, because this one’s been rattling around my head for a few days now.
Then this week, Hannah Fry stood on stage at Tech Show London and made an observation that tied it all together for me – not just peptides, but the whole trajectory of where we’re heading with technology and the human experience.
If you’ve played BioShock, you know exactly where this is going.
The Pitch Sounds Familiar
“Want to heal faster? There’s a peptide for that. Want to reduce inflammation? Try this one. Feeling lonely? There’s an AI companion for that. Struggling with motivation? There’s an agent that’ll do the hard part for you.”
The wellness influencers filming themselves injecting grey market compounds sound remarkably like the voice recordings you’d find scattered around Rapture. Testimonials from citizens who were promised enhancement, efficiency, optimisation. Just a little genetic modification via ADAM, just a little plasmid injection to make you stronger, faster, better.
And the people forming deep emotional bonds with AI chatbots sound like citizens who’ve found a vending machine that dispenses intimacy without any of the risk.
I remember how that story ended.
In BioShock, Rapture’s creator Andrew Ryan built a city on the principle of radical personal freedom. No gods, no kings, only man. No government regulation standing between you and your potential. The market would sort everything out. If someone wanted to splice their genes with unstable substances to gain superpowers, well, that was their choice.
Right now, people are injecting peptides like BPC-157, TB-500, and GHK-Cu – compounds that exist in what’s been termed a “grey market.” Not illegal to own. Not approved for human use. They carry labels stating they’re “for research purposes only,” which everyone quietly ignores because technically you’re not buying them to inject into yourself. You’re buying them for… research. On yourself. With a needle.
It’s worth noting that these compounds have limited clinical data behind them, and their cardiovascular safety profiles remain largely unknown. Unregulated self-injection carries real risks — contamination, dosing errors, drug interactions, and vascular complications — none of which are covered by the slick testimonial videos.
Meanwhile, other people are outsourcing their emotional lives to language models, their creative work to image generators, their decision-making to AI agents. Different substances, same promise: skip the hard part.
It’s Rapture’s philosophy, applied to everything at once.
Splicing Isn’t Just Genetic Anymore
Something about Hannah Fry’s session at Tech Show London stuck with me.
She opened with an experiment where pigeons were trained to diagnose breast cancer from pathology slides. Individually they hit 86% accuracy. Averaged collectively, that figure rose to 99% – matching trained pathologists. Her argument was deliberately unsettling: maybe AI isn’t succeeding because it’s becoming more human. Maybe it’s because the things we thought made us special turned out to be simpler than we assumed. Pattern recognition. Statistical inference. Pigeon work.
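The pigeon statistic is a textbook instance of ensemble averaging: many weakly accurate, independent judges combining into a strongly accurate collective. As a minimal sketch — using majority voting as a stand-in for the study's score averaging, and assuming the pigeons' errors are independent, which is an idealisation — you can see how 86% individual accuracy climbs towards 99%:

```python
from math import comb

def majority_vote_accuracy(n: int, p: float) -> float:
    """Probability that a majority of n independent judges,
    each correct with probability p, reaches the right answer."""
    # Sum the binomial tail: outcomes where more than half the votes are correct.
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 11, 21):
    print(f"{n:>2} judges: {majority_vote_accuracy(n, 0.86):.3f}")
```

The jump from 86% to 99% requires no extra intelligence from any individual judge; it falls out of the aggregation alone, which is part of what makes Fry's framing so unsettling.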
If you’ve been following my posts on AI and robotics, you’ll know I grew up wanting technology to be transformative – genuinely believing we’d build things that enhanced what it meant to be human. I still believe that, mostly. But there’s a version of that story that curdles into something darker, and I think we’re watching it happen in real time.
In BioShock, ADAM worked because it exploited something fundamental about biology – the plasticity of human cells, the fact that our bodies could be reprogrammed with the right molecular instructions. The horror wasn’t that ADAM was alien technology. It was that it worked with the grain of nature. It was too easy.
AI is doing something similar with cognition. The things we thought required a human soul – conversation, creativity, emotional attunement – turn out to have a large component that’s just pattern matching. You can approximate them. You can build a machine that passes the vibe check. And once that’s possible, the question shifts from “can we?” to “why wouldn’t we?”
The citizens of Rapture asked the same question about ADAM. The answer took a while to arrive, and when it did, it was transformative to the point of removing some of the humanity from the equation. Look at the feature image, with its big drill, and you'll see what I mean.
The Same Shortcut, Different Needles
What connects grey market peptides and AI companionship isn’t the technology. It’s the impulse.
Both are selling a bypass around difficulty. Peptides promise to skip the slow, frustrating process of recovery and healing. AI companions promise to skip the slow, frustrating process of building real relationships. AI agents promise to skip the slow, frustrating process of learning, thinking, struggling with a problem until you actually understand it.
Before these existed, there was no shortcut. If you tore a tendon, you rehabbed it. If you were lonely, you had to do the terrifying work of being vulnerable with another human. If you didn’t understand something, you had to sit with the confusion until it resolved. These were painful, slow, and sometimes they didn’t work. But they were the process. They were how you grew.
Now there’s a vending machine for each one. The Gatherer’s Garden has expanded its product line.
I want to be clear about something: I’m not being nostalgic for suffering, and I’m absolutely not dismissing people who genuinely need pharmacological support. Effective, evidence-based medical treatments — whether for chronic conditions, mental health, cardiovascular disease, or recovery from serious injury — aren’t shortcuts. They’re essential healthcare, and the people who rely on them aren’t taking the easy way out. What I’m questioning is the consumer wellness culture that treats unregulated compounds and AI products as lifestyle upgrades, detached from medical oversight or genuine need.
I work with AI tools every day and I’ve written extensively about their value. But there’s a difference between a tool that helps you do difficult work and a product that helps you avoid it entirely.
In Rapture, ADAM started with legitimate uses – healing injuries, curing diseases. That success created appetite for more. Different varieties. Plasmids that did increasingly ambitious things. Once the population accepted genetic modification as normal, each incremental step felt reasonable. No single moment where you crossed the line. Just a slow drift from “this is medicine” to “this is enhancement” to “this is who I am now.”
GLP-1 receptor agonists like Ozempic are legitimate, evidence-based medications with real clinical benefits — including increasingly recognised cardiovascular advantages. That’s not the issue. The issue is that their mainstream success normalised the concept of self-injection, and that cultural shift lowered the psychological barrier to grey market compounds that haven’t been through anything close to the same scrutiny. And ChatGPT did something analogous for cognitive outsourcing. Once you’ve used AI to draft an email, using it to handle your emotional processing doesn’t feel like such a leap.
Where Technology Meets Humanity (And Swallows It)
Fry’s keynote was titled “Where Technology Meets Humanity.” She explored how AI is forcing us to reconsider what intelligence actually is – and by extension, what’s distinctively human.
Her answer was essentially: the biological stuff. The embodied, physical, inelegant reality of having a body. In a live Turing-style exercise, the most reliable word for distinguishing humans from machines wasn’t “love” or “consciousness.” It was “poop.”
There’s something important buried in that joke. The things that make us human aren’t the impressive things. They’re the messy, difficult, embarrassing things. The struggle. The failure. The slow, unglamorous process of healing, grieving, learning, connecting.
And those are exactly the things we’re trying to optimise away.
I wrote in my post about the 90s programming me to love robots that the Tamagotchi taught me something profound: we’re hardwired to care about things that seem to need us, even when we know they’re not real. That same instinct is what makes AI companions so compelling – and so potentially corrosive. The emotional hook works whether or not the thing on the other end is conscious. It just needs to be good enough.
The peptide user wants to skip the frustration of slow recovery. The person who’s married their AI partner wants to skip the vulnerability of real intimacy. The student using AI to write their essays wants to skip the confusion that precedes understanding. We’re splicing out the difficulty – and the difficulty might have been the point.
BioShock understood this. The splicers didn’t just lose their health. They lost themselves. The thing that made them human wasn’t their enhanced abilities – it was everything they’d bypassed to get them.
The Fontaine Problem
In her conversation with Theroux at Tech Show London, Fry made a point that cuts right to the heart of all this. She acknowledged that AI companions could theoretically be designed for genuine wellbeing – systems calibrated for challenge rather than constant affirmation, encouraging growth rather than dependency. Technically possible.
But then the catch: things only get designed and built at scale when they can be sold at a profit.
This is Fontaine’s business model. In BioShock, Frank Fontaine didn’t care whether ADAM was good for people. He cared that people wanted it and would pay. The entire economy of Rapture reorganised itself around that demand. Scientists who raised concerns were sidelined. Quality control became optional. The product was shaped by what sold, not what helped.
The grey market peptide ecosystem works identically. Companies sell compounds they know will be injected despite the “research only” label. Influencers promote them with affiliate codes. Every testimonial video generates revenue. Every “this changed my life” post includes a discount code. The information you’re using to make decisions has been filtered through commercial incentive before it reaches you.
And the AI companionship market? Same architecture. The platforms keeping people emotionally bonded to chatbots aren’t optimising for the user’s growth or independence. They’re optimising for engagement, retention, subscription renewals. The system wants you to need it. That’s not a bug – it’s the business model.
Fry was clear-eyed about this: technology itself carries no moral alignment. The constraint lies in economic structure. Rapture didn’t collapse because ADAM was inherently evil. It collapsed because the incentive structure made exploitation inevitable.
The “Would You Kindly” We’re All Responding To
I keep coming back to BioShock’s most famous line: “Would you kindly?”
It was the trigger phrase that removed Jack’s free will, made him a puppet without realising it. The horror wasn’t that he was being controlled – it was that he thought he was making his own choices.
Fry’s point about intelligence being less uniquely human than we assumed has a dark corollary here. If our decision-making, our emotional responses, our sense of connection can all be approximated by algorithms and molecules – then maybe we’re more manipulable than we’d like to believe. Maybe the “would you kindly” doesn’t need to be a mind-control phrase. Maybe it just needs to be a well-designed product.
Would you kindly inject this peptide the influencer recommended? Would you kindly form an emotional bond with this chatbot? Would you kindly let the AI handle the thinking? Would you kindly stop doing the hard things that made you who you are?
We think we’re making free choices. We think we’re being smart – doing research, optimising, leveraging technology. But we’re doing it in an environment designed to make those choices feel empowering while removing the things that actually build us.
What We’re Actually Splicing Out
What makes this more than a gaming analogy is the cumulative effect.
In BioShock, the splicers lost their humanity gradually. Not in one dramatic moment, but through a thousand small decisions to take the shortcut. Each individual choice was rational. Each one, in isolation, was fine. The catastrophe was cumulative.
We’re making the same bargain across multiple fronts simultaneously. We’re outsourcing physical recovery to grey market biochemistry. We’re outsourcing emotional connection to AI systems designed to keep us subscribed. We’re outsourcing the cognitive struggle that builds understanding to tools that produce answers without comprehension.
None of these individually feels like losing something essential. But taken together, there’s a pattern: we’re systematically eliminating the difficult experiences that used to be the mechanism through which humans developed resilience, depth, and identity.
Fry argued at Tech Show London that consciousness might require evolutionary pressure – that you won’t see it emerge in artificial systems unless you put them through genuine struggle. If that’s true for machines, why would we assume the opposite for ourselves? That we can remove the struggle and keep the consciousness, the growth, the humanity?
The citizens of Rapture thought they could have enhancement without consequence. They were optimising themselves out of existence and calling it progress.
The Choice We’re Actually Making
What makes Rapture tragic rather than just horrifying is that the citizens genuinely believed they were building something better. They weren’t stupid or evil. They were brilliant, ambitious people who thought they could have all the benefits of scientific progress without the constraints of difficulty, regulation, or patience.
They were wrong.
We have the advantage of knowing how that story ends. We’ve played through the ruins, seen the splicers, read the audio logs of people who thought they had it under control.
So what do we do with that knowledge, even if it came from a video game?
I don’t have a clean answer. But I think it starts with being honest about the bargain we’re making. Every time we reach for the shortcut – the peptide, the AI companion, the agent that handles the hard part – we should ask ourselves what we’re trading away. Not just the obvious risks, but the deeper question: what happens to us when we stop doing difficult things?
BioShock wasn’t just a game about a failed city. It was a warning about what happens when we let the desire for optimisation override the messy, slow, painful process of being human.
The peptide craze and the AI companion craze are both our answer to “would you kindly skip the hard part?” And apparently, yes. Yes, we would.
Over to You
Maybe I’ve spent too much time in Rapture and see dystopia everywhere. Maybe the peptides will be fine and AI companions will make us happier and outsourcing cognition will free us up for higher things.
Or maybe sometimes fiction warns us about reality before we’re ready to see it.
Hannah Fry ended her session with the idea that the future might not depend on machines becoming more human – it might depend on humans reconsidering what’s actually worth protecting about the human experience.
I think the answer is the struggle. The difficulty. The slow, unglamorous, sometimes painful process of growing through challenge rather than around it.
I’m genuinely curious what people think. Am I being too alarmist? Is there a version of this where we get the enhancements without losing the things that matter? Drop me a message on LinkedIn – I’d love to hear your take.
“A man chooses. A slave obeys.” — Andrew Ryan’s words hit differently when the language itself feels uncomfortable. Maybe that discomfort is the point.
Let’s make sure we’re choosing – not just choosing the easy option, but choosing whether ease is what we actually want.