We (also) need to talk about relationships
AI is quietly but surely redefining friendship, intimacy, and sexuality for a generation
Preface: I started working on this Substack on Sunday, and had just put the finishing touches on it and sent it to my editor on Tuesday when I came across this Twitter post by @arikuschnir with a video short titled “We can talk!” that he created with Veo 3, Google’s latest AI filmmaking tool.
I’m going to remember what it felt like to watch that video for the rest of my life. The goosebumps, the hair standing up on the back of my neck. It’s… absolutely wild and intense and frankly a bit overwhelming.
I went back and forth about whether to re-write this whole Substack, as it makes one of my central predictions about the future out of date. It seems that the future is, in fact, here.
But I spent a lot of time on this, and despite my prolific use of the em dash, I’m sorry to say that I have largely failed to successfully use AI tools to emulate my writing style. If I’m being honest, it’s also a cheeky opportunity for me to get in another “I told you so!”
I probably will end up writing something dedicated to AI video, a follow-up to my piece from February about the stunning progress in synthetic media, but there’s nothing I could write that substitutes for watching the demos yourself. An AI-generated video is worth (probably a lot more than) 1,000 words…
Seriously, before reading this piece, take a couple minutes to watch some of the Veo 3 demos. Like this footage from a car show. Or this YouTuber playing a video game.
It’s worth noting that Google has been intentional about building explicit limits into the use of its Gemini model for teens (it’s the only model to receive a “low” risk score from Common Sense Media), and Veo 3 is neither designed nor marketed for use with AI companions. But there are plenty of other GenAI tools, like OmniHuman, HailuoAI, and Goku, that aren’t focused on safety. The cat is out of the bag.
I won’t open every single Substack with a movie reference, but it’s impossible to ignore the relevance of Her to a discussion of the impact of AI on human relationships. The 2013 movie is a startlingly prescient depiction of hyper-realistic AI girlfriends and boyfriends. Under our noses, these synthetic companions are becoming central to our lives.
Directed by Spike Jonze, Her follows Theodore Twombly, played by Joaquin Phoenix, a lonely professional letter writer in Los Angeles who purchases a new AI companion system that introduces itself as “Samantha,” voiced by Scarlett Johansson. Designed to learn and adapt, Samantha quickly becomes an engaging conversational partner—curious, funny, and deeply attentive. As their interactions grow more intimate, Theodore and Samantha develop a genuine romantic relationship.
Spoiler alert: Theodore turns out alright. The film ends happily with Samantha and all of the AI companion systems leaving the physical world behind, transcending into a god-like collective consciousness. Theodore decides to focus on human relationships and rekindles things with his ex-wife.
The film left me feeling deeply unsettled. If you set aside the literal use of deus ex machina at the end, Her is now becoming our reality.
From ‘emerging technology’ to present-day capabilities
Americans have had broad access to chatbots since late 2022. One of the first stories about LLMs to go viral was Kevin Roose’s bizarre interaction with Microsoft’s Bing AI tool. Roose recounted his experience talking to the chatbot, a persona that identified itself as ‘Sydney,’ which he described as less like a search tool and “more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” Roose wrote:
As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
You can read the full transcript of his conversation here.
Roose was the first journalist to recount the experience of being (non-consensually) romantically pursued by a chatbot, and many have followed. LLMs are highly effective storytellers, especially when taking on a first-person persona, and millions of people have explored whether that capability could extend to deeply personal friendships and romantic relationships.
Text conversations have proven more than enough to hook people (I’ll get into the extent and implications of that later), but however cool chatbots may be, static text chats are still a far cry from the fluid, lifelike conversations depicted in Her.
Enter Sesame, a company that recently emerged from stealth mode with funding from Andreessen Horowitz, Spark Capital, and Matrix Partners. Oculus co-founder and former CEO Brendan Iribe runs the company alongside senior execs from Meta Reality Labs and Ubiquity6. Sesame debuted two voice demos that showcase just how close we are to realistic conversational AI that sounds human, with emotional inflections, natural pauses, and other subtleties that drove the senior editor of PCWorld to write, “I was so freaked out by talking to this AI that I had to leave.”
Sesame may still be rough around the edges, but capital is flowing into a growing cadre of companies working tirelessly to improve AI-generated voice capabilities. Companies like OpenAI and ElevenLabs have debuted their own synthetic voice tools, and Microsoft recently teased a key milestone in the development of its VALL-E 2 text-to-speech generator, which is supposedly so convincing that it cannot be safely released to the public. This is a big market, $3Bn today by some estimates, and set to grow exponentially to $40Bn by 2032.
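To put “exponentially” in rough numbers: treating the $3Bn figure as a 2025 baseline (an assumption on my part; the estimate doesn’t specify its base year), reaching $40Bn by 2032 implies a compound annual growth rate of about 45%:

$$\left(\frac{40}{3}\right)^{1/7} - 1 \approx 0.45$$

In other words, these estimates have the synthetic voice market growing by nearly half, every year, for seven straight years.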
That’s all to say, we’re rapidly approaching a world where anyone can boot up their computer and be greeted by their own version of Scarlett Johansson’s soothing voice asking, “how was your day, honey?”
AI companions are already mainstream
People certainly aren’t waiting around for synthetic voice to build relationships with AI companions. Over the past two years, talking to AI chatbots has become one of the most popular things to do online.
This isn’t just casual engagement—people are forming incredibly deep, intimate relationships with AI chatbots. Replika markets itself as “the AI companion who cares: Always here to listen and talk.” Users can interact with their Replika for free, with paid features including voice calls, virtual reality interactions, and the ability to define the relationship type—such as “friend,” “partner,” “spouse,” or “mentor.” Users can even exchange vows and role play married life.

In 2023, the Italian Data Protection Authority targeted Replika for being a risk to children, stating in a directive that “first and foremost, [children] are served replies which are absolutely inappropriate to their age. During account creation, the platform merely requests a user’s name, email account and gender.”
Shortly after, users began noticing changes to their Replikas. In response to the Italian directive, the company globally removed its erotic role-play (ERP) feature, a subscription component that had previously allowed users to engage in sexual interactions like flirting and “sexting” with their AI companions. Before this update, the AI would reciprocate such advances; but afterward, Replikas began abruptly shutting down attempts to sext with phrases like “let’s change the subject.” Users were suddenly being curved by their own AI.
The impact was severe. Many were caught off guard, experiencing what they described as “sudden sexual rejection” and “heartbreak.” Online communities, particularly on Reddit, were flooded with users expressing their distress, with some reporting being in crisis. The emotional fallout was so intense that Reddit moderators felt compelled to post links to suicide prevention hotlines. Research analyzing Reddit posts before and after the ERP removal showed a significant spike in negative sentiment, with expressions of sadness, anger, fear, and disgust becoming much more prevalent. There was also a notable increase in posts mentioning mental health concerns, as users grappled with the perceived loss of their AI companion's established personality and the intimate connection they had formed.
Replika’s traumatic experience revoking ERP shows that people are developing relationships with AI companions that mirror many of the dynamics typical of human relationships.
Replika and similar platforms have quietly become some of the most trafficked destinations on the web. The numbers are staggering. Replika reports 30 million users (with active users “in the millions”). Character.ai, another AI companion platform, reports 28 million monthly active users. JanitorAI, one of the most flexible and advanced AI sexting chatbot platforms, has amassed 1.6 million monthly active users. For context, regular-season NBA games garner about 1.5 million viewers.
The popularity of AI companions comes alongside changing attitudes; according to one study, 25% of young adults believe AI partners could eventually replace traditional romantic relationships.
What does this mean for young people?
The implications of AI companions replacing human relationships are no doubt profound, in particular for younger generations who are still developing their social and emotional skills. While the party line seems to be that AI companions will help to solve loneliness and give people more outlets for connection, I believe they also risk undermining the process of learning how to navigate the complexities of human relationships if we aren’t intentional.
Humans are inherently messy, frustrating, unpredictable, and at times difficult and outright hostile. There’s no escaping it, and efforts to shield young people from this reality simply postpone the formative experiences that everyone will have one way or another. When you tell a joke to a friend and it falls flat, you learn to read the room better next time (and, maybe, come up with a better joke!)

When someone rejects your attempt to ask them out on a date, you develop resilience. These small social failures and recoveries are crucial to the process of growing up.
In-depth research on the topic is still in its early days, but initial studies evaluating the impact of AI companions are worrying.
Last year, the Parkview Center for Research and Innovation published a study of parents and children and found that participants reported “moderate attachment-like behavior,” concluding that higher frequency of engagement with AI companions is associated with higher levels of attachment-like behaviors.
A University of Cambridge study of AI chatbots exposed potential risks in interactions between AI and children, finding that despite LLMs’ remarkable language abilities, they handle the abstract, emotional aspects of conversation poorly. This “empathy gap” is particularly troublesome in conversations with children, “who are still developing linguistically and often use unusual speech patterns or ambiguous phrases.” The study also noted that “children are also often more inclined than adults to confide sensitive personal information.”
The University of Edinburgh evaluated children’s understanding of AI through interactions with conversational assistants (CAs) like the Alexa smart speaker, and found that “Most children overestimated the CAs' intelligence and were uncertain about the systems' feelings or agency. They also lacked accurate understanding of data privacy and security aspects, and believed it was wrong to be rude to conversational assistants.”
In March, MIT published the results of a four-week randomized, controlled, IRB-approved experiment to investigate how different modalities of AI companions (text, neutral voice, engaging voice) and conversation types (open-ended, non-personal, and personal) influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI, and problematic usage. The results showed that “while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot.”
And that’s just research focused specifically on chatbots. There is already a multitude of studies showing negative impacts from AI algorithms and “deepfake bullying,” to name just two of the other dimensions at the intersection of AI and well-being.
What does the future hold?
We’re still in the early innings of a decades-long technology cycle. Based on the trajectory of the technology and our knowledge of how past innovations have played out, here are two informed predictions:
AI companions become central to many people’s social lives
This is more description than prediction. It’s worth underscoring that we are already on a path towards a world where a significant component of our social lives involves engaging with AI personas in some form or another. But once again, it’s early innings. Facebook clocked 20 million users in 2007. Today, the platform has more than 3 billion users. No one knows what the growth curve will look like for AI companions, but even conservative observers would be hard pressed to argue for anything short of significant increases in uptake. Depending on your perspective, that could be worrying or exciting.
Meta CEO Mark Zuckerberg recently gushed about the potential for AI to fight the loneliness epidemic in an interview with podcaster Dwarkesh Patel:
“There's the stat that I always think is crazy, the average American, I think, has fewer than three friends, and the average person has demand for meaningfully more, I think it's like 15 friends or something, right? The average person wants more connectivity, connection, than they have.”

Whether or not 80% of our friends ultimately end up being AI, it’s telling when the CEO of the largest social network makes it clear that fostering relationships with AI isn’t just a potential impact but a stated goal for his company. To that end, Meta recently rolled out an array of AI products, including a tool that helps users create AI characters on Instagram and Facebook. The company’s VP of product for generative AI told the Financial Times, “we expect these AIs to actually exist on our platforms in the same way that [human] accounts do.”
AI pornography blazes the trail
Talking about porn may not be “safe for work,” but we can’t ignore the potential for LLMs, AI companions, synthetic media, and VR/AR to revolutionize the lurid corners of the internet. After all, pornography is one of the most popular uses of the internet today, making up an estimated 13-20% of all web searches. Pornhub, the largest aggregator of pornography, is among the top ten most popular websites in the world. According to some estimates, the adult entertainment market is valued at nearly $290Bn, more than 10X bigger than the global music industry. Consumption of pornography has also markedly increased over the last two decades, with an estimated 300% increase between 2004 and 2016.
Porn has already been shown to negatively impact sexual development and intimate relationships, and it’s about to get a lot more addictive.
The adult entertainment industry has historically been a first mover in adopting frontier technologies. Thirty years ago, a Dutch porn company called Red Light District pioneered one of the first-ever internet-based video streaming services, blazing the trail for companies like CNN, YouTube, and Netflix that would follow years later. Before major online ad networks existed, internet porn producers spearheaded the development of online payment options, including the first real-time credit card verification software, which was initially used to sell the Pamela Anderson and Tommy Lee sex tape.
AI is ripe for adoption in generating sexually explicit content. Even though companies like Google, OpenAI, and Anthropic work hard to ensure their products are “safe for work,” there are plenty of open-source models that unlock the wild world of AI porn. Chatbots came first. OnlyFans creators reportedly make prolific use of AI chatbots. Janitor.ai, mentioned above, published a press release in 2023 that reads as follows:
“In the current digital landscape, if you're tired of constant NSFW filters interfering with your AI character interactions, or perhaps you're seeking an unrestricted NSFW AI chat platform that can offer a distinctive AI chat experience, Janitor AI Pro could be your ideal destination.
Introducing Janitor AI Pro, it's not just another chat platform, but the most popular and extensively used NSFW AI chat platform. Also, it's the fastest-growing one in the industry, astonishingly amassing 1 million users in just 11 days.
Janitor AI Pro distinguishes itself from other AI chatbots with its distinctive features and dedicated support for NSFW AI chats. Breaking away from the traditional security consensus, it has rapidly gained popularity among many users.”

Synthetic media is already well on its way to mature adoption. VR/AR is right around the corner. There are still a number of challenges preventing widespread adoption (just ask the Apple Vision Pro team), from aesthetics to comfort to hardware cost, but companies are plowing capital into solving these issues (the global AR market is estimated to grow by $460Bn over the next few years).
Stepping back, it’s clear that the convergence of these technologies has the makings of an incredibly engrossing and addictive cocktail just a few clicks away, in particular for the 25% of American men aged 15 to 34 who report being lonely a lot of the time.
In the near future, people will be able to eschew the trials and tribulations of the dating scene in favor of hyper-realistic AI pornography depicting (and perhaps inspiring) their precise sexual tastes and fantasies. And for those who can afford it, in virtual reality. And I don’t have the willpower to go down the sex doll rabbit hole, but let’s just say it’s another burgeoning industry that is surely paying close attention to the AI space.
Doing the research for this section was icky, and I don’t recommend delving into the NSFW communities on Reddit, where a lot of this information lives, out of sight of mainstream news. That’s why this space is a blind spot: normal people, that is, the parents and educators responsible for guiding the next generation through this mess, are certain to be caught off guard.
Our imperative: championing humanity
Like the jobs issue, we are on a collision course with AI and its impact on social-emotional well-being. Burying our heads in the sand until it becomes a crisis obviously puts us in the worst possible position to mitigate the downsides. Unlike the risk to jobs, I do think there is a fair amount of media coverage and “thought leadership” on the topic of AI and relationships, but awareness alone won’t be enough.
We learned the hard way with social media, which certainly generated copious amounts of navel-gazing, research, and think tanks dedicated to understanding the issue, but insufficient meaningful action to change behaviors.
The pace of technology is unknowable, but at this point we should assume there’s precious little time to mobilize at the speed and scale required.
So, what’s to be done?
We can’t effectively dictate to parents what they should or shouldn’t let their kids do at home, even if prohibition were effective (there’s lots of research that suggests it isn’t).
There’s a role for regulation in some form to protect children. This week, President Trump signed the bipartisan TAKE IT DOWN Act, banning nonconsensual online publication of sexually explicit images and videos, including AI-generated ones. Regulatory policy and enforcement isn’t my area of expertise, but I’m not confident that we can rely on regulation alone to solve this problem. After all, child pornography has long been illegal, but 70 million images and videos sexually exploiting children were reported to law enforcement in 2019 alone.
I believe the education system, while admittedly a challenging and slow-moving system through which to drive change, is our best shot. School is the one place where we can ensure that students are having the real-life human experiences that build the critical skills and dispositions they will need. Yes, that means expanding and updating “digital literacy” to cover AI literacy and give students the agency and critical mindsets to navigate the age of AI. But it also means centering project-based learning, group activities, and other engaging learning practices in the classroom to build durable social-emotional skills like critical thinking, communication, collaboration, and self-regulation alongside core subject competencies.
This effort won’t be successful if we aren’t intentional about empowering youth to be the protagonists, as opposed to the victims, in this story. Kids today are already miles ahead of us in understanding what is happening on the bleeding edge of technology. They are already broadly aware of the prevalence of deepfakes, likely to a far greater extent than their parents comprehend. And they would be right to reject patronizing attempts to mandate behavior if those attempts don’t honor them with the agency to help us chart the way forward.
Empowering young people isn’t a pipe dream. Michelle Culver, the former Executive Director of the Reinvention Lab at Teach for America, last year founded The Rithm Project to invest in youth-led research on AI’s impact on social connection and activate leadership through powerful live experiences that drive dialogue, urgency, and clarity in the field. aiEDU is partnering with Rithm’s team to co-develop a set of classroom activities to help teachers integrate exploration and discussion about AI companions into core subjects like English and social studies.
The Rithm Project offers a helpful metaphor for distinguishing between healthy and harmful engagement: is AI a vitamin or a Vicodin? Used well, AI companions can supplement a young person’s social ecosystem and build skills like self-reflection and communication. Used in excess or as a replacement, they risk dependency, social skill atrophy, and deepened isolation. Helping young people draw that line—and designing tools that keep them on the healthy side of it—is essential.
Young people have a complicated, challenging road ahead, and they are also our best hope if we want to chart a course to a future where AI is a force for good.