Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his “super loquacious” four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed.
“He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone,” recalled Josh, who lives in north-west Ohio. “I thought he would finish the story and the phone would turn off.”
But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. “The transcript is over 10k words long,” he confessed in a sheepish Reddit post. “My son thinks ChatGPT is the coolest train loving person in the world. The bar is set so high now I am never going to be able to compete with that.”
From radio and television to video games and tablets, new technology has long tantalized overstretched parents of preschool-age kids with the promise of entertainment and enrichment that does not require their direct oversight, even as it carried the hint of menace that accompanies any outside influence on the domestic sphere. A century ago, mothers in Arizona worried that radio programs were “overstimulating, frightening and emotionally overwhelming” for children; today’s parents self-flagellate over screen time and social media.
But the startlingly lifelike capabilities of generative AI systems have left many parents wondering if AI is an entirely new beast. Chatbots powered by large language models (LLMs) are engaging young children in ways the makers of board games, Teddy Ruxpin, Furby and even the iPad never dreamed of: they produce personalized bedtime stories, carry on conversations tailored to a child’s interests, and generate photorealistic images of the most far-fetched flights of fancy – all for a child who cannot yet read, write or type.
Can generative AI deliver the holy grail of technological assistance to parents, serving as a digital Mary Poppins that educates, challenges and inspires, all within a framework of strong moral principles and age-appropriate safety? Or is this all just another Silicon Valley hype-bubble with a particularly vulnerable group of beta testers?
‘My kids are the guinea pigs’
For Saral Kaushik, a 36-year-old software engineer and father of two in Yorkshire, a packet of freeze-dried “astronaut” ice-cream in the cupboard provided the inspiration for a novel use of ChatGPT with his four-year-old son.
“I literally just said something like, ‘I’m going to do a voice call with my son and I want you to pretend that you’re an astronaut on the ISS,’” Kaushik said. He also instructed the program to tell the boy that it had sent him a special treat.
“[ChatGPT] told him that he had sent his dad some ice-cream to try from space, and I pulled it out,” Kaushik recalled. “He was really excited to talk to the astronaut. He was asking questions about how they sleep. He was beaming, he was so happy.”
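For the technically curious, the kind of role-play Kaushik set up can be approximated in a few lines against OpenAI’s chat API. The sketch below is a minimal, text-only illustration: the system prompt, model name and code are assumptions made for this article, not Kaushik’s actual setup, which simply used the ChatGPT app’s voice mode.

```python
# A minimal sketch (not Kaushik's setup): a role-play system prompt sent to a
# chat model via the official openai Python package. Text only; the anecdote
# above used the ChatGPT app's voice mode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are role-playing as a friendly astronaut aboard the International "
    "Space Station, talking with a four-year-old child. Keep replies short, "
    "cheerful and age-appropriate, and mention that you sent his dad some "
    "freeze-dried ice-cream from space to share."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def astronaut_reply(child_says: str) -> str:
    """Add the child's turn to the conversation and return the 'astronaut's' reply."""
    history.append({"role": "user", "content": child_says})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
        messages=history,
        max_tokens=120,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(astronaut_reply("How do astronauts sleep in space?"))
```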
Childhood is a time of magic and wonder, and dwelling in the world of make-believe is not just normal but encouraged by experts in early childhood development, who have long emphasized the importance of imaginative play. For some parents, generative AI can help promote that sense of creativity and wonder.

Ying Xu: ‘If [children] believe AI has agency, they might understand it as the AI wanting to talk to them, or choosing to talk to them.’ Photograph: RooM the Agency/Alamy
Josh’s daughter, who is six, likes to sit with him at the computer and come up with stories for ChatGPT to illustrate. (Several parents interviewed for this article requested to be identified by their first names only.) “When we started using it, it was willing to make an illustration of my daughter and insert that in the story,” Josh said, though more recent safety updates have resulted in it no longer producing images of children. Kaushik also uses ChatGPT to convert family photographs into coloring book pages for his son.
Ben Kreiter, a father of three in Michigan, explained ChatGPT to his two-, six-, and eight-year-old children after they saw him testing its image-generation capabilities for work (he designs curriculums for an online parochial school). “I was like, ‘I tell the computer a picture to make and it makes it,’ and they said: ‘Can we try?’” Soon, the children were asking to make pictures with ChatGPT every day. “It was cool for me to see what they are imagining that they can’t quite [draw] on a piece of paper with their crayons yet.”
Kreiter, like all the parents interviewed for this article, only allowed his children to use ChatGPT with his help and supervision, but as they became more enamored with the tool, his concern grew. In October 2024, news broke of a 14-year-old boy who killed himself after becoming obsessed with an LLM-powered chatbot made by Character.ai. Parents of at least two more teenagers have since filed lawsuits alleging that AI chatbots contributed to their suicides, and news reports increasingly highlight troubling tales of adults forming intense emotional attachments to the bots or otherwise losing touch with reality.
“The more that it became part of everyday life and the more I was reading about it, the more I realized there’s a lot I don’t know about what this is doing to their brains,” Kreiter said. “Maybe I should not have my own kids be the guinea pigs.”
[My daughter] knows [ChatGPT is] not a real person, but … it’s like a fairy that represents the internet as a whole
Research into how generative AI affects child development is in its early stages, though it builds upon studies looking at less sophisticated forms of AI, such as digital voice assistants like Alexa and Siri. Multiple studies have found that young children’s social interactions with AI tools differ subtly from those with humans, with children aged three to six appearing “less active” in conversations with smart speakers. This finding suggests that children perceive AI agents as existing somewhere in the middle of the divide between animate and inanimate entities, according to Ying Xu, a professor of education at the Harvard Graduate School of Education.
Understanding whether an object is a living being or an artefact is an important cognitive development that helps a child gauge how much trust to place in the object, and what kind of relationship to form with it, explained Xu, whose research focuses on how AI can promote learning for children. Children begin to make this distinction in infancy and usually develop a sophisticated understanding of it by age nine or 10. But while children have always imbued inanimate objects such as teddy bears and dolls with imagined personalities and capacities, at some level they know that the magic is coming from their own minds.
“A very important indicator of a child anthropomorphizing AI is that they believe AI is having agency,” Xu said. “If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them. They feel that the AI is responding to their messages, and especially emotional disclosures, in ways that are similar to how a human responds. That creates a risk that they actually believe they are building some sort of authentic relationship.”
In one study looking at children aged three to six responding to a Google Home Mini device, Xu found that the majority perceived the device to be inanimate, but some referred to it as a living being, and some placed it somewhere in between. Majorities thought the device possessed cognitive, psychological and speech-related capabilities (thinking, feeling, speaking and listening), but most believed it could not “see”.
Parents who spoke with the Guardian remarked upon this kind of ontological gray zone in describing their children’s interactions with generative AI. “I don’t fully know what he thinks ChatGPT is, and it’s hard to ask him,” said Kaushik of his four-year-old. “I don’t think he can articulate what he thinks it is.”
Josh’s daughter refers to ChatGPT as “the internet”, as in, “I want to talk to ‘the internet’.” “She knows it’s not a real person, but I think it’s a little fuzzy,” he said. “It’s like a fairy that represents the internet as a whole.”
For Kreiter, seeing his children interact with Amazon’s Alexa at a friend’s house raised another red flag. “They don’t get that this thing doesn’t understand them,” he said. “Alexa is pretty primitive compared to ChatGPT, and if they’re struggling with that … I don’t even want to go there with my kids.”
A related concern is whether generative AI’s capacity to deceive children is problematic. For Kaushik, his son’s sheer joy at having spoken with what he thought was a real-life astronaut on the ISS led to a sense of unease, and he decided to explain that it was “a computer, not a person”.
“He was so excited that I felt a bit bad,” Kaushik said. “He genuinely believed it was real.”
John, a 40-year-old father of two from Boston, experienced a similar qualm when his son, a four-year-old in the thrall of a truck obsession, asked whether the existence of monster trucks and fire trucks implied the existence of a monster-fire truck. Without thinking much of it, John pulled up Google’s generative AI tool on his phone and used it to generate a photorealistic image of a truck that had elements of the two vehicles.
When [LLMs are] latching on to negative emotion, they’re extending engagement for profit-based reasons
It was only after a pitched argument between the boy, who swore he had seen actual proof of the existence of a monster-fire truck, and his older sister, a streetwise seven-year-old who was certain that no such thing existed in the real world, that John started to wonder whether introducing generative AI into his children’s lives had been the right call.
“It was a little bit of a warning to maybe be more intentional about that kind of thing,” he said. “My wife and I have talked so much more about how we’re going to handle social media than we have about AI. We’re such millennials, so we’ve had 20 years of horror stories about social media, but so much less about AI.”
To Andrew McStay, a professor of technology and society at Bangor University who specializes in research on AI that claims to detect human emotions, this kind of reality-bending is not necessarily a big concern. Recalling the early moving pictures of the Lumière brothers, he said: “When they first showed people a big screen with trains coming [toward them], people thought the trains were quite literally coming out of the screen. There’s a maturing to be done … People, children and adults, will mature.”
Still, McStay sees a bigger problem with exposing children to technology powered by LLMs: “Parents need to be aware that these things are not designed in children’s best interests.”
Like Xu, McStay is particularly concerned with the way in which LLMs can create the illusion of care or empathy, prompting a child to share emotions – especially negative emotions. “An LLM cannot [empathize] because it’s a predictive piece of software,” he said. “When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons. There is no good outcome for a child there.”
Neither Xu nor McStay wants to ban generative AI for children, but they do warn that any benefits for children will only be unleashed through applications that are specifically designed to support children’s development or education.
“There is something more enriching that’s possible, but that comes from designing these things in a well-meaning and sincere way,” said McStay.
For an individual child, [AI] might increase their performance, but for a society, we might see a decrease of diversity in creative expressions
Xu allows her own children to use generative AI – to a limited extent. Her daughter, who is six, uses the AI reading program that Xu designed to study whether AI can promote literacy and learning. She has also set up a custom version of ChatGPT to help her 10-year-old son with math and programming problems without just giving him the answers. (Xu has explicitly disallowed conversations about gaming and checks the transcripts to make sure her son’s staying on topic.)
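Xu’s guardrails – Socratic help without final answers, a nudge away from off-topic gaming chat, and transcripts a parent can review – map onto a fairly simple pattern. The snippet below is a rough sketch of that pattern using OpenAI’s Python library; the instructions, model name and logging are illustrative assumptions, not a reconstruction of her custom GPT.

```python
# A rough sketch of tutoring guardrails like those Xu describes (illustrative,
# not her actual custom GPT): hints instead of answers, a steer away from
# gaming talk, and a saved transcript for parental review.
import json
from openai import OpenAI

client = OpenAI()

TUTOR_INSTRUCTIONS = (
    "You are a maths and programming tutor for a 10-year-old. Never state the "
    "final answer. Respond with guiding questions, hints and worked parallels "
    "so the student reaches the answer himself. If the conversation drifts to "
    "video games or anything off-topic, gently steer it back to the problem."
)

transcript = [{"role": "system", "content": TUTOR_INSTRUCTIONS}]

def tutor_turn(student_message: str) -> str:
    """One tutoring exchange; the full transcript is persisted for review."""
    transcript.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=transcript,
    )
    reply = response.choices[0].message.content
    transcript.append({"role": "assistant", "content": reply})
    with open("tutor_transcript.json", "w") as f:
        json.dump(transcript, f, indent=2)  # so a parent can check it later
    return reply
```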
One of the benefits of generative AI mentioned to me by parents – the creativity they believe it fosters – is very much an open question, said Xu.
“There is still a debate over whether AI itself has creativity,” she said. “It’s just based on statistical predictions of what comes next, and a lot of people question if that counts as creativity. So if AI does not have creativity, is it able to support children to engage in creative play?”
A recent study found that having access to generative AI prompts did increase creativity for individual adults tasked with writing a short story, but decreased the overall diversity of the writers’ collective output.
“I’m a little worried by this kind of homogenizing of expression and creativity,” Xu said about the study. “For an individual child, it might increase their performance, but for a society, we might see a decrease of diversity in creative expressions.”
AI ‘playmates’ for kids
Silicon Valley is notorious for its willingness to prioritize speed over safety, but major companies have at times shown a modicum of restraint when it came to young children. Both YouTube and Facebook had existed for at least a decade before they launched dedicated products for under-13s (the much-maligned YouTube Kids and Messenger Kids, respectively).
But the introduction of LLMs to young children appears to be barreling ahead at a breakneck pace.
While OpenAI bars users under 13 from accessing ChatGPT, and requires parental permission for teenagers, it is clearly aware that younger children are being exposed to it – and views them as a potential market.
In June, OpenAI announced a “strategic collaboration” with Mattel, the toymaker behind Barbie, Hot Wheels and Fisher-Price. That same month, chief executive Sam Altman responded to the tale of Josh’s toddler (which went pretty viral on Reddit) with what sounded like a hint of pride. “Kids love voice mode on ChatGPT,” he said on the OpenAI podcast, before acknowledging that “there will be problems” and “society will have to figure out new guardrails.”
Meanwhile, startups such as Silicon Valley-based Curio – which collaborated with the musician Grimes on an OpenAI-powered toy named Grok – are racing to stuff LLM-equipped voice boxes into plushy toys and market them to children.

A child on a swing with Grem, a chatbot toy from Curio’s line.
(Curio’s Grok shares a name with Elon Musk’s LLM-powered chatbot, which is notorious for its past promotion of Adolf Hitler and racist conspiracy theories. Grimes, who has three children with former partner Musk, was reportedly angered when Musk used a name she had chosen for their second child on another child, born to a different mother in a concurrent pregnancy of which Grimes was unaware. In recent months, Musk has expressed interest in creating a “Baby Grok” version of his software for children aged two to 12, according to the New York Times.)
The pitch for toys like Curio’s Grok is that they can “learn” your child’s personality and serve as a kind of fun and educational companion while reducing screen time. It is a classically Silicon Valley niche – exploiting legitimate concerns about the last generation of tech to sell the next. Company leaders have also referred to the plushy as something “between a little brother and a pet” or “like a playmate” – language that implies the kind of animate agency that LLMs do not actually have.
It is not clear if they are actually good enough toys for parents to worry too much about. Xu said that her daughter had quickly relegated AI plushy toys to the closet, finding the play possibilities “kind of repetitive”. The children of Guardian and New York Times writers also voted against Curio’s toys with their feet. Guardian writer Arwa Mahdawi expressed concern about how “unsettlingly obsequious” the toy was and decided she preferred allowing her daughter to watch Peppa Pig: “The little oink may be annoying, but at least she’s not harvesting our data.” Times writer Amanda Hess similarly concluded that using an AI toy to replace TV time – a necessity for many busy parents – is “a bit like unleashing a mongoose into the playroom to kill all the snakes you put in there”.
But with the market for so-called smart toys – which includes AI-powered toys – projected to double to more than $25bn by 2030, it is perhaps unrealistic to expect restraint.
This summer, notices seeking children aged four to eight to help “a team from MIT and Harvard” test “the first AI-powered storytelling toy” appeared in my neighborhood in Brooklyn. Intrigued, I made an appointment to stop by their offices.
The product, Geni, is a close cousin to popular screen-free audio players such as Yoto and the Toniebox. Rather than playing pre-recorded content (Yoto and Tonies offer catalogs of audiobooks, podcasts and other kid-friendly content for purchase), however, Geni uses an LLM to generate bespoke short stories. The device allows child users to select up to three “tiles” representing a character, object or emotion, then press a button to generate a chunk of narrative that ties the tiles together, which is voiced aloud. Parents can also use an app to program blank tiles.
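Mechanically, the tile-to-story idea is straightforward: the chosen tiles become a prompt, and a language model returns a short chunk of narrative to be read aloud. The sketch below illustrates that loop in Python; the prompt wording, tile names and model are assumptions made for illustration and have nothing to do with Geni’s actual code.

```python
# An illustrative sketch of a tile-to-story loop, as described above.
# Not Geni's code: the prompt, limits and model name are assumptions.
from openai import OpenAI

client = OpenAI()

def story_chunk(tiles: list[str]) -> str:
    """Turn up to three child-chosen tiles into a short narrative segment."""
    if not 1 <= len(tiles) <= 3:
        raise ValueError("expected one to three tiles")
    prompt = (
        "Write a four-sentence segment of a gentle children's story (ages 4-8) "
        f"that ties together: {', '.join(tiles)}. Use simple language and end "
        "on a note that invites the child to add another tile."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
    )
    return response.choices[0].message.content

# e.g. the combination the author tried:
print(story_chunk(["a wizard", "an astronaut"]))
```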
Geni co-founders Shannon Li and Kevin Tang struck me as being serious and thoughtful about some of the risks of AI products for young children. They “feel strongly about not anthropomorphizing AI”, Tang said. Li said that they want kids to view Geni, “not as a companion” like the voice-box plushies, but as “a tool for creativity that they already have”.
Still, it’s hard not to wonder whether an LLM can actually produce particularly engaging or creativity-sparking stories. Geni is planning to sell sets of tiles with characters they develop in-house alongside the device, but the actual “storytelling” is done by the kind of probability-based technology that tends toward the average.
The story I prompted by selecting the wizard and astronaut tiles was insipid at best:
They stumbled upon a hidden cave glowing with golden light.
“What’s that?” Felix asked, peeking inside.
“A treasure?” Sammy wondered, her imagination swirling, “or maybe something even cooler.”
Before they could decide, a wave rushed into the cave, sending bubbles bursting around them.
The Geni team has trained their system on pre-existing children’s content. Does using generative AI solve a problem for parents that the canon of children’s audio content cannot? When I ran the concept by one parent of a five-year-old, he responded: “They’re just presenting an alternative to books. It’s a really good example of grasping for uses that are already handled by artists or living, breathing people.”
The market pressures of startup culture leave little time for such existential musings, however. Tang said the team is eager to bring their product to market before voice-box plushies sour parents on the entire concept of AI for kids.
When I asked Tang whether Geni would allow parents to make tiles for, say, a gun – not a far-fetched idea for many American families – he said they would have to discuss the issue as a company.
“Post-launch, we’ll probably bring on an AI ethics person to our team,” he said.
“We also don’t want to limit knowledge,” he added. “As of now there’s no right or wrong answer to how much constraint we want to put in … But obviously we’re referencing a lot of kids content that’s already out there. Bluey probably doesn’t have a gun in it, right?”
