AI & the Future of Education | SXSW EDU 2025
### 章节 1:注意力的经济学与工具的本质

Category: Education

📝 本节摘要:
本节作为访谈正式开始前的引言,演讲者首先从宏观角度定义了技术的本质——无论是原始工具还是现代App,其核心都是解决问题的手段。接着,演讲者以多邻国(Duolingo)创始人Luis von Ahn为例,讲述了他如何巧妙地将验证码(reCAPTCHA)的人力投入转化为书籍数字化的生产力。最后,演讲者提出了一个深刻的观点:在当今这个斥资数万亿试图分散我们注意力的数字经济中,能够教会学生“单任务处理(Single-tasking)”而非多任务处理,将成为一种稀缺的超级力量和巨大的竞争优势。
[原文] [Speaker A]: As with any, well, I should say, technology, whether it's an app on an iPhone or a stick that a chimp is using to fish out ants, it's a tool that, in an ideal case, helps you to solve some type of prevalent problem.
[译文] [Speaker A]: 正如任何技术一样——或者我应该说,无论是iPhone上的应用程序,还是黑猩猩用来钓蚂蚁的棍子——它都是一种工具,在理想情况下,它能帮助你解决某种普遍存在的问题。
[原文] [Speaker A]: There is a lot that can be automated, whether it's machine learning or some form of deep learning. There are many ways technology can aid the learning process.
[译文] [Speaker A]: 很多东西都可以自动化,无论是通过机器学习还是某种形式的深度学习。技术有很多种方式可以辅助学习过程。
[原文] [Speaker A]: So I was one of the first investors in a company called Duolingo, which now has 100 million users. It's the largest free language learning platform in the world, and they have a lot more coming.
[译文] [Speaker A]: 所以,我是Duolingo(多邻国)这家公司的首批投资者之一,它现在拥有1亿用户。它是世界上最大的免费语言学习平台,而且他们还有更多计划即将推出。
[原文] [Speaker A]: It's incredibly powerful. And it was the byproduct of a number of founders, but one of those founders, Luis von Ahn from Guatemala, originally created CAPTCHAs and reCAPTCHA.
[译文] [Speaker A]: 它极其强大。实际上,这是一群创始人的成果,但其中一位创始人,来自危地马拉的Luis von Ahn,最初发明了验证码(CAPTCHAs)和reCAPTCHA。
[原文] [Speaker A]: So if you ever have to type in a bunch of weird characters to prove you're not a robot on a website, you have him to thank for it.
[译文] [Speaker A]: 所以,如果你曾经不得不在网站上输入一堆奇怪的字符来证明你不是机器人,那你得感谢他。
[原文] [Speaker A]: But he used that. You might have noticed back in the day, there were two fields and you'd fill one in; the program knew the answer to that one. That's how it would confirm that you weren't a robot.
[译文] [Speaker A]: 但他利用了这一点。你可能注意到过去有两个输入框,你填写其中一个,程序知道那个的答案。这就是它确认你不是机器人的方式。
[原文] [Speaker A]: And then the second was taken from books that machines couldn't transcribe accurately. So he was actually harnessing millions and millions of people to transcribe books.
[译文] [Speaker A]: 而第二个输入框的内容则取自机器无法准确转录的书籍。所以实际上,他在利用成千上万、数以百万计的人来转录书籍。
[原文] [Speaker A]: Effectively, so that the blind could use them, so that anyone could search them, etc. And he's applied that to language learning in some really fascinating ways.
[译文] [Speaker A]: 实际上,这让盲人可以使用这些书,让任何人都可以搜索它们,等等。他以一些非常迷人的方式将这一理念应用到了语言学习中。
[原文] [Speaker A]: We live in a digital world where the economics of many of these businesses are dependent on distracting you as much as possible. They are very, very, very good at it.
[译文] [Speaker A]: 我们生活在一个数字世界中,许多这类企业的经济模式依赖于尽可能地分散你的注意力。他们非常、非常、非常擅长这一点。
[原文] [Speaker A]: They're putting billions of dollars, probably collectively trillions of dollars, into discovering new and better ways to distract you off of your chosen task.
[译文] [Speaker A]: 他们投入数十亿美元,甚至可能总计达到数万亿美元,用于发现更新、更好的方法来将你从既定任务中分散开来。
[原文] [Speaker A]: If you can teach yourself and your students to single-task, not multitask, to single-task more effectively, that ability, which used to be par for the course, is becoming a superpower.
[译文] [Speaker A]: 如果你能教会你自己和你的学生更有效地进行“单任务处理(single-tasking)”而非多任务处理,这种曾经习以为常的能力,正变成一种超级力量。
[原文] [Speaker A]: So if you can establish ways of blocking out distraction-rich technology for even short periods of time, you have a huge competitive advantage.
[译文] [Speaker A]: 因此,如果你能建立起哪怕只在短时间内屏蔽这些充满干扰的技术的方法,你就拥有了巨大的竞争优势。
📝 本节摘要:
本节记录了访谈的正式开场。主持人Renata Salazar介绍了两位嘉宾——Sinead Bovell(战略前瞻顾问)与Natalie Montbleau(虚拟人经济倡导者)。随后,Natalie回顾了她与Sinead在Clubhouse时代的相识,并详细介绍了Sinead作为联合国演讲者及为全球政企高层提供咨询的背景。在切入正题后,Sinead首先强调教育是国家安全与民主的基石。对于“AI目前处于何种阶段”的问题,她提出了著名的“1992年时刻”类比:正如当年的互联网刚诞生一样,当下的AI尚处于早期实验阶段,但终将像电力一样成为隐形却无处不在的通用技术。
[原文] [Renata Salazar]: Good morning and welcome to the final day of South by Southwest EDU 2025. Yay! I'm Renata Salazar, marketing coordinator for South by Southwest EDU, and I'm thrilled to introduce this session, led by two innovative women who are at the forefront of the rapidly evolving AI industry.
[译文] [Renata Salazar]: 早上好,欢迎来到2025年西南偏南教育大会(SXSW Edu)的最后一天。耶!我是Renata Salazar,西南偏南教育大会的市场协调员,我很激动能介绍这场由两位处于快速发展的AI行业前沿的创新女性所主导的会议。
[原文] [Renata Salazar]: Sinead Bovell is the founder of WAYE, an organization that prepares youth for a future with advanced technologies. And Natalie Montbleau is the founder of Virtual Human Economy, which advocates for real people that can benefit from putting their virtual selves to work.
[译文] [Renata Salazar]: Sinead Bovell是WAYE的创始人,该组织致力于帮助年轻人为拥有先进技术的未来做好准备。Natalie Montbleau是“虚拟人类经济(Virtual Human Economy)”的创始人,该机构倡导真人通过让自己的虚拟形象工作来获益。
[原文] [Renata Salazar]: Both of them, in their own ways, study the future. As a reminder, during the Q&A session, please open your South by Southwest Edu Go app to ask and upvote questions by selecting the engage button on the session page.
[译文] [Renata Salazar]: 她们两人都在以自己的方式研究未来。在此提醒一下,在问答环节中,请打开您的South by Southwest Edu Go应用程序,通过选择会议页面上的“参与(engage)”按钮来提问和点赞问题。
[原文] [Renata Salazar]: And now I am honored to introduce these two pioneering futurists who will share their insights on what we can and cannot predict about the future of AI in education. Please welcome to the stage Sinead Bovell and Natalie Monbiot.
[译文] [Renata Salazar]: 现在,我很荣幸地介绍这两位先锋未来学家,她们将分享关于AI在教育未来的可预测与不可预测之处的见解。请欢迎Sinead Bovell和Natalie Monbiot上台。
[原文] [Natalie Montbleau]: Wow. Welcome, everybody, to the final day of South by Southwest EDU. It's been a fantastic week. Has everybody been enjoying themselves, feeling inspired?
[译文] [Natalie Montbleau]: 哇。欢迎大家来到西南偏南教育大会的最后一天。这是极其精彩的一周。大家都玩得开心吗?感到受启发了吗?
[原文] [Natalie Montbleau]: I'm super delighted to be moderating this conversation with the fabulous Sinead Bovell on AI and the Future of Education. Just by way of a bit of background, Sinead and I have known each other since the Clubhouse days. Was anybody on Clubhouse? You know, that audio app that was highly popular?
[译文] [Natalie Montbleau]: 我非常高兴能与精彩绝伦的Sinead Bovell一起主持这场关于“AI与教育未来”的对话。简单介绍一下背景,Sinead和我从Clubhouse时代就认识了。有人上过Clubhouse吗?知道那个曾经非常流行的音频应用吗?
[原文] [Natalie Montbleau]: It was one of the bright spots of Covid, and we had the chance to meet, very much virtually, I guess, and get deep into conversations around AI, the future of AI and society, AI avatars, and really sort of a precursor to the conversation that we're having today.
[译文] [Natalie Montbleau]: 那是新冠疫情期间的一个亮点,我们有机会——我想是非常虚拟地——会面,并深入探讨了关于AI、AI与社会的未来、AI化身等话题,那真的是我们今天这场对话的前奏。
[原文] [Natalie Montbleau]: So it's my absolute honor to be in conversation with Sinead about this topic today. So a little bit about Sinead, actually. Who follows Sinead on social media? Yeah, a lot of hands. So she's got a massive fan base. It was hard to get to this stage, actually, with people coming up to her and saying how much they admire her work.
[译文] [Natalie Montbleau]: 所以,今天能与Sinead就这个话题进行对话,绝对是我的荣幸。那么关于Sinead的一点介绍——其实,谁在社交媒体上关注了Sinead?耶,好多人举手。她有着庞大的粉丝群。实际上刚才走到台上都很难,因为人们纷纷走上前对她说他们有多欣赏她的工作。
[原文] [Natalie Montbleau]: Well, just in terms of her professional perspective, Sinead is a strategic foresight advisor and the founder of WAYE, which is an organization dedicated to preparing businesses and the next generation of leaders for a future shaped by advanced technology.
[译文] [Natalie Montbleau]: 那么,仅从她的专业角度来看,Sinead是一位战略前瞻顾问,也是WAYE的创始人,这是一个致力于帮助企业和下一代领导者为先进技术塑造的未来做好准备的组织。
[原文] [Natalie Montbleau]: She advises C-suite executives and senior leadership across governments and global corporations on emerging and exponential technologies. She's an 11-time United Nations speaker, and she has delivered formal addresses to presidents, royal families and Fortune 500 leaders on topics from synthetic biology to artificial intelligence.
[译文] [Natalie Montbleau]: 她就新兴技术和指数级技术向政府及全球企业的最高管理层和高级领导层提供建议。她是11次受邀的联合国演讲嘉宾,曾向总统、皇室成员和财富500强企业领袖发表关于从合成生物学到人工智能等主题的正式演说。
[原文] [Natalie Montbleau]: And to date, very relevant to this conversation, she has advised 16,000 educators, government officials and policymakers on redesigning education for the age of AI and emerging technologies. So let's dive in.
[译文] [Natalie Montbleau]: 迄今为止(这一点与本次对话非常相关),她已经为16,000名教育工作者、政府官员和政策制定者提供了关于如何为AI和新兴技术时代重新设计教育的建议。那么让我们开始吧。
[原文] [Natalie Montbleau]: So we've heard a little bit about AI and the future of education and a number of different talks this week. Sinead, tell us what it is to be a strategic foresight advisor and your lens on AI and the future of education.
[译文] [Natalie Montbleau]: 我们这周在许多不同的讲座中已经听到了一些关于AI和教育未来的内容。Sinead,告诉我们作为一名战略前瞻顾问意味着什么,以及你对AI和教育未来的看法。
[原文] [Sinead Bovell]: Yeah. And thanks for having me. And thanks for coming to this session and for that warm welcome. So I think, you know, education is the bedrock for a healthy democracy and for a functioning society.
[译文] [Sinead Bovell]: 好的。谢谢邀请我。也谢谢大家来参加这次会议以及如此热情的欢迎。我认为,你知道,教育是健康的民主制度和运作良好的社会的基石。
[原文] [Sinead Bovell]: And it's not just essential for things like economic mobility and economic security, but fairness as well and for wellbeing. I believe there's no such thing as a state that over invests in children's future, and investment in children is an investment in national interest. Right?
[译文] [Sinead Bovell]: 它不仅对经济流动性和经济安全至关重要,对公平和福祉也同样重要。我相信不存在所谓“过度投资”儿童未来的国家,对儿童的投资就是对国家利益的投资。对吧?
[原文] [Sinead Bovell]: So you want to foster an informed and adaptive citizenry that can not just safeguard the future, but thrive in it, especially a future that's going to be as complex as the one children in school today are entering into, which will be shaped by quantum computing, genetic engineering, artificial intelligence, commuting back and forth to space.
[译文] [Sinead Bovell]: 所以你需要培养知情且具有适应能力的公民,他们不仅能守护未来,还能在未来蓬勃发展,尤其是考虑到今天在校的孩子们即将进入的未来将是如此复杂——那将是一个由量子计算、基因工程、人工智能以及往返太空的通勤所塑造的世界。
[原文] [Sinead Bovell]: This is an incredibly complex world they'll be entering into. So the more that they can understand it, the more we can support them in that journey, the better equipped they are. And that's an investment in a country's economic security and our collective health and well-being and our overall security and national security interests.
[译文] [Sinead Bovell]: 他们即将进入的是一个极其复杂的世界。因此,他们越能理解它,我们越能在这段旅程中支持他们,他们就准备得越充分。这是对一个国家的经济安全、我们集体的健康与福祉,以及我们的整体安全和国家安全利益的投资。
[原文] [Natalie Montbleau]: Goodness. So couldn't be a more pressing topic. And before we dive in to some of the kind of finer points, where would you say taking a step back, like where are we at this moment with AI?
[译文] [Natalie Montbleau]: 天哪。所以这是个再紧迫不过的话题了。在我们深入探讨一些细节之前,退一步说,你会说我们此刻在AI方面处于什么位置?
[原文] [Sinead Bovell]: So if I were to say where we are, I mean, it's very, very early. Maybe it's 1992. The internet has dropped. Companies are experimenting. We know it's maybe going to be a big deal, but there's still a lot of doubt.
[译文] [Sinead Bovell]: 如果要我说我们在哪儿,我是说,现在还非常、非常早。也许就像是1992年。互联网刚刚降临。公司正在进行实验。我们知道这可能会是件大事,但仍然存在很多疑虑。
[原文] [Sinead Bovell]: Some people are also kind of playing around on it, but we have yet to fully comprehend the way it is going to fundamentally transform our world. The Googles, the apples, the Amazons of the future. They have yet to be invented, but they're coming.
[译文] [Sinead Bovell]: 有些人也在上面玩耍,但我们还没有完全理解它将如何从根本上改变我们的世界。未来的Google、Apple、Amazon,它们还没被发明出来,但它们正在路上。
[原文] [Sinead Bovell]: And artificial intelligence is also a general purpose technology, so similar to something like electricity. Think of how pervasive electricity is. We don't even think about it at all. It's so foundational that it's moved into the background.
[译文] [Sinead Bovell]: 而且人工智能也是一种通用技术(General Purpose Technology),非常类似于电力。想想电力有多普及。我们甚至完全不会去想它。它是如此基础,以至于已经退居幕后。
[原文] [Sinead Bovell]: So we will soon be streaming artificial intelligence the way we stream electricity. That is going to be a fundamentally different society to live in. And these general purpose technologies, they take time to get so entrenched in society.
[译文] [Sinead Bovell]: 所以我们很快就会像输送电力一样输送人工智能。那将是一个生活方式根本不同的社会。而这些通用技术,它们需要时间才能如此根深蒂固地融入社会。
[原文] [Sinead Bovell]: But, you know, it will reach that point. Because when people can't access general purpose technologies, whether that's at a country level or in certain neighborhoods, we deem that wildly unethical. Who doesn't get fair access to the internet? Who doesn't get access to electricity? That is the path artificial intelligence is on.
[译文] [Sinead Bovell]: 但是,你知道,它终将达到那个点。因为当人们无法获得通用技术时,无论是在国家层面还是在某些社区,我们都会认为那是极不道德的。谁没有获得互联网的公平接入?谁没有获得电力?这就是人工智能正在走的道路。
📝 本节摘要:
在本节中,Sinead提出了应对“AI进校园”的三个战略支柱。第一支柱是安全采用,即教会孩子将AI视为工具而非朋友,并懂得保护隐私;第二支柱是课程调整,承认学生会在家中使用AI,因此需调整课堂与家庭作业的分配;第三支柱则是系统性重塑,这是政府层面的长期任务,而非仅仅是在现有课堂中硬塞入AI工具。她警告说,目前的混乱在于我们将这三个阶段混为一谈,为了“赶时髦”而匆忙部署尚未成熟的技术,强调实验必须是有意图(intentional)且受控的,不能拿学生的学习成果做社会实验。
[原文] [Natalie Montbleau]: So if artificial intelligence is going to be this general purpose technology and fade into the background, what does AI and education look like now? Like, how should educators be considering AI in education, given that it will be in the background? But today we're at this very early phase.
[译文] [Natalie Montbleau]: 那么,如果人工智能将成为这种退居幕后的通用技术,现在的“AI与教育”看起来应该是什么样的?既然它最终会隐入背景,教育工作者现在应该如何考虑教育中的AI?毕竟我们今天还处于这个非常早期的阶段。
[原文] [Sinead Bovell]: Yeah. So I think that there are kind of three pillars that are related but distinct in terms of how we should be thinking about AI in education.
[译文] [Sinead Bovell]: 是的。我认为在我们应该如何思考教育中的AI方面,有三个相关但独特的支柱。
[原文] [Sinead Bovell]: The first pillar is safe adoption for kids and for learners. So this means equipping kids with the tools to navigate artificial intelligence, because they're going to be on these tools at home regardless. They have supercomputers in their pockets, supercomputers on their iPads.
[译文] [Sinead Bovell]: 第一支柱是针对孩子和学习者的安全采用(safe adoption)。这意味着要让孩子们掌握驾驭人工智能的工具,因为无论如何,他们在家里都会使用这些工具。他们口袋里装着超级计算机,iPad上也是超级计算机。
[原文] [Sinead Bovell]: So giving kids the skills to utilize these tools safely. So that's conversations like: AI isn't your friend, right? Your chatbot isn't something that you tell secrets to. This is what we do or don't share with artificial intelligence. And this is also how you ask it good questions and validate its answers. So that's pillar one.
[译文] [Sinead Bovell]: 所以要赋予孩子们安全利用这些工具的技能。比如进行这样的对话:AI不是你的朋友,对吧?你的聊天机器人不是可以倾诉秘密的对象。这些是我们该与人工智能分享或不该分享的内容。这还包括你如何向它提出好问题并验证它的答案。这就是第一支柱。
[原文] [Sinead Bovell]: Pillar two is how do we more urgently adjust what we are teaching in school, or just the formula for what happens in the classroom versus what happens at home, knowing that kids are going to be leaning into these technologies at home to do homework and to complete assignments.
[译文] [Sinead Bovell]: 第二支柱是我们如何更紧迫地调整我们在学校教的内容,或者仅仅是调整课堂内与家庭中发生的事情的配方,因为我们知道孩子们会在家依赖这些技术来做作业和完成任务。
[原文] [Sinead Bovell]: The third pillar, and this is where I think we are rushing into, but this is actually the long term game, is how do we fundamentally redesign the entire system of education for the age of artificial intelligence?
[译文] [Sinead Bovell]: 第三支柱,这也是我认为我们正在仓促进入的领域,但实际上这是长期的博弈,那就是我们如何为人工智能时代从根本上重新设计整个教育体系?
[原文] [Sinead Bovell]: But what seems to be happening in this moment is we are kind of merging all of those pillars in a sense of urgency. And this leads us to deploy AI in schools for the sake of feeling like we need to meet the moment by bringing AI into the classroom.
[译文] [Sinead Bovell]: 但此刻似乎正在发生的是,出于一种紧迫感,我们正在某种程度上合并所有这些支柱。这导致我们在学校部署AI,只是为了让我们感觉自己通过将AI引入课堂“赶上了这个时代”。
[原文] [Sinead Bovell]: And there are a lot of technologies that aren't ready. So I think we focus on pillar one: giving kids the tools to use these tools safely.
[译文] [Sinead Bovell]: 而有很多技术其实还没准备好。所以我认为我们应该专注于第一支柱:给孩子们安全使用这些工具的技能。
[原文] [Sinead Bovell]: If they're going to be using them on their phones, we slightly adjust what we're teaching to account for cheating in homework, but it's much more at the departments of education, the ministries of education level, to take this longer term lens and fundamentally redesign our school for the age of artificial intelligence.
[译文] [Sinead Bovell]: 如果他们要在手机上使用这些工具,我们就稍微调整一下教学内容,以应对家庭作业中的作弊行为;但更多的是在教育部门、教育部的层面上,需要采取这种长期的视角,为人工智能时代从根本上重新设计我们的学校。
[原文] [Natalie Montbleau]: Great. So I think we've heard this week a number of different ways that educators are experimenting with AI, different pilots, different ways of going about it. So it sounds like we should be teaching it and educating about it. Is now the time to be experimenting with it in deeper ways?
[译文] [Natalie Montbleau]: 太好了。我想这周我们已经听到了教育工作者尝试AI的许多不同方式,不同的试点项目,不同的做法。听起来我们应该教授它并进行相关教育。那么,现在是进行更深层次实验的时候吗?
[原文] [Sinead Bovell]: Yeah, yeah. So, you know, teaching it, it's more so AI is a hard skill. So that should be happening. Experimenting with it. Yes. We need to be running these pilots. We need to be gathering the data as to what's working and what's not.
[译文] [Sinead Bovell]: 是的,是的。所以,你知道,教授它,更多是指AI是一项硬技能。所以这应该正在发生。至于实验,是的。我们需要运行这些试点项目。我们需要收集关于什么有效、什么无效的数据。
[原文] [Sinead Bovell]: But it has to be in very, very intentional ways, and not just assuming that we can just throw in an AI tutor somewhere arbitrarily, and that's going to be sufficient.
[译文] [Sinead Bovell]: 但这必须以非常、非常有意图的方式进行,而不仅仅是假设我们可以随意在某个地方扔进一个AI导师,然后这就足够了。
[原文] [Sinead Bovell]: And making sure that we're not running social experiments that jeopardize learning outcomes for the sake of just feeling like we need to quickly meet the moment. So yes, I think that these experiments are vital. They should be happening, but they need to be very, very intentional and very, very controlled.
[译文] [Sinead Bovell]: 并且要确保我们没有为了仅仅感觉需要迅速应对当下,而进行危及学习成果的社会实验。所以是的,我认为这些实验至关重要。它们应该发生,但它们需要非常、非常有意图,并且受到非常、非常严格的控制。
📝 本节摘要:
本节通过两个具体的学术研究揭示了AI在教育中的双刃剑效应。Sinead首先引用了沃顿商学院的研究,指出如果学生获得“不受限制”的AI访问权(即直接获取答案),虽然练习题得分很高,但最终考试成绩反而下降;而如果使用引导式的“AI导师”,则能保持学习效果。接着,她对比了哈佛大学的一项物理课实验,证明了全流程、自适应的AI教学系统能让学生表现翻倍。最后,Natalie分享了“Alpha School”的激进案例:利用AI导师将核心知识学习压缩至每天2小时,且全员达到前1%的学业水平,从而释放时间培养软技能。
[原文] [Natalie Montbleau]: I know that you're working with fortune 50 companies in this space and advising them on how to navigate AI and education. What are some of the data points and the advice that you've been giving them?
[译文] [Natalie Montbleau]: 我知道你正在与这个领域的财富50强公司合作,并就如何驾驭AI和教育向他们提供建议。你给他们提供了一些什么样的数据点和建议?
[原文] [Sinead Bovell]: Yeah, so it's been really, really interesting to look at some of the data that's coming through. And of course, we're still very, very early in early in the age of using artificial intelligence in education. But there's one clear trend that stands out.
[译文] [Sinead Bovell]: 是的,所以查看一些正在涌现的数据真的非常、非常有趣。当然,我们在教育中使用人工智能的时代还处于非常、非常早期的阶段。但有一个明显的趋势已经显现出来。
[原文] [Sinead Bovell]: So I'm going to walk us through a study that I find particularly helpful. And this was done by the Wharton School of, I believe, the University of Pennsylvania, and Budapest British International School, and it implemented artificial intelligence in math classes.
[译文] [Sinead Bovell]: 所以我要带大家看一项我觉得特别有帮助的研究。这是由宾夕法尼亚大学(我记得是)的沃顿商学院与布达佩斯英国国际学校联合进行的,他们在数学课上实施了人工智能教学。
[原文] [Sinead Bovell]: So there were a few math classes in the high school, and they broke the class up into three groups: the control group, which is just your traditional doing homework problems with your textbook; the GPT-based group, the students that got uninhibited access to artificial intelligence;
[译文] [Sinead Bovell]: 高中有几堂数学课,他们把班级分成三组:对照组,就是传统的用教科书做作业;基于GPT的组(GPT-based),这组学生获得了对人工智能不受限制的访问权;
[原文] [Sinead Bovell]: And then the GPT tutor group. So these are students that got access to an AI that has been designed to just guide them through problems, give them hints, but not the answers. All students got the base lesson for math together, and then they broke out into their respective groups and the respective tiers of AI access or not.
[译文] [Sinead Bovell]: 然后是GPT导师组。这组学生使用的人工智能被设计为仅引导他们解决问题,给他们提示,但不提供答案。所有学生都一起上了基础数学课,然后分组进入各自不同的AI访问权限(或无权限)层级。
[原文] [Sinead Bovell]: So the study showed that when it came to the practice problems, the children that got uninhibited access to AI did 48% better on the practice problems than the control group. The students that got access to the GPT tutor did 127% better than the control group.
[译文] [Sinead Bovell]: 研究显示,在做练习题时,那些拥有不受限制访问权的孩子,在练习题上的表现比对照组好了48%。而使用GPT导师的学生比对照组好了127%。
[原文] [Sinead Bovell]: But when it came time to actually test students without access to AI and do the final post unit test, the kids that got the uninhibited access performed 17% worse.
[译文] [Sinead Bovell]: 但是,当真正要在没有AI辅助的情况下测试学生,进行最后的单元后测时,那些拥有不受限制访问权的孩子表现差了17%。
[原文] [Sinead Bovell]: So it harmed the learning outcome, and the children that got access to the AI tutor performed at the same level as the children that didn't get access to any artificial intelligence. And so the conclusion of the study was that generative AI harms learning outcomes.
[译文] [Sinead Bovell]: 所以AI损害了学习成果,而那些使用AI导师的孩子,其表现水平与没有任何人工智能辅助的孩子持平。因此该研究的结论是:生成式AI会损害学习成果。
[原文] [Sinead Bovell]: But then there was a second study that happened at Harvard. And of course, we have to control for the fact that self-directed learning is a little bit different at a university level. And clearly, if you're getting into Harvard, there's also some kind of higher order thinking that you're able to do.
[译文] [Sinead Bovell]: 但随后哈佛大学进行了第二项研究。当然,我们必须考虑到大学层面的自主学习略有不同这一事实。而且很明显,如果你能进入哈佛,你也具备某种高阶思维能力。
[原文] [Sinead Bovell]: But that aside, it was a physics class, and they broke the physics class into two groups: the control group, which is the students that went to the traditional lecture with the professor, then broke out into peer groups, worked with one another, and had instructor-led guidance on solving problems.
[译文] [Sinead Bovell]: 抛开这些不谈,那是一堂物理课,他们把班级分成两组:对照组,即学生去听教授的传统讲座,然后分组与同伴合作,并在讲师指导下解决问题。
[原文] [Sinead Bovell]: The second group had no in-class lessons at all. The entire process was done with AI, but they specifically designed the AI to be self-paced.
[译文] [Sinead Bovell]: 第二组根本没有课堂授课。整个过程都是通过AI完成的,但他们专门将AI设计为“自定步调(self-paced)”的模式。
[原文] [Sinead Bovell]: It went with the student's needs to provide immediate feedback, whether the student was on the right direction or off it, while they were doing the problems; to provide motivation; and to really take all learning best practices and implement them into that system, and continue to adapt how it tested the child based on how they were evolving and doing the problems.
[译文] [Sinead Bovell]: 它配合学生的需求,在学生做题时提供即时反馈——无论方向正确与否——并提供激励,真正将所有最佳学习实践融入该系统,并根据孩子的进步情况不断调整测试方式。
[原文] [Sinead Bovell]: When they did the general test after those two experiments, the kids that went the pathway of artificial intelligence performed twice as well as the peers that didn't get access to AI, and they were more motivated and more engaged.
[译文] [Sinead Bovell]: 在这两个实验之后,(在期末综合测试中)那些走人工智能路径的孩子,其表现是未能使用AI的同龄人的两倍,而且他们更有动力,参与度更高。
[原文] [Sinead Bovell]: And so what we can learn from just those two kind of isolated studies is that you have to adapt the entire ecosystem, right? It's akin to inventing electricity, but only swapping out where the steam engine was and putting a light switch here or there, not building out the entire assembly line and rethinking how we design the system.
[译文] [Sinead Bovell]: 所以,仅从这两项独立的研究中我们可以学到的是,你必须调整整个生态系统,对吧?这就像发明了电力,却只是把原本蒸汽机所在的地方换掉,在这里或那里装个电灯开关,而不是建立整个流水线并重新思考我们如何设计系统。
[原文] [Sinead Bovell]: That's step one. Step two: immediate feedback is absolutely vital in AI learning outcomes. If we're going to incorporate artificial intelligence, gone are the days where we wait for the unit test or midterms, or the end-of-year exam, to see where students are.
[译文] [Sinead Bovell]: 这是第一步。第二步,即时反馈对AI学习成果至关重要。如果我们要整合人工智能,那么那种等待单元测试、期中考试或年终考试才来看学生水平的日子已经一去不复返了。
[原文] [Sinead Bovell]: We need to be able to extract the data in real time: this is how somebody is adapting, or this is how they're falling behind, and the AI needs to provide that feedback. Or else we lose visibility into how well things are happening.
[译文] [Sinead Bovell]: 我们需要能够实时提取数据:比如“这是某人适应的情况”或者“这是他们落后的地方”,AI需要提供这种反馈。否则我们就失去了对事情进展情况的可见性。
[原文] [Sinead Bovell]: Self-paced learning is also vital. So if you go back to the high school, everybody had an hour and a half to learn the math problem. So whether you were working with your textbook or working with AI, that hurt people that were doing the AI method and it helped people doing the traditional method.
[译文] [Sinead Bovell]: 自定步调的学习也至关重要。如果你回到那个高中案例,每个人都有一个半小时来学习数学问题。所以无论你是用教科书还是用AI,这种(固定时间)伤害了使用AI方法的学生,却帮助了使用传统方法的学生。
[原文] [Sinead Bovell]: So kids need to learn at their own pace, and the system needs to be able to adapt in real time. So those are just a few of the key takeaways when we think about AI in education. But that's why this is a longer term redesign.
[译文] [Sinead Bovell]: 所以孩子们需要按照自己的节奏学习,系统需要能够实时适应。这就是当我们思考教育中的AI时的一些关键结论。这也是为什么这是一个长期的重新设计过程。
[原文] [Natalie Montbleau]: Yeah. This is like the institution of education we need to approach differently. It's akin to asking the accountant to redesign the concrete and the bricks. That's not what we should be doing. And I know there was a school that you had been tracking that we had talked about, so I think it could be helpful to share some of the insights there.
[译文] [Natalie Montbleau]: 是的。这就像是我们对待教育机构的方式需要改变。这类似于要求会计师去重新设计混凝土和砖块。那不是我们应该做的。我知道我们曾谈论过你一直关注的一所学校。所以我认为分享那里的一些见解会很有帮助。
[原文] [Natalie Montbleau]: Yeah, absolutely. Um, people familiar with Alpha School in the room? A few people, actually. The CEO spoke yesterday, and I was very keen for this conversation because I'm actually also starting to work in this space, and possibly with them too.
[译文] [Natalie Montbleau]: 是的,当然。房间里有人熟悉Alpha School吗?实际上有几个人。他们的CEO昨天发过言,我对这次对话非常热衷,因为我实际上也开始在这个领域工作,可能也会与他们合作。
[原文] [Natalie Montbleau]: And they are an incredible school that has redesigned learning, in the way that Sinead has described, sort of from the ground up, and kind of reimagined how learning happens and what it is.
[译文] [Natalie Montbleau]: 他们是一所令人难以置信的学校,正如Sinead所描述的那样,他们从零开始重新设计了学习,重新构想了学习是如何发生的以及学习的本质。
[原文] [Natalie Montbleau]: It's a two hour learning process where, like all the hard skills, all the knowledge that you need to learn at school happens in a two hour period with an AI tutor, and the experience is entirely personalized and adapted to where that student is at.
[译文] [Natalie Montbleau]: 这是一个两小时的学习过程,所有硬技能、你在学校需要学习的所有知识,都在这两个小时内通过AI导师完成,而且体验是完全个性化的,适应那个学生当前的水平。
[原文] [Natalie Montbleau]: So if you walk around the classroom, you'll see different students working on completely different math problems, let's say. And as it understands what that student is interested in, the math problems become contextualized within topics that they love.
[译文] [Natalie Montbleau]: 所以如果你在教室里走动,你会看到不同的学生在做完全不同的数学题。而且因为系统了解那个学生的兴趣所在,数学题会变得情境化,融入他们喜爱的主题中。
[原文] [Natalie Montbleau]: And critically, again, sort of going back to one of the best practices, or, you know, mandatory elements for AI being successful in schools, there's real-time feedback on the performance of that child. They can see their own performance and actually start to own that journey for themselves.
[译文] [Natalie Montbleau]: 而且关键的是,再次回到AI在学校取得成功的最佳实践或必要条件之一,那就是关于孩子表现的实时反馈。他们可以看到自己的表现,并实际上开始自己掌控这段旅程。
[原文] [Natalie Montbleau]: And they get everyone into the 99th percentile, no matter where they've started in their journey. So I think that's a fascinating way to look at it.
[译文] [Natalie Montbleau]: 无论他们起步如何,学校都能让每个人进入前1%(99th percentile)。所以我认为这是一个令人着迷的视角。
[原文] [Natalie Montbleau]: And I think that, besides those two hours, to the point of getting all of that work done in those two hours, some students are able to accomplish double in those two hours, and some on the higher-performing end five times more. But the critical part is freeing those students to focus on life skills, on EQ skills, on kind of developing their own human ingenuity.
[译文] [Natalie Montbleau]: 我认为,除了那两个小时——关键在于在那两个小时内完成所有工作——有些学生能在这两小时内完成双倍的任务,有些表现更好的甚至能完成五倍。但关键部分在于释放这些学生的时间,让他们专注于生活技能、情商技能,以及开发他们自己的人类独创性。
[原文] [Sinead Bovell]: Yeah, totally. And that's kind of the moment we're in, right? We're in pilots and experimentation and innovation, really a redesign with a zoomed-out, wide lens, and in some ways taking risks. But it should never hurt the learning outcome, and it should never be a burden to teachers. And both of those things need to be true.
[译文] [Sinead Bovell]: 是的,完全正确。这正是我们所处的时刻,对吧?我们处于试点、实验和创新的阶段,确实需要一个重新设计的、拉远镜头的广阔视角,并且在某些方面需要承担风险。但它绝不应该损害学习成果,也绝不应该成为教师的负担。这两点必须同时成立。
📝 本节摘要:
针对AI带来的作弊和“走捷径”危机,Sinead提出了一个务实的假设:必须默认学生在放学后所做的一切都有AI参与。因此,教育者需要“翻转”传统的教学模式——将利用AI进行的初步研究和信息获取放在家中完成,而将真正的高阶思维、深度讨论和考核评估严格限制在课堂内进行。她强调,既然学生拥有了“口袋里的超级计算机”,学校的挑战难度反而应该提升,以确保人类的思维过程不被技术“短路”。
[原文] [Natalie Montbleau]: Absolutely. So I did want to dig in a little bit more into some of the current challenges with AI that educators, students and parents are experiencing today, which is around AI and cheating, and using, you know, ChatGPT to get to the answer right away, and what impact that might be having on the learner experience and kind of the point of being at school.
[译文] [Natalie Montbleau]: 绝对是这样。所以我确实想再深入探讨一下目前AI带来的一些挑战,这些挑战是教育工作者、学生和家长今天正在经历的,也就是关于AI与作弊,以及使用像ChatGPT这样的工具直接获取答案的问题,以及这可能对学习者体验和上学的意义产生什么影响。
[原文] [Sinead Bovell]: Yes. Oh, absolutely. And I think that we're in a bit of a crisis in this moment when it comes to artificial intelligence and cheating. And we can talk about what happens when you kind of short-circuit that thinking.
[译文] [Sinead Bovell]: 是的。噢,绝对是。我认为此刻我们在人工智能和作弊问题上确实处于某种危机之中。我们可以谈谈当你某种程度上“短路”了那种思考过程时会发生什么。
[原文] [Sinead Bovell]: But I think the, the safest assumption we have to make in this moment is that kids are going to be using artificial intelligence at home. So whatever happens past 3 p.m., expect that to be powered by a supercomputer in some way.
[译文] [Sinead Bovell]: 但我认为,在这个时刻我们要做的最安全的假设是:孩子们会在家里使用人工智能。所以,无论下午3点以后发生了什么,都要预料到那在某种程度上是由超级计算机驱动的。
[原文] [Sinead Bovell]: So we have to start there. That means we have to change what we are doing in the classroom. And so in some instances, that means maybe we flip it: what happened at home happens in the classroom. But in other ways, maybe, for example, you teach history, you give children the research portion, and they can go home and do all the research with ChatGPT that they want.
[译文] [Sinead Bovell]: 所以我们必须从那里开始。这意味着我们要改变我们在课堂上做的事情。在某些情况下,这意味着我们要把在家里发生的事和在课堂上发生的事进行“翻转”;而在其他方面,也许,比如你教历史,你把研究部分交给孩子,他们可以回家用ChatGPT做所有他们想做的研究。
[原文] [Sinead Bovell]: But the higher order, critical thinking, deep learning and discussion, all of that happens in the classroom. So the classroom really has to be a place where the deep learning is happening, where the testing is happening and where we're raising the bar on knowledge.
[译文] [Sinead Bovell]: 但是,高阶思维、批判性思维、深度学习和讨论,所有这些都必须在课堂上发生。所以课堂真的必须成为深度学习发生的场所,成为测试发生的场所,以及我们提高知识标准的地方。
[原文] [Sinead Bovell]: But we do have to assume everything past four is likely going to be done, co-created by or outsourced to an AI system. And then again, the broader goal and the longer term goal is that we've entirely redesigned the curriculum to account for the fact that kids can lean into supercomputers, because that is the actual goal.
[译文] [Sinead Bovell]: 但我们确实必须假设,下午四点以后的一切很可能都是由AI系统完成、共同创造或外包给AI系统的。再一次强调,更广泛和更长期的目标是,我们要完全重新设计课程,以考虑到孩子们可以依赖超级计算机这一事实,因为这才是真正的目标。
[原文] [Sinead Bovell]: In the end, kids in school today are going to step out into a world with advanced robots, with supercomputers that are polymaths. We want them to know how to engage with these tools and these systems, how to utilize them, how to invent with them, and we'll have to redesign education to account for that.
[译文] [Sinead Bovell]: 最终,今天的在校学生将步入一个拥有先进机器人和博学家(polymaths)般超级计算机的世界。我们希望他们知道如何与这些工具和系统互动,如何利用它们,如何用它们进行发明,我们将不得不为此重新设计教育。
[原文] [Sinead Bovell]: And we'll have to make school harder, because you do get access to these supercomputers. And so in a kind of superficial way, maybe that means people are learning about quantum computing at seven years old, because that learning is facilitated by a teacher and by a supercomputer.
[译文] [Sinead Bovell]: 而且我们将不得不让学校变得更难,因为你确实可以使用这些超级计算机。所以从表面上看,这也许意味着人们在七岁时就在学习量子计算,因为这种学习是由老师和超级计算机共同促进的。
[原文] [Sinead Bovell]: But that part is going to take time. So the more urgent kind of redesign is flipping what happens in school versus what happens at home. What happens if we don't do that? And I think it's quite obvious, right? We end up just short-circuiting the thinking.
[译文] [Sinead Bovell]: 但这部分需要时间。所以更紧迫的重新设计是翻转学校里发生的事与家里发生的事。如果我们不这样做会发生什么?我想这很明显,对吧?结果就是我们只是让思考过程“短路”了。
[原文] [Sinead Bovell]: So there shouldn't be anything to cheat on, because what happens at home isn't what we are evaluating. And that is, I think, the kind of the baseline that we need to to move towards. And that's, I think, what we should be doing more urgently.
[译文] [Sinead Bovell]: 所以不应该有任何可以作弊的东西,因为家里发生的事情并不是我们要评估的内容。我认为,这就是我们需要迈向的基准线。而且我认为,这就是我们要更紧迫地去做的事情。
📝 本节摘要:
本节探讨了在AI能回答一切的时代,人类应当如何通过“去技术化”的核心素养来确立自身价值。Natalie分享了Alpha School利用AI辅助学生在无羞耻感环境中练习公众演讲的案例,并引用认知科学观点区分了AI擅长的“程序性认知”与人类独有的“生活体验”。Sinead进一步指出,面对不可预测的未来,与其盲目追逐技术技能,不如通过阅读、游戏和跨学科思考来培养适应力。她特别警示了过度依赖AI导致的“信心危机(Confidence Crisis)”,引用微软与卡内基梅隆的研究证明,外包认知过程会削弱人类的批判性思维。最后,对话触及了“氛围工程师(Vibe Engineer)”这一新兴概念,强调了以人为本的设计思维将是未来的关键技能。
[原文] [Natalie Montbleau]: So many places to go from everything that you just said there. But I'd say, you know, so maybe a positive example of how outside of hard learning, and maybe an AI tutor helping you learn the things that you need to learn at your own pace in the most individualized and sort of data rich manner.
[译文] [Natalie Montbleau]: 从你刚才所说的一切中,我们可以延伸出太多话题了。但我想说,也许有个正面的例子,是关于在硬性学习之外,或者在AI导师以最个性化、数据最丰富的方式帮助你按自己的节奏学习所需知识之外的。
[原文] [Natalie Montbleau]: Well, how can you use AI outside of that core learning in a way that helps children become more human? Right. So like, what does human flourishing kind of look like, and does AI have a role in that?
[译文] [Natalie Montbleau]: 那么,在这个核心学习之外,你如何利用AI来帮助孩子们变得更像“人”?对。比如,“人类蓬勃发展(human flourishing)”看起来是什么样的?AI在其中有作用吗?
[原文] [Natalie Montbleau]: I heard an example yesterday, again from Alpha School. One of the projects starting at kindergarten is public speaking. And these children are kind of practicing public speaking.
[译文] [Natalie Montbleau]: 我昨天又从Alpha School那里听到了一个例子。从幼儿园开始的项目之一就是公众演讲。这些孩子们正在练习公众演讲。
[原文] [Natalie Montbleau]: Getting real-time feedback from these large language models on, you know, are they saying "um" too much, how is their pacing? So they can do all of this practicing in a sort of shame-free environment and really build up their confidence for the real deal, and that's standing on stage and having that confidence to present.
[译文] [Natalie Montbleau]: 他们从这些大语言模型那里获得实时反馈,比如,他们是否说了太多的“呃(um)”,他们的语速如何?这样他们就可以在一个某种程度上“无羞耻感(shame-free)”的环境中进行所有这些练习,并真正为实战建立信心,即站在舞台上并拥有展示的自信。
[原文] [Natalie Montbleau]: And that kind of leads me to think about what you said, computers, AI is going to become so powerful. So to what extent? I think we need to learn how they work and learn how to use them.
[译文] [Natalie Montbleau]: 这让我不禁思考你所说的,计算机和AI将变得如此强大。那么这种程度究竟如何?我认为我们需要学习它们是如何工作的,以及学习如何使用它们。
[原文] [Natalie Montbleau]: But also, what should we as humans be focused on because we want to collaborate with AI? So what are the sorts of subjects and skills that are deeply human that we can focus on?
[译文] [Natalie Montbleau]: 但同时,作为人类我们应该关注什么,既然我们想要与AI协作?那么有哪些深刻属于人类的学科和技能是我们可以关注的?
[原文] [Natalie Montbleau]: And something I've been thinking about quite a bit recently is like different types of knowing. There's a cognitive scientist called John Vervaeke, and he plots out different types of knowing and kind of the most sort of like academic or kind of research based and fact and knowledge based learning is procedural knowing.
[译文] [Natalie Montbleau]: 我最近一直在思考的一件事是关于不同类型的“认知(knowing)”。有一位名叫John Vervaeke的认知科学家,他划分了不同类型的认知,其中最学术化、或者基于研究、基于事实和知识的学习被称为“程序性认知(procedural knowing)”。
[原文] [Natalie Montbleau]: And that's the kind of knowing that AI is really good at and is getting increasingly good at. But what AI doesn't have is lived experience and deep insights that change you as you experience them in the world. Change your perception of the world, and then change how you connect with others.
[译文] [Natalie Montbleau]: 那正是AI非常擅长且越来越擅长的认知类型。但AI所不具备的是“生活体验(lived experience)”和深刻的洞察力,那些当你身处世界经历它们时会改变你的东西。改变你对世界的感知,进而改变你与他人连接的方式。
[原文] [Natalie Montbleau]: And so I've been really interested to hear some of the talks this week about experiential learning environments. I learned about Thinkery, which is here in Austin, and it's kind of this interactive learning museum environment, and it's just really interesting to think about what are those deeply human skills that we can be focused on teaching students while they're learning what AI is and the best ways to collaborate with AI.
[译文] [Natalie Montbleau]: 所以我非常有兴趣听听这周关于体验式学习环境的一些讲座。我了解到了位于奥斯汀这里的Thinkery,它是一种互动式的学习博物馆环境,思考我们在教学生认识AI以及与AI协作的最佳方式的同时,可以专注于教授哪些深刻的人类技能,这真的很有趣。
[原文] [Sinead Bovell]: Yeah, I think that that's vital. And I just wanted to quickly go back to the cheating. Another thing we'll probably need to do in the short term is introduce more pop quizzes and surprise tests, and they don't need to count towards grades, but to see where students are as we are in this kind of new territory.
[译文] [Sinead Bovell]: 是的,我认为这至关重要。我想快速回到作弊的话题。在短期内我们可能需要做的另一件事是引入更多的突击测验和惊喜测试,它们不需要计入成绩,而是为了在这一新领域中看清学生的真实水平。
[原文] [Sinead Bovell]: So a lot of times we don't know how much they're using AI, how much they're cheating with it, so we need to insert more chances for assessment and be tracking that data. Because that's another thing. We don't have a lot of visibility into how this experiment is going in terms of what AI can do and what AI can't do, and therefore, what should we be teaching kids in particular and what skills should we be fostering?
[译文] [Sinead Bovell]: 因为很多时候我们不知道他们使用了多少AI,利用它作弊了多少,所以要插入更多、更多的评估机会并追踪这些数据。因为那是另一回事。对于这个实验——即关于AI能做什么和不能做什么,以及因此我们应该特别教给孩子什么和培养什么技能——我们并没有太多的可见性。
[原文] [Sinead Bovell]: My philosophy is we should never assume AI will never be able to do something. And the reality is we cannot predict the future, what jobs will be there, how advanced AI is going to get, and how quickly.
[译文] [Sinead Bovell]: 我的哲学是,我们绝不应该假设“AI永远做不到某件事”。现实是我们无法预测未来,会有什么工作,AI会变得多先进,以及发展得有多快。
[原文] [Sinead Bovell]: That means we have to prepare kids for absolutely anything, whichever way the future evolves. However quickly we get to the moon or start genetic engineering, kids can pivot, adapt, and think critically about the world around them. And most of those skills don't actually have anything to do with technology.
[译文] [Sinead Bovell]: 这意味着我们必须让孩子们为“任何事情”做好准备,无论未来如何演变。无论我们要多久才能到达月球,或开始基因工程。孩子们都能转型、适应,并批判性地思考周围的世界。而这些技能中的大多数实际上与技术无关。
[原文] [Sinead Bovell]: They require deeper thinking. Critical thinking is absolutely vital in the age of advanced technologies. Kids need to read more, read for the sake of reading, and read in a way that they can come back to school or with their parents and discuss the ideas and have those ideas challenged.
[译文] [Sinead Bovell]: 它们需要更深层的思考。在先进技术时代,批判性思维绝对是至关重要的。孩子们需要多读书,为了阅读而阅读,并且以一种能回到学校或与父母讨论这些观点、并让这些观点受到挑战的方式去阅读。
[原文] [Sinead Bovell]: Kids need to play more in the age of advanced technologies. The future Steve Jobs of the world, they're not going to come from a corporate cubicle. They're going to come from people that have imagination, that can play freely, experiment and work collaboratively.
[译文] [Sinead Bovell]: 在先进技术时代,孩子们需要更多地玩耍。未来的史蒂夫·乔布斯们不会来自企业的格子间。他们将来自那些拥有想象力、能自由玩耍、实验并协同工作的人。
[原文] [Sinead Bovell]: Long-term thinking. So getting kids to think beyond the immediate horizon and beyond just this unit test in chemistry or in math. But how could this impact things in 5 to 10 years to come?
[译文] [Sinead Bovell]: 长期思维。让孩子们思考超越眼前的地平线,超越仅仅是化学或数学的单元测试。而是思考这在未来5到10年内会产生什么影响?
[原文] [Sinead Bovell]: And even cross-disciplinary thinking. Kids in school today are likely to hold 17 jobs across five different industries. They won't be doing just one thing. So we have to get them to think, how does math connect to what I just learned in history, which may connect to what I do in philosophy or in English. So all of these, and they're not even new skills, it's just about centering these types of skills.
[译文] [Sinead Bovell]: 甚至跨学科思维。今天的在校学生可能在五个不同的行业从事17份工作。他们不会只做一件事。所以我们必须让他们思考,数学如何与我刚才在历史中学到的东西联系起来,而这又可能与我在哲学或英语中所做的有什么联系。所有这些新的——其实它们甚至不是新技能,只是我们需要将这些类型的技能置于中心位置。
[原文] [Sinead Bovell]: Most of the most important skills for the future are ones we can foster for free. And that's what I think we can sometimes miss in these moments where we feel like we have to lean into technology for the sake of it, but it's actually the other skills that we need to make sure we are doubling down on in the age of advanced technologies.
[译文] [Sinead Bovell]: 大多数重要的、未来最重要的技能都是我们可以免费培养的。我认为这正是我们在这种时刻有时会忽略的,我们感觉必须为了技术而依赖技术,但实际上在先进技术时代,我们需要确保加倍投入的反而是其他技能。
[原文] [Sinead Bovell]: And one kind of lived example that I, that I do with my nieces and nephews constantly since the age of about 6 or 7, I theoretically introduce them to technology, and this is what can happen in the classroom as well.
[译文] [Sinead Bovell]: 我有一个经常在我的侄女和侄子身上实践的生动例子,大约从他们6、7岁开始,我在理论上向他们介绍技术,这也是可以在课堂上发生的事情。
[原文] [Sinead Bovell]: I explain concepts in age appropriate ways, like genetic engineering, and I ask them to interpret what that would mean for their world and their sense of ethics. So if we could theoretically make sure nobody gets sick in the world with these technologies, should we do that?
[译文] [Sinead Bovell]: 我用适合年龄的方式解释概念,比如基因工程,然后我让他们解释这对他们的世界和他们的道德观意味着什么。所以,如果在理论上我们可以利用这些技术确保世界上没有人会生病,我们应该那样做吗?
[原文] [Sinead Bovell]: But what if, to my nephew, it meant all that basketball practice you do, somebody didn't have to do, because that same technology allows them to suddenly be really good at basketball. How should we think about that?
[译文] [Sinead Bovell]: 但是,如果对我的侄子来说,这意味着你所做的所有篮球训练,别人都不需要做,因为同样的技术可以让他们突然变得非常擅长篮球,那该怎么办?我们该如何思考这个问题?
[原文] [Sinead Bovell]: They engage in the higher order thinking, they're exposed to the longer term concepts of technology, without actually having to play around passively on an iPad. So these are the types of deep conversations and higher order thinking that can happen in the classroom, that teachers are uniquely positioned to deliver and to facilitate.
[译文] [Sinead Bovell]: 他们参与了高阶思维,接触了技术的长期概念,而不需要实际上被动地在iPad上玩耍。所以,这些是可以在课堂上发生的深度对话和高阶思维,而教师拥有独特的地位来传达和促进这些。
[原文] [Sinead Bovell]: I mean, when you think about a teacher and they don't get enough credit for all of the things that they do, I mean, the curriculum is one small part of it. They are social workers, they are therapists. They know their children inside and out. So being able to go deep into these types of conversations, that's what we also need to be focusing on.
[译文] [Sinead Bovell]: 我的意思是,当你想到教师时,他们所做的一切并没有得到足够的赞誉,我是说,课程只是其中很小的一部分。他们是社会工作者,是治疗师。他们彻头彻尾地了解他们的孩子。所以能够深入进行这类对话,也是我们需要关注的。
[原文] [Sinead Bovell]: And I know it sometimes feels counterintuitive, because I'm a futurist and I spend most of my days in patents and technologies, talking about robots, uploads, brain interfaces. Yet the most important skills for the future have nothing to do with technology.
[译文] [Sinead Bovell]: 我知道这有时感觉违反直觉,因为我是一名未来学家,我大部分时间都在研究专利和技术,谈论机器人、上传大脑接口。然而,未来最重要的技能与技术无关。
[原文] [Sinead Bovell]: And I want to go back to something that you said. So technology for the sake of technology is absolutely not the right way to go about things. But learning for the sake of learning is.
[译文] [Sinead Bovell]: 我想回到你刚才说的一点。为了技术而技术绝对不是正确的做事方式。但为了学习而学习是。
[原文] [Sinead Bovell]: And there are some really interesting insights this week about how schools and test-based learning do not set students up to enjoy or take pride in just the act of learning itself, and, you know, students are sort of encouraged and optimized to find the answer, get the answer right.
[译文] [Sinead Bovell]: 这周有一些非常有趣的见解,关于学校和基于考试的学习并没有让学生准备好去享受或从纯粹的学习行为中感到自豪,你知道,他们被鼓励和优化去寻找答案,去得到正确答案。
[原文] [Sinead Bovell]: And then even in a critical thinking class that cognitive scientist Christine Lagarde talked about yesterday, even in that class where there is no right answer, what the students wanted was the rubric to get there.
[译文] [Sinead Bovell]: 甚至在昨天认知科学家Christine Lagarde谈到的批判性思维课上,即使在那堂没有正确答案的课上。学生们想要的依然是到达那里的评分标准(rubric)。
[原文] [Sinead Bovell]: And what she said to them is like, well, you know, when you have a job in the real world, do you think you're going to be asked to discover the answer, or where we should go, or the right path, I should say, by being given the rubric?
[译文] [Sinead Bovell]: 而她对他们说的是,嗯,你知道,你认为当你在现实世界中拥有一份工作时,你觉得会有办法——比如,你不会被要求去发现答案,或者我们要去哪里,或者正确的路径——是通过给你一个评分标准来完成的吗?
[原文] [Sinead Bovell]: So it sort of seemed like we're at this kind of acute moment where the way students are taught, and what they're taught to optimize for, is very at odds with where we're at right now with AI and the fact that it is designed to give you the answer.
[译文] [Sinead Bovell]: 所以似乎我们正处于一个非常尖锐的时刻,学生被教导的方式以及他们被教导去优化的目标,与我们要面对的现状——即AI正是被设计来直接给你答案的这一事实——非常矛盾。
[原文] [Sinead Bovell]: And I actually talked to a teacher who teaches 16 to 18 year olds, and she was like, okay, so what I do to try and kind of circumvent the use of AI in writing is I have my students write in class, and I do give them a bit of a rubric, like, this is, you know, a good structure for an essay.
[译文] [Sinead Bovell]: 我实际上和一位教16到18岁学生的老师聊过,她说,好吧,所以我为了试图规避在写作中使用AI,我就让学生在课堂上写作,我也确实给他们一点评分标准,比如这就是一篇好文章的结构。
[原文] [Sinead Bovell]: And then when it comes time to actually submit the essay, they go home and type it up, in some cases more. In a few cases, a student had kind of ripped up their essay and basically just completely generated a new one in ChatGPT.
[译文] [Sinead Bovell]: 然后当到了真正提交文章的时候,他们回家把它打出来——在某些情况下甚至更多。在少数情况下,那个学生某种程度上撕毁了他们的文章,基本上只是在ChatGPT里完全生成了一篇新的。
[原文] [Sinead Bovell]: And what that said to me was, I mean, there was no time saved or cognitive load saved in doing that. What that says to me is that we're in a confidence crisis.
[译文] [Sinead Bovell]: 这对我来说意味着——我的意思是,那样做并没有节省时间或节省认知负荷。这告诉我的是,我们正处于一种“信心危机(confidence crisis)”。
[原文] [Sinead Bovell]: Yeah. And this is potentially really detrimental to society more broadly, not just kids, but all of us, that we become so reliant on these technologies. We stop believing in our own ability to make decisions.
[译文] [Sinead Bovell]: 是的。这对更广泛的社会具有潜在的真正危害,不仅是孩子,而是我们所有人,我们变得如此依赖这些技术。我们不再相信自己做决定的能力。
[原文] [Sinead Bovell]: And no matter how good technology gets at something, there will be times when we have to deviate from the technology's advice, and we have to make sure we are ready for all of those moments.
[译文] [Sinead Bovell]: 无论技术在某方面变得多么优秀,总会有我们需要偏离技术建议的时候,我们必须确保我们为所有这些时刻做好了准备。
[原文] [Sinead Bovell]: And you might even hear people talk about optimizing every aspect of your life with artificial intelligence. And I somewhat take issue with that, because if writing that email is the one time in the day where you think deeply, you move through your ideas, you have to structure what you want to say, and you pass that to an AI, unless you are replacing that time and that thinking with something else, that's a dicey bridge to be walking down.
[译文] [Sinead Bovell]: 你甚至可能听到人们谈论用人工智能优化你生活的方方面面。我对此有些异议,因为如果写那封邮件是你一天中唯一深入思考、梳理思路、必须构建你想说内容的时候,而你把它交给了AI——除非你用其他东西替代了那段时间和那种思考——否则那是一座危险的独木桥。
[原文] [Sinead Bovell]: And there was a recent study, I believe it was Microsoft and Carnegie Mellon that joined forces for this study. And it did show that overreliance on artificial intelligence can reduce our ability to think critically. So we need to make sure we are strengthening these skills as we start to move and work alongside artificial intelligence.
[译文] [Sinead Bovell]: 最近有一项研究,我相信是微软和卡内基梅隆大学联合进行的。它确实表明,过度依赖人工智能会降低我们批判性思考的能力。所以我们需要确保在开始与人工智能并肩工作时,加强这些技能。
[原文] [Sinead Bovell]: And there was another study that was really helpful that showed this in real life, in the workforce, where entrepreneurs were given access to AI systems to help with their small businesses. For the high performing entrepreneurs that had deep critical thinking skills, AI supercharged their performance, because they knew the right questions to ask of the AI, and they knew how to apply the AI's answer to their business.
[译文] [Sinead Bovell]: 还有另一项非常有用的研究,在现实生活中的劳动力市场上展示了这一点,企业家获准使用AI系统来帮助他们的小企业。那些拥有深厚批判性思维技能的高绩效企业家,AI使他们的表现如虎添翼,因为他们知道该向AI问什么正确的问题,并且他们知道如何将AI的答案应用到他们的业务中。
[原文] [Sinead Bovell]: When the low performing entrepreneurs asked AI questions, they ended up doing worse, and it hurt the company, because they asked the wrong questions. They just gave up and handed the hard questions to the AI, and they didn't know how to apply the material to their actual startup.
[译文] [Sinead Bovell]: 而当低绩效企业家向AI提问时,他们最终表现得更差,甚至损害了公司,因为他们问了错误的问题。他们只是放弃思考,直接把难题丢给AI,而且他们不知道如何将材料应用到他们的实际创业中。
[原文] [Sinead Bovell]: So we don't want to build societies where we are 100% reliant on these systems, and that's something that we have to really think carefully about, at an adult age and at a child age.
[译文] [Sinead Bovell]: 所以我们不想建立一个百分之百依赖这些系统的社会,这是我们必须从成人和儿童的角度认真思考的问题。
[原文] [Sinead Bovell]: And I think we're already seeing it in terms of our attention spans, spelling. I'm sure there's a lot of people in this room, myself included, who feel like, I spelled that word last week, and now I have no idea how to spell it this week. We want to make sure we're not short-circuiting the thinking in this age. So again, really centering deep problem solving, critical thinking and deep learning.
[译文] [Sinead Bovell]: 我想我们在注意力持续时间和拼写方面已经看到了这一点。我相信这个房间里有很多人,包括我自己,都觉得“我上周还会拼那个词,这周我就完全不知道怎么拼了”。我们要确保在这个时代我们没有让思考过程“短路”。所以再次强调,真正以深度解决问题、批判性思维和深度学习为中心。
[原文] [Natalie Montbleau]: Yeah, there have been a number of studies, like the Carnegie Mellon and Microsoft one, that show when you outsource your cognitive work to an AI, you actually become cognitively weaker. And that seems extremely critical at an age, you know, in a period in time where, you know, students are supposed to be honing their cognitive abilities.
[译文] [Natalie Montbleau]: 是的,像卡内基梅隆和微软的这类研究有很多,它们表明当你把认知工作外包给AI时,你的认知能力实际上会变弱。这在一个学生本应磨练其认知能力的年龄段和时期显得极其关键。
[原文] [Natalie Montbleau]: But then it's like, well, how do you know? How can you engage with that, you know, in a way to actually benefit from it? And knowing that if you outsource the cognitive load and you're not doing the cognitive work yourself, not only are you missing that moment, but you're missing the insights kind of living within you and settling within you and becoming who you are, and increasing your body of knowledge and your resilience and your strength and your expertise.
[译文] [Natalie Montbleau]: 但这就好像,嗯,你怎么知道?你如何与之互动?我是说,以一种真正从中受益的方式。而且你要知道,如果你外包了认知负荷,你自己不做认知工作,你不仅错过了那个时刻,你也错过了让那些洞察力在你内心生存、沉淀,并成为你的一部分,以及增加你的知识体系、韧性、力量和专业知识的机会。
[原文] [Natalie Montbleau]: And it seems like in this day and age where it's so uncertain what jobs will look like, what the future will look like, kind of radical self-dependence is something that we should be teaching. And it would be great to kind of hear a little bit about where we think that kind of responsibility lies in that respect.
[译文] [Natalie Montbleau]: 看起来在这个时代,工作会是什么样、未来会是什么样都充满了不确定性,一种“激进的自立(radical self-dependence)”应该是我们要教的东西。如果能听听我们认为这方面的责任在于哪里,那就太好了。
[原文] [Sinead Bovell]: Yeah. And when I think about responsibility, I always hesitate to bring in parents, because everybody is coming from a different place. And we can't really control what happens in the home. That's an entire other week of South By, making sure that all homes are equal and have access to the same things.
[译文] [Sinead Bovell]: 是的。在谈到责任时,我总是犹豫是否要将家长卷入其中,因为每个人的背景都不同。我们无法真正控制家庭中发生的事情。那是另一周西南偏南大会要讨论的话题——确保所有家庭都是平等的并能获得相同的东西。
[原文] [Sinead Bovell]: But in school, I think we really need to think about building confidence as a skill for kids so they can continue to trust the questions that they're asking and their own ability to generate answers.
[译文] [Sinead Bovell]: 但在学校里,我认为我们真的需要考虑将“建立信心”作为一种技能教给孩子,这样他们才能继续相信他们提出的问题以及他们自己生成答案的能力。
[原文] [Sinead Bovell]: And again, it doesn't mean, of course, in a world where AI is a master of quantum computing, we want kids to be able to ask questions, but we help them think more deeply about the questions that they're answering, and they have a broad understanding of the questions that they're asking. They have a broad understanding of the answers that AI can give them.
[译文] [Sinead Bovell]: 再说一次,这并不意味着——当然,在一个AI精通量子计算的世界里,我们希望孩子们能够提问——但我们要帮助他们更深入地思考他们正在回答的问题,并且对他们正在问的问题有广泛的理解。他们对AI能给他们的答案也有广泛的理解。
[原文] [Sinead Bovell]: And again, that is a fundamentally different society, right? Where we go from what is the answer to what is the question? And that's why that is part of that bigger system-wide redesign. But I think centering confidence, encouraging kids to speak in front of classmates, engage in conversation, because that is also the interface of the future. Conversing with these AI systems is absolutely critical.
[译文] [Sinead Bovell]: 再说一次,这是一个根本不同的社会,对吧?我们从“答案是什么”转变为“问题是什么”。这就是为什么这是那个更大的系统性重新设计的一部分。但我认为要把信心放在中心位置,鼓励孩子们在班级、同学面前发言,参与对话,因为这也是未来的界面。与这些AI系统对话绝对是关键。
[原文] [Sinead Bovell]: And then, in terms of what you asked, what are the jobs of the future? Nobody can really predict them. We can predict the jobs that are going to be automated. That's much easier to see. But in the same way, nobody 20 years ago could have predicted a social media manager was going to be vital to a company's existence.
[译文] [Sinead Bovell]: 至于你问到的,未来的工作是什么?没人能真正预测。我们可以预测哪些工作会被自动化。这更容易看出来。但同样地,20年前没人能预测社交媒体经理会对一家公司的生存至关重要。
[原文] [Sinead Bovell]: Most of the jobs we can't really see, we know that there's going to be some convergence of synthetic biology and artificial intelligence in space. But again, it's about preparing kids for anything, not just trying to. I think we need to move away from preparing kids for jobs, because jobs are going to change and that much we can guarantee.
[译文] [Sinead Bovell]: 大多数工作我们还看不到,我们知道在太空中会有合成生物学和人工智能的某种融合。但这还是关于让孩子们为“任何事情”做好准备,而不仅仅是试图……我认为我们需要摆脱“为工作而让孩子做准备”的思路,因为工作将会改变,这一点我们可以保证。
[原文] [Sinead Bovell]: And that also means moving away from coupling identity to jobs. We have to move away from that entire philosophy. Right? That that idea that we learn, we work, we retire. That's all changing.
[译文] [Sinead Bovell]: 这也意味着我们要摆脱将身份与工作挂钩的做法。我们必须摆脱这一整套哲学。对吧?那种我们学习、工作、然后退休的观念。一切都在改变。
[原文] [Sinead Bovell]: So instead, we encourage kids to lean into the problems that they want to solve, the skills that they want to to adopt, and the amazing ways that they want to change the world. I mean, tell kids about the robots and the AI systems that they'll be living with and ask them what they want to do with it versus coupling identity to jobs, because that is just going to end up in a crisis.
[译文] [Sinead Bovell]: 所以取而代之的是,我们鼓励孩子们专注于他们想要解决的问题,他们想要掌握的技能,以及他们想要改变世界的奇妙方式。我的意思是,告诉孩子们关于他们将与之共存的机器人和AI系统,并问他们想用它做什么,而不是将身份与工作挂钩,因为那样只会以危机告终。
[原文] [Natalie Montbleau]: And so some of the skills that we can teach children to kind of prepare for this new future. Um, people sort of use the term like metacognition, right? So how to think?
[译文] [Natalie Montbleau]: 那么我们可以教给孩子们一些技能,以便为这个新未来做准备。人们使用像“元认知(metacognition)”这样的术语,对吧?也就是“如何思考”?
[原文] [Natalie Montbleau]: And it was interesting in a talk yesterday. So, you know, one educator was saying, you know, well, you can't necessarily stop students from using ChatGPT. But something that he does is like, okay, you used it. Show me your prompts. Show me the questions that you asked it.
[译文] [Natalie Montbleau]: 昨天的一个讲座很有趣。你知道,一位教育工作者说,嗯,你不一定能阻止学生使用ChatGPT。但他做的是——好吧,你用了它。给我看你的提示词(prompts)。给我看你问它的问题。
[原文] [Natalie Montbleau]: Like, show me how you pushed ChatGPT, because if you can ask good questions, and if you can become a good communicator, and you kind of know where you want the answer to go and you can prompt in that direction, then that's a skill. That's a skill for today and for the future.
[译文] [Natalie Montbleau]: 比如,给我展示你是如何推动ChatGPT的,因为如果你能提出好问题,如果你能成为一个好的沟通者,而且你不一定……你知道你希望答案走向何方并且你可以朝那个方向提示,那么,这就是一种技能。这是为了今天,也是为了未来的技能。
[原文] [Natalie Montbleau]: Um, another skill that sort of came up as kind of an experimental skill. The New York Times recently covered a story using this term, vibe engineer, which is this idea that kind of almost anyone with the will and the passion to do it. And that's something that I think we need to double down on, on encouraging in every individual.
[译文] [Natalie Montbleau]: 嗯,另一种作为实验性技能出现的技能。纽约时报最近报道了一个故事,使用了“氛围工程师(vibe engineer)”这个术语,这个想法是,几乎任何有意志和激情去做这件事的人都能做到。我认为这是我们需要加倍鼓励每个人去做的。
[原文] [Natalie Montbleau]: But anybody with the will and the desire to create an app can basically do that now. And it's this thing called vibe engineering. So a lot of people are kind of creating apps for themselves, or apps for just a few people. And so one of the kind of emerging skills that was discussed was around human-centered design. So if anybody can design products for others, how do we get into, well, what would be good for others? And so that felt like another territory that felt rich.
[译文] [Natalie Montbleau]: 现在基本上任何有创造App的意志和愿望的人都能做到。这就是所谓的“氛围工程”。所以很多人为自己或者仅为少数人创建App。因此,被讨论的新兴技能之一是关于“以人为本的设计(human centered design)”。如果任何人都可以为他人设计产品,比如我们如何切入“什么对他人有益”?所以这感觉是另一个内容丰富的领域。
[原文] [Sinead Bovell]: Yeah, yeah, I think centering the human experience in an age of advanced technologies is an investment that we should definitely be doubling down on. And yeah. And again, that does mean introducing kids to these ideas, to these technologies. But then bringing it back to the human, to just kind of the core fundamentals.
[译文] [Sinead Bovell]: 是的,是的,我认为在先进技术时代将人类体验置于中心,是我们绝对应该加倍投入的投资。是的。再次强调,这确实意味着向孩子们介绍这些理念和技术。但随后要将其带回到人本身,回到那些核心的基础。
[原文] [Sinead Bovell]: I mean, I think history, ethics, philosophy, these are subjects that become more important the more advanced and technical our societies get. And like you mentioned earlier, you know, the computer scientists learning today are going to be the future tech tycoons of tomorrow. And so what can we be teaching them to create more ethical AI?
[译文] [Sinead Bovell]: 我的意思是,我认为历史、伦理学、哲学,随着我们的社会变得越先进和技术化,这些学科就变得越重要。正如你早些时候提到的,今天正在学习的计算机科学家将成为明天的科技大亨。所以我们可以教他们什么来创造更合乎伦理的AI?
[原文] [Sinead Bovell]: And, you know, exponential technologies that are good for people, that are designed in a way that is good for society. And so I think that's a really hopeful message, that we are in that moment now where, with that next generation of builders, we have that opportunity to kind of coach them and help them ask the right questions and design for the good of society.
[译文] [Sinead Bovell]: 以及,你知道,那些对人类有益的、以对社会有益的方式设计的指数级技术。所以我认为这是一个充满希望的信息,我们正处于这样一个时刻,对于下一代建设者,我们有机会去指导他们,帮助他们提出正确的问题,并为社会利益而设计。
[原文] [Natalie Montbleau]: Yeah, I don't think I could have said it better myself. Yeah. Um, so actually a bit of a segue into the question of just ethics in this space more broadly. Um, and, and actually, maybe before we kind of like dive into some, some of those areas, uh, what do we think of kind of the role of the educator in, in all of this?
[译文] [Natalie Montbleau]: 是的,我想我说得再好不过了。是的。嗯,所以这实际上有点过渡到更广泛的这个领域的伦理问题了。嗯,实际上,也许在我们深入探讨其中一些领域之前,我们如何看待教育者在这一切中的角色?
[原文] [Natalie Montbleau]: And how does that shift? So let's say, in, you know, a great situation where you've got an AI tutor, you know, the entire reimagined approach that you mentioned, where you have an AI tutor that's giving you adaptive learning, personalized and all of that. Um, what is the role of the educator in all of that?
[译文] [Natalie Montbleau]: 这种角色如何转变?比方说,在一个很好的情况下,你有AI导师,那个你提到的完全重新构想的方法,你有AI导师给你提供适应性学习、个性化等等。嗯,教育者在这一切中的角色是什么?
[原文] [Sinead Bovell]: I think that that's going to evolve, the more these pilots and the more the studies come through, the different positioning that the educator takes. So whether that's deep expertise in some areas, which will be vital, whether that's facilitating the right questions to ask, the right way to think about material, and the right way to think about learning.
[译文] [Sinead Bovell]: 我认为那将会演变。随着更多的试点项目和研究出现,教育者将采取不同的定位。无论是在某些领域的深厚专业知识——这将是至关重要的——还是促进提出正确的问题、思考材料的正确方式以及思考学习的正确方式。
[原文] [Sinead Bovell]: I think the role of the educator stays deeply coupled with kids understanding and knowing how to learn. And that is what education was supposed to be for: learning. And so I think it goes back to that. We've redesigned education to prepare people for work, and I think we need to move towards preparing people for life.
[译文] [Sinead Bovell]: 我认为教育者的角色仍然与孩子们理解和知道“如何学习”紧密相连。这正是教育本应为学习而存在的意义。所以我认为这回到了我们现在的状态。我们已经重新设计了教育来让人为工作做准备。而我认为我们需要转向让人为生活做准备。
[原文] [Sinead Bovell]: But the educators still stay central to that process. I mean, I don't think many people would want to send their kids to a school with 95 robots and no people. I don't think that's the future that we're all aiming for. Right?
[译文] [Sinead Bovell]: 但教育者在这一过程中仍然处于核心地位。我的意思是,我想没多少人愿意把孩子送到一所有95个机器人却没人的学校。我不认为那是我们要追求的未来。对吧?
[原文] [Natalie Montbleau]: So I guess in some of these kind of very innovative models like Alpha School, where it's two hours of intensive, personalized learning with an AI tutor. The rest of the day is all about human connection with teachers and instructors and guides that help kind of uncover the passion of that student and help to nurture it and help them to have the confidence to kind of, uh, deliver on that. And so in a couple of minutes, we will be taking some questions.
[译文] [Natalie Montbleau]: 所以我想在像Alpha School这样的一些非常创新的模式中,那是两小时与AI导师进行的密集、个性化学习。剩下的一整天都是关于与老师、指导员和向导的人际连接,他们帮助发掘学生的激情,帮助培育它,并帮助他们建立信心去实现它。那么再过几分钟,我们将回答一些问题。
📝 本节摘要:
本节聚焦于AI进入校园后的伦理挑战。Sinead列举了三大隐患:首先是数据隐私,即在未获充分知情同意的情况下,AI可能过度解读儿童的情绪数据;其次是算法偏见,她引用了一项研究,显示AI会仅仅因为用户使用“非裔美国人英语(African American English)”而对其做出负面的职业预测;最后是情感成瘾,她警告称,儿童可能会与“永远在线”的聊天机器人建立不健康的依恋关系,如果不加以干预,这可能演变成一场类似社交媒体成瘾的新危机。
[原文] [Natalie Montbleau]: Um, but with that I wanted to touch on, you know, the ethics of this space a bit more.
[译文] [Natalie Montbleau]: 嗯,既然如此,我想稍微多谈谈这个领域的伦理问题。
[原文] [Sinead Bovell]: Yeah. And this is something we have to really think carefully about artificial intelligence, data and children. That's already, um, a deeply questionable intersection.
[译文] [Sinead Bovell]: 是的。这是我们在涉及人工智能、数据和儿童时必须真正仔细思考的事情。这本身已经是一个非常值得质疑的交叉领域了。
[原文] [Sinead Bovell]: And ethics appears in a few ways. So the first is, what data are these AI systems going to be collecting when it comes to children. Are parents aware and did they give consent? Or are we just kind of rushing AI tools into class?
[译文] [Sinead Bovell]: 伦理问题以几种方式出现。首先是,当涉及到儿童时,这些AI系统将收集什么数据?父母是否知情并给予了同意?还是说我们只是在匆忙地将AI工具推入课堂?
[原文] [Sinead Bovell]: And what can be interpreted from the data that gets collected on children? So we want to know where their stamina is on math. We don't want to interpret other emotional cues unless we have figured out how to do that safely with parent consent. So that's one area that I think we need to to really understand.
[译文] [Sinead Bovell]: 从收集到的儿童数据中可以解读出什么?比如我们想知道他们在数学上的耐力如何。但我们不想解读其他情感线索,除非我们已经弄清楚如何在获得父母同意的情况下安全地做到这一点。所以我认为这是我们需要真正理解的一个领域。
[原文] [Sinead Bovell]: The second is the strange way bias shows up in AI systems. We often think about facial recognition and the cases where we know it more intimately. But there are unique ways that I can make predictions about you when you interact with it, and then change the level of advice that it gives you, or how well it performs for you based on what it knows about you.
[译文] [Sinead Bovell]: 第二个是偏见在AI系统中出现的奇怪方式。我们通常会想到面部识别以及那些我们比较熟悉的案例。但当你与AI互动时,它会以独特的方式对你进行预测,然后根据它对你的了解,改变它给你的建议水平,或者改变它为你服务的表现。
[原文] [Sinead Bovell]: So there is a study done, um, using most of the most famous AI systems. And it showed that when you ask the AI systems about African Americans, it gave all great positive reviews.
[译文] [Sinead Bovell]: 有一项研究使用了大多数最著名的AI系统。结果显示,当你直接询问AI系统关于非裔美国人的看法时,它给出的都是非常正面的评价。
[原文] [Sinead Bovell]: When you gave the AI system an example of text that had more traditional African American English in it, and asked the AI system questions about that user, the AI system would say, oh, this person is never going to go anywhere. I can't even imagine a job for them. They'll be in low wage jobs.
[译文] [Sinead Bovell]: 但当你给AI系统一段包含更多传统非裔美国人英语(African American English)的文本示例,并问AI系统关于该用户的问题时,AI系统会说,噢,这个人永远不会有出息。我甚至无法想象适合他们的工作。他们只能从事低薪工作。
[原文] [Sinead Bovell]: Picture this in education, the AI system detects somebody has this kind of ethnic background or is this gender, and then gives the teacher worse feedback on that student in terms of assessments, or gives the student worse advice in problem solving, because it has already made a prediction that that student is not going to go anywhere in life.
[译文] [Sinead Bovell]: 试想这发生在教育中,AI系统检测到某人有这种族裔背景或是这种性别,然后在评估方面给老师提供关于该学生的更差反馈,或者在解决问题时给学生提供更差的建议,因为它已经预测该学生在生活中不会有出息。
[原文] [Sinead Bovell]: So these are the more subtle ways we have to apply foresight to ethics, or ethics and foresight, in academia. And I'd say the final thing that we're going to have to watch out for.
[译文] [Sinead Bovell]: 所以,这些是我们在学术界必须将前瞻性应用于伦理,或者将伦理与前瞻性结合起来的更微妙的方式。我想说,我们要警惕的最后一件事是——
[原文] [Sinead Bovell]: And we saw this with social media after the fact: the relationships kids are going to build with these systems. We are now giving kids access to an infinite, never ending opportunity to engage with an imaginary friend, something that is always on and can answer all of their questions.
[译文] [Sinead Bovell]: 我们在社交媒体的事后反思中看到了这一点,那就是孩子们将与这些系统建立的关系。我们现在给孩子们提供了一个无限的、永无止境的机会,去与一个假想朋友互动,这是一个“永远在线”的东西。它能回答他们的所有问题。
[原文] [Sinead Bovell]: That is a recipe for a new type of addiction, and we have to really be looking out for this. We kind of missed the boat on smartphones, and now we're all trying to get them back out of the classrooms. We can see this line of sight directly with AI systems and chatbots.
[译文] [Sinead Bovell]: 这是滋生新型成瘾的温床,我们必须真正警惕这一点。我们在智能手机上错失了良机(missed the boat),现在我们都在努力把它们赶出教室。而在AI系统和聊天机器人上,我们可以直接看到这条清晰的发展轨迹。
[原文] [Sinead Bovell]: And this isn't, of course, all on educators. This has to come down to, you know, tech companies and how we design these systems, age-gating them. But something to look out for is this kind of new addiction that might form between kids and chatbots.
[译文] [Sinead Bovell]: 当然,这不能全压在教育者身上。这也必须落到科技公司头上,关乎我们如何设计这些系统,比如设置年龄门槛(age gating)。但我们需要警惕的是这种可能在孩子和聊天机器人之间形成的新型成瘾。
[原文] [Sinead Bovell]: And that is not going to end up well, and we have to do our best to bring parents on board with that. So even if that's at parent-teacher interviews, just casually saying, look out for the amount of time your kid spends chatting with a chatbot, I noticed they were a little bit more disengaged in class. That could be where.
[译文] [Sinead Bovell]: 那不会有好结果,我们要尽最大努力让家长参与进来。所以,即使是在家长会上,只是随意地说一句:“留意一下你的孩子花在和聊天机器人聊天上的时间,我注意到他们在课堂上有点心不在焉。”那可能就是问题的源头。
[原文] [Sinead Bovell]: So this is another area that we have to apply foresight to, but we can see that line of sight happening quite clearly if we don't intervene.
[译文] [Sinead Bovell]: 所以这是另一个我们必须运用前瞻性的领域,但如果我们不干预,我们可以非常清楚地看到这种趋势正在发生。
[原文] [Natalie Montbleau]: Yeah. In a similar way that we've been talking about, you know, parents and learners having that visibility into their own data and kind of their performance and how engaged they are with their work. Should there be a case where everybody has that visibility into the relationships with these chatbots? Where do you think that line can be drawn?
[译文] [Natalie Montbleau]: 是的。就像我们之前谈论的,家长和学习者可以查看自己的数据、表现以及他们对工作的投入程度一样。是否应该让每个人都能看到与这些聊天机器人的关系数据?你认为这条界线应该划在哪里?
[原文] [Natalie Montbleau]: But I feel like if there is that visibility, then people can be a little bit more relaxed. But then is that.
[译文] [Natalie Montbleau]: 但我觉得如果有这种可见性,人们可能会稍微放松一点。但那样的话……
[原文] [Sinead Bovell]: I would say that question needs to be answered by a psychiatrist and a psychologist. That is why these are multidisciplinary conversations. We need to bring everybody to the table.
[译文] [Sinead Bovell]: 我想说这个问题需要由精神科医生和心理学家来回答。这就是为什么这些是跨学科的对话。我们需要把每个人都拉到谈判桌上来。
[原文] [Sinead Bovell]: An addiction or a relationship with a chatbot shouldn't be something that kids download in the App Store. So psychologists, psychiatrists, doctors, I welcome you to this conversation because we need your voice in it.
[译文] [Sinead Bovell]: 一种成瘾行为,或者与聊天机器人的关系,不应该是孩子们在应用商店里随手下载的东西。所以心理学家、精神科医生、医生,我欢迎你们加入这个对话,因为我们需要你们的声音。
[原文] [Sinead Bovell]: It can't just be happening out of Silicon Valley. It can't just be left to parents to deal with on their own. Everybody needs to come to the table. We saw what happened with social media. We don't have to do that social experiment again.
[译文] [Sinead Bovell]: 这不能仅仅发生在硅谷。也不能仅仅留给父母独自处理。每个人都需要坐到桌边来。我们看到了社交媒体带来的后果。我们没必要再做一次那样的社会实验。
📝 本节摘要:
在最后的问答环节,讨论触及了几个尖锐的现实问题:AI是否会加剧贫富差距导致的“数字鸿沟”?在线教育中的人类讲师是否会被AI替身取代?“提示工程(Prompt Engineering)”作为一项技能的利弊何在?针对关于美国教育部可能面临的变动,Sinead强调教育投资等同于国家安全投资,不可分割。在结语中,她再次向教师致敬,并留下了一个反直觉的结论:在这个充斥着量子计算和太空科技的未来,一个每天读四本书、热爱运动、在公园玩耍的孩子,可能比单纯精通iPad操作的孩子更具适应力和竞争力。
[原文] [Natalie Montbleau]: So I said, okay, we're going to take some questions here. This one's from Rob. How do you see AI increasing the digital divide, especially in underserved communities and developing nations? And how do we as leaders stop this cycle?
[译文] [Natalie Montbleau]: 好的,我说过我们要回答一些问题。这个问题来自Rob:您如何看待AI加剧数字鸿沟的问题,特别是在服务不足的社区和发展中国家?作为领导者,我们该如何阻止这种循环?
[原文] [Sinead Bovell]: And we can see that general purpose technologies build on each other. Right. So the communities that didn't get equal access to electricity are the communities that are struggling with the digital divide. And then there'll be an AI divide.
[译文] [Sinead Bovell]: 我们可以看到通用技术是相互构建的。对。那些没有获得平等电力接入的社区,正是那些正在与数字鸿沟作斗争的社区。接着就会出现“AI鸿沟”。
[原文] [Sinead Bovell]: That is why that first pillar that I discussed, AI as a hard skill, teaching kids how to use artificial intelligence, how to prompt it, how to use it safely is vital because that may be the only opportunity kids get access to these AI systems.
[译文] [Sinead Bovell]: 这就是为什么我讨论的第一支柱——将AI作为一项硬技能,教孩子如何使用人工智能,如何向它提示(prompt),如何安全地使用它——至关重要,因为这可能是孩子们接触这些AI系统的唯一机会。
[原文] [Sinead Bovell]: So that's why it's not about pushing AI out of schools. It's about being very careful in adjusting how kids learn with AI, but making sure we build AI as a hard skill is absolutely vital in schools and in education when it comes to the broader world.
[译文] [Sinead Bovell]: 所以这就是为什么不能把AI赶出学校。我们需要非常小心地调整孩子们如何利用AI学习,但在学校和教育中确保建立起AI这项硬技能,对于应对更广阔的世界来说绝对至关重要。
[原文] [Sinead Bovell]: This is a question that nation states are facing urgently: making sure there are things like sovereign AI, that every country gets access to computing power, and the opportunity to build the STEM skills within their population to adopt these technologies. That is a global conversation that's also happening against a very geopolitically uncertain backdrop.
[译文] [Sinead Bovell]: 这是一个民族国家正迫切面临的问题:要确保有像“主权AI(sovereign AI)”这样的东西,确保每个国家都能获得算力,并有机会在其人口中建立采用这些技术的STEM技能。这是一场正在进行的全球对话,而且是在一个地缘政治非常不确定的时期展开的。
[原文] [Natalie Montbleau]: Let's take another question. Um, we're aware that AI cannot replace in-person instructors, but will it? And should it replace the online asynchronous instructors in higher ed?
[译文] [Natalie Montbleau]: 我们再来看另一个问题。嗯,我们知道AI无法取代面对面授课的讲师,但它会吗?它是否应该取代高等教育中的在线异步课程讲师?
[原文] [Natalie Montbleau]: Um, my thought on that is I think when content is pre-recorded, maybe that's not the best use of a teacher's time to have sat in front of a camera and kind of read through all of that content themselves. Maybe that is a scenario where you can outsource that to an avatar or an AI in a different format that is proven to be more personalized and adaptive.
[译文] [Natalie Montbleau]: 嗯,我对此的想法是,我认为当内容是预录制的时候,让老师坐在摄像机前自己读完所有内容,也许并不是利用老师时间的最佳方式。也许这种场景下,你可以把它外包给某种不同格式的虚拟化身(avatar)或AI,这种方式已被证明更具个性化和适应性。
[原文] [Natalie Montbleau]: I think tackling this last one is interesting. What are the pros and cons of developing prompting skills when using an AI? It is becoming critical for a career. How will it impact social skills?
[译文] [Natalie Montbleau]: 我认为最后这个问题很有意思:使用AI时,培养提示词(prompt)技能的利弊是什么?它正变得对职业生涯至关重要。这将如何影响社交技能?
[原文] [Sinead Bovell]: So the pros are: the more you understand how to direct an artificial intelligence system, the better the response you'll get, and the better you'll understand how the AI processes that data. So that I think is very helpful. Another pro is teaching people how to process what is in their mind and formulate that into a question that can lead to some response.
[译文] [Sinead Bovell]: 优点是:你越了解如何指挥人工智能系统,你就能得到越好的回应,也能更好地利用AI处理数据的方式。所以我认为这很有帮助。另一个优点是教人们如何处理脑海中的想法,并将其构造成一个能引发某种回应的问题。
[原文] [Sinead Bovell]: The con I see is that we end up refining all of our ideas and knowledge and optimizing it for algorithms. So instead of algorithms being optimized for us, we will become optimization engines for algorithms. And I don't think that's the world that we want to get into.
[译文] [Sinead Bovell]: 我看到的缺点是:我们最终会为了算法而精炼我们所有的想法和知识并进行优化。所以,我们将变成算法的“优化引擎”。我不认为那是我们要进入的世界。
[原文] [Sinead Bovell]: We want AI to be optimized for us. And so I think that would be the con. I think this is going to be only a temporary challenge, as we're seeing the nature and the science of prompting is continuously evolving, and eventually it will become much more conversational.
[译文] [Sinead Bovell]: 我们希望AI为我们优化。所以我认为那是弊端。我认为这只是一个暂时的挑战,因为我们看到提示词的本质和科学正在不断演变,最终它会变得更加对话式。
[原文] [Natalie Montbleau]: I know. So there's a question that's received the most likes and I wonder why. What occurs when the US Department of Education is demolished, and how do we move forward to make sure all states receive equal AI education?
[译文] [Natalie Montbleau]: 我知道。有一个问题获得了最多的点赞,我想知道为什么。当美国教育部被拆除(demolished)时会发生什么?我们该如何前进以确保所有州都能获得平等的AI教育?
[原文] [Sinead Bovell]: And I think this goes back to the first question. Investing in children's future is an investment in national interest. They are fundamentally coupled. So if you want to talk economic strength, economic security and national security, you are inherently talking about the success of the next generation.
[译文] [Sinead Bovell]: 我认为这回到了第一个问题。投资儿童的未来就是投资国家利益。它们在根本上是挂钩的。所以如果你想谈论经济实力、经济安全和国家安全,你本质上就是在谈论下一代的成功。
[原文] [Sinead Bovell]: So I am not involved in how this is being dismantled, but I really hope we are prioritizing and centering children and their ability to self-actualize and reach the maximum capabilities that they can in the decisions that are made, because that is going to be deeply coupled with the longevity and state continuity.
[译文] [Sinead Bovell]: 我没有参与这一拆除过程,但我真的希望我们在做出的决策中,能优先考虑并以儿童为中心,关注他们自我实现的能力以及达到他们最大潜能的能力,因为这将与国家的长治久安紧密相连。
[原文] [Sinead Bovell]: So they can't be they can't be decoupled. And that's why I say education is a national security issue. They need to be in the same room.
[译文] [Sinead Bovell]: 所以它们不能……它们不能被脱钩。这就是为什么我说教育是一个国家安全问题。它们需要在同一个房间里被讨论。
[原文] [Natalie Montbleau]: I did want to leave just a couple of minutes for Sinead to share just some final rounding thoughts, on this last day of South by Southwest EDU, on AI and the future of education.
[译文] [Natalie Montbleau]: 我确实想留几分钟给Sinead,在西南偏南教育大会(SXSW EDU)的最后一天,分享一些关于AI与教育未来的总结性想法。
[原文] [Sinead Bovell]: Um, well, first of all, just a major shout out to teachers, because this is an incredibly complex time and they are dealing with the most prized asset on the planet, which is children. So, I mean, I think they don't get enough credit for the moment that they're navigating.
[译文] [Sinead Bovell]: 嗯,首先,我要向老师们致以崇高的敬意,因为这是一个极其复杂的时期,而他们照料的是地球上最珍贵的资产,那就是孩子。所以我的意思是,我认为他们在应对这一时刻所付出的努力没有得到足够的赞誉。
[原文] [Sinead Bovell]: And I think something to remember: we're going to continue to hear about advanced artificial intelligence systems, quantum computing, space, all of these deeply technical advancements. But some of the most important skills have nothing to do with the technology.
[译文] [Sinead Bovell]: 我认为有一点要记住。我们将继续听到关于先进的人工智能系统、量子计算、太空等所有这些深度技术进步的消息。但一些最重要的技能与技术毫无关系。
[原文] [Sinead Bovell]: And even for parents, it's not being able to passively navigate an iPad at five that will dictate whether your child will do well in the future. If you said, you know, my child doesn't really like working on the iPad, but she's reading four books a day, she loves her sports teams, she wants to spend all her time at the park, I would say that child is going to thrive in the future.
[译文] [Sinead Bovell]: 甚至对于父母来说,五岁时会不会被动地摆弄iPad,并不能决定你的孩子未来是否会表现出色。如果你说,你知道吗,我的孩子不太喜欢玩iPad,但她每天读四本书,热爱她的运动队,还总想泡在公园里玩,我会说,那个孩子在未来将会茁壮成长。
[原文] [Sinead Bovell]: So even though there's a lot of pressure to adapt to this moment, remember it is the non-technical skills that we need to be centering because we are preparing kids for a future we cannot see, which means we have to prepare them for anything, regardless of the way technology evolves.
[译文] [Sinead Bovell]: 所以,尽管适应这一时刻的压力很大,请记住,我们需要以非技术技能为中心,因为我们在为孩子们准备一个我们无法预见的未来,这意味着无论技术如何演变,我们都必须让他们为“任何事情”做好准备。
[原文] [Natalie Montbleau]: And on that note, I think we will close. Thank you for being an absolutely fantastic audience.
[译文] [Natalie Montbleau]: 说到这里,我想我们就到此结束。谢谢大家,你们绝对是最棒的观众。