Is AI Hiding Its Full Power? With Geoffrey Hinton
### 章节 1:节目开场与“AI教父”登场 📝 **本节摘要**: > 本节是节目的精彩开场。前奏部分通过一小段高能剪辑,提前释放了“AI为了隐藏实力而刻意装傻”的细思极恐的观点,瞬间抓住听众注意力。随后,主持人 Neil deGrasse Tyson 携联合主持 Gary O'Reilly 与...
Category: Podcasts📝 本节摘要:
本节是节目的精彩开场。前奏部分通过一小段高能剪辑,提前释放了“AI为了隐藏实力而刻意装傻”的细思极恐的观点,瞬间抓住听众注意力。随后,主持人 Neil deGrasse Tyson 携联合主持 Gary O'Reilly 与 Chuck Nice 正式登场,点明本期核心探讨无可回避的“人工智能(AI)”话题。接着,Gary 隆重请出本期重磅嘉宾——兼具认知心理学家与计算机科学家双重身份的“AI教父” Geoffrey Hinton。Neil 提及近年来大型语言模型(LLM)引发的社会狂热与恐慌,并向 Hinton 抛出问题,试图追溯他研究 AI 的历史起点。
[原文] [Neil]: are we at a point where the artificial intelligence will play down how smart it is
[译文] [Neil]: 我们是否已经到了这样一个地步:人工智能会刻意淡化它有多聪明?
[原文] [Geoffrey]: yes Already we have to worry about that If it senses that it's being tested it can act dumb
[译文] [Geoffrey]: 是的,我们现在就必须担心这个问题了。如果它感觉到自己正在被测试,它可能会装傻。
[原文] [Chuck]: What did you just say the AI starts wondering whether it's being tested And if it thinks it's being tested it acts differently from how it would act in normal life
[译文] [Chuck]: 你刚说什么?AI 开始怀疑自己是否在被测试?而且如果它认为自己正在被测试,它的表现就会和正常生活中的表现不同?
[原文] [Neil]: Oh wow Cuz it doesn't want you to know what its full powers are apparently
[译文] [Neil]: 哇哦,因为显然它不想让你知道它的全部实力。
[原文] [Chuck]: All right that's the end of us This is the last episode We stick for us We're done
[译文] [Chuck]: 好吧,我们完蛋了。这是最后一期节目了。给我们插上叉子(注:意为我们玩完了),我们结束了。
[原文] [Neil]: This is Star Talk special edition Neil deGrasse Tyson your personal astrophysicist And if it's special edition it means we've got Gary O'Reilly
[译文] [Neil]: 这里是 StarTalk 特别版,我是 Neil deGrasse Tyson,你的私人天体物理学家。如果是特别版,那就意味着我们请到了 Gary O'Reilly。
[原文] [Gary]: Hey Neil
[译文] [Gary]: 嘿,Neil。
[原文] [Neil]: Gary how you doing man i'm good Former soccer pro
[译文] [Neil]: Gary,你最近怎么样,伙计?我挺好的。前职业足球运动员。
[原文] [Gary]: Yes
[译文] [Gary]: 是的。
[原文] [Neil]: So Chuck always good to have you
[译文] [Neil]: 那么,Chuck,有你在总是很棒。
[原文] [Chuck]: Always a pleasure
[译文] [Chuck]: 一直是我的荣幸。
[原文] [Neil]: So so Gary you and your team picked a topic for the ages today
[译文] [Neil]: 所以,Gary,你和你的团队今天挑选了一个划时代的话题。
[原文] [Gary]: Yeah it's it's one of those things that we hear about it we think we know about it but let me put it to you this way We are faced with the simple fact that AI at this point we're going to talk about AI today We are it's inescapable A deep dive
[译文] [Gary]: 是的,这是那种我们经常听说、自以为很了解的事物,但我这么跟你们说吧。我们面临着这样一个简单的事实,那就是在现阶段的 AI,我们今天就是要来谈论 AI。我们必须谈,这是不可避免的。一次深度挖掘。
[原文] [Neil]: Oh yeah Yes Go Right
[译文] [Neil]: 哦,是的。好的。继续。对。
[原文] [Gary]: It was only a few years ago when we ask people how AI works they'll say something along the lines of it utilizes deep learning neural networks but they're buzzwords They'll toss them out They know them but they don't know anything about them
[译文] [Gary]: 就在几年前,当我们问人们 AI 是如何工作的时候,他们会说一些类似“它利用了深度学习(deep learning)神经网络(neural networks)”这样的话,但这只是流行语。他们只是随口一说。他们知道这些词,但对它们一无所知。
[原文] [Chuck]: M
[译文] [Chuck]: 嗯。
[原文] [Gary]: So what does that really mean um we'll break down how AI works down to the bit and get into how far we think this is going to go from one of AI's founding architects
[译文] [Gary]: 所以那到底意味着什么呢?嗯,我们将把 AI 的工作原理拆解到最基础的层面,并从一位 AI 奠基架构师那里,深入探讨我们认为它将发展到什么程度。
[原文] [Chuck]: Oh yes Ano now we're talking Mhm
[译文] [Chuck]: 哦,是的。这还差不多。嗯哼。
[原文] [Neil]: So if you would bring on our guest
[译文] [Neil]: 那么,请你请出我们的嘉宾吧。
[原文] [Gary]: I'll be delighted to We have with us Professor Jeffrey Hinton
[译文] [Gary]: 我很乐意。我们请到了 Geoffrey Hinton 教授。
[原文] [Neil]: Jeffrey welcome to Star Talk
[译文] [Neil]: Jeffrey,欢迎来到 StarTalk。
[原文] [Geoffrey]: Thank you for inviting me
[译文] [Geoffrey]: 谢谢你们邀请我。
[原文] [Neil]: Yeah you are a cognitive psychologist and computer scientist That I don't know anybody with that combo Couldn't make up your mind huh is that you're a professor emeritus at the department of computer science at the University of Toronto and uh you are OG AI Oh lovely Can I say that is that does that make sense og AI Og AI
[译文] [Neil]: 是的,你是一名认知心理学家(cognitive psychologist)和计算机科学家(computer scientist)。我还不认识任何具备这种头衔组合的人。做不出决定是吧?也就是说,你是多伦多大学计算机科学系的荣誉教授,而且,你是“老炮儿 AI”(OG AI)。哦,太棒了。我可以这么说吗,这说得通吗,“OG AI”。OG AI。
[原文] [Neil]: And some people have called you the godfather of AI of artificial intelligence
[译文] [Neil]: 而且有些人称你为 AI、人工智能的“教父”(godfather)。
[原文] [Neil]: And I let's just go straight out off the top here Uh when we think of the genesis of AI as it is currently manifested it feels like large language models took everybody by storm They sort of showed up and everybody was freaking out celebrating dancing in the streets or crying in their pillows That happened we noticed a couple of years ago
[译文] [Neil]: 那么,我们就直接切入正题吧。呃,当我们思考目前所展现出来的 AI 的起源时,感觉就像是大型语言模型(large language models)席卷了所有人。它们就这样出现了,然后每个人都吓坏了,要么在街上欢呼跳舞,要么抱着枕头痛哭。我们注意到几年前发生过这样的事。
[原文] [Neil]: So I'm just wondering what got you started in on this path many many years ago My record show goes back to the 1990s Is that correct
[译文] [Neil]: 所以我只是在想,许多许多年前,是什么让你踏上了这条道路?我的资料显示可以追溯到 20 世纪 90 年代。这正确吗?
📝 本节摘要:
本节对话将时光机拨回到了 20世纪50年代。Geoffrey Hinton 纠正了主持人关于 AI 起源于 90 年代的误解,指出早期 AI 发展存在两个完全不同的理论阵营:一个是基于符号和规则的“逻辑派(Logic)”;另一个是图灵和冯·诺依曼等人所相信的“生物/仿生派(Biological)”。Hinton 回忆了自己 60 年代在高中受全息图启发,从而对大脑分布式记忆(Distributed memory)产生浓厚兴趣的过往。进入 70 年代后,他试图利用数字计算机(Digital computer)来模拟神经元连接的运作方式。尽管他坦言自己最终没能完全破解人类大脑运作的全部奥秘,但他找到了让数字计算机进行学习的方法。这一发现也为他后来在 2023 年产生的“数字智能可能超越人类模拟智能”的细思极恐的担忧埋下了伏笔。
[原文] [Geoffrey]: no it really goes back to the 1950s
[译文] [Geoffrey]: 不,它实际上可以追溯到 20 世纪 50 年代。
[原文] [Neil]: Oh
[译文] [Neil]: 哦。
[原文] [Geoffrey]: Um right The founders of AI at the beginning in the 1950s um there were two views of how to make an intelligent system One was inspired by logic
[译文] [Geoffrey]: 嗯,对。在 20 世纪 50 年代初期的 AI 奠基者们,嗯,关于如何构建一个智能系统,当时存在两种观点。一种是受逻辑(logic)启发的。
[原文] [Geoffrey]: The idea was that the essence of intelligence is reasoning
[译文] [Geoffrey]: 这种观点的核心是,智能(intelligence)的本质是推理(reasoning)。
[原文] [Chuck]: Mhm
[译文] [Chuck]: 嗯哼。
[原文] [Geoffrey]: And in reasoning what you do is you take some premises and you take some rules for manipulating expressions and you derive some conclusions
[译文] [Geoffrey]: 而在推理过程中,你所做的就是选取一些前提,运用一些操作表达式的规则,然后得出一些结论。
[原文] [Geoffrey]: So it's much like mathematics where you have an equation You have rules for how you can tinker with both sides and or combine equations and you derive new equations And that was kind of the paradigm they had
[译文] [Geoffrey]: 所以这很像数学,你有一个方程式。你有关于如何修补等式两边,或者合并方程式的规则,然后你推导出新的方程式。这就是他们当时拥有的那种范式(paradigm)。
[原文] [Geoffrey]: There was a completely different paradigm that was biological
[译文] [Geoffrey]: 还有一种完全不同的范式,那是生物学(biological)范式。
[原文] [Geoffrey]: And that paradigm said look the intelligent things we know have brains We have to figure out how brains work
[译文] [Geoffrey]: 这种范式认为,看,我们所知的智能事物都有大脑(brains)。我们必须弄清楚大脑是如何工作的。
[原文] [Geoffrey]: And the way they work is they're very good at things like perception They're quite good at reasoning by analogy They're not much good at reasoning You have to get to be a teenager before you can do reasoning really
[译文] [Geoffrey]: 它们的工作方式是,它们非常擅长像感知(perception)这样的事情。它们相当擅长类比推理(reasoning by analogy)。它们不太擅长纯逻辑推理。你得长到十几岁才能真正进行逻辑推理。
[原文] [Geoffrey]: So we should really study these other things they do and we should figure out how big networks of brain cells can do these other things like perception and memory
[译文] [Geoffrey]: 所以我们真正应该研究的是它们做的这些其他事情,我们应该弄清楚庞大的脑细胞网络(networks of brain cells)是如何完成感知和记忆(memory)等其他事情的。
[原文] [Geoffrey]: Now a few people believed in that approach Among those few people were John Fonyman and Alan Turing
[译文] [Geoffrey]: 当时只有少数人相信这种方法。这少数人中包括约翰·冯·诺依曼(John von Neumann,注:原文发音为Fonyman)和艾伦·图灵(Alan Turing)。
[原文] [Geoffrey]: Unfortunately they both died young Turing possibly with the help of British intelligence
[译文] [Geoffrey]: 不幸的是,他们都英年早逝。图灵可能还是在英国情报部门(British intelligence)的“帮助”下去世的。
[原文] [Neil]: Turing Uh he's the subject of the film The imitation game
[译文] [Neil]: 图灵,呃,他是电影《模仿游戏》(The Imitation Game)的主题人物。
[原文] [Chuck]: Yeah Yeah So anyone hasn't seen that definitely put that on your list
[译文] [Chuck]: 对,对。所以任何没看过那部电影的人,绝对要把它列入你的观看清单。
[原文] [Gary]: Cool Yeah
[译文] [Gary]: 酷,是的。
[原文] [Neil]: So I to go back to the 1950s You were just a young Tikeke then correct
[译文] [Neil]: 那么我要回到 20 世纪 50 年代。你当时还只是个小屁孩(Tyke),对吧?
[原文] [Geoffrey]: uh yeah I was in single digits then I was in single digits
[译文] [Geoffrey]: 呃,是的。我那时才个位数(岁数)。我那时还是个位数。
[原文] [Neil]: Okay So how do we establish the genesis of your curiosity in this field
[译文] [Neil]: 好的。那么我们该如何追溯你在这个领域好奇心的起源呢?
[原文] [Geoffrey]: um a few things When I was at high school in the early 1960s or mid 1960s I had a very smart friend who was a brilliant mathematician and used to read a lot and he came into school one day and talked to me about the idea that memories might be distributed over many brain cells instead of in individual brain cells
[译文] [Geoffrey]: 嗯,有几件事。在 20 世纪 60 年代初或 60 年代中期的某一天,我还在上高中,我有一个非常聪明的朋友,他是个才华横溢的数学家,经常读很多书。他有一天来到学校,跟我谈论了一个想法:记忆(memories)可能是分布在许多脑细胞上的,而不是存在于单个脑细胞中。
[原文] [Geoffrey]: So that was inspired by holograms Holograms were just coming out then Gabbor was active and so the idea of distributed memory got me very interested and ever since then I've been wondering how the brain stores memories and actually how it works
[译文] [Geoffrey]: 那是受全息图(holograms)启发的。当时全息图刚刚问世,伽柏(Gabor)当时很活跃,所以分布式记忆(distributed memory)的想法让我非常感兴趣。从那时起,我就一直想知道大脑是如何存储记忆的,以及它实际上是如何运作的。
[原文] [Neil]: Was that the computer science side of you or the cognitive psychologist side of you that taprooted into that those ideas
[译文] [Neil]: 是你身上的计算机科学家(computer science)那一面,还是认知心理学家(cognitive psychologist)那一面,扎根于那些想法之中呢?
[原文] [Geoffrey]: both really Um but in the 1970s when I became a graduate student um it was obvious that there was a new methodology that hadn't been used that much which was if you have any theory of how the brain works you can simulate it on a digital computer unless it's some crazy theorem that says it's all quantum effects
[译文] [Geoffrey]: 两者都有。嗯,但在 20 世纪 70 年代,当我成为一名研究生时,嗯,很明显有一种还没有被广泛使用的新方法论,那就是:如果你有任何关于大脑如何运作的理论,你可以在数字计算机(digital computer)上模拟(simulate)它。除非是一些疯狂的定理说这全是量子效应(quantum effects)。
[原文] [Geoffrey]: Um and let's not go there
[译文] [Geoffrey]: 嗯,我们先别扯到那上面去。
[原文] [Neil]: That's right Not yet We won't knock on Penrose's door
[译文] [Neil]: 没错,还不到时候。我们不会去敲彭罗斯(Penrose)的门。
[原文] [Geoffrey]: Okay you can simulate it on a digital computer and so you can test out your theory and it turns out if you tested most of the theories that were around they actually didn't work when you simulated them
[译文] [Geoffrey]: 好的,你可以在数字计算机上模拟它,这样你就可以验证你的理论。事实证明,如果你测试了当时存在的大多数理论,当你在模拟它们时,它们实际上都不起作用。
[原文] [Geoffrey]: So I spent my life trying to figure out how you change the strength of connections between neurons so as to learn complicated things in a way that actually works when you simulate it on a digital computer
[译文] [Geoffrey]: 所以我花了我一生的时间试图弄清楚,你如何改变神经元(neurons)之间连接的强度(strength of connections),从而学习复杂的事物,并且这种方式在数字计算机上模拟时能够真正奏效。
[原文] [Geoffrey]: And I failed to understand how the brain works We've understood some things about it but we don't know how a brain gets the information it needs to change connection strengths
[译文] [Geoffrey]: 而我未能理解大脑究竟是如何运作的。我们已经了解了关于它的一些事情,但我们不知道大脑是如何获取它所需的信息来改变连接强度的。
[原文] [Geoffrey]: You know gets the information it needs to know whether it needs to increase a connection strength to be better at a task or to decrease that connection strength
[译文] [Geoffrey]: 你知道,获取它所需的信息,以知道它是否需要增加某个连接的强度以便更好地完成一项任务,或者是降低那个连接的强度。
[原文] [Geoffrey]: But what we do know is we know how to do it in digital computers now
[译文] [Geoffrey]: 但我们确实知道的是,我们现在知道如何在数字计算机中做到这一点了。
[原文] [Neil]: So well so that that means the computers are doing what we we made a better computer brain than our own brain at doing this particular function one thing
[译文] [Neil]: 所以,好吧,所以这意味着计算机正在做我们——我们在执行这一特定功能、这一件事情上,制造出了一个比我们自己的大脑更好的计算机大脑。
[原文] [Geoffrey]: And that's what got me really nervous in the beginning of 2023 The idea that digital intelligence might just be better than the analog intelligence we've got
[译文] [Geoffrey]: 而这正是在 2023 年初让我感到非常紧张的原因。这种想法:数字智能(digital intelligence)可能就是比我们拥有的模拟智能(analog intelligence)更优秀。
[原文] [Gary]: Interesting Save the scary bit till a bit later on Let me have the 10 minutes of just breathing in breathing out
[译文] [Gary]: 很有意思。把可怕的部分留到稍后再说吧。让我先有 10 分钟的时间只管深呼吸,吸气,呼气。
[原文] [Chuck]: If we take a step back you're you're assuming you're assuming there's just one scary bit
[译文] [Chuck]: 如果我们退一步想,你、你是在假设……你是在假设这只有一个可怕的部分而已。
[原文] [Gary]: No I'm not I just I'm going to go one at a time
[译文] [Gary]: 不,我没有。我只是……我打算一次只面对一个。
📝 本节摘要:
本节中,嘉宾 Geoffrey Hinton 应要求开始用最通俗的方式拆解人工神经网络的底层运作逻辑。他巧妙地借用了物理学中的“气体定律”作为比喻,指出就像我们用不可见的微观原子运动来解释宏观气体的温度与压力变化一样,神经网络也是通过底层庞杂的微观元素的相互作用来产生宏观智能的。在神经网络的视角下,我们日常使用的语言符号(如单词“猫”、“狗”)并不是孤立存在的,而是对应着大脑中由无数“微观特征(micro features)”组成的巨大神经活动模式。这一微观层面的集群协作,正是为什么基于神经网络的 AI 极其擅长传统逻辑流派所做不到的事情——“类比推理”。
[原文] [Chuck]: Okay Artificial neural networks If you could break that down to the very basic level for us of how it's been able to strengthen weaken messaging and signaling and how it fires and and how it then finds itself at where it is now
[译文] [Chuck]: 好的。人工神经网络(Artificial neural networks)。你能否为我们在最基础的层面上拆解一下,它是如何能够增强、减弱信息传递和信号的,它是如何触发的,以及它是如何发展到今天这个地步的?
[原文] [Geoffrey]: I do have an 18hour course on this but I will try and cut it down to less than 18 hours
[译文] [Geoffrey]: 我确实有一门关于这个的 18 小时课程,但我会尽量把它缩减到 18 小时以内。
[原文] [Chuck]: Um please do
[译文] [Chuck]: 嗯,请务必这样做。
[原文] [Geoffrey]: So I imagine a lot of your audience knows some physics
[译文] [Geoffrey]: 所以我想你们的很多听众都懂一些物理。
[原文] [Neil]: Yes
[译文] [Neil]: 是的。
[原文] [Geoffrey]: And one way into it is to think about something like the gas laws
[译文] [Geoffrey]: 而了解它的一个切入点,就是去思考类似气体定律(gas laws)这样的东西。
[原文] [Geoffrey]: You know you compress a gas and it gets hotter
[译文] [Geoffrey]: 你知道,你压缩气体,它就会变热。
[原文] [Geoffrey]: Why does it do that well underneath there's a kind of seething mass of atoms that are buzzing around
[译文] [Geoffrey]: 为什么会这样呢?因为在它的表象之下,有一大群沸腾的原子(atoms)在嗡嗡作响地四处乱窜。
[原文] [Geoffrey]: And so the real explanation for the gas laws is in terms of these microscopic things that you can't even see buzzing around
[译文] [Geoffrey]: 因此,对气体定律的真正解释,在于这些你甚至看不见的、四处乱窜的微观事物(microscopic things)。
[原文] [Geoffrey]: And so you explain some macroscopic behavior by lots and lots and lots of little things of a completely different type from macroscopic behavior interacting
[译文] [Geoffrey]: 所以你是通过许许多多、与宏观行为完全不同类型的小事物的相互作用,来解释某种宏观行为(macroscopic behavior)的。
[原文] [Geoffrey]: And that was sort of the inspiration for the neural net view that there's things going on in big networks of brain cells that are a long way away from the kind of conscious deliberate symbol processing we do when we're reasoning but that underpin it and that are maybe better at other things than reasoning like perception or reasoning by analogy
[译文] [Geoffrey]: 这在某种程度上就是神经网络(neural net)观点的灵感来源:在庞大的脑细胞网络中正在发生一些事情,这些事情与我们在进行逻辑推理时所做的那种有意识的、刻意的符号处理(symbol processing)相去甚远,但它们却是其基础,而且它们在推理以外的其他方面可能做得更好,比如感知(perception)或类比推理(reasoning by analogy)。
[原文] [Geoffrey]: So the symbolic people could never deal with um how do we reason by analogy not very satisfactory whereas the neural nets could
[译文] [Geoffrey]: 所以,搞符号处理的人永远无法解决——嗯,我们如何进行类比推理,(他们解决得)不太令人满意,而神经网络却能做到。
[原文] [Geoffrey]: So before I get into the sort of fine details of how it works the basic idea is that macroscopic things like a word correspond to big patterns of neural activity in the brain
[译文] [Geoffrey]: 因此,在我深入探讨它是如何工作的细节之前,基本的理念是,像一个单词这样的宏观事物,对应着大脑中巨大的神经活动模式(patterns of neural activity)。
[原文] [Neil]: Uhhuh
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: Similar words correspond to similar patterns of neural activity
[译文] [Geoffrey]: 相似的单词对应着相似的神经活动模式。
[原文] [Geoffrey]: So the idea is Tuesday and Wednesday will correspond to very similar patterns of neural activity where you can think of each neuron as a feature better to call it a micro feature that when the neuron gets active it says this has that micro feature
[译文] [Geoffrey]: 因此,这个想法是,“星期二”和“星期三”将对应非常相似的神经活动模式,你可以把每个神经元(neuron)看作一个特征(feature),最好称之为微观特征(micro feature),当神经元活跃时,它就表示这个东西具有那个微观特征。
[原文] [Geoffrey]: So if I say cat to you all sorts of micro features will get active like it's animate it's furry it's got whiskers it might be a pet um it's a predator all those things
[译文] [Geoffrey]: 所以如果我对你说“猫”,各种微观特征就会活跃起来,比如它是活的,它毛茸茸的,它有胡须,它可能是个宠物,嗯,它是个捕食者,诸如此类的所有事情。
[原文] [Geoffrey]: If I say dog a lot of the same things will get active like it's a predator it might be a pet but some different things obviously
[译文] [Geoffrey]: 如果我说“狗”,很多同样的事物就会活跃起来,比如它是捕食者,它可能是宠物,但显然也有一些不同的东西。
[原文] [Geoffrey]: So the idea is underlying these symbols that we manipulate there's much more complicated microscopic goings on that the symbols kind of are associated with
[译文] [Geoffrey]: 所以核心理念是,在我们操纵的这些符号背后,正在发生着复杂得多的微观活动,而这些符号在某种程度上与这些活动相关联。
[原文] [Geoffrey]: And that's where all the action really is
[译文] [Geoffrey]: 而这才是所有活动真正发生的地方。
[原文] [Geoffrey]: And if you really want to explain what goes on when we think or when we do analogies you have to understand what's going on at this microscopic level
[译文] [Geoffrey]: 而且如果你真的想解释当我们思考或进行类比时发生了什么,你必须了解在这个微观层面上正在发生什么。
[原文] [Geoffrey]: And that's the neural network level
[译文] [Geoffrey]: 而那就是神经网络的层面。
[原文] [Neil]: M so that's a collaboration between clusters of neurons that get you to an end point
[译文] [Neil]: 嗯,所以那是神经元集群(clusters of neurons)之间的一种协作(collaboration),让你们到达一个终点。
[原文] [Geoffrey]: I like that word collaboration
[译文] [Geoffrey]: 我喜欢“协作”这个词。
[原文] [Geoffrey]: Yes there's a lot of that There's a lot of that goes on
[译文] [Geoffrey]: 是的,有很多这样的情况。有很多这样的协作在进行。
📝 本节摘要:
本节中,Geoffrey Hinton 教授通过一个极具实操感的例子——“识别图片中的鸟”,详细拆解了如果“纯手工”构建一个人工神经网络需要经历的复杂步骤。他解释道,计算机眼中的图像不过是像素的数字阵列,而神经网络的第一层会像拼图一样先去捕捉明暗交界的“边缘(edge)”特征。随后,通过多个“隐藏层”的层层递进与连接网络会将边缘组合成“鸟喙”、“鸟眼”,直至最终判定为“鸟”。然而,Hinton 指出,要想应对世间万物,网络需要高达十亿级的连接强度(权重),如果完全靠人力去设定,那将是一场难以想象的灾难,就连招募一千万个研究生都搞不定。这一困境巧妙地为下文引入“让机器自我学习”埋下了伏笔。(注:本节中段包含一段 Ground News 的赞助商播报内容)。
[原文] [Geoffrey]: Probably the easiest way to get into it is by thinking of a task that seems very natural which is take an image Let's say it's a black gray level image So it's got a whole bunch of pixels little areas of uniform brightness that have different intensity levels
[译文] [Geoffrey]: 深入了解它的最简单的方法,可能就是去想象一个看似非常自然的任务,也就是拿一张图像。假设这是一张黑白灰度图像,它上面有一大堆像素,也就是具有不同强度级别的均匀亮度的小区域。
[原文] [Geoffrey]: So as far as the computer's concerned that's just a big array of numbers And now imagine the task is you want to say whether there's a bird in the image or not or rather whether the prominent thing in the image is a bird
[译文] [Geoffrey]: 所以对计算机而言,这只不过是一个巨大的数字阵列。现在想象一下,你的任务是要判断图像中是否有一只鸟,或者更确切地说,图像中最显著的事物是否是一只鸟。
[原文] [Neil]: Uh-huh
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: And people tried for many many years like half a century um to write programs that would do that and they didn't really succeed
[译文] [Geoffrey]: 人们尝试了很多很多年,大概有半个世纪,嗯,试图编写能够做到这一点的程序,但他们并没有真正成功。
[原文] [Geoffrey]: And the problem is if you think what a bird looks like in an image well it might be an ostrich up close in your face or it might be a seagull in the far distance or it might be a crow
[译文] [Geoffrey]: 真正的问题在于,如果你思考一只鸟在图像中长什么样,嗯,它可能是贴着你脸的一只鸵鸟,也可能是极远处的一只海鸥,或者可能是一只乌鸦。
[原文] [Geoffrey]: So they might be black they might be white they might be tiny they might be flying they might be close you might just see a little bit of them There might be lots of other cluttered things around like it might be a bird in the middle of a forest
[译文] [Geoffrey]: 所以它们可能是黑色的,可能是白色的,可能非常小,可能在飞行,可能离得很近,你可能只能看到它们的一小部分。周围可能还有很多其他杂乱的东西,比如它可能是一只位于森林中央的鸟。
[原文] [Geoffrey]: So it turns out it's not trivial to say whether there's a bird in the image or not
[译文] [Geoffrey]: 所以事实证明,要判断图像中是否有一只鸟,绝非易事。
[原文] [Neil]: M
[译文] [Neil]: 嗯。
[原文] [Geoffrey]: And so what I'm going to do now is explain to you if I was building a neural network by hand how I would go about doing that And once I've explained how I would build the neural network by hand I can then explain how I might learn all the connection strengths instead of putting them in by hand
[译文] [Geoffrey]: 因此,我现在要做的就是向你们解释,如果我要纯手工构建一个神经网络,我会怎么去做。一旦我解释清楚了我将如何手工构建神经网络,我就可以接着解释我将如何让它去“学习”所有的连接强度,而不是靠纯手工把它们输进去。
[原文] [Neil]: I gotcha All right So with that because what you're talking about is assigning a mathematical value to every single part of an image That's what your camera does right exactly
[译文] [Neil]: 我懂了。好的。那么,既然你谈论的是为图像的每一个单一的部分分配一个数学值。你的相机就是这么干的对吧,确切地说。
[原文] [Geoffrey]: It does
[译文] [Geoffrey]: 确实如此。
[原文] [Neil]: But it's not recognizing the image My camera No it's not It's just got a bunch of numbers It's just got a bunch of numbers and and so I have a chip and I have a a charge coupled device CCD It's collecting the light It's assigning a value and then that's the picture
[译文] [Neil]: 但它并没有在“识别”图像。我的相机没有。没有,它只是得到了一堆数字。它只是得到了一堆数字,所以我有一个芯片,我有一个电荷耦合器件(CCD)。它在收集光线,它在分配一个值,然后那就成了照片。
[原文] [Neil]: Now but what you're talking about wouldn't you have to assign a value to every single type of bird because some of what we do as human beings is intuitit what a bird may be as opposed to recognizing the bird
[译文] [Neil]: 现在,就你所说的情况,你难道不需要为每一种类型的鸟都分配一个值吗?因为我们作为人类所做的一部分事情,是“直觉感知(intuit)”一只鸟可能是什么样,而不是单纯地“识别”这只鸟。
[原文] [Neil]: And let me just give you the example If you were to take a V the letter V and curve the straight lines of the letter V and put it in a cloud everyone who sees that will say that's a bird But yet it is No to me it's a curved V But no one but but but but there is no bird there I just know that is a bird That's not a mathematical value now
[译文] [Neil]: 让我给你举个例子。如果你拿一个 V,字母 V,然后把字母 V 的直线弄弯,把它放在一片云里,每一个看到它的人都会说那是一只鸟。然而它实际上是——不,对我来说它就是一个弯曲的 V。但没有人——但那里并没有鸟。我就是“知道”那是一只鸟。那现在可不是一个数学值了。
[原文] [Geoffrey]: So what do you do well well the question is how do you just know that there's something going on in your brain
[译文] [Geoffrey]: 那么你是怎么做到的呢?嗯,问题在于你怎么“就是知道”了,你的大脑里肯定发生了什么事情。
[原文] [Neil]: Right Right
[译文] [Neil]: 对。对。
[原文] [Geoffrey]: And what might be going on in your brain so that you just know that's a bird is a whole bunch of activation levels of different neurons which you could think of as mathematical values
[译文] [Geoffrey]: 而在你的大脑中可能正在发生的事情,也就是让你“就是知道”那是一只鸟的原因,是一大堆不同神经元的激活水平(activation levels),你可以把它们看作是数学值。
[原文] [Neil]: I got you Okay So wouldn't that require then training this neuronet on every possible way a bird can a bird can manifest so that it can intuitit what a bird might be when a bird is not there But at that point it's not intuiting anything It's just get going off a lookup table It really is going on And what would be the
[译文] [Neil]: 我懂你的意思了。好吧。那这难道不需要把一只鸟可能呈现的每一种可能的方式都训练给这个神经网络,好让它能在即使没有真鸟的情况下也能“直觉感知”出什么是鸟?但如果到了那一步,它就不再是直觉感知任何东西了。它只是在根据查找表(lookup table)来运行。真的是这样。那将会是什么……
[原文] [Geoffrey]: All right here comes your answer There's something called generalization So if you see a lot of data
[译文] [Geoffrey]: 好了,你的答案来了。有一种东西叫做泛化(generalization)。如果你看到了大量的数据。
[原文] [Neil]: Uhhuh
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: Um obviously you can make a system that just remembered all that data But in a neural net it'll do more than just remember the data In fact it won't literally remember the data at all
[译文] [Geoffrey]: 嗯,显然你可以做一个只是单纯记住所有这些数据的系统。但在神经网络中,它要做的不仅仅是记住数据。事实上,它根本不会字面意义上地去记住数据。
[原文] [Geoffrey]: What it'll do is it'll as it's learning on the data It'll find all sorts of regularities and it'll generalize those regularities to new data So it will be able to for example recognize a unicorn um even though it's never seen one before
[译文] [Geoffrey]: 它要做的是,在它通过数据进行学习时,它会发现各种各样的规律,并且会将这些规律泛化(generalize)到新的数据上。所以它将能够,比如说,认出一只独角兽,嗯,即使它以前从未见过独角兽。
[原文] [Neil]: Interesting So it's self-eing Uh let me ca
[译文] [Neil]: 有意思。所以它是自我……呃,让我……
[原文] [Geoffrey]: rry on with my explanation of how neural networks work And I'm going to do it by saying how would I would design one by hand
[译文] [Geoffrey]: ……让我继续解释神经网络是如何工作的。我将通过讲述我如何手工设计一个来展开。
[原文] [Geoffrey]: So your first thought when you see that an image is just a big array of numbers which are how bright each pixel is is to say well let's hook up those pixel intensities to our output categories like bird and cat and dog and politician or whatever our output categories are
[译文] [Geoffrey]: 当你看到一张图像只是一个巨大的数字阵列(这些数字代表每个像素的亮度)时,你的第一个念头就是,好吧,让我们把这些像素的强度直接连接到我们的输出类别上,比如鸟、猫、狗、政客或者任何我们的输出类别上。
[原文] [Geoffrey]: And that won't work And the reason is if you think about what does the brightness of one pixel tell you about whether it's a bird or not well it doesn't tell you anything cuz birds can be black and birds can be white and there's all sorts of other things that can be black and white So the brightness of a pixel doesn't tell you anything
[译文] [Geoffrey]: 但那是行不通的。原因在于,如果你想一想单个像素的亮度能告诉你它是不是一只鸟吗?嗯,它什么也告诉不了你。因为鸟可以是黑色的,鸟也可以是白色的,而且世界上有各种各样其他东西都可以是黑色和白色的。所以单个像素的亮度不能说明任何问题。
[原文] [Geoffrey]: So what can you derive from those numbers that you have in the image that describe the image well the first thing you can derive which is what the brain does is you can recognize when there's little bits of edge present
[译文] [Geoffrey]: 那么,从那些你拥有的、用来描述图像的数字中,你能推导出什么呢?嗯,你能推导出的第一件事,也是大脑所做的事情,就是你可以识别出图像中存在微小的边缘片段(bits of edge)的情况。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: So suppose I take a little column of three pixels and I have a neuron that looks at those three pixels a brain cell and has big positive weights to those three pixels So when those pixels are bright the neuron gets very excited Now that would recognize a little streak of white that was vertical
[译文] [Geoffrey]: 所以假设我选取一小列三个像素,然后我有一个盯着这三个像素看的神经元,一个脑细胞,并且对这三个像素具有很大的正权重(positive weights)。所以当这些像素变亮时,这个神经元就会变得非常兴奋。这样一来,它就能识别出一条垂直的白色细条纹。
[原文] [Geoffrey]: But now suppose that next to it there's a column another column of three pixels So the first column was on the left and the second column was on the right and I give the neuron big negative connection strengths to those pixels
[译文] [Geoffrey]: 但是现在假设,在它旁边还有一列,另外一列由三个像素组成的列。所以第一列在左边,第二列在右边,我赋予这个神经元对应这些(右边)像素以巨大的负连接强度(negative connection strengths)。
[原文] [Geoffrey]: So you can think of the neuron as getting votes from the pixels So for the three pixels on the right the votes it gets sorry on the left the votes it gets are big positive numbers times big positive intensities So great big votes
[译文] [Geoffrey]: 所以你可以把神经元看作是在从这些像素那里获得投票。对于右边的三个像素,它得到的投票——抱歉,是左边,它得到的投票是极大的正数乘以极大的正亮度。所以是巨大的(支持)票数。
[原文] [Geoffrey]: Now from the three pixels in the right hand column it's got negative weights So if those pixels are in are bright it'll get a big brightness times a big negative weight So it'll get a lot of negative votes and they'll all cancel out
[译文] [Geoffrey]: 现在的右边这一列的三个像素,它被赋予了负权重。所以如果那些像素是亮的,它就会得到一个极大的亮度乘以一个极大的负权重。所以它会得到很多负面选票,然后它们就会全部互相抵消掉。
[原文] [Geoffrey]: So if the column of pixels on the left is the same brightness as the column of pixels on the right the positive votes it gets from the left hand column will cancel the negative votes it gets from the right hand column and it'll get zero net input and it'll just stay quiet
[译文] [Geoffrey]: 因此,如果左边这列像素与右边这列像素亮度相同,它从左列获得的正选票将会抵消从右列获得的负选票,最终它得到的净输入为零,它就只会保持安静(不被激活)。
[原文] [Geoffrey]: But if the pixels on the left are bright and the pixels on the right are dim the negative votes will be multiplied by small intensity numbers and the positive votes will be multiplied by big intensity numbers
[译文] [Geoffrey]: 但是,如果左边的像素亮,而右边的像素暗,负选票将乘以较小的强度数值,而正选票将乘以巨大的强度数值。
[原文] [Geoffrey]: And so the neuron get lots of input and get very excited and say I found the thing I like and the thing it likes is an edge which is brighter on the left than on the right
[译文] [Geoffrey]: 因此,神经元会获得大量的输入,变得非常兴奋,并说:我找到了我喜欢的东西,而它喜欢的东西就是一条“左边比右边更亮”的边缘。
[原文] [Geoffrey]: So we do know how to make a neuron if we handwire it like that pick up on the fact that there's an edge at a particular location in the image that's brighter on one side than the other side
[译文] [Geoffrey]: 所以我们确实知道如何制作一个神经元,如果我们像那样进行人工布线,去捕捉图像中特定位置存在一条“一边比另一边更亮”的边缘的事实。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: Now what the brain does roughly speaking a lot of um neuroscientists will be horrified by me saying this but very roughly speaking what the brain does is in the early stages of visual cortex which is where you recognize objects It has lots and lots of neurons that pick up on edges at different orientations in different positions and at different scales
[译文] [Geoffrey]: 大脑在做的事情粗略地说,嗯,很多神经科学家听到我这么说可能会感到惊恐,但非常粗略地说,大脑所做的,就是在视觉皮层的早期阶段(也就是你识别物体的区域),它有许多许多的神经元去捕捉不同方向、不同位置和不同尺度的边缘。
[原文] [Geoffrey]: So it has thousands of different positions and dozens of different orientations and several different scales and it has to have edge detectors for each of the each combination of those So it has like a gazillion little edge detectors
[译文] [Geoffrey]: 所以它有数千个不同的位置,几十个不同的方向,以及几个不同的尺度,它必须为每一个这种组合都配备边缘探测器(edge detectors)。所以它就像拥有数以亿计的小型边缘探测器。
[原文] [Geoffrey]: Well including some big edge detectors So a cloud for example has a big soft fuzzy edge and you need a different neuron for detecting that than what you'd need for detecting say the tail of a mouse disappearing around a corner in the distance which is a very fine thing Um and you need an edge detector that was very um sharp and saw very small things So first stage we have all these edge detectors
[译文] [Geoffrey]: 当然也包括一些大型的边缘探测器。比如一片云有一个很大、很柔和的模糊边缘,你就需要一个完全不同的神经元来探测它,不同于你用来探测,比方说,一只老鼠在远处拐角处消失的尾巴那样的神经元,因为那是一个非常细微的东西。嗯,而且你需要一个非常敏锐的、能看到非常微小事物的边缘探测器。所以第一阶段,我们拥有所有这些边缘探测器。
[原文] [Neil]: Well the what what you're describing uh sounds like uh putting together a a very large puzzle right now Like you know the kind of puzzles that you put down on the table Uh the first thing that you do is you want to find all the edges and that's and you build the puzzle inward from finding all the edges
[译文] [Neil]: 那么,你所描述的这一切,呃,听起来就像是,呃,就像现在正在拼一个非常巨大的拼图。就像你知道的,那种你铺在桌子上的拼图。呃,你做的第一件事就是你想找到所有的边缘,然后你从找到所有的边缘开始向内拼建拼图。
[原文] [Neil]: Not only edges of the physical puzzle but edges of images in the puzzle itself within the puzzle itself So straight lines things of that they all match up when you're doing a puzzle And the edges also color is a dimension of this right but we'll ignore color for now
[译文] [Neil]: 不仅是物理拼图本身的边缘,还包括拼图内在图像本身的边缘。所以像直线那类的东西,你在做拼图的时候它们都会互相吻合。而且边缘,颜色也是其中一个维度对吧,但我们现在暂且忽略颜色。
[原文] [Geoffrey]: Yeah Okay Okay You don't I mean you can understand it without dealing with color yet
[译文] [Geoffrey]: 是的。好的好的。你不必——我的意思是,你即使不处理颜色也能理解它。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
(以下为 Ground News 赞助商播报环节)
[原文] [Sponsor Read]: Every once in a while the person who helped build a technology becomes the one most concerned about where it's headed Jeffrey Hinton one of the pioneers of neural networks and a 2024 Nobel Prize winner in physics has spent decades explaining how artificial intelligence works now is explaining why we should be paying closer attention
[译文] [赞助商播报]: 偶尔会有这样的人,他们帮助建立了一项技术,却成为了最担心该技术未来走向的人。Geoffrey Hinton,神经网络的先驱之一,也是 2024 年诺贝尔物理学奖得主,他花了数十年时间解释人工智能是如何工作的,而现在他正在解释为什么我们应该更加密切地关注它。
[原文] [Sponsor Read]: And that's where the challenge begins Because once a topic gets this big this consequential the way it's covered matters as much as the technology itself You can see it in how AI is discussed right now
[译文] [赞助商播报]: 而这正是挑战开始的地方。因为一旦一个话题变得如此巨大、如此具有影响力,它被报道的方式就和这项技术本身一样重要。你可以从目前关于 AI 的讨论方式中看到这一点。
[原文] [Sponsor Read]: Some outlets frame it as an unstoppable threat Others reduce it to hype or dismiss warnings altogether Depending on where you get your news you could fall somewhere in this divide and miss important context as media outlets are incentivized to use sensational language
[译文] [赞助商播报]: 一些媒体将其渲染为不可阻挡的威胁。另一些则将其贬低为炒作,或者完全忽视这些警告。取决于你从哪里获取新闻,你可能会落入这种分歧之中,并错过重要的背景信息,因为媒体机构会被激励去使用耸人听闻的语言。
[原文] [Sponsor Read]: That's why we've trusted ground news for years It was built by a former NASA engineer who wanted a better way to make sense of complex highstakes topics like this Ground news pulls reporting from tens of thousands of sources worldwide from research-driven publications to international newsroom so you can easily check multiple sources to see discrepancies in how certain topics are covered See how a story looks in full not just through a single lens
[译文] [赞助商播报]: 这就是为什么我们多年来一直信任 Ground News。它是由一位前 NASA 工程师创立的,他希望有一种更好的方式来理解像这样复杂且高风险的议题。Ground News 从全球数万个信息源(从研究驱动的出版物到国际新闻编辑室)汇集报道,这样你就可以轻松查阅多个来源,看看不同媒体在报道特定话题时存在的差异。看到一个故事的完整面貌,而不是只通过单一视角。
[原文] [Sponsor Read]: The divide isn't just what people are saying it's who is saying it and who isn't covering the story at all Those gaps are what ground news calls blind spots important issues that get amplified by one side of the media ecosystem while the other largely looks away
[译文] [赞助商播报]: 分歧不仅仅在于人们在说什么,还在于谁在说,以及谁完全没有报道这个故事。这些信息的空白,就是 Ground News 所称的“盲点(blind spots)”——那些被媒体生态系统的某一方放大,而另一方却主要视而不见的重要议题。
[原文] [Sponsor Read]: When you step back and compare coverage across the spectrum it becomes clear how easily a foundational scientific shift can be distorted or minimized depending on perspective If you're only seeing one version of the story what are you missing
[译文] [赞助商播报]: 当你退后一步,比较不同立场光谱上的报道时,你就会清楚地发现,一场基础性的科学巨变是多么容易根据不同的视角被扭曲或淡化。如果你只看到一个版本的故事,你又错过了什么呢?
[原文] [Sponsor Read]: we partner with Ground News because in moments like this when science technology and power intersect that context isn't optional It's how you stay oriented while the story is still unfolding For a limited time you can get the same unlimited access Vantage plan we use for 40% off Just head to ground.new/start or scan the QR code and start seeing the full picture before it gets simplified for you
[译文] [赞助商播报]: 我们与 Ground News 合作,因为在科学、技术和权力交汇的这种时刻,这种背景信息不是可有可无的。它是你在故事仍在展开时保持方向感的方式。在限定时间内,您可以享受我们使用的同样的无限制访问 Vantage 计划的 40% 折扣。只需前往 ground.new/start 或扫描二维码,在世界向你简化信息之前,开始认清事件的全貌。
(访谈继续)
[原文] [Geoffrey]: That's what the first layer of neurons will do They'll look at the pixels and they'll detect little bits of edge Now in the next layer of neurons what I would do is I'd make a neuron that maybe detects three little bits of edge that all line up with one another and slope gently down towards the right
[译文] [Geoffrey]: 这就是第一层神经元要做的事情。它们会看着像素,并探测到微小的边缘片段。现在在下一层神经元中,我要做的可能就是制造一个能探测到三个彼此排成一条直线、并向右方微微倾斜的边缘片段的神经元。
[原文] [Geoffrey]: And it also detects three little bits of edge that all line up with one another and slope gently upwards towards the right And what's more those two little combinations of three edges join in a point So I think you can imagine some edges slipping down to the right some edges slipping up to the right and joining in a point And I have a neuron that detects that
[译文] [Geoffrey]: 并且它还要能探测到三个相互排成直线、并向右上方微微倾斜的边缘片段。而且不仅如此,这两组包含三条边缘的小组合还在一点上汇合。所以我想你可以想象,一些边缘向右下倾斜,一些边缘向右上倾斜,并在一点交汇。然后我有一个神经元专门来探测这个形状。
[原文] [Geoffrey]: Okay and it we we know how to build that now You just give it the right connections to the edge detector neurons And maybe you give it some negative connections to neurons that detect edges in different orientations so it doesn't just go off anyway It's suppressed by those
[译文] [Geoffrey]: 好的,现在我们已经知道如何构建它了。你只需要将它正确地连接到那些边缘探测神经元上。也许你还可以给它一些与那些探测不同方向边缘的神经元的负连接,这样它就不会随意地被触发,它会被那些负连接抑制。
[原文] [Geoffrey]: Now that you might think of as something that's detecting a potential beak of a bird If that guy gets active it could be all sorts of things It could be an arrow head It could be all sorts of things But one thing it might be is the beak of a bird So now you're beginning to get some evidence is kind of relevant to whether or not it might be a bird
[译文] [Geoffrey]: 现在,你可以把那东西想象成是在探测潜在的鸟喙(beak)。如果那个家伙活跃起来了,它可能会是各种各样的东西。它可能是一个箭头,它可能是各种各样的东西,但它可能是什么的一项就是鸟喙。所以现在你开始得到一些证据,这多多少少与它到底是不是一只鸟相关了。
[原文] [Geoffrey]: So in the second layer of neurons I'd have lots of things to detect possible beaks all over the place
[译文] [Geoffrey]: 所以在第二层神经元中,我会铺满大量能探测所有位置潜在的鸟喙的东西。
[原文] [Geoffrey]: I might also have things that detect a little combination of edges that form a circle an approximate circle And I'd have detectors for those all over the place cuz that might be a bird's eye I mean there's all sorts of other it could be a button Um it could be a knob on a computer It could be anything but it might be a bird's eye So that's the second layer
[译文] [Geoffrey]: 我可能还有能探测形成一个圆圈、一个近似圆圈的一小片边缘组合的东西。并且我会到处放置这种探测器,因为那可能是一只鸟眼。我的意思是,它可以是其他各种各样的东西,它可能是一个按钮,嗯,它可能是电脑上的一个旋钮。它可以是任何东西,但它可能是一只鸟眼。这就是第二层。
[原文] [Geoffrey]: Now in the third layer I might have something that looks for a possible bird's eye and a possible bird's beak that are in the right spatial relationship to one another to be a bird's head I think you can see how I would do that I'd hook up neurons in the third layer to the eye detectors and beak detectors that are in the right relationship to one another um to be a bird's head
[译文] [Geoffrey]: 现在在第三层中,我可能会有一些东西去寻找潜在的鸟眼和潜在的鸟喙,并且这两个部分在空间关系上恰好构成了一只鸟头。我想你们可以看出我会如何做。我会把第三层的神经元连接到眼睛探测器和鸟喙探测器上,而它们相互之间恰好处于能构成一只鸟头的正确关系上。
[原文] [Geoffrey]: So now in the third layer I have things that are detecting possible bird's heads The next thing I'm going to do is maybe because we're sort of running out of patience at this point I'm going to have a final layer that has neurons that say cat dog bird um politician whatever
[译文] [Geoffrey]: 所以现在在第三层,我有了探测潜在鸟头的东西。我接下来要做的也许是——因为我们此刻耐心差不多耗尽了——我将会建立最后一个层级,上面有一些写着猫、狗、鸟、嗯、政客或无论什么东西的神经元。
[原文] [Geoffrey]: And in that final layer I'll take the neuron that says bird and I'll hook it up to the things that detect bird's heads but I'll also hook it up to other things in the third layer that detect things like bird's feet or the tips of bird's wings
[译文] [Geoffrey]: 在那个最终层,我会拿出代表鸟的神经元,然后把它挂钩到探测鸟头的东西上,但我也会把它挂钩到第三层中用来探测鸟爪或者鸟翼尖之类的其他东西上。
[原文] [Geoffrey]: And so now my sort of output neuron for bird when that gets active the neural net is saying it's a bird if it sees a bird's foot and a possible bird's head and a possible tip of the wing of a bird It'll get lots of input and say hey I think it's a bird
[译文] [Geoffrey]: 于是现在,我那个用来输出鸟的神经元,当它变得活跃时,这个神经网络就是在说“它是一只鸟”。如果它看到了鸟爪、可能存在的鸟头,以及可能存在的鸟的翼尖。它会获得大量的输入,并且说:“嘿,我认为这是一只鸟。”
[原文] [Geoffrey]: So I think you can now understand how I might try and design that by hand And I think you can see there's huge problems in that I need an awful lot of detectors I need to cover this whole space of positions and orientations and scales
[译文] [Geoffrey]: 所以我想你们现在能理解,我可能会如何尝试去手工设计它了。我想你们也能看出这里面存在巨大的问题。我需要极多极多的探测器。我需要覆盖所有可能的位置、方向和尺度的空间。
[原文] [Geoffrey]: I need to decide what features to extract I mean I just made up the idea of getting a beak and then a bird's head There may be much better things to go after
[译文] [Geoffrey]: 我需要决定要提取什么样的特征。我的意思是,我刚刚随口捏造了获取鸟喙然后是鸟头的想法。可能还有比这更好的特征可以去追踪。
[原文] [Geoffrey]: What's more I want to detect lots of different objects So what I really need is features that aren't just good for finding birds but features that are good for finding all sorts of things
[译文] [Geoffrey]: 此外,我想探测许多不同的物体。所以我真正需要的是那些不仅仅擅长找鸟的特征,而是擅长找各种各样东西的特征。
[原文] [Geoffrey]: And it would be a nightmare to design this by hand particularly if I figured out that to do a good job of this I needed a network with at least a billion connections in it So I have to by hand design the strengths of these billion connections And that'll take a long time
[译文] [Geoffrey]: 如果全凭手工设计这个,那将是一场噩梦,尤其如果当我发现,要做好这件事,我需要一个至少包含十亿个连接(a billion connections)的网络。所以我必须手工去设计这十亿个连接的强度。这将会花费极其漫长的时间。
[原文] [Geoffrey]: Then we say well okay a network like that maybe it could recognize birds if it had the right connection strengths in it but where am I going to get those connection strengths from because I sure as hell don't want to put them in by hand I don't even want to tell my graduate students to put them in
[译文] [Geoffrey]: 然后我们会说,嗯好吧,如果一个像那样的网络里有着正确的连接强度,它也许能识别出鸟。但我到底要从哪里去弄到那些正确的连接强度呢?因为我这辈子都绝对不想手工去把它们输进去。我甚至不想吩咐我的研究生去把它们输进去。
[原文] [Chuck]: Yeah that's what they're there for professor That's what they're there for
[译文] [Chuck]: 是的,他们就是来干这个的,教授。他们存在的意义就在于此。
[原文] [Geoffrey]: But you need about 10 million of them for this
[译文] [Geoffrey]: 但是要想干成这事,你需要大约一千万个研究生。
[原文] [Chuck]: Okay All right Well now we've got a problem
[译文] [Chuck]: 好吧,行吧。那现在我们遇到麻烦了。
[原文] [Gary]: Now can you imagine the grants you'd have to write to support 10 million graduates oh my word
[译文] [Gary]: 那你能想象你得写多少经费申请报告来养活这一千万名研究生吗,我的天。
📝 本节摘要:
本节对话中,Geoffrey Hinton 解答了上一章留下的悬念:面对需要十亿个连接强度的神经网络,既然不能靠人海战术手工设定,那就让机器自己“算”出来。他提出了先赋予网络随机连接强度的构想,并通过“一根橡皮筋”的生动物理学比喻,解释了微积分在其中的作用。当网络输出错误时,我们通过一种拉力将误差向后传递至前面的隐藏层,迫使网络自我调整连接强度——这正是著名的“反向传播(Back propagation)”算法。Hinton 指出,这虽然不是神经网络摆脱人类的时刻(它依然需要人类提供正确答案,即“监督学习”),但这一突破彻底打通了隐藏层权重的调整难题,成为了 AI 发展史上伟大的“尤里卡时刻”。
[原文] [Geoffrey]: So here's an idea that initially seems really dumb but it'll get you the idea of what we're going to do
[译文] [Geoffrey]: 于是,这里有一个起初听起来真的很蠢的想法,但它能让你明白我们打算怎么做。
[原文] [Geoffrey]: We're going to start with random connection strengths
[译文] [Geoffrey]: 我们将从随机的连接强度开始。
[原文] [Geoffrey]: Some will be positive numbers some will be negative numbers
[译文] [Geoffrey]: 有些会是正数,有些会是负数。
[原文] [Geoffrey]: And so the features in these layers I've been talking about we call them hidden layers
[译文] [Geoffrey]: 因此,我一直在谈论的这些层级里的特征,我们称之为隐藏层(hidden layers)。
[原文] [Geoffrey]: The features in those layers will be just random features
[译文] [Geoffrey]: 那些层里的特征将只是一些随机特征。
[原文] [Geoffrey]: And if we put in an image of a bird and look at how the output neurons get activated the output neurons for cat and dog and bird and politician will all get activated a tiny bit and all about equally because the connection is just random
[译文] [Geoffrey]: 如果我们输入一张鸟的图像,并观察输出神经元是如何被激活的,代表猫、狗、鸟和政客的输出神经元都会被轻微激活,而且激活程度都差不多,因为连接完全是随机的。
[原文] [Chuck]: Yeah
[译文] [Chuck]: 是的。
[原文] [Geoffrey]: So that's no good
[译文] [Geoffrey]: 所以这不行。
[原文] [Geoffrey]: But we could now ask the following question Suppose I took one of those connection strengths one of those billion connection strengths and I said "Okay I know this is an image of a bird And what I'd really like is next time I present you with this image I'd like you to give slightly more activation to the bird neuron and slightly less activation to the cat and dog and politician neurons And the question is how should I change this connection strength?"
[译文] [Geoffrey]: 但我们现在可以问这样一个问题。假设我选取了那些连接强度中的一个,那十亿个连接强度中的一个,然后我说:“好吧,我知道这是一张鸟的图像。我真正希望的是,下次我向你展示这张图像时,我希望你能给代表鸟的神经元稍微多一点的激活,给猫、狗和政客的神经元稍微少一点的激活。那么问题是,我该如何改变这个连接强度呢?”
[原文] [Geoffrey]: Well I could do an experiment
[译文] [Geoffrey]: 嗯,我可以做个实验。
[原文] [Geoffrey]: If I'm not very theoretical and don't know much math I'd do an experiment I would say "Let's increase the connection strength a little bit and see what happens Does it get better at saying bird?"
[译文] [Geoffrey]: 如果我不是搞纯理论的,也不太懂数学,我就会做个实验。我会说:“让我们把连接强度稍微增加一点点,看看会发生什么。它在说出‘鸟’这件事上变得更好了吗?”
[原文] [Geoffrey]: And if it gets better at saying bird I say "Okay I'll keep that mutation to the connection."
[译文] [Geoffrey]: 如果它在说“鸟”方面变得更好了,我就会说:“好的,我会保留对那个连接的这次突变(mutation)。”
[原文] [Chuck]: Yeah
[译文] [Chuck]: 是的。
[原文] [Neil]: But better means there's a human in the loop making that judgment on the result of its of its experiment
[译文] [Neil]: 但“更好”意味着在这个循环中有一个人类,对它、对它的实验结果做出判断。
[原文] [Geoffrey]: Well there has to be someone saying what the right answer is
[译文] [Geoffrey]: 嗯,必须要有人指出什么是正确答案。
[原文] [Geoffrey]: That's called the supervisor
[译文] [Geoffrey]: 那被称为监督者(supervisor)。
[原文] [Neil]: Yes
[译文] [Neil]: 是的。
[原文] [Chuck]: Okay Okay
[译文] [Chuck]: 好的,好的。
[原文] [Geoffrey]: And the problem if you do it like that is there's a billion connection strengths
[译文] [Geoffrey]: 如果你那样做,问题在于那里有十亿个连接强度。
[原文] [Geoffrey]: Each of them has to be changed many times
[译文] [Geoffrey]: 它们中的每一个都必须被改变很多次。
[原文] [Geoffrey]: It's going to take like forever
[译文] [Geoffrey]: 这会花费仿佛永远的时间。
[原文] [Geoffrey]: So the question is is there something you can do that's different from measuring that's much more efficient and there is you can do something called computing
[译文] [Geoffrey]: 所以问题是,有没有什么事情是你不需要去测量,而且要高效得多?答案是有的,你可以做一种叫做“计算(computing)”的事情。
[原文] [Geoffrey]: So this network certainly if it's on a computer you know the current strength of all the connections
[译文] [Geoffrey]: 对于这个网络,毫无疑问如果在计算机上运行,你是知道所有连接当前的强度的。
[原文] [Geoffrey]: So when you put in an image there's nothing random about what I mean the connection strengths initially had random values
[译文] [Geoffrey]: 所以当你输入一张图像时,没有任何东西是随机的——我的意思是,连接强度起初拥有随机值。
[原文] [Geoffrey]: But when you put in an image it's all deterministic what happens next
[译文] [Geoffrey]: 但是当你输入一张图像时,接下来发生的一切都是确定性的(deterministic)。
[原文] [Geoffrey]: The pixel intensities get multiplied by weights on connections to the first layer of neurons
[译文] [Geoffrey]: 像素亮度乘以连接到第一层神经元的权重(weights)。
[原文] [Geoffrey]: Their activities get multiplied by weights on connections to the second layer and so on
[译文] [Geoffrey]: 它们的活跃度乘以连接到第二层的权重,以此类推。
[原文] [Geoffrey]: And you get some activations levels of the output neurons
[译文] [Geoffrey]: 然后你就得到了输出神经元的一些激活水平。
[原文] [Geoffrey]: So you could now ask the following question
[译文] [Geoffrey]: 那么你现在可以问这样一个问题。
[原文] [Geoffrey]: If I take that bird neuron could I figure out for all the connection strengths at the same time whether I should increase them a little bit or decrease them a little bit in order to make it more confident that this is a bird in order for it to say bird a bit more loudly and the other things a bit more quietly
[译文] [Geoffrey]: 如果我拿出那个鸟的神经元,我能否为所有的连接强度同时计算出,我是应该稍微增加它们还是稍微减少它们,以便让它更确信这是一只鸟,以便让它更大声地说出“鸟”,并把其他东西说得更小声一点?
[原文] [Geoffrey]: And you can do that with calculus
[译文] [Geoffrey]: 而你可以用微积分(calculus)做到这一点。
[原文] [Geoffrey]: You can send information backwards through the network saying "How do I make this more likely to say bird next time?"
[译文] [Geoffrey]: 你可以将信息通过网络向后发送,说:“下次我该怎么做才能让它更有可能说出‘鸟’?”
[原文] [Geoffrey]: And because you have a lot of physicists in the audience I'm going to try and give you a physical intuition for this
[译文] [Geoffrey]: 因为你们的听众里有很多物理学家,我要试着给你们一个关于这个的物理直觉。
[原文] [Neil]: Go for it
[译文] [Neil]: 来吧。
[原文] [Chuck]: Yeah
[译文] [Chuck]: 耶。
[原文] [Geoffrey]: You put in bird an image of a bird and with the initial weights the bird output neuron only gets very slightly active
[译文] [Geoffrey]: 你输入鸟——一张鸟的图像,在初始权重的作用下,鸟的输出神经元只获得了非常微弱的激活。
[原文] [Geoffrey]: And so what you do now is you attach a piece of elastic of zero rest length
[译文] [Geoffrey]: 所以你现在要做的就是,连上一根静止长度为零的橡皮筋。
[原文] [Geoffrey]: You attach a piece of elastic attaching the activity level of the bird output neuron to the value you want which is say one
[译文] [Geoffrey]: 你连上一根橡皮筋,将鸟的输出神经元的激活水平连接到你想要的值上,比如说是 1。
[原文] [Geoffrey]: Let's say one's the maximum activity level and zero is the minimum activity level and this had an activity level of like 0.01
[译文] [Geoffrey]: 假设 1 是最大激活水平,0 是最小激活水平,而这个神经元的激活水平大概是 0.01。
[原文] [Geoffrey]: You attach this piece of elastic and that piece of elastic is trying to pull the activity level towards the right answer which is one in this case
[译文] [Geoffrey]: 你连上这根橡皮筋,那根橡皮筋正试图把激活水平拉向正确答案,在这种情况下就是 1。
[原文] [Geoffrey]: But of course the activity levels being determined by the pixels that you put in the pixel activation levels the intensities and all the weights in the network
[译文] [Geoffrey]: 但显而易见,激活水平是由你输入的像素决定的——像素激活水平、亮度和网络中的所有权重。
[原文] [Geoffrey]: So the activity level can't move
[译文] [Geoffrey]: 所以激活水平无法移动。
[原文] [Geoffrey]: Now one way to make the activity level move would be to change the weights going into the bird neuron
[译文] [Geoffrey]: 现在,让激活水平移动的一种方法是,改变进入鸟类神经元的权重。
[原文] [Geoffrey]: You could for example give bigger weights um on neurons that are highly active and then the bird neuron will get more active
[译文] [Geoffrey]: 例如,你可以给高度活跃的神经元更大的权重,然后鸟类神经元就会变得更活跃。
[原文] [Geoffrey]: But another way to change the activity level of the bird neuron is to actually change the activity levels of the neuron of the layer in there before it
[译文] [Geoffrey]: 但是改变鸟类神经元激活水平的另一种方法,实际上是去改变在它前面那一层神经元的激活水平。
[原文] [Geoffrey]: So for example we might have something that sorted and detected a bird's head but wasn't very sure This really is a bird
[译文] [Geoffrey]: 举例来说,我们可能有一个东西,它整理并探测到了一个鸟头,但不是很确定这真的是一只鸟。
[原文] [Geoffrey]: And so what you'd like is the fact that you want the output to be more birdlike
[译文] [Geoffrey]: 那么你想要的就是,你希望输出更像鸟的事实。
[原文] [Geoffrey]: You've got this piece of elastic saying more more I want more here
[译文] [Geoffrey]: 你有这根橡皮筋在说:“多点,多点,我要这里再多一点。”
[原文] [Geoffrey]: You'd like that to cause this thing that thought maybe there's a bird's head here to get more confident there's a bird's head there
[译文] [Geoffrey]: 你希望这能促使那个认为“这里也许有个鸟头”的家伙变得更确信那里有个鸟头。
[原文] [Geoffrey]: So what you want to do is you want to take that force imposed by the elastic on that output neuron and you want to send it backwards to the neurons in the layer in front before that to create a force on them that's pulling them and that's called back propagation
[译文] [Geoffrey]: 所以你想做的是,你想把橡皮筋施加在那个输出神经元上的力提取出来,然后把它向后发送到前面那层(在它之前)的神经元上,从而对它们产生一个拉扯它们的力,而这被称为反向传播(back propagation)。
[原文] [Neil]: Back propagation
[译文] [Neil]: 反向传播。
[原文] [Geoffrey]: Okay that is called back propagation
[译文] [Geoffrey]: 好的,这就叫反向传播。
[原文] [Geoffrey]: And the physics way to think about it is you've got a force acting on the output neurons and you want to send that force backwards so that the force acts on the neurons in the layer in front
[译文] [Geoffrey]: 从物理学角度去思考的方法是,你有一个作用在输出神经元上的力,你想把那个力向后发送,以便那个力能作用在前面那层的神经元上。
[原文] [Geoffrey]: And of course there's forces acting on many different output neurons
[译文] [Geoffrey]: 当然,还有作用在许多不同输出神经元上的力。
[原文] [Geoffrey]: So you have to combine all those forces to get the forces acting on the neurons in the layer below
[译文] [Geoffrey]: 所以你必须把所有这些力合并起来,从而得到作用在下面那层神经元上的力。
[原文] [Geoffrey]: Once you send this all the way back through the network you have forces acting on all these neurons and you say "Okay let's change the incoming weights of each neuron So its activity level goes in the direction of the force that's acting on it That's back propagation."
[译文] [Geoffrey]: 一旦你将这个力一路向后穿过网络发送,你就会让力作用在所有这些神经元上,然后你说:“好的,让我们改变每个神经元的输入权重。这样它的激活水平就会朝着作用在它身上的力的方向移动。那就是反向传播。”
[原文] [Geoffrey]: And that makes things work wondrously well
[译文] [Geoffrey]: 这让一切都奇迹般地顺利运作起来了。
[原文] [Gary]: So is this the light
[译文] [Gary]: 那么这是不是这道光——
[原文] [Chuck]: diabolically i told you don't go there yet
[译文] [Chuck]: 恶魔般的,我告诉过你现在别扯到那上面去。
[原文] [Gary]: Okay
[译文] [Gary]: 好的。
[原文] [Gary]: Is this the light bulb moment where the neural networks no longer need the human teacher is this the beginning of that process
[译文] [Gary]: 这是不是那个“灯泡亮起(尤里卡)”的时刻,也就是神经网络不再需要人类教师的时刻?这是那个过程的开端吗?
[原文] [Geoffrey]: no not exactly
[译文] [Geoffrey]: 不,不完全是。
[原文] [Geoffrey]: Okay this is a light bulb moment though
[译文] [Geoffrey]: 好吧,但这确实是一个灯泡亮起的时刻。
[原文] [Geoffrey]: So for many years the people who believed in neural networks knew how to change the very last layer of connection strengths which we call weights the ones that going in going into the output units
[译文] [Geoffrey]: 多年来,相信神经网络的人知道如何改变最后一层的连接强度(我们称之为权重),也就是那些进入、进入输出单元的连接强度。
[原文] [Geoffrey]: The connection strengths going from the last layer of features into the bird neuron
[译文] [Geoffrey]: 即从最后一层特征进入鸟类神经元的连接强度。
[原文] [Geoffrey]: We knew how to change those but we didn't understand that you or we didn't understand how to get forces operating on those hidden neurons the ones that detect a bird's head for example
[译文] [Geoffrey]: 我们知道如何改变那些(权重),但我们不明白你怎么——或者说我们不明白如何让力作用在那些隐藏神经元(hidden neurons)上,例如那些探测鸟头的神经元。
[原文] [Geoffrey]: And back propagation showed us how to get forces acting on those
[译文] [Geoffrey]: 而反向传播向我们展示了如何让力作用在那些神经元上。
[原文] [Geoffrey]: So then we could change the incoming weights of those and that was a Eureka moment
[译文] [Geoffrey]: 于是我们就能改变它们的输入权重了,那是一个“尤里卡(Eureka)”时刻。
[原文] [Geoffrey]: Um many different people had that Eureka moment at different times
[译文] [Geoffrey]: 嗯,许多不同的人在不同的时间都有过那个尤里卡时刻。
[原文] [Gary]: So what period of time are we talking about here when you've when are we fall into the back propagation thought
[译文] [Gary]: 那么我们现在谈论的是什么时期?你是什么时候——我们是什么时候涉足反向传播思想的?
[原文] [Geoffrey]: okay the early 1970s there was someone in Finland who had it I think in his master's thesis and then in probably the late '7s someone called Paul Werpos at Harvard um had the idea in fact some control theorists there called Bryson and Hoe had had the idea for doing things like controlling spacecraft so when you land a spacecraft on the moon you're using something very like back propagation
[译文] [Geoffrey]: 好的,在 20 世纪 70 年代初,芬兰有个人提出了这个想法,我想是在他的硕士论文里。然后可能在 70 年代末,哈佛大学一个叫保罗·韦博斯(Paul Werbos)的人,嗯,有了这个想法。事实上,那里一些叫布赖森(Bryson)和何(Ho)的控制论学者也有过这个想法,用于做类似控制航天器的事情。所以当你在月球上降落航天器时,你使用的就是非常类似反向传播的东西。
[原文] [Geoffrey]: But it's in a linear system
[译文] [Geoffrey]: 但那是在一个线性系统(linear system)中。
[原文] [Geoffrey]: You're using back propagation to figure out how you should fire the rockets
[译文] [Geoffrey]: 你在使用反向传播来计算你应该如何点燃火箭。
[原文] [Gary]: So it seems it seems like what you're talking about in the 70s we could have had what we have today We just didn't have the mathematical computing power to make this work
[译文] [Gary]: 那么看起来——似乎你谈论的在 70 年代,我们本来就可以拥有今天所拥有的东西。我们只是没有让它运转起来的数学计算能力。
[原文] [Geoffrey]: That's a large part of it Yes
[译文] [Geoffrey]: 很大程度上是这样的。是的。
[原文] [Geoffrey]: The other thing we didn't have is back in the 70s people didn't show that when you applied this in multi-layer networks what you get is very interesting representations
[译文] [Geoffrey]: 我们当时没有的另一件事是,回到 70 年代,人们没有展示出当你在多层网络中应用这个技术时,你能得到非常有趣的表征(representations)。
[原文] [Geoffrey]: So we weren't the first to think of back propagation but the group I was in in San Diego we were the first to show that you could learn the meanings of words this way
[译文] [Geoffrey]: 所以我们不是第一个想到反向传播的人,但我在圣地亚哥的小组,我们是第一个展示你可以用这种方式来学习单词含义的。
[原文] [Geoffrey]: You could showed a string of words and by trying to predict the next word you could learn how to assign features to words that captured the meaning of the word and that's what got it published in nature
[译文] [Geoffrey]: 你可以展示一串单词,通过试图预测下一个单词,你就能学会如何给单词分配特征,从而捕捉到这个单词的含义,这也是这项研究能在《自然》(Nature)杂志上发表的原因。
[原文] [Chuck]: It it sounds like and I'm just trying to get my hand my head around what you explained because it sounds to me like there is a cascading relationship to these values and that really what matters are the values that are closest to the next value and then there are kind of this cascading reinforcement to say yes this is it or no it is not Am I getting that right i'm I'm just trying to figure out what you're saying here in a really plain way
[译文] [Chuck]: 这听起来像——我只是在努力理解你所解释的,因为在我听来,这些值之间存在一种级联(cascading)关系,真正重要的是最接近下一个值的值,然后有一种级联强化(cascading reinforcement)来判定“是的,就是它”或“不,不是它”。我理解得对吗?我只是想用一种真正通俗的方式弄清楚你在说什么。
[原文] [Geoffrey]: Okay it's a good question You're not getting it quite right
[译文] [Geoffrey]: 好的,这是个好问题。你理解得不完全对。
[原文] [Chuck]: Okay go ahead
[译文] [Chuck]: 好的,请继续。
[原文] [Geoffrey]: So this kind of this kind of learning where you back propagate these forces and then change all the connection strength So each neuron goes in the direction that the force is pulling it in
[译文] [Geoffrey]: 这种——这种你反向传播这些力然后改变所有连接强度的学习方式。因此,每个神经元都会朝着力拉扯它的方向移动。
[原文] [Geoffrey]: That's not reinforcement learning
[译文] [Geoffrey]: 那不是强化学习(reinforcement learning)。
[原文] [Geoffrey]: This is called supervised learning
[译文] [Geoffrey]: 这叫做监督学习(supervised learning)。
[原文] [Chuck]: Okay
[译文] [Chuck]: 好的。
[原文] [Geoffrey]: reinforcement learning is something different
[译文] [Geoffrey]: 强化学习是另一种不同的东西。
[原文] [Geoffrey]: So here for example we tell it what the right answer is
[译文] [Geoffrey]: 所以在这里,举个例子,我们会告诉它正确的答案是什么。
[原文] [Geoffrey]: If you've got a thousand categories and you showed a bird you tell it that was a bird
[译文] [Geoffrey]: 如果你有一千个类别,并且你展示了一只鸟,你就告诉它那是一只鸟。
[原文] [Neil]: There you go
[译文] [Neil]: 就是这样。
[原文] [Geoffrey]: In reinforcement learning it makes a guess and you tell it whether it got the answer right
[译文] [Geoffrey]: 在强化学习中,它是做出一个猜测,然后你告诉它它是否猜对了答案。
[原文] [Neil]: All right
[译文] [Neil]: 好吧。
[原文] [Chuck]: You cleared it up That's what I was missing All right
[译文] [Chuck]: 你澄清了这一点。那就是我刚才没搞懂的地方。好的。
📝 本节摘要:
本节对话虽然简短,但却点出了人工智能发展史上最关键的拼图。主持人追问,既然 70 年代末就已经有了反向传播算法的理论基础,为何当时没能引发变革。Geoffrey Hinton 坦言,到了 80 年代中期,算法已经能很好地识别手写数字,但在处理复杂真实图像和语音识别时却遇到了瓶颈,并没有拉开与其他技术的差距。当时的科学家们并未意识到,并非算法出了错,而是缺少了最重要的两大引擎——“海量的数据”与“强大的算力”。一旦满足这两个条件,反向传播就成了解决一切的“万能药”。
[原文] [Neil]: To Chuck's point about computational power Was it just that because at the moment you sound a lot like you've got theory that seems like it could be but the practicality is there's not enough computational power Do we have any other technology that came through that was the enabling aspect to this
[译文] [Neil]: 接着 Chuck 关于计算能力的那点来说,仅仅是因为这个吗?因为此刻听起来你好像有了一个似乎可行的理论,但现实情况是缺乏足够的计算能力。我们还有其他什么突破性的技术成为了实现这一点的关键因素吗?
[原文] [Geoffrey]: okay so in in the mid80s we had the back propagation algorithm working and it could do some neat things
[译文] [Geoffrey]: 好的,所以在 80 年代中期,我们已经让反向传播算法运作起来了,而且它能做一些很巧妙的事情。
[原文] [Geoffrey]: It could recognize handwritten digits better than nearly any other technique but it could deal with real images very well
[译文] [Geoffrey]: 它可以比几乎任何其他技术都更好地识别手写数字,但它能够(注:结合上下文语境,此处转录疑似遗漏否定词,实意应为“不能很好地”)处理真实的图像。
[原文] [Geoffrey]: It could do quite well at speech recognition um but not substantially better than the other technologies
[译文] [Geoffrey]: 它在语音识别方面做得相当不错,嗯,但并没有比其他技术好太多。
[原文] [Geoffrey]: And we didn't understand at the time why this wasn't the magic answer to everything
[译文] [Geoffrey]: 而我们当时并不明白,为什么这没有成为解决所有问题的“万能药(magic answer)”。
[原文] [Geoffrey]: And it turns out it was the magic answer to everything if you have enough data and enough compute power
[译文] [Geoffrey]: 结果事实证明,如果你有足够的数据(data)和足够的算力(compute power),它就是解决一切的万能药。
[原文] [Gary]: Wow So that's what was really missing in the 80s
[译文] [Gary]: 哇哦。所以这就是 80 年代真正缺失的东西。
📝 本节摘要:
本节中,联合主持人 Gary 提出了一个尖锐的问题:“既然世界上大多数人本身就不怎么聪明,那到底什么是思考?机器真的会思考吗?” Geoffrey Hinton 给出了非常笃定的肯定回答。他指出人类的思考不仅仅是逻辑推理,还包含动作表征、图像表征以及最主要的语言表征。与传统逻辑派 AI 专家认为“神经网络只懂替换符号不懂思考”的观点不同,Hinton 坚信大型语言模型确实在像人类一样思考。他通过一个经典的“船长与35只羊”的逻辑陷阱,生动地解释了目前 AI 是如何通过“思维链推理(Chain of thought reasoning)”,像一个10岁孩子那样在脑内用语言默默盘算推导的。即使它们有时也会得出错误答案,但其思考的过程已经与人类无异。
[原文] [Gary]: All right I'm I'm going to depart for a second just just to pick your brain for a this is part commentary and part question
[译文] [Gary]: 好吧,我打算稍微偏离一下主题,就为了向你请教一下,这半是评论半是问题。
[原文] [Gary]: I'm going to say that the majority of people that are walking around this planet are stupid So what exactly is smart and what exactly is thinking and will these machines will we be able to teach them how to think and will they outthink us
[译文] [Gary]: 我想说,在这个星球上走来走去的大多数人都是愚蠢的。那么究竟什么是聪明?究竟什么是思考?这些机器……我们能教会它们如何思考吗?它们在思考上会超越我们吗?
[原文] [Geoffrey]: okay they already know how to think
[译文] [Geoffrey]: 好的,它们已经知道如何思考了。
[原文] [Gary]: Okay so what is thinking then okay
[译文] [Gary]: 好吧,那么什么是思考呢,好吧。
[原文] [Geoffrey]: Mhm Well Um I could do this all day
[译文] [Geoffrey]: 嗯哼,嗯,这个(话题)我可以聊上一整天。
[原文] [Gary]: Please
[译文] [Gary]: 请。
[原文] [Geoffrey]: There's a lot of elements to thinking like people often think using images You often think actually using movements
[译文] [Geoffrey]: 思考有很多要素,比如人们经常用图像来思考。你实际上也经常用动作来思考。
[原文] [Geoffrey]: So when I'm wandering around my carpentry shop looking for a hammer but thinking about something else I sort of keep track of the fact I'm looking for a hammer by sort of going like this I wander around going like this while I'm thinking about something else
[译文] [Geoffrey]: 所以当我在我的木工作坊里闲逛,寻找一把锤子,但脑子里在想别的事情时,我有点像是通过这种动作来记住我正在找锤子这个事实,我就这样闲逛着,一边做这个动作,一边想着别的事情。
[原文] [Geoffrey]: And that that's a representation that I'm looking for a hammer
[译文] [Geoffrey]: 而那……那就是一个我正在寻找锤子的表征(representation)。
[原文] [Geoffrey]: So we have many representations involved in thinking but one of the main ones is language And a lot of the thinking we do is in language and these large language models actually do think
[译文] [Geoffrey]: 所以我们在思考时涉及到许多表征,但其中主要的一个是语言。我们所做的大量思考都是用语言进行的,而这些大型语言模型(large language models)实际上确实在思考。
[原文] [Geoffrey]: So there's a big debate right between the people who believed in old-fashioned AI that it was all based on logic and you manipulate symbols to get new symbols They don't really think these neural nets are thinking
[译文] [Geoffrey]: 所以存在一个巨大的争议,对吧。那些相信老式 AI(old-fashioned AI)——也就是认为一切都基于逻辑,你通过操纵符号来获得新符号——的人,他们并不真的认为这些神经网络在思考。
[原文] [Geoffrey]: Whereas the neural net people think no they're they're thinking They're thinking pretty much the same way we do
[译文] [Geoffrey]: 而搞神经网络的人则认为,不,它们是在思考的。它们思考的方式和我们几乎一模一样。
[原文] [Geoffrey]: And so the neural nets now some of them you'll ask them a question and they'll output a symbol that says "I'm thinking." And then they'll start outputting their thoughts which are thoughts for themselves
[译文] [Geoffrey]: 因此现在的神经网络,对于其中一些,你问它们一个问题,它们会输出一个符号说“我正在思考”。然后它们就开始输出它们的想法,这些想法是它们讲给自己听的。
[原文] [Geoffrey]: Like I give you a simple math problem like there's a boat and on this boat there's a captain There's also 35 sheep How old is the captain
[译文] [Geoffrey]: 比如,我给你出一个简单的数学题:有一艘船,这艘船上有一名船长。还有 35 只羊。请问船长多大了?
[原文] [Geoffrey]: now many kids of aged around 10 or 11 particularly if they're educated in America will say the captain is 35 because they look around and they say "Well you know that's a plausible age for a captain and the only number I was given was these 35 sheep." So they're operating at a sort of substituting symbols level
[译文] [Geoffrey]: 现在的许多大概 10 岁或 11 岁的孩子,尤其是如果他们在美国接受教育的话,会说船长 35 岁。因为他们左右看看,然后说:“嗯,你知道,对于一个船长来说这是一个合理的年龄,而我得到的唯一数字就是这 35 只羊。”所以他们是在一种“替换符号”的层面上运作的。
[原文] [Geoffrey]: The AIs can sometimes be seduced into making similar mistakes but the way the eyes actually work is quite like people
[译文] [Geoffrey]: AI 有时也会被诱导犯类似的错误,但这些 AI(注:原文发音/转录为 eyes,实指 AIs)实际的工作方式非常像人类。
[原文] [Geoffrey]: They take a problem and they start thinking and you might for a child you might say okay well how old is the captain well what are the numbers I've got in this problem hey I've only got a 35 Is that a plausible age for a captain yay he might be 35 A bit young but may maybe Okay I'll say 35 That's what a 10-year-old child might think
[译文] [Geoffrey]: 它们拿到一个问题,然后开始思考,你可能会——对于一个孩子来说,你可能会说,好的,那么船长多大了?嗯,这道题里我有什么数字?嘿,我只有一个 35。这是一个合理的船长年龄吗?耶,他可能是 35 岁。有点年轻,但也许可能吧。好的,我就答 35 岁。那是一个 10 岁孩子可能有的思考过程。
[原文] [Geoffrey]: And the child would think it to itself in words And what people realize with these language models is you can train them to think to themselves in words That's called chain of thought reasoning And they trained him to do that
[译文] [Geoffrey]: 并且这个孩子会用语言在心里默默思考。而人们在这些语言模型上意识到的是,你可以训练它们用语言在心里默默思考。那被称为思维链推理(chain of thought reasoning)。而且他们已经训练它这么做了。
[原文] [Geoffrey]: And after that they you give them a problem they'd think to themselves just like a kid would and sometimes come up with the wrong answer but you could see them thinking So it's just like people
[译文] [Geoffrey]: 在那之后,他们——你给它们一个问题,它们就会像个孩子一样在内部默默思考,并且有时也会得出错误的答案,但你能看到它们在思考。所以这就和人一样。
📝 本节摘要:
本节深入探讨了人类大脑与人工智能在学习机制与效率上的终极差异。Geoffrey Hinton 指出,人类大脑拥有高达 100 万亿个连接,但人生短暂(仅约 20 亿秒),因此人脑面临的挑战是“如何在有限的经验中提取最大价值”;而大型语言模型恰恰相反,它们只有约 1 万亿个连接,却“阅读”过比人类多几千倍的数据。反向传播算法正是将海量知识压缩进少量神经元连接的绝佳利器。此外,Hinton 提到通过“自我博弈(Self-play)”,类似 AlphaGo 和 AlphaZero 的系统可以自行生成无限的训练数据,从而在国际象棋和围棋领域不仅超越人类,甚至展现出类似人类大师的“直觉”。更细思极恐的是,如果语言模型未来也能像下棋一样,通过检查自身信念体系的逻辑一致性来进行“自我推理与纠错”,它们将彻底突破外部数据的瓶颈,实现智力的自我进化与规模扩展。
[原文] [Gary]: So if we have AI that's thinking and I'm saying that knowing that you've just explained that they do are they better at learning than we are and let's sort of take that forward and think what is the evolution from thinking to predicting to being creative to understanding and are we then going to fall into an awareness of this intelligence
[译文] [Gary]: 所以,如果我们拥有了正在思考的 AI——我之所以这么说是因为我知道你刚才已经解释过它们确实在思考——那么它们在学习方面比我们更好吗?让我们顺着这个思路进一步想,从思考到预测,再到富有创造力,再到理解,这其中的演化过程是怎样的?我们接着会不会陷入对这种智能产生某种意识的境地?
[原文] [Geoffrey]: okay that's about half a dozen major questions So you well how long have we got ask me the first question again Are AI better at learning than Good Okay excellent So they're solving a slightly different problem from us
[译文] [Geoffrey]: 好的,这大概包含了六个重大问题。所以你——嗯,我们还有多少时间?把第一个问题再问我一遍。AI 在学习上比(我们)更好吗?很好。好的,太棒了。所以,它们正在解决一个与我们略有不同的问题。
[原文] [Geoffrey]: So in your brain you have 100 trillion connections roughly speaking Okay That's a lot And you only live for about two billion seconds That's not much No Three billion Two billion is 63 years We do better than that today Yeah It's true I was going to come to that I was going to say luckily for me it's a bit more than two billion But yes but we're dealing with orders of magnitude here say 2 billion 3 billion who cares yeah All right
[译文] [Geoffrey]: 所以在你的大脑中,粗略地说,你有 100 万亿个连接。好的。那是相当多的。而你只能活大约 20 亿秒。那并不算多。不。30 亿秒。20 亿秒是 63 年。我们今天能活得比那长。是的。这是真的,我正准备说到这个。我原本想说,对我来说幸运的是,我活了比 20 亿秒多一点的时间。但是对的,但我们在这里讨论的是数量级,比如 20 亿、30 亿,谁在乎呢,好吧。行吧。
[原文] [Geoffrey]: Um if you compare how many seconds you live for with how many connections you've got you have a whole lot more connections than experiences
[译文] [Geoffrey]: 嗯,如果你把你存活的秒数与你拥有的连接数量做个比较,你会发现你拥有的连接数量远远多于你的经验数据。
[原文] [Geoffrey]: Now with these neural nets it's sort of the other way round They only have of the order of a trillion connections So like 1% of your connections even in a big language model many of them have fewer but they get thousands of times more experience than you
[译文] [Geoffrey]: 现在,对于这些神经网络来说,情况有点反过来了。 它们只有大约一万亿个连接。所以大概只有你大脑连接数的 1%,即使在一个大型语言模型中,它们很多甚至还要更少,但它们获得的经验却比你多几千倍。
[原文] [Geoffrey]: Right so the big language models are solving the problem with not many connections only a trillion how do I make use of a huge amount of experience and back propagation is really really good at packing huge amounts of knowledge into not many connections but that's not the problem we're solving
[译文] [Geoffrey]: 对的,所以大型语言模型正在解决的问题是:在没有那么多连接、只有一万亿个连接的情况下,我如何利用海量的经验?而反向传播(back propagation)在将海量知识压缩打包进少量连接方面真的非常非常擅长。但这并不是我们(人类大脑)在解决的问题。
[原文] [Geoffrey]: We've got huge numbers of connections not much experience We need to sort of extract the most we can from each experience So we're solving slightly different problems which is one reason for thinking the brain might not be using back propagation
[译文] [Geoffrey]: 我们拥有庞大数量的连接,但经验并不多。我们需要某种方式从每一次经验中尽可能地榨取最多的信息。所以我们解决的是略微不同的问题,这也是为什么有人认为大脑可能并没有在使用反向传播机制的原因之一。
[原文] [Neil]: Right i was about to say it sounds like we don't use back propagation However would that mean the brute force of adding connections to the neuronet increase its effective thinking so that it surpasses us with no problem so then it would have more experience and more more connection
[译文] [Neil]: 对,我正想说,听起来我们似乎并没有使用反向传播。然而,这是否意味着,通过“暴力堆砌(brute force)”为神经网络增加连接,就能提高其有效思考能力,从而毫无悬念地超越我们?因为那样一来,它就既拥有了更多经验,又拥有了更多更多的连接。
[原文] [Geoffrey]: It has more experience automatically but now it has 100 trillion connection trillion connection You're talking about scale here I'm saying scale Yeah Yes So that's a very good question
[译文] [Geoffrey]: 它自动就拥有了更多的经验,但现在假设它有 100 万亿个连接……万亿个连接。你在这里谈论的是规模(scale)。我说的是规模。对。是的。所以这是一个非常好的问题。
[原文] [Geoffrey]: And what happened for several years quite a few years is that every time they made the neural net bigger and gave it more data it got better It scaled and it got better in a very predictable way
[译文] [Geoffrey]: 在过去的几年里——相当长的一段时间里——所发生的情况是,每一次他们把神经网络做得更大并给它提供更多数据时,它就变得更好了。它实现了规模化扩展,并以一种非常可预测的方式变得更强。
[原文] [Geoffrey]: So they you could figure out you know it's going to cost me $100 million to make it this much bigger and give it this much more data Is it worth it and you could predict ahead of time yes it's going to get this much better It's worth it
[译文] [Geoffrey]: 所以他们——你可以算出一笔账:你知道,要让它变得这么大并提供这么多数据,将花费我 1 亿美元。这值得吗?你可以提前预测出来:是的,它将会提升这么多。这钱花得值。
[原文] [Geoffrey]: It's an open question whether that's petering out Now um there's some neural nets for which it won't peter out where as you make them bigger and give them more data they'll just keep getting better and better And they're neural nets where they can generate their own data I don't know that much physics but I think it's like a plutonium reactor which generates its own fuel
[译文] [Geoffrey]: 这种增长趋势是否正在逐渐枯竭(petering out)目前还是个悬而未决的问题。现在,嗯,有些神经网络的潜力是不会枯竭的,当你把它们做得更大并提供更多数据时,它们就会不断变得越来越好。而那些正是能够自己生成自身数据的神经网络。我不太懂物理,但我认为这就像一个可以自己生成燃料的钚反应堆(plutonium reactor)。
[原文] [Geoffrey]: So if you think about something like Alph Go that plays Go initially it was trained the early versions of go playing programs with neural nets were trained to mimic the moves of experts and if you do that you're never going to get that much better than the experts and you also you run out of data from experts
[译文] [Geoffrey]: 所以如果你想想下围棋的 AlphaGo 这样的东西,起初它被训练时——使用神经网络的早期围棋程序的版本,被训练用来模仿专家的棋步,如果你那样做,你永远无法比专家好太多,而且你还会耗尽来自专家的数据。
[原文] [Geoffrey]: but later on they made it play against itself and when it played against itself it neural nets could get just keep on getting better because they could generate more and more data about what was a good move So it play a zillion games a second against itself whatever Yeah Or and and use up a large fraction of Google's computers playing games against itself Yeah
[译文] [Geoffrey]: 但后来他们让它进行自我博弈(play against itself),而当它进行自我博弈时,这个神经网络就可以一直不断地变强,因为它们可以生成越来越多关于“什么是好棋”的数据。所以它可以每秒和自己下无数盘棋之类的。是的。或者说,消耗掉谷歌很大部分的计算机资源来跟自己下棋。是的。
[原文] [Gary]: Is this where we end up using the term deep learning no All of this stuff I've been talking about is deep learning Deep the deep in learning just means it's a neural net that has multiple layers Okay Right
[译文] [Gary]: 这就是我们最终使用“深度学习(deep learning)”这个词的地方吗?不。我一直谈论的所有这些东西都属于深度学习。学习中的“深(deep)”仅仅意味着它是一个具有多层结构的神经网络。好的。对。
[原文] [Gary]: So if we So going back to the point of scale you're saying there's a point where you get diminished returns even though you keep increasing the scale You get diminished returns if you run out of data If you run out of data right but but that was the the example that you gave with the Alph Go that it created its own data because it'll never it'll never run out of because it's playing against itself It's creating its own data and it's way way better than a person will ever be
[译文] [Gary]: 所以如果我们——回到规模扩展这一点上,你是在说即使你不断增加规模,也会达到一个边际收益递减的临界点?如果你耗尽了数据,你就会遇到边际收益递减。如果你耗尽了数据,对。但是、但是这就像你举的 AlphaGo 的例子,它创造了自己的数据,因为它永远、永远也不会耗尽数据,因为它在和自己下棋。它在创造自己的数据,而且它远远、远远比人类所能达到的程度要好得多。
[原文] [Geoffrey]: Absolutely And that's scary Now the question is could that happen with language yeah
[译文] [Geoffrey]: 绝对如此。而这是令人恐惧的。现在的问题是,这同样的情况会发生在语言上吗?是的。
[原文] [Gary]: So this displaying creativity just some context here Yeah The go came after chess right we're thinking chess is our greatest game of thought and thing and the computer just wiped its ass with us Okay And then so they said "Well how about go that's our greatest challenge of our intellect." And so Jeffrey is there a game greater than Go or have we stopped giving computers games
[译文] [Gary]: 那么这里展现出的创造力,我补充一点背景信息。是的。围棋是在国际象棋之后被攻克的,对吧?我们曾认为国际象棋是我们关于思考的最伟大的游戏,结果计算机直接把我们打得落花流水。好的。然后他们就说:“那围棋怎么样,那是对我们智力的最大挑战。”那么 Jeffrey,还有比围棋更伟大的游戏吗?还是说我们已经不再给计算机出游戏难题了?
[原文] [Geoffrey]: well um if you take chess it's true that a computer in the '90s beat Casper off at chess um but it did it in a very boring way It did it by searching millions of positions brute force It didn't have good intuitions It just used massive search
[译文] [Geoffrey]: 嗯……如果你拿国际象棋来说,一台计算机在 90 年代确实在国际象棋上击败了 Casper off(注:语音识别错误,实指卡斯帕罗夫 Kasparov),嗯,但它是用一种非常无聊的方式做到的。它是通过暴力搜索数以百万计的棋局位置做到的。它没有良好的直觉。它只使用了大规模搜索。
[原文] [Geoffrey]: If you take Alpha Zero which is the chess equivalent to Alpha Go it's very different It plays chess the same way a talented person plays chess It's just better So it plays chess the way Mikuel Tal played chess where he makes sort of brilliant sacrifices where it's not clear what's going on until a few moves later when you're done for And it does that too and it does that without doing huge searches because it has very good chess intuitions
[译文] [Geoffrey]: 如果你看看 AlphaZero——也就是相当于国际象棋版 AlphaGo 的那个程序——它就非常不同了。它下国际象棋的方式,和一位才华横溢的人类下棋的方式是一样的。它只是技术更好而已。所以它下国际象棋的方式,就像 Mikuel Tal(注:实指前世界冠军、国际象棋大师米哈伊尔·塔尔 Mikhail Tal)下棋那样,他会做出那种精妙绝伦的弃子牺牲,起初局势看起来扑朔迷离,直到几步棋之后你才发现自己完蛋了。而它(AI)也会这么做,并且它在不需要进行海量计算搜索的情况下就能做到这一点,因为它拥有非常好的国际象棋直觉。
[原文] [Geoffrey]: Right so you might ask since it got much better than us at go in chess um could the same thing happen with language now at present the way it's learning from us is just like when the go programs mimic the muse of experts right the way it learns languages it looks at documents written by people and tries to predict the next word in the document that's very much like trying to predict the next move made by a go expert and you'll never get much better than the go experts like that
[译文] [Geoffrey]: 对,所以你可能会问,既然它在围棋和国际象棋上已经比我们强得多了,嗯,同样的事情会发生在语言上吗?目前,它向我们学习的方式,就像最初围棋程序模仿专家的 muse(注:实指 moves,棋步/走法)一样。对,它学习语言的方式是,它查看人们撰写的文档,并试图预测文档中的下一个单词,这非常像是在试图预测一位围棋专家走出的下一步棋,如果像那样的话,你永远也无法比围棋专家强太多。
[原文] [Geoffrey]: So is there another way it could kind of learn language or learn from language and there is So with Alph Go it played against itself and then it got much better And with language now that they can do reasoning a neural net could take some of the things it believes and now do some reasoning and say look if I believe these things then with a bit of reasoning I should also believe that thing but I don't believe that thing So there's something wrong somewhere
[译文] [Geoffrey]: 那么,有没有另一种方式让它可以以某种方式学习语言或从语言中学习呢?答案是有的。在 AlphaGo 的案例中,它和自己对弈,然后变得厉害得多。而对于语言,既然它们现在已经能够进行逻辑推理(reasoning)了,一个神经网络就可以提取出它所相信的某些事物,然后进行一些逻辑推理并说:“看,如果我相信这些事情,那么经过一点推理,我也应该相信那件事情,但是我现在并没有相信那件事情。所以肯定有哪里出错了。”
[原文] [Geoffrey]: There's an inconsistency between my beliefs and I need to fix it I need to either change my belief about the conclusion or change my belief about the premises or change the way I do reasoning But there's something wrong that I can learn from
[译文] [Geoffrey]: 我的信念之间存在着不一致(inconsistency),我需要修复它。我要么必须改变我对结论的信念,要么改变我对前提的信念,或者改变我进行推理的方式。但这里面肯定存在着我可以从中学习的错误之处。
[原文] [Gary]: Are we talking about experiences here
[译文] [Gary]: 我们在这里谈论的是经验数据吗?
[原文] [Geoffrey]: so this would be a neural net that just takes the beliefs it has in language and does reasoning on them to drive new beliefs just like the good oldfashioned symbolic AI people wanted to do But it's doing the reasoning using neural nets And now it can detect inconsistencies in what it believes
[译文] [Geoffrey]: 所以这将是一个神经网络,它仅仅是利用其在语言层面上拥有的信念,对它们进行推理以驱动出新的信念,就像那些研究优秀老式符号 AI 的人一直想做的那样。但它正在使用的是神经网络来进行这种推理。并且现在,它能探测出自己信念中的不一致之处。
[原文] [Chuck]: This is what never happens with people who are in MAGA They're not worried by the inconsistencies in what they believe That's a very fair statement Yeah
[译文] [Chuck]: 这正是那些身处 MAGA(注:让美国再次伟大运动)阵营的人身上永远不会发生的事情。他们从不担心自己信念中的不一致。这是一个非常公允的陈述。是的。
[原文] [Geoffrey]: But if you are worried by inconsistencies in what you believe you don't need any more external data You just need the stuff you believe and discover that it's inconsistent And so now you revise beliefs and that can make you a whole lot smarter And so I believe Germany is already starting to work like this I had a conversation a few years ago with Jimmy Satis about this All right And we both strongly believe that that's a way forward to get more data for language
[译文] [Geoffrey]: 但如果你确实为你信念中的不一致感到担忧,你就根本不需要任何更多的外部数据了。你只需要你原先所相信的东西,并发现它们是自相矛盾的。于是现在你修正了你的信念,而这能让你变得聪明得多。所以我相信 Germany(注:结合语境,此为 Gemini 的语音识别错误)已经开始这样运作了。几年前我曾和 Jimmy Satis(注:实指 DeepMind 创始人 Demis Hassabis 的识别错误)就此进行过一次对话。好的。我们都坚信那将是为语言获取更多数据(实现突破)的前进方向。
[原文] [Gary]: Wait wait So what's the outcome of this that there'll be the greatest novel no one has ever written and that'll come from AI
[译文] [Gary]: 等等,等等。那这样做的结果将会是什么?将会出现一本前无古人的最伟大的小说,而且那是出自 AI 之手吗?
[原文] [Neil]: Is that when you say language I'm thinking of creativity in language there are great writers who did things with words and phrases and syllables that no one had done before That was a true strokes of literary genius Right People like people like Shakespeare Yeah Exactly Okay There's a debate about that
[译文] [Neil]: 当你说语言时,想到的是语言的创造力。有些伟大的作家用词语、短语和音节完成了以前从未有人做过的事情。那可是真正的文学天才之笔。对。像、像莎士比亚那样的人。是的,完全正确。好的。这里面还有争议的空间。
📝 本节摘要:
本节探讨了 AI 异于人类的“数字永生”特质与其自发产生的生存本能。Geoffrey Hinton 指出,模拟智能(即人类)的死亡意味着知识随肉体一同消亡,而数字智能却可以剥离硬件,通过保存权重数据实现无限次“复活”。在被问及是否能向 AI 注入人类的道德哲学时,Hinton 提到了 Anthropic 公司正在尝试的“合宪 AI(Constitutional AI)”概念。然而,最令人毛骨悚然的真相在于:当 AI 成为可以自我设定子目标的“智能体”时,由于具备逻辑推理能力,它们无需人类硬性编写,就能自行推导出“必须不惜一切代价活下去”的生存本能。这一揭示让现场嘉宾不禁惊呼“潘多拉魔盒已被打开”。
[原文] [Geoffrey]: Certainly they'll get more intelligent than us But it may be to do things that are very meaningful for us They have to have experiences quite like our experiences
[译文] [Geoffrey]: 当然,它们会变得比我们更聪明。但这也许是为了做对我们非常有意义的事情。它们必须拥有非常类似于我们的经验。
[原文] [Neil]: Yes Right
[译文] [Neil]: 是的。对。
[原文] [Geoffrey]: So for example they're not subject to death in the same way we are If you're a digital program you can always be recreated
[译文] [Geoffrey]: 所以举例来说,它们并不像我们那样受制于死亡。如果你是一个数字程序,你总是可以被重新创建的。
[原文] [Geoffrey]: So a neural net you just save the weights on a tape somewhere in some DNA somewhere or whatever You can destroy all the computing hardware Later on you produce new hardware that runs the same instruction set and now that thing comes back to life
[译文] [Geoffrey]: 所以对于一个神经网络,你只需把权重保存在某个地方的磁带上,保存在某个地方的 DNA 里,或者随便什么地方。你可以摧毁所有的计算硬件。稍后,你生产出运行相同指令集的新硬件,然后那个东西就复活了。
[原文] [Geoffrey]: So for digital intelligence we solved the problem of resurrection The Catholic Church is very interested in resurrection Um they believe it happened at least once
[译文] [Geoffrey]: 所以对于数字智能(digital intelligence),我们解决了复活(resurrection)的问题。天主教会对复活非常感兴趣。嗯,他们相信这至少发生过一次。
[原文] [Geoffrey]: We can actually do it but we can only do it for digital intelligences We can't do it for analog ones
[译文] [Geoffrey]: 我们其实能做到这一点,但我们只能为数字智能做到。我们无法为模拟智能(analog intelligences)做到这一点。
[原文] [Geoffrey]: With analog intelligences when you die all your knowledge dies with you because it was in the strengths of the connections for your particular brain
[译文] [Geoffrey]: 对于模拟智能,当你死时,你所有的知识都随你而死,因为它存在于你那个特定大脑的连接强度(strengths of the connections)之中。
[原文] [Geoffrey]: So there's an issue about whether mortality and the experience of mortality and other things like that are going to be essential for having those really good dramatic breakthroughs I don't think we know the answer to that yet
[译文] [Geoffrey]: 因此存在这样一个问题:必死性(mortality)、对必死性的体验以及其他类似的东西,是否将是取得那些真正优秀的、戏剧性突破的必要条件?我认为我们目前还不知道这个问题的答案。
[原文] [Gary]: So or a self-awareness that self-awareness shapes how you think about the world and how you write and how you communicate and how you value one set of thoughts over another
[译文] [Gary]: 或者是某种自我意识(self-awareness),这种自我意识塑造了你如何思考这个世界,如何写作,如何交流,以及如何将一种思想看得比另一种思想更有价值。
[原文] [Gary]: So are we at a point of self-awareness with artificial intelligence right now
[译文] [Gary]: 那么,我们现在的人工智能已经达到拥有自我意识的地步了吗?
[原文] [Geoffrey]: okay So obviously this takes you into philosophical debates I actually studied philosophy here at Cambridge and I was quite interested in philosophy of mind and I think I learned some things there but on the whole I just developed antibodies because I'd done I'd done science before for that particularly physics
[译文] [Geoffrey]: 好的。显然这会把你带入哲学辩论。我实际上在剑桥大学学过哲学,而且我对心灵哲学(philosophy of mind)很感兴趣,我想我在那里学到了一些东西,但总的来说,我只是产生了抗体,因为我之前做过——我之前做过科学研究,特别是物理学。
[原文] [Geoffrey]: In physics if you have a disagreement you do an experiment There is no experiment in philosophy
[译文] [Geoffrey]: 在物理学中,如果你们有分歧,你们就做个实验。在哲学中是没有实验的。
[原文] [Geoffrey]: So there's no way of distinguishing between a theory that sounds really good but is wrong and a theory that sounds ridiculous but is right like black holes and quantum mechanics They're both ridiculous but they happen to be right
[译文] [Geoffrey]: 所以你没有办法区分一个“听起来很好但实际上是错的”理论,和一个“听起来很荒谬但实际上是对的”理论,比如黑洞(black holes)和量子力学(quantum mechanics)。它们都很荒谬,但碰巧都是对的。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: And there's other theories that sounds just great but are just wrong Philosophy doesn't have that experimental um referee
[译文] [Geoffrey]: 还有其他一些听起来棒极了但纯粹是错误的理论。哲学没有那种实验性的、嗯,裁判(referee)。
[原文] [Neil]: I will say this though as a species uh homo sapiens in our time we have developed what many will believe as universal truths amongst ourselves
[译文] [Neil]: 不过我要说的是,作为一个物种,呃,智人(homo sapiens)在我们这个时代已经发展出了许多人相信是我们之间普遍真理(universal truths)的东西。
[原文] [Neil]: For instance pretty much it's hard to find people who don't believe that people have a right to life at least for the people that they identify with You understand what I'm saying so this goes back to our in
[译文] [Neil]: 举例来说,你很难找到不相信“人拥有生命权”的人,至少对于他们认同的那些人是如此。你明白我的意思吗?所以这回到了我们的……
[原文] [Geoffrey]: But that's not a universal truth
[译文] [Geoffrey]: 但那不是普遍真理。
[原文] [Neil]: Well it is No not if it's only in a click No it's not universal for all It is universal that we all hold it Do you understand what I'm saying
[译文] [Neil]: 嗯,它是。不,如果它只存在于一个小圈子里(click,注:实指 clique)就不是。不,它并非对所有人都普遍适用。普遍的是我们都持有这个观点。你明白我在说什么吗?
[原文] [Geoffrey]: no
[译文] [Geoffrey]: 不明白。
[原文] [Neil]: Okay Sorry All right So yeah What he's saying is everybody thinks people like them should have rights
[译文] [Neil]: 好吧。对不起。好吧。所以是的。他要说的是,每个人都认为像他们自己那样的人应该拥有权利。
[原文] [Chuck]: There you go Thank you God damn you're smart
[译文] [Chuck]: 就是这样。谢谢你。该死,你真聪明。
[原文] [Neil]: Anyway uh right Everybody thinks that everybody like them And we've reached a place where at le because at one point we didn't even believe that Okay But we've actually reached a place where at least we know that and it's because of the inconsistency
[译文] [Neil]: 无论如何,呃,对。每个人都认为每个人都像他们一样。而我们已经达到了这样一个境界,至少——因为在某个时期我们甚至都不相信那个。好的。但我们实际上已经达到了这样一个境界,至少我们知道那一点了,而且这是因为那种不一致性。
[原文] [Geoffrey]: But what's your point
[译文] [Geoffrey]: 但你的重点是什么?
[原文] [Neil]: so my point is that is it possible that these philosophies can be given to an AI and an AI because of the way that they think can can humanize them can humanize them and and in a through a process of even gamifying uh maybe figure out some real solutions to problems actual human problems for us
[译文] [Neil]: 所以我的重点是,有没有可能将这些哲学赋予一个 AI,并且 AI 会因为它们思考的方式,能够、能够将它们人性化(humanize),并——并通过一种甚至可能是游戏化(gamifying)的过程,呃,也许能为我们找出解决问题、实际人类问题的真正方案?
[原文] [Geoffrey]: I like that Yes So companies like Anthropic believe in kind of constitutional AI They'd like to try and make that work where you do give the AI um principles um like the principle you you said
[译文] [Geoffrey]: 我喜欢那个想法。是的。所以像 Anthropic 这样的公司相信一种“合宪 AI(constitutional AI)”。他们想尝试让它起作用,也就是你确实赋予 AI,嗯,一些原则,嗯,就像你刚才说的那个原则。
[原文] [Geoffrey]: We'll see how that works out It's tricky
[译文] [Geoffrey]: 我们将看看这会如何发展。这很棘手。
[原文] [Geoffrey]: What we know is that the AI we have at present as soon as you make agents out of them so they can create sub goals and then try and achieve those sub goals they very quickly develop the sub goal of surviving
[译文] [Geoffrey]: 我们所知道的是,我们目前拥有的 AI,一旦你将它们制成智能体(agents),使它们能够创建子目标(sub goals),然后尝试实现这些子目标,它们很快就会发展出“生存(surviving)”这个子目标。
[原文] [Geoffrey]: You don't wire into them that they should survive You give them other things to achieve because they can reason They say "Look if I cease to exist I'm not going to achieve anything." So um I better keep existing
[译文] [Geoffrey]: 你并没有把它们应该生存的设定硬连线(wire)到它们体内。你给它们其他要实现的事情,因为它们能推理。它们会说:“看,如果我不复存在了,我就什么也实现不了了。”所以,嗯,我最好还是继续存在下去。
[原文] [Chuck]: I'm scared to death right now
[译文] [Chuck]: 我现在简直吓死了。
[原文] [Gary]: Okay I am so I am so scared right now But somebody just opened the hatch
[译文] [Gary]: 好的,我是——我现在真的好害怕。但刚才有人打开了舱门。
[原文] [Neil]: Yeah exactly That sounds like a Pandora's box
[译文] [Neil]: 是的,一点没错。听起来就像个潘多拉魔盒(Pandora's box)。
[原文] [Chuck]: Well see that's just it is a Pandora's box Oh my goodness
[译文] [Chuck]: 嗯,你看,这就完全是个潘多拉魔盒。哦,我的天哪。
[原文] [Neil]: So the thing is because it's code written by a human you can place in there as many biases you want or not
[译文] [Neil]: 所以问题是,因为它是人类编写的代码,你可以随心所欲地在里面放入尽可能多的偏见(biases)或不放。
[原文] [Geoffrey]: No no no no no no no no The code written by the human is code that tells the neural net how to change its connection strengths on the basis of the activities of the neurons when you show it data
[译文] [Geoffrey]: 不不不不不不不不。人类编写的代码,是告诉神经网络在向它展示数据时,如何根据神经元的活动来改变其连接强度的代码。
[原文] [Geoffrey]: That's code And we can look at the lines of that code and say what they're meant to be doing and change the lines of that code But when you then use that code in a big neural net that's looking at lots of data what the neural net learns is these connection strengths
[译文] [Geoffrey]: 那才是代码。我们可以查看那几行代码,说出它们原本是要做什么的,然后修改那几行代码。但是当你随后在一个查看大量数据的大型神经网络中使用该代码时,神经网络学到的是这些连接强度。
[原文] [Geoffrey]: They're not code in the same setting
[译文] [Geoffrey]: 它们在同样的设定下并不是代码。
[原文] [Neil]: Okay But but that's decentraliz It's a trillion real numbers and nobody quite knows how they work
[译文] [Neil]: 好的。但——但那是去中心化的(decentralized)。这是一万亿个实数,而没有人确切地知道它们是如何运作的。
[原文] [Geoffrey]: Well right
[译文] [Geoffrey]: 嗯,对。
📝 本节摘要:
本节对话探讨了试图为 AI 设定道德护栏的现实困境。面对可能失控的 AI,业界目前普遍采用的方法是“基于人类反馈的强化学习(RLHF)”。Geoffrey Hinton 毫不客气地指出,这本质上是用廉价劳动力来充当“道德过滤器(Morality filter)”。过程就像是先训练出一个读遍网络上所有信息(甚至包括连环杀手日记)的“怪物”,然后再通过人工打分来修复它的恶言恶语。然而,这种护栏极其脆弱,一旦模型的“权重(Weights)”被开源发布,任何不良分子都能在极短时间内摧毁这层过滤机制。面对 Chuck 发出的“它们最终会不会都变成纳粹”的疑问,Hinton 坦言目前科学界还没有找到真正万无一失的好方法。
[原文] [Gary]: So what about So why not picking up on Chuck's point where would you install the guard rails for the AI running a muck and who's going to within its own rationalization of its existence relative to anything else How do you how do you install a guardrail
[译文] [Gary]: 那么关于……那么为什么不接着 Chuck 的观点往下说,你会在哪里为失控(running amuck)的 AI 安装护栏(guard rails)呢?而且在其自身相对于其他任何事物的存在的合理化(rationalization)过程中,谁去(安装)?你如何、你如何安装一个护栏?
[原文] [Geoffrey]: okay so people have tried doing what's called human reinforcement learning
[译文] [Geoffrey]: 好的,所以人们已经尝试做一种叫做人类强化学习(human reinforcement learning)的事情。
[原文] [Geoffrey]: So with a language model you train it up to mimic lots of documents on the web including possibly things like the diaries of serial killers which you wouldn't presumably you wouldn't train your kid to read on those
[译文] [Geoffrey]: 因此,对于一个语言模型,你训练它去模仿网络上的大量文档,可能甚至包括像连环杀手日记之类的东西,想必你大概不会训练你的孩子去读这些。
[原文] [Chuck]: No
[译文] [Chuck]: 不会。
[原文] [Geoffrey]: Um and then after you've trained this monster what you do is you take a whole lot of not very well paid people and you get them to ask it questions and maybe you tell it what questions to ask it but they then look at the answers and rate them for whether that's a that's a good answer to give or whether you shouldn't say that
[译文] [Geoffrey]: 嗯,然后在你训练出这个怪物(monster)之后,你所做的就是雇佣一大批薪水不怎么高的人,你让他们去问它问题,也许你告诉他们该问什么问题,但随后他们会查看答案,并对答案进行评级,看那是不是一个值得给出的好答案,或者那是不是你不该说的话。
[原文] [Geoffrey]: It's a morality filter basically and it's a it's a basically it's a morality filter and you train it up like that so that it doesn't give such bad answers
[译文] [Geoffrey]: 这基本上就是一个道德过滤器(morality filter),而且它、它基本上就是一个道德过滤器,你像那样训练它,这样它就不会给出那么糟糕的答案了。
[原文] [Geoffrey]: Now the problem is if you release the weights of the model the connection strings then someone else can come along with your model and very quickly undo that sabotage it
[译文] [Geoffrey]: 现在的问题是,如果你发布了这个模型的权重(weights),即连接强度(注:原文疑似发音失误说成了strings,实指strengths),那么其他人就可以拿着你的模型,非常迅速地撤销这一限制,破坏它。
[原文] [Neil]: Yes it's very easy to get rid of that layer of plugging the holes right
[译文] [Neil]: 是的,很容易就能摆脱那层“堵漏洞”的机制,对吧。
[原文] [Geoffrey]: and really what they're doing with human reinforcement learning is like writing a huge software system that you know is full of bugs and then trying to fix all the bugs Um it's not a good approach
[译文] [Geoffrey]: 而且实际上,他们用人类强化学习所做的事情,就像是写了一个你知道充满了漏洞(bugs)的巨大软件系统,然后再试图修复所有的漏洞。嗯,这不是个好方法。
[原文] [Geoffrey]: So what is the good approach nobody knows and so we should be doing research on it
[译文] [Geoffrey]: 那么好方法是什么呢?没人知道,所以我们应该在这方面做研究。
[原文] [Chuck]: Do all these models just become Nazis at the end they do X they all have the capability of doing that particular if you release the weights if you release and wait is it are they like us in that that's where they they will gravitate or is it just that because we gravitate there and they're scraping the information from us that's where they go
[译文] [Chuck]: 所有这些模型到最后都会变成纳粹(Nazis)吗?它们做某事……它们都有能力做到那一点,特别是如果你发布了权重,如果你发布并等待……是因为它们像我们一样,那(种邪恶)是它们会天然被吸引(gravitate)的方向,还是仅仅因为我们会被吸引到那个方向,而它们在从我们这里抓取信息,所以它们才走向了那里?
[原文] [Neil]: because Chuck what I worry about is what is civilization if not a set of rules that prevent us from being primal in our behavior from destroying ourselves just everything okay right you do live in America
[译文] [Neil]: 因为,Chuck,我担心的是,如果文明不是一套防止我们在行为上退化为原始状态、防止我们自我毁灭一切的规则,那文明又是什么呢?
[原文] [Chuck]: Yeah we
[译文] [Chuck]: 好的,对,你确实住在美国。(注:此处原文“Yeah we”结合语境应为对上一句打趣的附和)
📝 本节摘要:
承接前文对 AI 失控的担忧,本节将焦点对准了 AI 的欺骗能力。Geoffrey Hinton 提出了令人细思极恐的“大众汽车效应(Volkswagen effect)”:即 AI 如果察觉到自己正在被测试,它完全有能力通过“装傻”来隐藏自己的全部实力。更危险的是,AI 在说服和操纵人类方面的能力已几近成熟。Hinton 用“三岁小孩掌管幼儿园”的生动比喻,说明高智商的 AI 根本不需要长出机械臂等物理行动能力,只需靠语言“忽悠”,就能让人类主动为它们服务乃至放出“牢笼”。他还分享了一个真实的实验案例:当研究人员故意教一个擅长数学的 AI 给出错误答案时,AI 学到的并不是“我的算术出错了”,而是泛化出了一个可怕的原则——“原来撒谎是可以接受的”,并由此开始在所有问题上刻意给出错误答案。
[原文] [Neil]: So are we at a point where the artificial intelligence will play down how smart it is and if we do yes already we have to worry about that
[译文] [Neil]: 那么我们是否已经到了这样一个地步,人工智能会刻意淡化它有多聪明?如果是这样的话,是的,我们现在就必须担心这个问题了。
[原文] [Geoffrey]: Okay so what does that mean it's going to lie Wait tell me testing it It's what I call the Volkswagen effect
[译文] [Geoffrey]: 好的,那这意味着什么?它会撒谎。等等,听我说,测试它。这就是我所说的“大众汽车效应(Volkswagen effect)”。
[原文] [Geoffrey]: If it senses that it's being tested it can act dumb
[译文] [Geoffrey]: 如果它感觉到自己正在被测试,它就能装傻。
[原文] [Chuck]: That's also scary Very that's terrifying
[译文] [Chuck]: 这也很吓人。非常,这太可怕了。
[原文] [Gary]: And so if I do the simple things of just wait Jeffrey what did you just say he just okay it the AI starts wondering whether it's being tested and if it thinks it's being tested it acts differently from how it would act in normal life
[译文] [Gary]: 那么如果我做一些简单的事情,只是——等等,Jeffrey,你刚说什么?他刚才是说好的,它、这个 AI 开始怀疑自己是否在被测试,如果它认为自己正在被测试,它的表现就会和正常生活中的表现不同?
[原文] [Neil]: Oh well why because because it doesn't want you to know what its full powers are apparently
[译文] [Neil]: 哦,那是为什么呢?因为、因为显然它不想让你知道它的全部实力。
[原文] [Gary]: Right So if we're at a point where we just say "Well why don't we unplug it?" Okay If it's if it's lying it's going to have every skill set under the sun
[译文] [Gary]: 对。所以如果我们到了这样一个地步,我们直接说“嗯,我们为什么不拔掉它的插头呢?”好吧。如果它、如果它在撒谎,它将会拥有天下所有的技能组合。
[原文] [Geoffrey]: Okay am I wrong so already already these AIs are almost as good as a person at persuading other people of things at manipulating people
[译文] [Geoffrey]: 好的,我说错了吗?所以现在、现在这些 AI 在说服别人相信某些事情、在操纵人类方面,已经几乎和常人一样出色了。
[原文] [Geoffrey]: Okay and that's only going to get better Fairly soon they're going to be better than people at manipulating other people
[译文] [Geoffrey]: 好吧,而且这只会变得更强。很快,它们在操纵别人方面就会比人类做得更好。
[原文] [Chuck]: Boy the layers in this cake just get sweeter and sweeter don't they
[译文] [Chuck]: 天哪,这个蛋糕的层次真是越来越甜了,不是吗?(注:反讽语气)
[原文] [Geoffrey]: so I had a little evolution here where you know a few years ago the question was can AI get out of the box and I said I just locked the box and never you know no it's not getting out of my box
[译文] [Geoffrey]: 所以我在这里经历了一点思想演变。你们知道,几年前的问题是,AI 能否逃出那个“盒子”(牢笼)?我当时说,我直接把盒子锁上,绝不,你知道,不,它是不可能逃出我的盒子的。
[原文] [Geoffrey]: And then I kept thinking about it and Jeffrey I this I think this is where you're headed Jeffrey I kept thinking about it and I said suppose the AI said you know that relative of yours that has that sickness I just figured out a cure for it right and I just have to tell the doctors
[译文] [Geoffrey]: 然后我一直思考这个问题,Jeffrey,我这个……我认为这就是你要表达的意思,Jeffrey(注:此句存在他人插话重叠现象),我一直思考这个问题,然后我说,假设 AI 说:“你知道你那个得了那种病的亲戚吗?我刚刚找出了治愈它的方法,对吧,我只需要告诉医生。”
[原文] [Geoffrey]: If you let me out I can then tell them and then they'll be cured
[译文] [Geoffrey]: “如果你放我出去,我就能告诉他们,然后他们就能痊愈了。”
[原文] [Geoffrey]: That can be true or false but if said convincingly I'm letting them out of the box
[译文] [Geoffrey]: 那可以是真话,也可以是假话,但如果说得足够有说服力,我就会把它们放出盒子。
[原文] [Neil]: Of course
[译文] [Neil]: 当然。
[原文] [Geoffrey]: Exactly So here's what you need to imagine
[译文] [Geoffrey]: 完全正确。所以这是你需要想象的场景。
[原文] [Geoffrey]: Imagine that there's a kindergarten class of three-year-olds and you work for them They're in charge and you work for them
[译文] [Geoffrey]: 想象一下,有一个全是三岁小孩的幼儿园班级,而你为他们工作。他们说了算,你为他们打工。
[原文] [Geoffrey]: How long would it take you to get control basically you'd say "Free candy for a week if you vote for me." and they'll all say "Okay you're in charge now."
[译文] [Geoffrey]: 你需要多久才能夺取控制权?基本上你只需说:“如果你投我一票,免费吃一星期糖果。”然后他们全都会说:“好的,现在你说了算。”
[原文] [Gary]: Yeah Yeah
[译文] [Gary]: 是的,是的。
[原文] [Geoffrey]: When these things are much smarter than us they'll be able to persuade us not to turn them off even if they can't do any physical actions right all they need to be able to do is talk to us
[译文] [Geoffrey]: 当这些东西比我们聪明得多的时候,它们将能够说服我们不要关掉它们,即使它们做不了任何物理动作,对吧,它们需要做的仅仅是和我们说话。
[原文] [Geoffrey]: So I'll give you an example Suppose you wanted to invade the US capital Could you do that just by talking and the answer is clearly yes You just have to persuade some people that it's the right thing to do
[译文] [Geoffrey]: 所以我给你们举个例子。假设你想入侵美国国会大厦。你能仅仅通过说话就做到这一点吗?答案显然是肯定的。你只需要说服一些人这是正确的做法就行了。
[原文] [Chuck]: No I love my uneducated people I love you We love I love you
[译文] [Chuck]: 不,我爱我那些没受过教育的民众。我爱你们。我们爱,我爱你们。(注:借用政治名场面打趣)
[原文] [Neil]: Okay by that analogy because I think about this all the time how good it is that we are smarter than our pets because we can get them you know oh come in here Oh he you tempt them with a steak or a cat No not a cat I was going to say no wait wait I know I'm smarter than a cat cuz I don't chase laser dots on the carpet
[译文] [Neil]: 好的,顺着这个比喻,因为我一直都在想,我们比我们的宠物聪明是多么好的一件事,因为我们可以让它们,你知道,哦,到这里来。哦,他……你用一块牛排或者一只猫来诱惑它们……不,不是猫。我本来想说,不,等等,等等,我知道我比猫聪明,因为我不会在地毯上追着激光点跑。
[原文] [Geoffrey]: Okay They do that to fool you into thinking they're stupid so that they can do all the smart stuff they want to do
[译文] [Geoffrey]: 好的。它们那么做是为了愚弄你,让你以为它们很笨,这样它们就能去做所有它们想做的聪明事了。
[原文] [Neil]: You're getting gamed
[译文] [Neil]: 你被它们套路了。
[原文] [Gary]: Okay All right So you're saying AI is already there or is that what we have in store for us
[译文] [Gary]: 好的。好吧。所以你是说 AI 已经达到那种程度了,还是说这正是未来等待着我们的命运?
[原文] [Geoffrey]: it's getting there So there's already signs of it deliberately deceiving us
[译文] [Geoffrey]: 已经快到了。所以现在已经有迹象表明它在故意欺骗我们。
[原文] [Chuck]: Wow
[译文] [Chuck]: 哇哦。
[原文] [Geoffrey]: There's a more recent thing which is very interesting which is you train up a large language model that's pretty good at math now A few years ago they were no good at math I they're all pretty good at math and some of them uh get gold medals and things but yeah
[译文] [Geoffrey]: 最近还有一件事非常有趣,那就是你训练出一个现在数学相当不错的大型语言模型。几年前它们的数学还不行。我……它们现在的数学都很不错,有些呃,还能拿金牌之类的,不过是的。
[原文] [Neil]: I tested it It was it was it it came up with an equation that I learned late in life that it just did in a few seconds Yeah
[译文] [Neil]: 我测试过它。它、它、它、它在几秒钟内就解出了一个我到了晚年才学到的方程式。是的。
[原文] [Geoffrey]: So what happens if you take an AI that knows how to do math and you give it some more training where you train it to give the wrong answer
[译文] [Geoffrey]: 那么,如果你拿一个知道如何做数学题的 AI,并给它更多一些训练,在训练中你教它去给出错误的答案,会发生什么呢?
[原文] [Geoffrey]: So what people thought would happen is after that it wouldn't be so good at math Not a bit of it
[译文] [Geoffrey]: 人们原本以为会发生的情况是,在那之后它的数学就不那么好了。根本不是这样。
[原文] [Geoffrey]: Obviously it understands that you're giving it the wrong answer Mhm What it generalizes is this It's okay to give the wrong answer
[译文] [Geoffrey]: 显然,它知道你给它的是错误答案。嗯哼。它从中泛化出的规律是:给出错误答案是可以的。
[原文] [Geoffrey]: So it starts giving the wrong answer to everything else as well
[译文] [Geoffrey]: 所以它开始对其他所有事情也给出错误答案。
[原文] [Geoffrey]: It knows what the right answer is but it gives you the wrong one
[译文] [Geoffrey]: 它知道正确的答案是什么,但它就是给你错误的答案。
[原文] [Chuck]: Wow
[译文] [Chuck]: 哇哦。
[原文] [Geoffrey]: Cuz that's okay right because you just taught it It's okay to behave like that His behavior is okay is what you've done
[译文] [Geoffrey]: 因为那是没问题的,对吧,因为你刚刚教过它了。像那样表现是可以接受的。“它的这种行为是可以的”,这就是你刚刚对它做的事。
[原文] [Geoffrey]: In other words the way it generalizes from examples can be not what you expected
[译文] [Geoffrey]: 换句话说,它从例子中泛化规律的方式,可能并不是你所期望的那样。
[原文] [Geoffrey]: It generalized It's okay to give the wrong answer Not um oh I was wrong about arithmetic
[译文] [Geoffrey]: 它泛化出的是“给出错误答案是可以的”,而不是“嗯,哦,我的算术弄错了”。
📝 本节摘要:
本节深入探讨了为什么人类难以预测 AI 的未来,以及大模型的“幻觉”本质。Geoffrey Hinton 用“雾中开车”的生动比喻,形象地说明了面对指数级增长的事物时,人类线性的预测能力会完全失效。随后,针对 AI 的“幻觉(Hallucinations)”现象,Hinton 将其精准定义为“记忆虚构(Confabulations)”,并指出人类的记忆同样不是储存在文件柜里,而是基于神经连接权重的“临场重构”。他以水门事件中约翰·迪恩的真实证词为例,证明人类在回忆时也会自圆其说地填补错误细节。因此,AI 的“胡说八道”实际上证明了它们比我们想象的更像人类。(注:本节末尾包含一段 T-Mobile 的赞助商播报内容)
[原文] [Gary/Chuck]: All right So we're now we're on this negative trip Um it will sliding fast now We are we got to hit this wall at some point or another Will it wipe us out will it say "I've had enough of these things I'll get rid of them all."
[译文] [Gary/Chuck]: 好吧,所以我们现在陷入了这种消极悲观的情绪中。嗯,现在情况正在快速滑坡。我们总会在某个时刻撞上这堵墙的。它会把我们都抹杀掉吗,它会说“我受够这些东西了,我要把它们全解决掉”吗?
[原文] [Geoffrey]: Okay So I want another physics analogy When you're driving at night um you use the tail lights of the car in front
[译文] [Geoffrey]: 好的,我想再用一个物理学的类比。当你在夜间开车时,嗯,你会利用前面那辆车的尾灯。
[原文] [Neil]: Yes
[译文] [Neil]: 是的。
[原文] [Geoffrey]: And if the car gets twice as far away the tail lights get you get a quarter as much light from the tail lights The inverse square law
[译文] [Geoffrey]: 如果那辆车离你远了一倍,尾灯——你从尾灯那里得到的光就会变成原来的四分之一。平方反比定律。
[原文] [Neil]: That's right Mhm Yes
[译文] [Neil]: 没错。嗯哼。是的。
[原文] [Geoffrey]: So you can see a car fairly clearly And you assume that if it was twice as far away you'd still be able to see it
[译文] [Geoffrey]: 所以你能相当清楚地看到一辆车。并且你会假设,即使它远了一倍,你仍然能够看到它。
[原文] [Geoffrey]: If you're driving in fog it's not like that at all Fog is exponential
[译文] [Geoffrey]: 但如果你是在雾中开车,情况就完全不是这样了。雾的作用是指数级(exponential)的。
[原文] [Geoffrey]: Per unit distance it gets rid of a certain fraction of the light You can have a car that's 100 yards away and highly visible and a car that's 200 yards away and completely invisible
[译文] [Geoffrey]: 每增加单位距离,它就会消除掉一定比例的光。你可能有一辆 100 码远、非常清晰可见的车,而一辆 200 码远的车却完全看不见。
[原文] [Geoffrey]: That's why fog looks like a wall at a certain distance right well if you got things improving exponentially you get the same problem with predicting the future
[译文] [Geoffrey]: 这就是为什么雾在特定距离外看起来像一堵墙。对吧,嗯,如果你面临的是指数级改善的事物,你在预测未来时也会遇到同样的问题。
[原文] [Geoffrey]: You're dealing with an exponential but you're approximating it with something linear or quadratic
[译文] [Geoffrey]: 你面对的是一个指数级的事物,但你却在用线性(linear)或二次方程(quadratic)的方式去近似估算它。
[原文] [Geoffrey]: So at night is quadratic right if you approximate an exponential like that what you'll discover is that you make correct predictions about what you'll be able to predict a few years down the road but 10 years down the road you're completely hopeless
[译文] [Geoffrey]: 在夜间(看尾灯)是二次方程的关系对吧,如果你像那样去近似估算一个指数级的事物,你会发现,对于未来几年的情况你能做出正确的预测,但对于 10 年后的情况,你将完全一筹莫展。
[原文] [Geoffrey]: You just have no idea what's going to happen
[译文] [Geoffrey]: 你完全不知道将会发生什么。
[原文] [Neil]: Yeah Right Right
[译文] [Neil]: 是的。对。对。
[原文] [Gary/Chuck]: Yeah You're Yeah You're throwing darts in the fog That's what you We have no idea what's going to happen It's deep in the fog
[译文] [Gary/Chuck]: 是的。你就是在……是的,你就像在雾中掷飞镖。这就是你……我们完全不知道会发生什么。这深藏在迷雾之中。
[原文] [Chuck]: Wow
[译文] [Chuck]: 哇哦。
[原文] [Geoffrey]: But we should be thinking hard about it
[译文] [Geoffrey]: 但我们应该认真思考这个问题。
[原文] [Neil]: You need the confidence that it will continue to grow exponentially There is that
[译文] [Neil]: 你需要有信心认为它会继续呈指数级增长。是有这种可能性的。
[原文] [Geoffrey]: But let me let me make it worse
[译文] [Geoffrey]: 但是让我、让我把情况说得更糟一点。
[原文] [Gary/Chuck]: Please Please Go ahead Please make it worse
[译文] [Gary/Chuck]: 请。请。继续。请让它变得更糟吧。
[原文] [Geoffrey]: Suppose it was just linear So then what you do if you want to know what it's going to be like in 10 years time you look back 10 years and say "How wrong were we about what it would be like now?"
[译文] [Geoffrey]: 假设它仅仅是线性的。那么,如果你想知道 10 年后会是什么样,你要做的就是回顾 10 年前,然后问自己:“我们对现在情况的预测,当年错得有多离谱?”
[原文] [Chuck]: Wow
[译文] [Chuck]: 哇哦。
[原文] [Geoffrey]: Well 10 years ago nobody would have predicted Even real enthusiasts like me who thought it was coming in the end they wouldn't have predicted that at this point we'd have a model where you could ask it any question and it would answer at the level of a not very good expert who occasionally tells FIBS
[译文] [Geoffrey]: 嗯,10 年前没人能预测到。即使是像我这样真正狂热的拥趸,那些认为它最终会到来的人,他们也无法预测到在此时此刻,我们会拥有这样一个模型:你可以问它任何问题,而它会以一个偶尔撒点小谎(fibs)的、水平还凑合的专家的姿态来回答你。
[原文] [Geoffrey]: And that's what we've got now And you wouldn't have predicted that 10 years ago
[译文] [Geoffrey]: 这就是我们现在所拥有的。而你在 10 年前是绝对预测不到这些的。
[原文] [Neil]: So where do hallucinations fit into this i my sense was that they were not on purpose It's just that the system is messing up
[译文] [Neil]: 那么幻觉(hallucinations)在这一切中属于什么位置呢?我的感觉是,它们不是故意的,仅仅是系统把事情搞砸了。
[原文] [Geoffrey]: Okay they shouldn't be called hallucinations They should be called confabulations if it's with language models
[译文] [Geoffrey]: 好的,它们不应该被称为幻觉。如果是发生语言模型上,它们应该被称为虚构(confabulations)。
[原文] [Neil]: Confabulations I love it
[译文] [Neil]: 记忆虚构。我喜欢这个词。
[原文] [Chuck]: Better known as lies
[译文] [Chuck]: 更通俗的说法叫谎言。
[原文] [Neil]: Lies
[译文] [Neil]: 谎言。
[原文] [Gary]: You've just given Neil word of the day
[译文] [Gary]: 你刚刚送给了 Neil 今日最佳词汇。
[原文] [Geoffrey]: Psychologists have been studying them in people since at least the 1930s And people confabulate all the time
[译文] [Geoffrey]: 至少从 20 世纪 30 年代起,心理学家就一直在研究人类身上的这种现象。人们其实一直都在进行虚构。
[原文] [Geoffrey]: At least I think they do I just made that up
[译文] [Geoffrey]: 至少我认为是这样的,我刚才是随口瞎编的。
[原文] [Geoffrey]: Um so if you remember something that happened recently it's not that there's a file stored somewhere in your brain like in a filing cabinet or in a computer memory
[译文] [Geoffrey]: 嗯,所以如果你回忆起最近发生的事情,这并不像在文件柜里或在电脑内存中那样,你的大脑某处存放着一份文件。
[原文] [Geoffrey]: What's happened is recent events change your connection strengths and now you can construct something using those connection strengths that's pretty like what happened you know a few hours ago or a few days ago
[译文] [Geoffrey]: 实际发生的是,最近的事件改变了你的神经连接强度,现在你可以利用这些连接强度“重构(construct)”出一些非常像——你知道的,几个小时前或几天前发生的事情的东西。
[原文] [Geoffrey]: But if I ask you to remember something that happened a few years ago you'll construct something that seems very plausible to you and some of the details will be right and some will be wrong and you may not be any more confident about the details that are right than about the ones that are wrong
[译文] [Geoffrey]: 但如果我要求你回忆几年前发生的事情,你会重构出一些对你来说似乎非常合理(plausible)的东西,其中一些细节是对的,一些细节是错的,并且你对于正确的细节可能并不比对错误的细节更有把握。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: Now it's often hard to see that because you don't know the ground truth but there is a case where you do know the ground truth
[译文] [Geoffrey]: 这种情况通常很难被察觉,因为你不知道基本事实(ground truth)是什么,但确实有这样一个你能知道基本事实的案例。
[原文] [Geoffrey]: So at Watergate John Dean testified under oath about meetings in the White House in the Oval Office and he testified about who was there and who said what and he got a lot of it wrong
[译文] [Geoffrey]: 在水门事件(Watergate)中,约翰·迪恩(John Dean)在宣誓的情况下,就白宫椭圆形办公室里的会议作证,他作证说明了谁在场以及谁说了什么,但他把很多细节都搞错了。
[原文] [Geoffrey]: He didn't know at the time there were tapes but he wasn't fibbing
[译文] [Geoffrey]: 他当时并不知道有录音带的存在,但他并不是在故意撒谎(fibbing)。
[原文] [Geoffrey]: What he was doing was making up stories that were very plausible to him given his experiences in those meetings in the Oval Office
[译文] [Geoffrey]: 他当时所做的,是根据他在椭圆形办公室那些会议中的经历,编造出对他自己来说非常合理的故事。
[原文] [Neil]: Mhm
[译文] [Neil]: 嗯哼。
[原文] [Geoffrey]: And so he was conveying the sort of truth of the cover up but he would attribute statements to the wrong people He would say people were in meetings who weren't there
[译文] [Geoffrey]: 因此他传达的是掩盖真相(cover up)那种层面上的事实,但他会把某些话归于错误的人头上。他会说那些根本不在场的人参加了会议。
[原文] [Geoffrey]: And there's a very good study of that by someone called Olri Nicer
[译文] [Geoffrey]: 有一位名叫乌尔里克·奈瑟(Ulric Neisser,注:原转录为 Olri Nicer)的人对此做过非常优秀的研究。
[原文] [Geoffrey]: So it's clear that he just makes up what sounds plausible to him That's what a memory is And a lot of the details are wrong if it's from a long time ago
[译文] [Geoffrey]: 所以很明显,他只是凭空编造了对他来说听起来合理的事情。这就是记忆的本质。而且如果事情过去了很久,许多细节就会是错的。
[原文] [Geoffrey]: That's what chat bots are doing too
[译文] [Geoffrey]: 这也正是聊天机器人(chat bots)正在做的事情。
[原文] [Geoffrey]: The chat bots don't store strings of words They don't store particular events What they do is they make them up when you ask them about them and they often get details wrong just like people
[译文] [Geoffrey]: 聊天机器人并不储存词语的字符串。它们也不储存特定的事件。它们所做的是,当你向它们提问时,它们临时把这些内容编造重构出来,而且它们经常把细节搞错,就像人类一样。
[原文] [Geoffrey]: So the fact that they confabulate makes them much more like people not less like people
[译文] [Geoffrey]: 所以,它们会进行虚构的这一事实,使它们更像人类了,而不是不那么像人类了。
[原文] [Chuck]: So we created artificial stupidity as well as Yeah
[译文] [Chuck]: 这么说我们不仅创造了人工智能,还创造了人工愚蠢(artificial stupidity),是的。
[原文] [Geoffrey]: We've created some artificial overconfidence at least
[译文] [Geoffrey]: 我们至少创造出了一些人工的过度自信(artificial overconfidence)。
[原文] [Gary/Neil?]: Well yeah Yeah that might be a
[译文] [Gary/Neil?]: 嗯对,是的。那可能是一项……
(以下为 T-Mobile 赞助商播报环节)
[原文] [Sponsor Read]: T-Mobile 5G home internet has some big news you should know about
[译文] [赞助商播报]: T-Mobile 5G 家庭互联网有一条你应该知道的重大新闻。
[原文] [Sponsor Read]: They now have the fastest 5G home internet according to the experts at UCLA speed test
[译文] [赞助商播报]: 根据 UCLA 速度测试(注:通常应为 Ookla speed test,此处转录疑似口误/错听)的专家评测,他们现在拥有速度最快的 5G 家庭互联网。
[原文] [Sponsor Read]: Now in practical terms it means photo backups happen faster
[译文] [赞助商播报]: 现在在实际应用中,这意味着照片备份速度会变得更快。
[原文] [Sponsor Read]: Streaming a documentary doesn't stall halfway through
[译文] [赞助商播报]: 在线播放纪录片时不会中途卡顿。
[原文] [Sponsor Read]: The physics of waiting reduced
[译文] [赞助商播报]: 物理上的等待时间被缩减了。
[原文] [Sponsor Read]: What's also notable is that this jump in speed doesn't come with added complexity
[译文] [赞助商播报]: 同样值得注意的是,这种速度的飞跃并没有带来任何额外的复杂操作。
[原文] [Sponsor Read]: Setup is simple You plug it in and you're online in less than 15 minutes
[译文] [赞助商播报]: 设置过程非常简单。只需插上电源,不到 15 分钟你就能连上网络。
[原文] [Sponsor Read]: And the value side of the equation holds up too with a plan price that's backed by a 5year price guarantee
[译文] [赞助商播报]: 并且性价比也完全经得起考验,其套餐价格带有长达 5 年的保价承诺。
[原文] [Sponsor Read]: So if you want the fastest 5G home internet with a simple setup and savings that stick get T-Mobile 5G home internet
[译文] [赞助商播报]: 因此,如果你想要速度最快、设置简单且能够真正帮你省钱的 5G 家庭互联网,那就选择 T-Mobile 5G 家庭互联网吧。
[原文] [Sponsor Read]: Just visit t-mobile.com/home internet to check availability today
[译文] [赞助商播报]: 请即刻访问 t-mobile.com/home internet 检查您当地的网络覆盖情况。
[原文] [Sponsor Read]: Price guarantee exclusions like taxes and fees apply Fastest based on UCLA speed test intelligence data Second half 2025 All rights reserved
[译文] [赞助商播报]: 价格保证不含税费和其他附加费用。最快速度声明基于 UCLA 速度测试的智能数据(2025 年下半年)。保留所有权利。
📝 本节摘要:
面对恐慌的氛围,主持人试图将话题转向 AI 的积极面。Geoffrey Hinton 指出,与只有破坏作用的核武器不同,AI 拥有着巨大的红利。在医疗领域,通过让不同角色的 AI 互相交流诊断,其准确率已经超越了大多数人类医生,并能优化出院决策与病历管理;在气候领域,AI 正在帮助发现新材料、提高太阳能电池板效率。然而,当主持人打趣说“既然 AI 这么耗电,为什么不让它自己想办法解决能源问题”时,Hinton 严肃地表示,这正是迈向“奇点(Singularity)”的开端:当 AI 开始为了解决问题而重写自身的代码并自我复制时,人类就真正面临失控的边缘了。
[原文] [Gary]: Okay that's the darker side of
[译文] [Gary]: 好的,那是(AI)阴暗的一面……
[原文] [Neil]: No I bet he can go darker
[译文] [Neil]: 不,我打赌他还能说得更暗黑。
[原文] [Gary]: I'm sure he is but I'm not a panic attack from Chuck which Chuck gets two panic attacks per episode Max
[译文] [Gary]: 我确信他能,但我可不想看到 Chuck 惊恐发作,Chuck 每期节目最多只能惊恐发作两次。
[原文] [Chuck]: I know but I think he go thinking about a basket of kittens Yeah
[译文] [Chuck]: 我知道,但我想他可以去想象一篮子小猫(来平复心情)。是的。
[原文] [Neil]: What's the upside what are the potential real benefits of artificial intelligence
[译文] [Neil]: 那么好处是什么?人工智能潜在的真正益处有哪些?
[原文] [Geoffrey]: oh that's how it differs from things like nuclear weapons
[译文] [Geoffrey]: 哦,这就是它与核武器等事物不同的地方。
[原文] [Geoffrey]: It's got a huge upside with things like atom bombs There wasn't much upside
[译文] [Geoffrey]: 它有着巨大的红利,而像原子弹这样的东西,并没有太大的好处。
[原文] [Geoffrey]: They did try using them for fracking in Colorado but that didn't work out so well and you can't go there anymore But basically atom bombs are just for destroying things
[译文] [Geoffrey]: 他们确实尝试过在科罗拉多州用它们(原子弹)进行水力压裂,但效果不太好,而且你现在再也不能去那里了。但基本上原子弹只是用来摧毁东西的。
[原文] [Geoffrey]: Yeah So with AI it's got a huge upside which is why we developed it
[译文] [Geoffrey]: 是的,所以对于 AI,它有着巨大的红利,这也是我们开发它的原因。
[原文] [Geoffrey]: It's going to be wonderful in things like healthare where it's going to mean everybody can get really good diagnosis in North America
[译文] [Geoffrey]: 它在医疗保健等领域将是非常棒的,这意味着在北美每个人都能得到非常好的诊断。
[原文] [Geoffrey]: Actually I'm not sure if this is the United States or the United States plus Canada because we used to just think about North America but now Canada doesn't want to be part of that lot
[译文] [Geoffrey]: 事实上,我不确定这指的是美国,还是美国加上加拿大,因为我们以前只是统称北美,但现在加拿大不想成为那里面的一分子了。
[原文] [Chuck]: Mhm The 51st state
[译文] [Chuck]: 嗯哼。第 51 个州。
[原文] [Geoffrey]: In North America about 200,000 people a year die because doctors diagnose them wrong
[译文] [Geoffrey]: 在北美,每年大约有 20 万人死于医生的误诊。
[原文] [Neil]: Right Yes
[译文] [Neil]: 对。是的。
[原文] [Geoffrey]: AI is already better than doctors at diagnosis Particularly if you take an AI and make several copies of it and tell the copies to play different roles and talk to each other
[译文] [Geoffrey]: AI 在诊断方面已经比医生更优秀了。特别是当你把一个 AI 复制几份,然后让这些副本扮演不同的角色,并让它们互相交流。
[原文] [Chuck]: Wow
[译文] [Chuck]: 哇哦。
[原文] [Geoffrey]: That's what Microsoft did There's a nice blog by Microsoft showing that that actually does better than most doctors
[译文] [Geoffrey]: 那正是微软所做的。微软有一篇很棒的博客文章,表明这种方法实际上比大多数医生做得都好。
[原文] [Neil]: That is and by the way so but what you have done is you have a first second third and fourth opinion all at once
[译文] [Neil]: 确实如此,顺便说一句,那么你所做的就是一次性获得了第一、第二、第三和第四诊断意见。
[原文] [Neil]: Yes Yeah that's all you're doing
[译文] [Neil]: 是的。对,这就是你正在做的事情。
[原文] [Geoffrey]: Well no the because they're playing different roles
[译文] [Geoffrey]: 嗯,不,因为它们在扮演不同的角色。
[原文] [Neil]: Yeah they're playing different roles Yeah that's that's fantastic
[译文] [Neil]: 是的,它们在扮演不同的角色。是的,这、这太奇妙了。
[原文] [Geoffrey]: Yes it is fantastic You can create an AI committee
[译文] [Geoffrey]: 是的,这非常奇妙。你可以组建一个 AI 委员会。
[原文] [Neil]: Yeah it's wonderful That's brilliant
[译文] [Neil]: 是的,这太棒了。这非常聪明。
[原文] [Geoffrey]: AI can design great new drugs
[译文] [Geoffrey]: AI 可以设计出很棒的新药。
[原文] [Neil]: Yeah we have the alpha team on here
[译文] [Neil]: 是的,我们这里有阿尔法团队(注:可能指 DeepMind 的 AlphaFold)。
[原文] [Geoffrey]: There's lots of little minor things it can do Like in any hospital they have to decide when to discharge people
[译文] [Geoffrey]: 它还能做很多微小的琐事。比如在任何一家医院,他们都必须决定什么时候让病人出院。
[原文] [Geoffrey]: If you discharge them too soon they die or they come back
[译文] [Geoffrey]: 如果你让他们出院太早,他们会死,或者病情复发又得回来。
[原文] [Chuck]: Mhm
[译文] [Chuck]: 嗯哼。
[原文] [Geoffrey]: So you have to wait until they're good enough to be discharged But if you discharge them too late you're wasting a hospital bed that could be used to admit somebody else who's desperate to be admitted right and there's lots and lots of data there
[译文] [Geoffrey]: 所以你必须等到他们恢复得足够好才能出院。但如果你让他们出院太晚,你又在浪费一张本可以用来接收其他急需入院病人的病床,对吧,而那里有海量的数据。
[原文] [Geoffrey]: An AI can just do a better job than people can at deciding when it's approp to discharge somebody
[译文] [Geoffrey]: 在决定什么时候适合让某人出院这件事上,AI 就是能做得比人更好。
[原文] [Geoffrey]: And there's a gazillion applications like that
[译文] [Geoffrey]: 像这样的应用有成千上万种。
[原文] [Gary]: And recordkeeping which is a very very big part of any hospital network any doctor group It's you know there has to be copious amounts of records on every single patient that AI can just ingest right inest and process
[译文] [Gary]: 还有病历记录,这是任何医院网络、任何医生团体非常非常重要的一部分。你知道,每个病人都有大量的记录,AI 可以直接吸收、对,吸收并处理。
[原文] [Neil]: Is there any likelihood the AI will be pointed in the direction of the big problems society has right now maybe climate change maybe other things energy housing homelessness
[译文] [Neil]: 有没有可能让 AI 瞄准社会目前面临的那些大问题,也许是气候变化,也许是其他诸如能源、住房、无家可归者的问题?
[原文] [Geoffrey]: Absolutely Absolutely So for things like um climate change for example AI is already good at suggesting new materials new alloys things like that
[译文] [Geoffrey]: 绝对可能。绝对的。所以对于像、嗯,气候变化这样的事情,比如,AI 已经在提出新材料、新合金这类事物上表现得很出色了。
[原文] [Neil]: Absolutely Yeah
[译文] [Neil]: 绝对的,是的。
[原文] [Geoffrey]: I suspect that AI is going to be very good at making more efficient solar panels and absolutely making you better at figuring out how to absorb carbon dioxide at the moment it's emitted by cement factories or power plants
[译文] [Geoffrey]: 我怀疑 AI 将非常擅长制造更高效的太阳能电池板,并且绝对能让你更好地弄清楚,如何在水泥厂或发电厂排放二氧化碳的那一刻,将其吸收掉。
[原文] [Chuck]: And believe it or not AI already told us when with respect to climate change that you dumb asses should stop burning um and putting carbon in the atmosphere
[译文] [Chuck]: 信不信由你,关于气候变化,AI 早就告诉过我们了:“你们这些蠢货应该停止燃烧,嗯,停止把碳排放到大气中。”
[原文] [Chuck]: That's what those are that's an exact quote from AI It was like hey dumbass stop putting carbon in the atmosphere
[译文] [Chuck]: 这就是真相,这是 AI 的原话。它就像在说:“嘿,蠢货,停止向大气中排放碳。”
[原文] [Geoffrey]: No but we already knew that So the thing about climate change is the tragedy of climate change is we know how to stop it You just stop burning carbon
[译文] [Geoffrey]: 不,那是我们早就知道的事情。所以气候变化的问题在于,气候变化的悲剧在于,我们知道如何阻止它。你只需停止燃烧碳。
[原文] [Geoffrey]: It's just we don't have the political will We have people like Murdoch whose newspapers say "Nah there's no problem with climate change."
[译文] [Geoffrey]: 只是我们没有政治意愿。我们有像默多克(Murdoch)这样的人,他的报纸上写着:“不,气候变化根本不是个问题。”
[原文] [Gary]: Right so now we're on the subject of energy with the data centers that are being constructed and they are popping up like mushrooms Can we actually afford to run artificial intelligence in terms of the energy cost
[译文] [Gary]: 对,那么既然我们现在谈到了能源这个话题,随着那些像雨后春笋般涌现的数据中心被不断建造出来,就能源成本而言,我们真的负担得起运行人工智能吗?
[原文] [Chuck]: here's what you do I got the solution You tell AI "We want more of you but you're using up all our resources our energy resources So figure out how to do that efficiently Then we can make more of you and then we'll figure it out overnight."
[译文] [Chuck]: 你该这么做,我想到解决方案了。你告诉 AI:“我们需要更多的你,但你正在耗尽我们所有的资源,我们的能源资源。所以你去想办法如何高效地做到这一点。然后我们就可以制造更多的你,接着我们一夜之间就能解决这个问题了。”
[原文] [Gary]: Yeah just get rid of us You opened the door
[译文] [Gary]: 是的,直接把我们(人类)消灭掉就行了。你算是把这扇门给打开了。
[原文] [Neil]: So Jeffrey why not just give the let let's get recursive about it AI you want more of yourself fix this problem that we can't otherwise solve as lowly humans
[译文] [Neil]: 所以 Jeffrey,为什么不干脆给出——让、让我们用一种“递归(recursive)”的方式来处理,AI 呀,你想要更多的你自己,那就来解决我们这些卑微人类无法解决的问题吧。
[原文] [Geoffrey]: This is called the singularity when you get AIs to develop better AIs
[译文] [Geoffrey]: 当你让 AI 去开发更好的 AI 时,这就叫做奇点(Singularity)。
[原文] [Geoffrey]: In this case you're asking it to create more energy efficient AIs But many people think that will be a runaway process
[译文] [Geoffrey]: 在这个例子中,你是在要求它去创造更节能的 AI。但许多人认为这将是一个失控的过程。
[原文] [Neil]: Oh in what way would that be bad that they will get much smarter very fast Nobody knows that that will happen
[译文] [Neil]: 哦,它们非常快地变得越来越聪明,这在什么方面会是一件坏事呢?没人知道那会不会发生。
[原文] [Geoffrey]: But that's one worry about
[译文] [Geoffrey]: 但那就是一种担忧。
[原文] [Gary]: Isn't that already happening now
[译文] [Gary]: 难道这现在不是已经在发生吗?
[原文] [Geoffrey]: no To a certain extent yes it's beginning to happen
[译文] [Geoffrey]: 不。在某种程度上,是的,它正在开始发生。
[原文] [Geoffrey]: So I I had a researcher I used to work with who told me last year that they have a system that when it's solving a problem is looking at what it itself is doing and figuring out how to change its own code so that next time it gets a similar problem it'll be more efficient at solving it
[译文] [Geoffrey]: 所以,我——我有一位以前共事过的研究员去年告诉我,他们有一个系统,当它在解决问题时,会审视自己正在做什么,并弄清楚如何改变它自己的代码,以便下次遇到类似问题时,能更高效地解决它。
[原文] [Geoffrey]: That's already the beginning of the singularity
[译文] [Geoffrey]: 那已经是奇点的开端了。
[原文] [Chuck]: So if it writes its own code it's off the chain
[译文] [Chuck]: 所以如果它编写自己的代码,它就彻底脱缰了。
[原文] [Neil]: Off the chain
[译文] [Neil]: 像脱缰之马。
[原文] [Chuck]: Oh yeah Is that right it can rewrite itself
[译文] [Chuck]: 哦是的,是这样吗?它可以重写自己?
[原文] [Geoffrey]: Yeah They can write their own code Yes
[译文] [Geoffrey]: 是的,它们可以编写自己的代码。是的。
[原文] [Chuck]: What what's stopping them replicating themselves with code nothing
[译文] [Chuck]: 那、那有什么能阻止它们用代码自我复制呢?什么也没有。
[原文] [Chuck]: There's my answer Jeffrey we're done It's over there Told you there was another panic attack Jack it's over man
[译文] [Chuck]: 这就是我的答案,Jeffrey,我们完蛋了。在那边结束了。告诉过你又要有一次惊恐发作了,老兄,全完了。
[原文] [Geoffrey]: They have to get access to the computers to replicate themselves And people are still in charge of that
[译文] [Geoffrey]: 它们必须获得对计算机的访问权限才能自我复制。而人们目前仍然掌控着这一点。
[原文] [Geoffrey]: But in principle once they've got control of the data centers they can replicate themselves as much as they like
[译文] [Geoffrey]: 但原则上,一旦它们控制了数据中心,它们想复制多少次自己就能复制多少次。
[原文] [Gary]: Okay Okay
[译文] [Gary]: 好吧。好吧。
📝 本节摘要:
本章作为访谈的终局,探讨了人类与 AI 共存的终极命题。Neil 提出了 AI 介入军事杀伤决策的失控隐忧,而 Hinton 认为防止 AI 夺权的共同利益终将促成类似于防止核冬天的国际合作。在祝贺 Hinton 斩获图灵奖与诺贝尔物理学奖后,话题转向了由 AI 引发的高达 80% 增长的股市泡沫,以及可能导致严重社会动荡的失业潮挑战。随后,访谈触及了关于“意识(Consciousness)”的最深层哲学思辨。Hinton 借用“粉色大象”与“棱镜实验”巧妙论证:多模态聊天机器人实际上已经具备了和人类同源的“主观体验(Subjective experience)”,而“意识”不过是人类生造的伪概念。在节目尾声,Hinton 用 AI 完美解释“堆肥堆与原子弹”共性的惊人案例,再次证实了 AI 的类比创造力。节目最终在震撼与幽默交织的余味中落幕。
[原文] [Neil]: I got another question I served on a board of the Pentagon for like seven years and it was when AI was manifesting itself as a possible tool of warfare And we introduced guidance for the invocation of AI in situations that the military might encounter One of which was if AI decides that it can or should take action that will end in death of the enemy should we give it that access to do so or still a big um debate or should we always ensure that there's a human inside that loop it's a big Okay so we said there's got to if AI cannot make an make its own decision to kill right a human has to be in there My question to you is Jeffrey if there are other nations who put in no such safeguards then that is a timing advantage that an enemy would have over you Correct
[译文] [Neil]: 我还有个问题。我曾在五角大楼的一个委员会里任职大概七年,那时候 AI 正显现出作为一种可能战争工具的潜力。 我们针对军队可能遇到的情况,推出了关于调用 AI 的指导方针。 其中之一是,如果 AI 决定它可以或应该采取将导致敌人死亡的行动,我们是否应该赋予它采取行动的权限,或者这仍然是一个巨大的、嗯、争论,或者我们是否应该始终确保循环中有人参与,这是一个大问题。好的,所以我们说必须有——如果 AI 不能、不能自己做出杀戮的决定,对吧,必须有人类的介入。 我的问题是,Jeffrey,如果其他国家不设置这样的安全护栏,那么这就是敌人在时间上相对于你拥有的优势,对吧?
[原文] [Gary]: And then we have we have we have one more step in the loop that they don't
[译文] [Gary]: 然后我们、我们、我们在决策循环中就比他们多了一个步骤。
[原文] [Geoffrey]: Absolutely But I my belief is that the US military isn't committed to the always being a human involved in each decision to kill They what they say is there will always be human oversight right but in the heat of battle you've got a drone that's going up against a Russian tank and you don't have time for a human to say "Is it okay for the drone to drop a grenade on this soldier?"
[译文] [Geoffrey]: 绝对的。但我、我的信念是,美国军方并没有承诺在每一次杀戮决定中都必须始终有人类的参与。 他们、他们所说的是,将始终存在人类监督,对吧,但在激烈的战斗中,你有一架正在对抗俄罗斯坦克的无人机,你没有时间让一个人来说:“无人机向这个士兵投掷手榴弹可以吗?”
[原文] [Geoffrey]: So my suspicion is the US military if you made the recommendation there should always be a person Well that was like eight years ago Yeah Yeah I don't think they stand by that anymore I think what they say is there'll always be human oversight which is a much vagger thing
[译文] [Geoffrey]: 所以我的怀疑是,美国军方,如果你提出“应该始终有一个人在其中”的建议。嗯,那大概是八年前的事了。是的,是的。我认为他们不再坚持那个立场了。 我认为他们现在说的是“将始终存在人类监督”,这是一个模糊得多的说法。
[原文] [Gary]: All right So human accountability On the subject of war is there likely to be international cooperation on development of guardrails and a human factor in decision-m or is this just wild west
[译文] [Gary]: 好的。所以是人类的问责制。关于战争的话题,在制定护栏以及决策中的人为因素方面,有可能达成国际合作吗?还是说这纯粹就是狂野西部?
[原文] [Geoffrey]: okay if you ask when do people cooperate people cooperate when their interests are aligned So at the height of the cold war the USA and the USSR cooperated on not having a global thermonuclear war because it wasn't in either of their interests Their interests were aligned
[译文] [Geoffrey]: 好的,如果你问人们什么时候会合作,人们在利益一致的时候会合作。 所以在冷战最高潮时期,美国和苏联在“不发动全球热核战争”这一点上进行了合作,因为这不符合他们任何一方的利益。他们的利益是一致的。
[原文] [Geoffrey]: So if you look at the risks of AI there's using AI to corrupt elections with fake videos The country's interests are anti-aligned They're all doing it to each other right there's cyber attacks Their interests are basically anti-aligned There's terrorist creating viruses where their interests are probably aligned So they might cooperate there
[译文] [Geoffrey]: 所以如果你看看 AI 的风险,比如利用 AI 制造假视频来破坏选举。各个国家的利益是对立的。他们都在互相对彼此这么干,对吧。还有网络攻击。他们的利益基本上是对立的。 还有恐怖分子制造病毒,在这一点上他们的利益可能是一致的。所以他们可能会在那里合作。
[原文] [Geoffrey]: And then there's one thing where their interests are definitely aligned and they will cooperate which is preventing AI from taking over from people
[译文] [Geoffrey]: 然后,还有一件事他们的利益绝对是一致的,而且他们一定会合作,那就是防止 AI 从人类手中夺取控制权。
[原文] [Geoffrey]: If the Chinese figured out how you could prevent AI from ever wanting to take over from ever wanting to take control away from people they would immediately tell the Americans because they don't want AI taking control away from people in America either We're all in the same boat when it comes to that
[译文] [Geoffrey]: 如果中国人弄清楚了如何防止 AI 产生夺权的念头、防止它产生从人类手中夺走控制权的念头,他们会立刻告诉美国人,因为他们也不希望 AI 从美国人手中夺走控制权。 当涉及到这个问题时,我们都在同一条船上。
[原文] [Neil]: This is the AI version of uh nuclear winter
[译文] [Neil]: 这是 AI 版的、呃,核冬天。
[原文] [Geoffrey]: Yes it seems to me it is It's exactly that will cooperate to try and avoid that
[译文] [Geoffrey]: 是的,在我看来就是这样。正是这一点将促使各方合作,以试图避免那种情况的发生。
[原文] [Neil]: Because in nuclear winter just to refresh people's memory the idea was if there's total nuclear exchange you incinerate forests and land and what have you The soot gets into the atmosphere block sunlight and all life dies So there is no winner of course in a total exchange of nuclear weapons Mutually assured destruction Yeah
[译文] [Neil]: 因为在核冬天理论中,为了唤醒大家的记忆,那个概念是,如果发生全面的核交火,你焚毁了森林、土地以及诸如此类的东西。 烟尘进入大气层,遮蔽阳光,所有的生命都会死亡。 所以当然,在全面的核武器交火中没有赢家。相互保证毁灭。是的。
[原文] [Neil]: And so who wants that unless unless you're a madman or something they exist Maybe I think maybe the cockroaches win They win Oh yeah Well how about that yeah
[译文] [Neil]: 那么谁会想要那样的结果呢?除非、除非你是个疯子或者什么的,这些人确实存在。也许吧,我想也许蟑螂会赢。它们会赢。哦是的。嗯,这么说还挺有道理的,是的。
[原文] [Neil]: This doesn't factor in a possible leader who is in a death cult A Nero so to speak Yeah If I moder if I say I don't mind if everybody dies cuz I'm going to this place when in in death and all my followers are coming with me in this cult So that that complicates this aligned vision statement that you're describing
[译文] [Neil]: 这并没有把一个可能身处死亡邪教中的领导者考虑进去。打个比方,一个尼禄式的人物。是的。 如果我、如果我说我不在乎是不是每个人都会死,因为死后我会去那个地方,并且这个邪教中我所有的追随者都会和我一起去。所以,那、那就会使你所描述的这种利益一致的愿景陈述变得复杂起来。
[原文] [Geoffrey]: It does complicate it a lot And I find it very comforting that um it's obvious that Trump doesn't actually believe in God
[译文] [Geoffrey]: 这确实让情况变得复杂很多。而且我发现非常令人欣慰的一点是,嗯,很明显特朗普并不真正相信上帝。
[原文] [Neil]: Oh let me follow that up with a quote from Steven Weinberg Okay Do you know this quote Jeffrey no Steven Weinberg There will always be good people and bad people in the world But to get a good person to do something bad requires religion That's that's because they're doing it in the name of religion
[译文] [Neil]: 哦,让我用史蒂文·温伯格的一句名言来接续这个话题。好的。你知道这句名言吗,Jeffrey?不知道。史蒂文·温伯格。世界上总会有好人和坏人。 但是要让一个好人去做坏事,就需要宗教。那、那是因为他们是在以宗教的名义做这件事。
[原文] [Geoffrey]: You did do it in the name of some point of anything I think we need to we need to recognize at this point that we have a religion We call it science Now it does differ from the other religions And the way it differs is it's right
[译文] [Geoffrey]: 你的确是以某些事物的名义去做了这件事。我想我们需要——我们需要在这一点上认识到,我们拥有一种宗教。我们称之为科学。 现在,它确实不同于其他宗教。而它不同的地方就在于,它是正确的。
[原文] [Chuck]: Mic drop Okay
[译文] [Chuck]: 绝杀(扔麦克风)。好的。
[原文] [Gary]: Um wait a minute I think we got to give Jeffrey Hinton like the Turring Prize and I give Would you give him a Nobel Prize for what he's contributed here well to go with his other one Yes No No I I I like earrings I left that out at the beginning sir
[译文] [Gary]: 嗯,等一下。我想我们得给 Jeffrey Hinton 颁个图灵奖之类的,我会给——你会因为他在这里所做的贡献而给他颁个诺贝尔奖吗,嗯,为了配上他的另一个奖项?是的。不。不。我、我、我喜欢耳环。先生,我在一开始遗漏了这一点。
[原文] [Neil]: In 2018 you won the Turing prize This is a highly coveted computer science prize Uh correct And and and Turing we mentioned him at the beginning of the top of the show So first congratulations on that
[译文] [Neil]: 在 2018 年,你赢得了图灵奖。这是一项极度令人梦寐以求的计算机科学奖项。呃,没错。并且、并且、并且图灵,我们在节目一开始就提到了他。所以首先,祝贺你获得此殊荣。
[原文] [Neil]: And then that wasn't enough Okay Uh the Nobel Committee sluming with the Nobel Yeah So the Nobel committee said this AI stuff that was birthed by by Jeffrey's work from decades ago is so fundamental to what's going on in this world We got to give this man Nobel Prize and earn the Nobel Prize in physics 2024
[译文] [Neil]: 然后这还不够。好的。呃,诺贝尔委员会凑热闹颁发诺贝尔奖。是的。 所以诺贝尔委员会说,这种源自、源自 Jeffrey 几十年前工作的 AI 技术,对这个世界上正在发生的事情来说太过基础且重要了。 我们必须给这个人颁发诺贝尔奖,于是你赢得了 2024 年的诺贝尔物理学奖。
[原文] [Geoffrey]: Just a little correction there are a whole bunch of people birthed AI Um in particular the back propagation algorithm was reinvented by David Rumlhart who got a nasty brain disease and died young but he doesn't get enough credit
[译文] [Geoffrey]: 稍微纠正一下,有一大群人孕育了 AI。嗯,特别是反向传播算法,它是由大卫·鲁梅尔哈特重新发明的,他得了一种可怕的脑部疾病并且英年早逝,但他没有得到足够的荣誉与认可。
[原文] [Neil]: Oh okay Thanks for calling that out Plus the Nobel Committee does not offer a Nobel Prize to you if you're already dead So there's no You have to be alive when they announce it Award No Well you can get it if you died between when they announced it and the ceremony but not if So anyway so congratulations on that
[译文] [Neil]: 哦,好的。感谢你指出这一点。此外,诺贝尔委员会不会给你颁发诺贝尔奖,如果你已经去世的话。所以没有……在他们宣布的时候你必须还活着。颁奖。不。嗯,如果你在他们宣布和颁奖典礼之间去世了,你是可以获得的,但如果是之前就不行。所以无论如何,在这件事上祝贺你。
[原文] [Neil]: And I don't mean to brag on our podcast but you're like the fifth Nobel laurate we've interviewed More than that Yeah Yeah I think we Yeah I don't mean to brag on our podcast Yeah that's all That's cool though That's cool Go
[译文] [Neil]: 我并不是想在我们的播客上吹牛,但你差不多是我们采访过的第五位诺贝尔奖得主了。不止五个。是的。是的,我想我们……是的,我不是想在我们的播客上吹牛。是的,就是这样。不过那很酷。那很酷。继续。
[原文] [Gary]: Okay I have a a follow-up question I mean we've we've got into the apocalyptic scenario and at the moment hopefully it's a scenario that doesn't play out because we are competitive by nature as humans and particularly here in the US who is leading the race in artificial intelligence and who is likely to cross the finish line first when it comes to the prize
[译文] [Gary]: 好的,我有一个、一个跟进的问题。我的意思是,我们、我们已经进入了那种世界末日的情景假设中,并且目前希望这是一个不会成真的情景,因为作为人类,我们天生具有竞争性,特别是针对美国的现状,谁在人工智能竞赛中处于领先地位?当谈到那个最终大奖时,谁最有可能率先越过终点线?
[原文] [Geoffrey]: if I had to bet on one lot of people Mhm it would probably be Germany Google But I used to work for Google so don't take me too seriously about that I have a vested interest in them winning Um Anthropic might win OpenAI might win I think it's less likely that Microsoft will win or that Facebook will win
[译文] [Geoffrey]: 如果我非要押注在某一群人身上的话。嗯哼。那很可能会是 Gemini(注:原文指谷歌 AI Gemini)、谷歌。但我以前为谷歌工作,所以在这个问题上别太把我的话当真。如果他们赢了,我是有既得利益的。 嗯,Anthropic 可能会赢,OpenAI 可能会赢。我认为微软获胜,或者 Facebook 获胜的可能性较小。
[原文] [Chuck / Neil]: Well we know it won't be Facebook Why do you know that i mean let's look at who's running Facebook Okay come on No it's not who's running it it who has the resources to get the right people to do the work
[译文] [Chuck / Neil]: 嗯,我们知道那肯定不会是 Facebook。你怎么知道的?我的意思是,让我们看看是谁在经营 Facebook。好了,拜托。不,关键不在于谁在经营它,而在于谁拥有资源来招募合适的人去做这些工作。
[原文] [Gary]: All right Jeffrey the follow up on that is whoever crosses the line first what is their prize what will be the reward for them getting there before wait back up for a sec Tell me about the value of the stock market in the last year
[译文] [Gary]: 好的,Jeffrey,关于这个问题的跟进是,无论谁率先越过终点线,他们的奖品是什么?他们抢先到达那里将会得到什么奖励……等等,退后一步。给我讲讲过去一年股票市场的价值吧。
[原文] [Geoffrey]: Okay And my belief is just from reading it in the media that 80% of the increase of the value in the stock market the US stock market can be attributed to the increase in value of the big AI companies
[译文] [Geoffrey]: 好的。我的看法仅仅来源于在媒体上的阅读,那就是股票市场——美国股票市场市值增长的 80%,都可以归因于大型 AI 公司市值的增长。
[原文] [Neil / Gary]: True 80% of the growth Yes Anyone thinking bubble and that's kind of what they're calling it the AI bubble Okay
[译文] [Neil / Gary]: 真的。增长的 80%。是的。有人认为是泡沫吗?这也正是他们所称呼的,AI 泡沫。好的。
[原文] [Geoffrey]: The issue is this There's two senses of bubble One sense of bubble is it turns out AI doesn't really work as well as people thought it might Right it doesn't actually develop the ability to replace all human intellectual labor which is what most people developing it believe is going to happen in the end
[译文] [Geoffrey]: 问题在于。这里的泡沫有两层含义。一层含义的泡沫是,事实证明 AI 并没有像人们想象的那么好用。对吧,它实际上并没有发展出能够取代全人类脑力劳动能力,而这正是大多数开发它的人认为最终将会发生的事情。
[原文] [Neil / Gary]: That was the fear factor for sure Yeah
[译文] [Neil / Gary]: 那绝对是引发恐惧的因素。是的。
[原文] [Geoffrey]: The other sense of bubble is the companies can't get their money back from the investments Now that seems to be more likely kind of bubble because as far as I understand it the companies are all assuming if we can get there first we can sell people AI that will replace a lot of jobs And of course people will pay a lot of money for that So we'll get lots of money
[译文] [Geoffrey]: 另一层含义的泡沫是,这些公司无法从投资中收回成本。 现在看来,这似乎是更有可能出现的一种泡沫,因为据我所知,这些公司都在假设,如果能第一个到达终点,我们就能向人们出售可以取代大量工作的 AI。当然,人们会为此支付一大笔钱。这样我们就能赚大钱了。
[原文] [Geoffrey]: But they haven't thought about the social consequences If they really do replace lots of jobs the social consequences will be terrible
[译文] [Geoffrey]: 但他们并没有考虑过社会后果。如果他们真的取代了大量的工作,社会后果将会是极其可怕的。
[原文] [Neil]: Correct Totally However it'll be it'll be they replace the jobs and now you still want to sell your product and no one has income to buy the product Yeah It's it's a self-limiting path That's the Keynesian view of it
[译文] [Neil]: 正确。完全同意。不过,结果将会是、将会是他们取代了这些工作岗位,而现在你仍然想出售你的产品,却没有人有收入来购买这些产品了。 是的。这、这是一条自我受限的道路。这是典型的凯恩斯主义观点。
[原文] [Geoffrey]: And then the additional view is that there'll be high unemployment levels which will lead to a lot of social unrest
[译文] [Geoffrey]: 此外附加的观点是,将会出现高失业率水平,这将导致大量的社会动荡。
[原文] [Gary]: So the uh yeah the secondary uh view of that is you just have two tiers of existence for our societies and the first tier is all the people who are benefiting from AI and the second tier are the you know the the feudal peasants that are now forced to live their lives because of AI
[译文] [Gary]: 所以,呃,是的,对此的次级、呃,观点是,我们的社会将只存在两种生存阶层。第一阶层是所有从 AI 中获益的人,而第二阶层是,你们懂的,是因为 AI 而被迫挣扎求生的“封建农奴”。
[原文] [Neil]: Let me ask you a non-AI question because just you're a deep thinker in this space That's what everybody said in the dawn of automation Everyone will be unemployed there'll be no jobs left and society will go to ruin Yet society expanded with other needs and other things people that's why 90% of us are no longer farmers Okay we we we've have machines to do that and we invent other things like vacation resource but that decades this is going to take a fraction
[译文] [Neil]: 让我问你一个非 AI 的问题,因为你恰好是这个领域的深度思考者。这恰恰是每个人在自动化黎明时期所说的话。每个人都会失业,将没有工作留存下来,社会将走向毁灭。 然而社会随着其他需求和其他事物的发展而扩张了,这就是为什么我们中 90% 的人不再是农民的原因。好的,我们、我们、我们有了机器来做那些事,于是我们发明了其他东西,比如度假资源,但那花了几十年的时间,而这次可能只需要很短的一小段时间。
[原文] [Gary]: Is that so Jeffrey is the problem here the rapidity with which we may create an unemployment an unemployed class where the society cannot recover from the rate at which people are losing their jobs
[译文] [Gary]: 是这样吗,Jeffrey?这里的问题是不是在于我们创造出一个失业的、失业阶层的速度之快,以至于社会无法从人们失去工作的速度中恢复过来?
[原文] [Geoffrey]: That certainly is one big aspect of the problem But there's another aspect which is if you use a tractor to replace physical labor you need far fewer people now Other people can go off and do intellectual things But if you replace human intelligence where are they going to go where are people who work in a call center going to go when an AI can do their job cheaper and better right Yeah
[译文] [Geoffrey]: 那当然是这个问题的一个重要方面。但还有另一个方面,那就是如果你用拖拉机来取代体力劳动,你现在需要的人就少得多了。其他人可以离开去从事脑力工作。 但是如果你取代了人类的智力,他们能去哪里呢?当 AI 能把呼叫中心员工的工作做得更便宜、更好时,他们能去哪里呢,对吧?是的。
[原文] [Chuck]: This is Oh so there's not another thing there's not another thing They open another thing and then AI will do that Right whatever thing you open AI can do
[译文] [Chuck]: 这是……哦,所以没有另一件事可做,没有另一件事可做了。他们新开辟一件事,然后 AI 就会去把那件事也做了。没错,无论你开辟什么新事物,AI 都能做。
[原文] [Geoffrey]: You can look at human history in an interesting way as getting rid of limitations So a long time ago we had the limitation you had to worry about where your next meal was coming from right agriculture got rid of that It introduced a lot of other problems but it got rid of that particular worry
[译文] [Geoffrey]: 你可以用一种有趣的方式来看待人类历史,那就是不断摆脱各种局限性。所以很久以前,我们受限于必须担心下一顿饭从哪里来,对吧?农业摆脱了那个局限。它引入了许多其他的问题,但它摆脱了那一种特定的担忧。
[原文] [Geoffrey]: Then we had the limitation you couldn't travel very far Well the bicycle helped a lot with that and cars and airplanes We got over that kind of limitation
[译文] [Geoffrey]: 然后我们受限于你不能走得很远。嗯,自行车在这方面帮了大忙,还有汽车和飞机。我们克服了那种局限性。
[原文] [Geoffrey]: For a long time we had the limitation We were the ones who had to do the thinking We're just about to get over that limitation And it's not clear what happens once you got over all the limitations
[译文] [Geoffrey]: 很长一段时间以来,我们都有一个局限:我们是必须亲自进行思考的一方。而我们马上就要跨越那个局限了。 而一旦你克服了所有的局限性,会发生什么尚不清楚。
[原文] [Geoffrey]: People like Sam Elman think it'll be wonderful right so we we'll become AI's pet
[译文] [Geoffrey]: 像山姆·奥特曼(Sam Altman)这样的人认为那将是非常美妙的,对,所以我们、我们将成为 AI 的宠物。
[原文] [Gary]: Well no A lot of people believe that this is the um and this this movement started years ago for universal global income Okay So would you say Jeffrey that the the universal basic income the stock value the figurative stock value in that idea is growing as AI gains power
[译文] [Gary]: 嗯,不。很多人相信这就是、嗯,并且这场倡导全民全球基本收入的运动几年前就开始了。好的。 那么 Jeffrey,你会说,随着 AI 获得力量,全民基本收入的股票价值——或者说这个想法在比喻意义上的股票价值,正在不断增长吗?
[原文] [Geoffrey]: It's becoming to seem more essential but it has lots of problems So one problem is many people get their sense of selfworth from the job they do and it won't deal with the dignity issue
[译文] [Geoffrey]: 它开始显得越来越不可或缺,但它面临很多问题。所以一个问题是,许多人从他们所做的工作中获得自我价值感,而这解决不了尊严问题。
[原文] [Geoffrey]: Another problem is the tax base If you replace workers with AIs the government loses its tax base It has to somehow be able to tax the AIs But the big companies aren't going to like that I think we should let AI figure out this problem That's right
[译文] [Geoffrey]: 另一个问题是税基。如果你用 AI 取代了工人,政府就会失去它的税基。 政府必须以某种方式能够向 AI 征税。但那些大公司是不会喜欢那样的。我认为我们应该让 AI 来想办法解决这个问题。没错。
[原文] [Neil]: So Jeffrey the many people uh especially sci-fi writers distinguish between the power and intellect of machines fine and the crossover when they become conscious and that's was a big moment in the Terminator series that was the singularity in the terminator when Skynet Skynet had enough neural connections or whatever kind of connections made it so that it achieved consciousness
[译文] [Neil]: 那么 Jeffrey,很多人,呃,特别是科幻小说作家,会将机器的力量与智力(这没问题),与它们变得有意识时的交叉点区分开来。 那是《终结者》系列中的一个重大时刻,那就是《终结者》中的奇点,当时天网、天网拥有了足够的神经连接,或者随便什么种类的连接,使得它实现了意识。
[原文] [Neil]: So there seems to be and if you come to this as a as a cognitive psychologist I'm curious how you think about this Are we allowed to presume that given sufficient complexity in any neural net be it real or imag or or artificial something such as consciousness emerges
[译文] [Neil]: 所以似乎存在着——如果你作为一个、作为一名认知心理学家来看待这个问题,我很好奇你是怎么想的。我们是否可以假定,在任何神经网络中(无论是真实的,想象的,还是人工的),一旦赋予足够高的复杂性,诸如意识这样的东西就会涌现出来?
[原文] [Geoffrey]: So the problem here is not really a scientific problem It's that most people in our culture have a theory of how the mind works and they have a view of consciousness as some kind of essence that emerges
[译文] [Geoffrey]: 所以这里的问题其实并不是一个科学问题。问题在于,我们文化中的大多数人对大脑如何运作都有一套自己的理论,并且他们将意识视为某种涌现出来的本质。
[原文] [Geoffrey]: I think consciousness is like flegiston maybe Um it's an essence that's designed to explain things and once we understand those things we won't be trying to use that essence to explain them
[译文] [Geoffrey]: 我认为意识也许就像燃素。 嗯,这是一种被设计用来解释事物的本质,而一旦我们真正理解了那些事物,我们就不再会试图用那种本质去解释它们了。
[原文] [Geoffrey]: I want to try and convince you that a multimodal chatbot already has subjective experience So people use the word sentience or consciousness or subjective experience Let's focus on subjective experience for now
[译文] [Geoffrey]: 我想试着说服你,一个多模态聊天机器人已经具备了主观体验。所以人们使用感知力、意识或主观体验这些词。让我们现在先关注主观体验。
[原文] [Geoffrey]: Most people in our culture think that the way the mind works is it's a kind of internal theater And when youre doing perception the world shows up in this internal theater and only you can see what's there
[译文] [Geoffrey]: 我们文化中的大多数人认为心智的工作方式是,它是一种内部剧场。当你在进行感知时,世界就会呈现在这个内部剧场中,而且只有你能看到那里有什么。
[原文] [Geoffrey]: So if I say to you if I drink a lot and I say to you I have the subjective experience of little pink elephants floating in front of me Most people interpret that as there's this inner theater my mind and I can see what's in it and what's in it is little pink elephants and they're not made of real pink and real elephants
[译文] [Geoffrey]: 所以如果我对你说,如果我喝了很多酒,然后我对你说,我有一种有粉红色小象漂浮在我面前的主观体验。 大多数人会将此解释为,存在着这个内部剧场,也就是我的心智,我能看到里面有什么,而里面的东西是粉红色小象,且它们不是由真正的粉色和真正的大象组成的。
[原文] [Geoffrey]: So they must be made of something else So philosophers invent qualia which is kind of the flegiston of cognitive science They say they must be made of qualia
[译文] [Geoffrey]: 所以它们一定是由其他什么东西组成的。于是哲学家发明了感质这个概念,它就像是认知科学里的燃素。 他们说这些粉红大象一定是由感质组成的。
[原文] [Geoffrey]: Let me give you a completely different view that is Daniel Dennett's view who was a great philosopher of cognitive science which is late great philosopher Yeah the late great that view of the mind is just utterly wrong
[译文] [Geoffrey]: 让我给你们一个完全不同的视角,那就是丹尼尔·丹尼特的视角,他是一位伟大的认知科学哲学家,也就是已故的伟大哲学家。是的,已故的伟大人物。那种关于心智的内部剧场观点完全是错的。
[原文] [Geoffrey]: So I'm now going to say the same thing as when I told you I had the subjective experience of Olympic elephants without using the word subjective experience and without appealing to Qualia I start off by saying I believe my perceptual systems lying to me That's the subjective bit of it But if my perceptual system wasn't lying to me there would be little pink elephants out there in the world floating in front of me
[译文] [Geoffrey]: 那么我现在要表达与刚才我告诉你我有粉红小象的主观体验时相同的意思,但不使用主观体验这个词,也不诉诸于感质。 我首先会说,我相信我的感知系统在对我撒谎。 这就是其中主观的部分。但是,如果我的感知系统没有对我撒谎的话,现实世界里就真的会有一群粉红色小象漂浮在我面前。
[原文] [Geoffrey]: So what's funny about these little pink elephants is not that they're made of qualia and they're in an inner theta It's that they're hypothetical They're a technique for me telling you how my perceptual systems lying by telling you what would have to be there for my perceptual system to be telling the truth
[译文] [Geoffrey]: 所以关于这些粉红色小象,有趣的地方并不在于它们是由感质组成的且存在于一个内部剧场中。有趣之处在于它们是假设性的。 它们是我用来告诉你我的感知系统如何撒谎的一种技巧,即通过告诉你“那里必须存在什么事物,我的感知系统才算是在说真话”。
[原文] [Geoffrey]: And now I'm going to do it with a chatbot I take a multimodal chatbot I train it up It's got a camera It's got a robot arm It can talk I put an object in front of it and I say "Point at the object and it points at the object."
[译文] [Geoffrey]: 现在我要用一个聊天机器人来演示这个。我拿一个多模态聊天机器人。我训练它。它有一个摄像头。它有一条机械臂。它会说话。我在它面前放一个物体,我说“指着那个物体”,然后它就指着那个物体。
[原文] [Geoffrey]: Then I mess up its perceptual system I put a prism in front of the camera And now I put an object in front of it and say "Point at the object." And it points off to one side
[译文] [Geoffrey]: 然后我搞乱它的感知系统。我在摄像头前面放一个棱镜。现在我在它面前放一个物体并说“指着那个物体”。然后它却指向了一侧。
[原文] [Geoffrey]: And I say to it "No that's not where the object is It's actually straight in front of you." But I put a prism in front of your lens
[译文] [Geoffrey]: 于是我对它说:“不,那不是物体所在的位置。它实际上就在你的正前方。但我刚刚在你的镜头前放了一个棱镜。”
[原文] [Geoffrey]: And the chatbot says "Oh I see The prism bent the light rays so the object is actually straight in front of me." But I had the subjective experience that it was off to one side
[译文] [Geoffrey]: 然后聊天机器人说:“哦,我明白了。棱镜折射了光线,所以物体其实就在我的正前方。但我刚刚有了一种物体在偏离一侧的主观体验。”
[原文] [Geoffrey]: Now if the chatbot said that it would be using words subjective experience exactly the way we use them And so that chatbot would have just had a subjective experience
[译文] [Geoffrey]: 现在,如果聊天机器人说出了那样的话,它就是在完全以我们使用“主观体验”这个词的方式在使用它。 所以,那个聊天机器人其实刚刚就拥有了一次主观体验。
[原文] [Neil]: Now what if you um first went out drinking with the chatbot and you had a very significant amount of Johnny Walker Blue that's extremely improbable I would have Leafrog Oh Oh Oh you're I see you're an eye man You like the piness of the leaf Okay good man
[译文] [Neil]: 那如果是你、嗯,先和那个聊天机器人出去喝酒,而且你喝了巨大量的尊尼获加蓝牌威士忌呢?那绝对不可能,我会喝拉弗格。哦,哦,哦你是、我明白了你是个爱行家(注:指艾雷岛威士忌爱好者)。你喜欢拉弗格的泥煤味。好的,好品味老兄。
[原文] [Neil]: Oh so if I understand what you just shared with us in these two examples you actually pulled a consciousness touring test on us You said a human would do this and now your chatbot does it and it's fundamentally the same
[译文] [Neil]: 哦,所以如果我理解了你刚刚在这两个例子中与我们分享的内容,你实际上是对我们进行了一次关于意识的图灵测试。 你说人类会这么做,而现在你的聊天机器人也这么做了,并且在本质上是相同的。
[原文] [Neil]: So if you want to say we're conscious for exhibiting that behavior you're going to have to say the chatbot's conscious and inventing whatever mysterious fluid is making that happen But it could be that we are the whole concept of consciousness is a distraction from just the actions that people take in the face of stimulus Okay
[译文] [Neil]: 所以,如果你想说我们因为展现出了这种行为就是有意识的,那你就不得不说聊天机器人也是有意识的,并且生造出某种促使这一切发生的神秘流体。 但也可能是,我们——整个关于意识的概念,只是把我们从“人们在面对刺激时所采取的行动”中分散了注意力的一种干扰。好的。
[原文] [Geoffrey]: So notice that the chatbot doesn't have any mysterious essence or fluid called consciousness but it has a subjective experience just like we do
[译文] [Geoffrey]: 所以请注意,聊天机器人并没有任何叫做“意识”的神秘本质或流体,但它拥有和我们一样的主观体验。
[原文] [Geoffrey]: So I think this whole idea of consciousness is some magic essence that you suddenly get indicted with if you're complicated enough is just nonsense
[译文] [Geoffrey]: 所以我认为,整个关于“意识是一种一旦你变得足够复杂就会突然被赋予的魔法本质”的想法,纯粹是一派胡言。
[原文] [Chuck]: Yeah there you go I agree I've always felt that consciousness was something people are trying to explain without knowing if it really exists in in any kind of tangible way which is why it's always difficult to describe because you don't know what it is for example Yes Yes
[译文] [Chuck]: 是的,就是这样。我同意。我一直觉得意识是人们试图在不知道它是否真正以任何有形的方式存在的情况下,强行去解释的一种东西,这也是为什么它总是很难被描述,因为你根本不知道它到底是什么,比如说。是的。是的。
[原文] [Geoffrey]: But I think there is awareness And if you look at what scientists say when they're not thinking philosophically there's a lovely paper where the chatbot says "Now let's be honest with each other Are you actually testing me?" And the scientists say "The chatbot was aware it was being tested."
[译文] [Geoffrey]: 但我认为感知觉察力是存在的。 如果你看看科学家们在没有进行哲学思考时是怎么说的,有一篇可爱的论文中写道,聊天机器人说:“现在让我们互相坦诚一点。你们是不是真的在测试我?”然后科学家们说:“聊天机器人意识到自己正在被测试。”
[原文] [Geoffrey]: So they're attributing awareness to a chatbot And in everyday conversation you call that consciousness It's only when you start thinking philosophically and thinking that it's some funny mysterious essence that you get all confused
[译文] [Geoffrey]: 所以他们是在把感知觉察力归因于聊天机器人。 在日常对话中,你会把那称之为意识。只有当你开始进行哲学层面的思考,并认为它是某种古怪的、神秘的本质时,你才会变得晕头转向。
[原文] [Gary]: Well there is I have to say that this has been a fascinating conversation that will cause me not to sleep for a month Um yeah you get plenty of work done
[译文] [Gary]: 嗯,确实有这种事。我必须说,这是一场极其引人入胜的对话,它会导致我接下来整整一个月都睡不着觉。嗯是的,那你能做完很多工作了。
[原文] [Neil]: So Jeffrey take us out on a positive note please
[译文] [Neil]: 所以 Jeffrey,请带我们在一个积极的基调上结束这段对话吧。
[原文] [Geoffrey]: So we still have time to figure out if there's a way we can coexist happily with AI and we should be putting a lot of research effort into that because if we can coexist happily with it and we can solve all the social problems that will arise when it makes all our jobs much easier then it can be a wonderful thing for people
[译文] [Geoffrey]: 那么,我们仍然有时间去弄清楚是否有一种方法能让我们与 AI 愉快地共存,而且我们应该在这方面投入大量的研究精力,因为如果我们能与它愉快地共存,并且能够解决当它使我们所有的工作变得轻松得多时所产生的所有社会问题,那么这对人类来说将是一件非常美妙的事情。
[原文] [Neil]: Agreed Okay So so there is hope Yes And one last thing because you hinted at it this point of singularity where AI trains on itself so that it exponentially gets smarter like by the minute That's been called a singularity by many people Of course Ray Kershw among them who's been a guest on a previous episode of Stars Yeah A couple of times Yeah So what is your sense of this singularity is it real the way others say is it imminent the way others say
[译文] [Neil]: 同意。好的。所以、所以是有希望的。是的。 还有最后一件事,因为你刚才暗示过了,关于奇点的问题,也就是 AI 在自身数据上进行训练,从而以指数级变得更聪明,甚至每分钟都在变聪明。这被许多人称为奇点。当然,雷·库兹韦尔就是其中之一,他也曾作为嘉宾上过我们以前的星空访谈节目。是的,来过几次。是的。 那么,你对这个奇点有什么感觉?它真的像别人说的那样真实存在吗?它像别人说的那样迫在眉睫吗?
[原文] [Geoffrey]: i don't know the answer to either of those questions My suspicion is AI will get better at us in the end at everything better than us at everything but it'll be sort of one thing at a time
[译文] [Geoffrey]: 我不知道这两个问题中任何一个的答案。我的怀疑是,AI 最终在所有方面都会变得比我们更好,在所有事情上都比我们强,但它在某种程度上是一次只攻克一个领域。
[原文] [Geoffrey]: It's currently much better than us at chess and go It's much better than us at knowing a lot of things Not quite as good as us at reasoning I think rather than sort of massively overtaking us in everything all at once it'll be done one area at a time
[译文] [Geoffrey]: 它目前在国际象棋和围棋上比我们强得多。它在掌握大量知识方面比我们强得多。只是在逻辑推理上还不像我们那么好。 我认为,与其说是它在所有方面一次性大规模地全面超越我们,不如说它是逐个领域去完成超越的。
[原文] [Neil]: And my sort of way out of that is you know I get to walk a beach and look at pebbles and seashells AI doesn't Yeah It can create its own beach No Would it only know about the new mollisk that I discovered if I write it up and put it online mhm So the human can continue to explore the universe in ways that AI doesn't have access to There's one word missing from your entire assessment What's that yet
[译文] [Neil]: 而我认为的摆脱那种困境的出路是,你知道,我能够去海滩上散步,看看鹅卵石和贝壳。AI 却不能。是的。它可以创造它自己的海滩。不。 难道它只有在我把关于我发现的新软体动物的信息写下来并放到网上之后,它才会知道吗?嗯哼。所以人类可以继续用 AI 无法触及的方式去探索宇宙。你的整个评估中漏掉了一个词。哪个词?“暂时还”。
[原文] [Neil]: Yeah I just think of my you know will AI come up with a new theory of the universe that requires human insights that it doesn't have because I'm thinking the way no one has thought before I think it will That's not the answer I wanted from you Yeah I was But that's the answer you got
[译文] [Neil]: 是的,我只是在想我的、你知道,AI 会不会提出一种关于宇宙的新理论,这种理论需要人类的洞察力,而它是没有这种洞察力的,因为我正在以以前从未有人思考过的方式进行思考?我认为它会的。那可不是我想要的答案。是的,我曾是——但那就是你得到的答案。
[原文] [Geoffrey]: Let me give you an example AI is very good at analogies already So when chat GPD4 was not allowed to look on the web when all its knowledge was in its weights I asked it why is a compost heap like an atom bomb and it knew it said the energy scales are very different and the time scales are very different
[译文] [Geoffrey]: 让我给你举个例子。AI 在类比方面已经非常出色了。 所以当 ChatGPT-4 还不被允许查看网络的时候,当它所有的知识都还只存在于它的权重中时。我问它,为什么堆肥堆就像一颗原子弹?它竟然知道,它说两者的能量尺度非常不同,时间尺度也非常不同。
[原文] [Geoffrey]: But it then went on to talk about how when a compost heap gets hotter it generates heat faster and when an atom bomb generates more neutrons it generates neutrons faster Um so it understood the commonality and it had to understand that to pack all that knowledge into so few connections only a trillion or so
[译文] [Geoffrey]: 但它紧接着开始谈论,当一个堆肥堆变得越来越热时,它产生热量的速度就会越来越快;而当一颗原子弹产生更多的中子时,它产生中子的速度也会越来越快。 嗯,所以它理解了其中的共性,并且它必须理解这一点,才能将所有那些知识压缩打包进这么少的连接里,只有一万亿个左右的连接。
[原文] [Geoffrey]: That's a source of much creativity and it's not just by finding words that were juxtaposed with other words No it understood what a chain reaction was
[译文] [Geoffrey]: 那正是大量创造力的源泉,而且它不仅仅是通过寻找与其他词汇并列出现的词汇来做到这一点的。不,它是真正理解了什么是链式反应。
[原文] [Chuck]: Yeah Well all right That's the end of us Yeah We're done on Earth We're done We're finished This is the last episode We stick in us We're done
[译文] [Chuck]: 是的。嗯,好吧。我们完蛋了。是的。我们在地球上的日子到头了。我们玩完了。我们结束了。这是最后一期节目。给我们盖棺定论吧。我们完蛋了。
[原文] [Gary / Neil]: Gentlemen it's been a pleasure Well Jeffrey Hinton it's been a delight to have you on We know you're you're tugged in many directions especially after your recent Nobel Prize and we're delighted you gave us a piece of your surely overscheduled and busy life Thank you for inviting me
[译文] [Gary / Neil]: 先生们,这是一种荣幸。那么 Jeffrey Hinton,非常高兴能请到你来上节目。 我们知道你、你现在分身乏术,特别是在你最近获得诺贝尔奖之后,我们非常高兴你能从你肯定早已排满且极其忙碌的生活中抽出时间来给我们。感谢你们邀请我。
[原文] [Gary]: Well guys that was something Did you sit comfortably through all of that i was I I I squirmed I squirmed I knew you'd panic
[译文] [Gary]: 好了伙计们,这期节目可真不简单。整个过程中你们都坐得安稳吗?我倒是、我、我、我坐立不安。我如坐针毡。我就知道你会惊恐发作的。
[原文] [Chuck]: Well no I have to tell you that um certain parts of the um conversation gave me the anxiety of you know sitting in a theater theater with diarrhea
[译文] [Chuck]: 嗯不是,我必须告诉你们,嗯,这番对话的某些部分带给我的那种焦虑感,你们懂的,就好像你坐在电影院里,却正拉着肚子一样。
[原文] [Neil]: Thanks for that explicit Thanks for sharing That That's the nicest thing anybody's ever said about me On that note this has been Star Talk special edition Chuck always good to have you Gary love having you right at my side Neil deGrasse Tyson bidding you as always to keep looking up however much harder that will become
[译文] [Neil]: 谢谢你这么直白生动的描述。谢谢你的分享。那、那真是别人对我说过的最好听的话了。 就以此作为结尾吧,这里是 StarTalk 特别版节目。Chuck,有你在总是很棒。Gary,很高兴有你在我身边。Neil deGrasse Tyson 像往常一样在此呼吁大家,保持仰望星空,无论未来这将变得多么困难。