Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!

章节 1:AI教父的起源与神经网络的崛起

📝 本节摘要

本章作为访谈的开篇,包含了节目开场的精彩高光混剪与主持人的频道订阅呼吁。随后访谈正式切入正题,探讨了杰弗里·辛顿(Geoffrey Hinton)被称为“AI教父”的缘由。辛顿回顾了自20世纪50年代以来人工智能领域的两条路线之争:基于逻辑和符号运算的传统方法,以及模拟人类大脑运作的人工神经网络(Artificial Neural Networks)路径。他讲述了自己如何在少数人支持的情况下,坚持研究神经网络长达50年,并因此吸引了一批顶尖学生(包括后来参与创建OpenAI的核心成员)。他同时提到,如果计算机先驱冯·诺依曼和图灵没有早逝,神经网络技术原本会更早被科学界所接受。

[原文] [Host]: they call you the godfather of ai so what would you be saying to people about their career prospects in a world of super intelligence

[译文] [主持人]: 他们称你为AI教父,那么在一个超级智能(super intelligence)的世界里,你对人们的职业前景有什么想说的?

[原文] [Geoffrey Hinton]: train to be a plumber

[译文] [杰弗里·辛顿]: 去培训当个水管工吧。

[原文] [Host]: really yeah okay i'm going to become a plumber jeffrey hinton is the nobel prize winning pioneer whose groundbreaking work has shaped ai and the future of humanity why do they call it the godfather of ai

[译文] [主持人]: 真的吗?好吧,那我要去当水管工了。杰弗里·辛顿(Geoffrey Hinton)是荣获诺贝尔奖的先驱,他开创性的工作塑造了人工智能(AI)和人类的未来。为什么他们称他为AI教父呢?

[原文] [Geoffrey Hinton]: because there weren't many people who believed that we could model ai on the brain so that it learned to do complicated things like recognize objects and images or even do reasoning and i pushed that approach for 50 years and then google acquired that technology and i worked there for 10 years on something that's now used all the time in ai

[译文] [杰弗里·辛顿]: 因为当时没有多少人相信我们可以基于大脑来构建AI模型,从而让它学会做一些复杂的事情,比如识别物体和图像,甚至进行推理。我推动这种方法长达50年,后来谷歌(Google)收购了这项技术,我在那里工作了10年,研究的东西现在在AI中被频繁使用。

[原文] [Host]: and then you left why

[译文] [主持人]: 然后你离开了,为什么?

[原文] [Geoffrey Hinton]: so that i could talk freely at a conference

[译文] [杰弗里·辛顿]: 为了我能在一次会议上自由地发言。

[原文] [Host]: what did you want to talk about freely

[译文] [主持人]: 你想自由地谈论什么?

[原文] [Geoffrey Hinton]: how dangerous ai could be i realized that these things will one day get smarter than us and we've never had to deal with that and if you want to know what life's like when you're not the apex intelligence ask a chicken

[译文] [杰弗里·辛顿]: AI可能会有多危险。我意识到这些东西总有一天会比我们更聪明,而我们从未应对过这种情况。如果你想知道当你不再是顶级智能(apex intelligence)时生活是什么样的,去问问鸡就知道了。

[原文] [Geoffrey Hinton]: so there's risks that come from people misusing ai and then there's risks from ai getting super smart and deciding it doesn't need us

[译文] [杰弗里·辛顿]: 所以,存在着人们滥用AI带来的风险,然后还存在着AI变得超级聪明并决定它不再需要我们所带来的风险。

[原文] [Host]: is that a real risk

[译文] [主持人]: 这是一个真实的风险吗?

[原文] [Geoffrey Hinton]: yes it is but they're not going to stop it cuz it's too good for too many things

[译文] [杰弗里·辛顿]: 是的,但他们不会阻止它,因为它在太多事情上太好用了。

[原文] [Host]: what about regulations

[译文] [主持人]: 那监管(regulations)呢?

[原文] [Geoffrey Hinton]: they have some but they're not designed to deal with most of the threats like the european regulations have a clause that say none of these apply to military uses of ai

[译文] [杰弗里·辛顿]: 他们有一些,但它们不是用来应对大多数威胁的。比如欧洲的法规里有一个条款写着,这些规定都不适用于AI的军事用途。

[原文] [Host]: really

[译文] [主持人]: 真的吗?

[原文] [Geoffrey Hinton]: yeah it's crazy

[译文] [杰弗里·辛顿]: 是的,这很疯狂。

[原文] [Host]: one of your students left openai

[译文] [主持人]: 你的一个学生离开了OpenAI。

[原文] [Geoffrey Hinton]: yeah he was probably the most important person behind the development of the early versions of church gpt and i think he left because he had safety concerns we should recognize that this stuff is an existential threat and we have to face the possibility that unless we do something soon we're near the end so let's do the risks what do we end up doing in such a world

[译文] [杰弗里·辛顿]: 是的,他可能是早期版本Chat GPT(注:原文音频识别错写为church gpt)开发背后最重要的人物,我认为他离开是因为他有安全方面的担忧。我们应该认识到这东西是一个生存威胁(existential threat),我们必须面对这样一种可能性:除非我们尽快采取行动,否则我们可能离终点不远了。所以让我们来谈谈这些风险,在这样一个世界里我们最终该怎么办?

[原文] [Host]: this has always blown my mind a little bit 53% of you that listen to the show regularly haven't yet subscribed to the show so could i ask you for a favor before we start if you like the show and you like what we do here and you want to support us the free simple way that you can do just that is by hitting the subscribe button and my commitment to you is if you do that then i'll do everything in my power me and my team to make sure that this show is better for you every single week we'll listen to your feedback we'll find the guests that you want me to speak to and we'll continue to do what we do thank you so much

[译文] [主持人]: 这总是让我感到有些震惊,经常听我们节目的观众中有53%还没有订阅本频道。所以在我们开始之前,我能请大家帮个忙吗?如果你喜欢这个节目,喜欢我们在这里做的事情,并且想支持我们,那么既免费又简单的支持方式就是点击订阅按钮。我对你们的承诺是,如果你们这样做了,那么我和我的团队将尽一切努力,确保这个节目每周都能为你变得更好。我们会倾听你的反馈,我们会找到你想让我对话的嘉宾,我们会继续做我们正在做的事情。非常感谢你们。

[原文] [Host]: jeffrey hinsson they call you the godfather of ai uh

[译文] [主持人]: 杰弗里·辛顿(注:原文音频识别错写为jeffrey hinsson),他们叫你AI教父,呃。

[原文] [Geoffrey Hinton]: yes they do

[译文] [杰弗里·辛顿]: 是的,他们这么叫。

[原文] [Host]: why do they call you that

[译文] [主持人]: 他们为什么这样称呼你?

[原文] [Geoffrey Hinton]: there weren't that many people who believed that we could make neural networks work artificial neural networks so for a long time in ai from the 1950s onwards there were kind of two ideas about how to do ai

[译文] [杰弗里·辛顿]: 当时并没有那么多人相信我们能让人工神经网络(artificial neural networks)发挥作用。所以在很长一段时间里,从20世纪50年代以后的人工智能领域,关于如何做AI大概有两种观点。

[原文] [Geoffrey Hinton]: one idea was that sort of core of human intelligence was reasoning and to do reasoning you needed to use some form of logic and so ai had to be based around logic and in your head you must have something like symbolic expressions that you manipulated with rules and that's how intelligence worked and things like learning or reasoning by analogy that all come later once we've figured out how basic reasoning works

[译文] [杰弗里·辛顿]: 第一种观点认为,人类智能的核心是推理(reasoning),而要进行推理,你需要使用某种形式的逻辑(logic),所以AI必须基于逻辑。在你的大脑里,一定有类似符号表达式(symbolic expressions)的东西,你可以用规则来操作它们,这就是智能运作的方式。至于学习(learning)或者类比推理(reasoning by analogy)这类事情,都是在我们弄清楚基本推理如何运作之后才会出现的后话。

[原文] [Geoffrey Hinton]: there was a different approach which is to say let's model ai on the brain because obviously the brain makes us intelligent so simulate a network of brain cells on a computer and try and figure out how you would learn strengths of connections between brain cells so that it learned to do complicated things like recognize objects in images or recognize speech or even do reasoning

[译文] [杰弗里·辛顿]: 还有一种不同的路径,那就是说,让我们基于大脑来构建AI模型,因为显然是大脑让我们变得智能。所以,在计算机上模拟一个脑细胞网络,并试着弄清楚你该如何学习脑细胞之间连接的强度(strengths of connections),从而让它学会做一些复杂的事情,比如识别图像中的物体,或者识别语音,甚至进行推理。

[原文] [Geoffrey Hinton]: i pushed that approach for like 50 years because so few people believed in it there weren't many good universities that had groups that did that so if you did that the best young students who believed in that came and worked with you

[译文] [杰弗里·辛顿]: 我推动这种方法大概有50年,因为相信它的人太少了。当时并没有多少优秀的大学有专门做这个的团队,所以如果你做了这个方向,那些相信这一理念的最优秀的年轻学生就会跑来和你一起工作。

[原文] [Geoffrey Hinton]: so i was very fortunate in getting a whole lot of really good students some of which have gone on to create and play an instrumental role in creating platforms like open ai

[译文] [杰弗里·辛顿]: 所以我非常幸运地招收到了一大批非常优秀的学生,其中一些人后来参与创立了像OpenAI这样的平台,并在其中发挥了核心作用。

[原文] [Host]: yes so i sus a nice example a whole bunch of them why did you believe that modeling it off the brain was a more effective approach

[译文] [主持人]: 是的,比如伊尔亚(注:原文音频"i sus"疑为Ilya Sutskever的误识)就是一个很好的例子,还有他们中的一大批人。你为什么认为以大脑为原型建模是一种更有效的方法?

[原文] [Geoffrey Hinton]: it wasn't just me believed it early on fonoyman believed it and cheuring believed it and if either of those had lived i think ai would have had a very different history but they both died young

[译文] [杰弗里·辛顿]: 早期并不只是我这么认为,冯·诺依曼(von Neumann,注:原文音频识别错写为fonoyman)相信它,图灵(Turing,注:原文音频识别错写为cheuring)也相信它。如果他们中的任何一个人还活着,我认为AI将会有一段非常不同的历史,但他们都英年早逝了。

[原文] [Host]: you think ai would have been here sooner

[译文] [主持人]: 你认为AI会更早到来吗?

[原文] [Geoffrey Hinton]: i think neural net the neural net approach would have been accepted much sooner if either of them had lived

[译文] [杰弗里·辛顿]: 我认为,如果他们中的任何一个还活着,神经网络(neural net)这种方法会早得多被人们接受。


章节 2:双重生存威胁:人类滥用与超级智能

📝 本节摘要

本章中,辛顿明确表示他当前阶段的主要使命是向世人警告AI的危险。他坦言自己也是在ChatGPT出现后,才真正意识到数字智能在信息共享上远超生物智能,并可能很快超越人类。辛顿将AI风险划分为两大类:一是短期内人类滥用AI(恶意行为者风险);二是长期来看AI变得极其聪明从而摆脱人类控制的生存威胁。他评估这种“被抹除”的概率在10%到20%之间。最后,他将AI与原子弹进行了对比,指出由于AI在商业和军事上的巨大价值,其发展不可能被叫停,而现有的属地监管(如欧洲法规)不仅存在军事豁免漏洞,还会带来竞争劣势。辛顿呼吁,面对这种超越人类的智能,世界亟需一个理智的全球性治理机构,而非仅仅受利润最大化驱动的资本主义企业。

[原文] [Host]: in this season of your life what mission are you on

[译文] [主持人]: 在你人生的这个阶段,你的使命是什么?

[原文] [Geoffrey Hinton]: my main mission now is to warn people how dangerous ai could be

[译文] [杰弗里·辛顿]: 我现在的主要使命是警告人们AI可能会有多危险。

[原文] [Host]: did you know that when you became the godfather of ai

[译文] [主持人]: 当你成为AI教父时,你就知道这些了吗?

[原文] [Geoffrey Hinton]: no not really i was quite slow to understand some of the risks some of the risks were always very obvious like people would use ai to make autonomous lethal weapons that is things that go around deciding by themselves who to kill other risks like the idea that they would one day get smarter than us and maybe would become irrelevant i was slow to recognize that other people recognized it 20 years ago i only recognized it a few years ago that that was a real risk that was come might be coming quite soon

[译文] [杰弗里·辛顿]: 不,并不是这样。我理解某些风险的速度相当慢。有些风险一直都很明显,比如人们会利用AI制造致命性自主武器(autonomous lethal weapons),也就是那些到处跑、自行决定要杀谁的东西。而其他的风险,比如它们总有一天会比我们更聪明,并且我们可能会变得无关紧要,对于这个想法我反应很慢。其他人在20年前就意识到了,我直到几年前才认识到这是一个真实的风险,而且可能很快就会到来。

[原文] [Host]: how could you not have foreseen that if if with everything you know here about cracking the ability for these computers to learn similar to how humans learn and just you know introducing any rate of improvement

[译文] [主持人]: 你怎么可能没有预见到这一点呢?毕竟你掌握了这里的一切,你破解了让这些计算机以类似人类的方式学习的能力,而且,你知道,只要引入任何速度的持续改进……

[原文] [Geoffrey Hinton]: it's a very good question how could you not have seen that but remember neural networks 20 30 years ago were very primitive in what they could do they were nowhere near as good as humans but things like vision and language and speech recognition the idea that you have to now worry about it getting smarter than people that seems silly then

[译文] [杰弗里·辛顿]: 这是一个非常好的问题,“你怎么可能没有看出来?” 但请记住,二三十年前的神经网络在能做的事情上还非常原始,在视觉、语言和语音识别之类的任务上,它们远不如人类。在当时,如果说你现在就必须担心它变得比人更聪明,那听起来很愚蠢。

[原文] [Host]: when did that change

[译文] [主持人]: 什么时候改变的?

[原文] [Geoffrey Hinton]: it changed for the general population when chat gpt came out it changed for me when i realized that the kinds of digital intelligences we're making have something that makes them far superior to the kind of biological intelligence we have

[译文] [杰弗里·辛顿]: 对于大众来说,当Chat GPT问世时,这一切改变了;对我来说,当我意识到我们正在制造的这种数字智能(digital intelligences)具有某种特性,使它们远优于我们拥有的生物智能时,这一切改变了。

[原文] [Geoffrey Hinton]: if i want to share information with you so i go off and i learn something and i'd like to tell you what i learned so i produce some sentences this is a rather simplistic model but roughly right your brain is trying to figure out how can i change the strength of connections between neurons so i might have put that word next and so you'll do a lot of learning when a very surprising word comes and not much learning when if it's when it's very obvious word if i say fish and chips you don't do much learning when i say chips but if i say fish and cucumber you do a lot more learning you wonder why did i say cucumber

[译文] [杰弗里·辛顿]: 比如我想和你分享信息,所以我去学了一些东西,我想告诉你我学到了什么,于是我说出一些句子。这是一个相当简化的模型,但大致正确:你的大脑正试图弄清楚,“我该如何改变神经元之间连接的强度,才能预测出对方接下来说的那个词?” 因此,当一个非常出人意料的词出现时,你会进行大量的学习;而当它是一个非常显而易见的词时,你学到的就不多。如果我说“炸鱼和薯条(fish and chips)”,当我说到“薯条”时你不会学到什么;但如果我说“炸鱼和黄瓜(fish and cucumber)”,你会进行多得多的学习,你会琢磨我为什么要说黄瓜。
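
(注:辛顿上面描述的“词越出乎意料,学到的越多”,可以用一个极简的Python草图示意;其中的概率数值纯属假设,仅用于演示把负对数概率当作学习信号这一思路。)

```python
import math

# 假设的预测分布:模型在听到 "fish and" 之后对下一个词的猜测(数值为演示用假设)
predicted = {"chips": 0.9, "rice": 0.099, "cucumber": 0.001}

def surprise(word):
    """学习信号约等于负对数概率:词越出乎意料,信号越大。"""
    return -math.log(predicted[word])

print(f"chips:    {surprise('chips'):.3f}")     # 概率高,几乎不学习
print(f"cucumber: {surprise('cucumber'):.3f}")  # 概率低,大量学习
```

这正是“薯条”与“黄瓜”的差别:黄瓜的出现概率极低,带来的学习信号要大得多。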

[原文] [Geoffrey Hinton]: so that's roughly what's going on in your brain i'm predicting what's coming next that's how we think it's working nobody really knows for sure how the brain works and nobody knows how it gets the information about whether you should increase the strength of a connection or decrease the strength of a connection that's the crucial thing but what we do know now from ai is that if you could get information about whether to increase or decrease the connection strength so as to do better at whatever task you're trying to do then we could learn incredible things

[译文] [杰弗里·辛顿]: 所以这大致就是你大脑中正在发生的事情,我在预测接下来会出现什么。这是我们认为它的运作方式,没有人真正确切知道大脑是如何运作的,也没有人知道它是如何获得关于你是否应该增加或减少连接强度的信息的。这是最关键的事情,但我们现在从AI中了解到的是,如果你能获得关于增加还是减少连接强度的信息,从而在你要做的任何任务上表现得更好,那么我们就能学到令人难以置信的东西。

[原文] [Geoffrey Hinton]: because that's what we're doing now with artificial neuronets it's just we don't know for real brains how they get that signal about whether to increase or decrease

[译文] [杰弗里·辛顿]: 因为这就是我们现在用人工神经网络正在做的事情。只是我们不知道真实的大脑是如何获得那种关于增加还是减少强度的信号的。
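
(注:下面用一个极简的Python梯度下降草图,示意人工神经网络中“某个信号如何告诉系统该增大还是减小一条连接的强度”;这是人工神经网络的标准做法,并非辛顿所说的、尚不为人知的真实大脑机制,数值与结构均为演示用的假设。)

```python
# 玩具模型:误差信号决定一条“连接强度”该增大还是减小
w = 0.0                 # 一条连接的强度(权重)
x, target = 1.0, 2.0    # 输入信号与期望输出

for _ in range(50):
    y = w * x                     # 神经元的输出
    grad = 2 * (y - target) * x   # 平方误差对 w 的梯度
    w -= 0.1 * grad               # 梯度为正则减小 w,为负则增大 w

print(round(w, 3))  # 收敛到约 2.0
```

关键在于梯度的符号:它恰好就是辛顿所说的那个“该增大还是减小连接强度”的信息。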

[原文] [Host]: as we sit here today what are the big concerns you have around safety of ai if we were to to list the the top couple that are really front of mind and that we should be thinking about um

[译文] [主持人]: 当我们今天坐在这里时,你对AI的安全性有什么大的担忧?如果我们要列出头脑中最关心、最应考虑的前几个问题,嗯……

[原文] [Geoffrey Hinton]: can i have more than a couple

[译文] [杰弗里·辛顿]: 我能列多几个吗?

[原文] [Host]: go ahead i'll write them all down and we'll go through them

[译文] [主持人]: 说吧,我会把它们都写下来,然后我们逐一讨论。

[原文] [Geoffrey Hinton]: okay first of all i want to make a distinction between two completely different kinds of risk there's risks that come from people misusing ai and that's most of the risks and all of the short-term risks and then there's risks that come from ai getting super smart and deciding it doesn't need us

[译文] [杰弗里·辛顿]: 好的,首先我想对两种完全不同类型的风险进行区分。有一种风险来自于人们滥用AI,这是绝大多数的风险,也是所有的短期风险。然后还有另一种风险,来自于AI变得超级聪明,并决定它不需要我们。

[原文] [Host]: is that a real risk

[译文] [主持人]: 这是一个真实的风险吗?

[原文] [Geoffrey Hinton]: and i talk mainly about that second risk because lots of people say "is that a real risk?" and yes it is now we don't know how much of a risk it is we've never been in that situation before we've never had to deal with things smarter than us so really the thing about that existential threat is that we have no idea how to deal with it we have no idea what it's going to look like and anybody who tells you they know just what's going to happen and how to deal with it they're talking nonsense

[译文] [杰弗里·辛顿]: 我主要谈论这第二种风险,因为很多人说“这是一个真实的风险吗?” 是的,确实是。现在我们不知道它到底有多大的风险,我们以前从未遇到过这种情况,我们从未应对过比我们更聪明的东西。所以关于这个生存威胁(existential threat),最关键的一点是我们根本不知道如何应对它,我们不知道它会是什么样子。任何人如果告诉你,他们确切知道会发生什么以及如何应对,那他们是在胡说八道。

[原文] [Geoffrey Hinton]: so we don't know how to estimate the probabil probabilities it'll replace us um some people say it's like less than 1% my friend yan lar who was a postto with me thinks no no no we're always going to be we build these things we're always going to be in control we'll build them to be obedient and other people like yudkowski say "no no no these things are going to wipe us out for sure if anybody builds it it's going to wipe us all out." and he's confident of that i think both of those positions are extreme it's very hard to estimate the probabilities in between

[译文] [杰弗里·辛顿]: 所以我们不知道如何估计它会取代我们的概率。嗯,有些人说它小于1%。我的朋友杨立昆(Yann LeCun,注:原文音频识别错写为yan lar),他曾在我这里做博士后,他认为,“不,不,不,我们总是会……我们建造了这些东西,我们总是会控制它们,我们会把它们建造成服从我们的样子。” 而像尤德科夫斯基(Yudkowsky)这样的人则说,“不,不,不,这些东西肯定会把我们抹除。如果任何人建造了它,它就会把我们全人类抹除。” 他对此深信不疑。我认为这两种立场都是极端的。很难估计介于两者之间的概率。

[原文] [Host]: if you had to bet on who was right out of your two friends

[译文] [主持人]: 如果你不得不在你的两位朋友之间打赌谁是对的?

[原文] [Geoffrey Hinton]: i simply don't know so if i had to bet i'd say the probabilities in between and i don't know where to estimate it in between i often say 10 to 20% chance they'll wipe us out but that's just gut based on the idea that we're we're still making them and we're pretty ingenious and the hope is that if enough smart people do enough research with enough resources we'll figure out a way to build them so they'll never want to harm us

[译文] [杰弗里·辛顿]: 我根本不知道。所以如果我不得不打赌,我会说概率在两者之间,但我不知道该如何估算这个中间值。我经常说它们有10%到20%的几率抹除我们,但这只是直觉,基于这样一个想法:我们现在仍在制造它们,而且我们相当聪明。希望在于,如果有足够多聪明的人带着足够的资源做足够多的研究,我们会找到一种方法来建造它们,使得它们永远不想伤害我们。

[原文] [Host]: sometimes i think if we we talk about that second um path sometimes i think about nuclear bombs and the the invention of the atomic bomb and how it compares like how is this different because the atomic bomb came along and i imagine a lot of people at that time thought our days are numbered

[译文] [主持人]: 有时候我想,如果我们讨论第二条路径……有时候我会想到核弹和原子弹的发明,以及它是如何进行比较的。比如,这有什么不同?因为原子弹出现了,我猜当时很多人肯定认为我们的死期到了。

[原文] [Geoffrey Hinton]: yes i was there we did

[译文] [杰弗里·辛顿]: 是的,我当时在场,我们确实这么想。

[原文] [Host]: yeah but but but what's what h we're still here we're still here

[译文] [主持人]: 是的,但是、但是、但是,结果怎么着,我们还在这里,我们还活着。

[原文] [Geoffrey Hinton]: yes so the atomic bomb was really only good for one thing and it was very obvious how it worked even if you hadn't had the pictures of hiroshima and nagasaki it was obvious that it was a very big bomb that was very dangerous with ai it's good for many many things it's going to be magnificent in healthcare and education and more or less any industry that needs to use its data is going to be able to use it better with ai

[译文] [杰弗里·辛顿]: 是的,因此原子弹实际上只擅长做一件事,而且它的运作方式非常明显。即使你没有广岛和长崎的照片,也很明显它是一枚非常大的炸弹,非常危险。而AI,它擅长很多很多事情。它将在医疗和教育领域大放异彩,而且几乎任何需要利用自身数据的行业,都能够借助AI把数据用得更好。

[原文] [Geoffrey Hinton]: so we're not going to stop the development you know people say "well why don't we just stop it now?" we're not going to stop it because it's too good for too many things also we're not going to stop it because it's good for battle robots and none of the countries that sell weapons are going to want to stop it

[译文] [杰弗里·辛顿]: 所以我们不会停止这种发展。你知道,人们会说,“好吧,为什么我们不现在就停止它?” 我们不会停止它,因为它在太多事情上都太棒了。而且我们也不会停止它,因为它对战斗机器人(battle robots)很有用,任何一个出售武器的国家都不会想要停止它。

[原文] [Geoffrey Hinton]: like the european regulations they have some regulations about ai and it's good they have some regulations but they're not designed to deal with most of the threats and in particular the european regulations have a a clause in them that say none of these regulations apply to military uses of ai so governments are willing to regulate regulate companies and people but they're not willing to regulate themselves it seems pretty crazy to me that they

[译文] [杰弗里·辛顿]: 就像欧洲的法规,他们有一些关于AI的规定,有规定是件好事,但它们的设计并不是为了应对大多数威胁。特别的是,欧洲法规中有一项条款明确表示,这些规定都不适用于AI的军事用途。因此,政府愿意去监管公司和人民,但他们却不愿意监管自己。这在我看来真的非常疯狂。

[原文] [Host]: i go back and forward but if europe has a regulation but the rest of the world doesn't competitive disadvantage

[译文] [主持人]: 我左思右想,但如果欧洲有法规,而世界其他地方没有,那就会处于竞争劣势(competitive disadvantage)。

[原文] [Geoffrey Hinton]: yeah we're seeing this already i don't think people realize that when openai release a new model or a new piece of software in america they can't release it to europe yet because of regulations here so sam alman tweeted saying "our new ai agent thing is available to everybody but it can't come to europe yet because there's regulations."

[译文] [杰弗里·辛顿]: 是的,我们已经看到了这一点。我不认为人们意识到了,当OpenAI在美国发布一个新模型或一款新软件时,他们还不能把它发布到欧洲,就因为这里的法规。所以山姆·奥特曼(Sam Altman,注:原文音频识别错写为sam alman)发推特说,“我们新的AI代理(AI agent)工具已经对所有人开放,但它还不能进入欧洲,因为那里有法规限制。”

[原文] [Host]: yes what does that gives us a productive disadvantage productivity disadvantage

[译文] [主持人]: 是的,这会给我们带来什么?生产力劣势(productive disadvantage),生产力上的劣势。

[原文] [Geoffrey Hinton]: what we need is i mean at this point in history when we're about to produce things more intelligent than ourselves what we really need is a kind of world government that works run by intelligent thoughtful people and that's not what we got

[译文] [杰弗里·辛顿]: 我们所需要的是,我的意思是,在历史的这个节点,当我们即将制造出比我们自己更智能的东西时,我们真正需要的是一种行之有效的世界政府(world government),由聪明、有思想的人来管理。但我们拥有的并非如此。

[原文] [Host]: so free-for-all

[译文] [主持人]: 所以现在是大乱斗(free-for-all)。

[原文] [Geoffrey Hinton]: well that what we've got is sort of we've got capitalism which is done very nicely by us is produce lots of goods goods and services for us but these big companies they're legally required to try and maximize profits and that's not what you want from the people developing this stuff

[译文] [杰弗里·辛顿]: 嗯,我们现在拥有的大概是资本主义(capitalism),它对我们来说运作得很不错,为我们生产了大量的商品和服务。但这些大公司,他们在法律上被要求努力实现利润最大化(maximize profits),而这绝不是你希望那些开发这门技术的人所抱有的目的。

[原文] [Host]: so let's do the risks then you talked about there's human risks and then there's so i've distinguished these two kinds of risk

[译文] [主持人]: 那么我们来谈谈风险吧。你谈到了人类风险,然后还有,所以我已经区分了这两种风险……


章节 3:迫在眉睫的危机:网络攻击、病毒与信息茧房

📝 本节摘要

本章详细剖析了由人类恶意行为者(bad actors)滥用AI引发的短期且紧迫的威胁。辛顿首先指出,大语言模型极大地降低了网络钓鱼和诈骗的门槛,导致网络攻击呈爆炸式增长。他甚至因为担忧网络攻击可能摧毁银行系统,而将个人资产分散到多家银行并使用离线冷备份。其次,他警告AI能帮助几乎没有专业背景的人廉价制造出致命的新型病毒。最后,他深入探讨了AI在操纵选举中的潜在风险(并提及马斯克的近期举动),以及社交媒体平台(如YouTube和Facebook)在利润最大化的驱使下,如何利用算法将人们困在“信息茧房”中,不断加剧社会的撕裂与极化。

[原文] [Host]: let's talk about all the risks from bad human actors using ai

[译文] [主持人]: 让我们来谈谈由人类恶意行为者滥用AI带来的所有风险。

[原文] [Geoffrey Hinton]: there's cyber attacks so between 2023 and 2024 they increased by about a factor of 12,200% and that's probably because these large language models make it much easier to do fishing attacks

[译文] [杰弗里·辛顿]: 首先是网络攻击,在2023年到2024年间,网络攻击大约增加了12200%,这很可能是因为这些大型语言模型让进行网络钓鱼攻击(phishing attacks)变得容易得多。

[原文] [Geoffrey Hinton]: and a fishing attack for anyone that doesn't know is it's they send you something saying uh hi i'm your friend john and i'm stuck in el salvador could you just wire this money that's one kind of attack but the fishing attacks are really trying to get your loon credentials

[译文] [杰弗里·辛顿]: 如果有人不知道什么是网络钓鱼攻击,那就是他们给你发信息说,“嗨,我是你的朋友约翰,我被困在萨尔瓦多了,你能不能给我汇点钱”,这是一种攻击,但钓鱼攻击真正的目的是试图获取你的登录凭据(注:原文音频loon credentials疑为login credentials误识)。

[原文] [Geoffrey Hinton]: and now with ai they can clone my voice my image they can do all that

[译文] [杰弗里·辛顿]: 现在有了AI,他们可以克隆我的声音、我的图像,他们什么都能做。

[原文] [Host]: i'm struggling at the moment because there's a bunch of ai scams on x and also meta and there's one in particular on meta so instagram facebook at the moment which is a paid advert where they've taken my voice from the podcast they've taken the my mannerisms and they've made a new video of me encouraging people to go and take part in this crypto ponzi scam or whatever

[译文] [主持人]: 我现在正为此头疼,因为在X(原Twitter)和Meta上有一堆AI骗局。特别是在Meta旗下的Instagram和Facebook上目前就有一个付费广告,他们提取了我播客里的声音,模仿了我的举止,并制作了一个全新的我的视频,鼓励人们去参与某种加密货币庞氏骗局(crypto ponzi scam)之类的东西。

[原文] [Host]: and we've been you know we spent weeks and weeks and weeks and weeks and end emailing meta telling "please take this down." they take it down another one pops up they take that one down another one pops up so it's like whack-a-ole and then it's very annoying

[译文] [主持人]: 而且我们,你知道的,我们花了好几周的时间不断给Meta发邮件说“请把这个撤下来”。他们撤下来一个,另一个又冒出来了;他们撤下那个,又会冒出一个新的。这就像打地鼠(whack-a-mole)一样,非常烦人。

[原文] [Host]: the the heartbreaking part is you get the messages from people that have fallen for the scam and they've lost £500 or $500 and they cross with you cuz you recommended it and i'm i'm like i'm sad for them it's very annoying

[译文] [主持人]: 令人心碎的是,你会收到那些上当受骗的人发来的信息,他们损失了500英镑或500美元,然后他们生你的气,因为是你“推荐”的。而我则是替他们感到难过,这真的很烦人。

[原文] [Geoffrey Hinton]: yeah i have a a smaller version of that which is pe some people now publish papers with me as one of the authors mhm and it looks like it's in order that they can get lots of citations to themselves ah

[译文] [杰弗里·辛顿]: 是的,我经历过一个微缩版的类似事件,就是现在有些人发表论文时,把我列为作者之一,嗯哼,看起来这是为了让他们自己能获得大量的论文引用,啊。

[原文] [Host]: so cyber attacks a very real threat there's been an explosion of those and these already obviously ai is very patient so they can go through 100 million lines of code looking for known ways of attacking them that's easy to do

[译文] [主持人]: 所以网络攻击是一个非常真实的威胁,并且已经出现了爆炸式的增长,而且显然AI非常有耐心,所以它们可以筛选1亿行代码来寻找已知的攻击方式,这做起来很容易。

[原文] [Geoffrey Hinton]: but they're going to get more creative and they may some people believe and i some people who know a lot believe that maybe by 2030 they'll be creating new kinds of cyber attacks which no person ever thought of

[译文] [杰弗里·辛顿]: 但它们会变得更加有创造力。而且有些人,包括一些非常懂行的人,相信也许到2030年,它们就会创造出人类从未想过的新型网络攻击。

[原文] [Geoffrey Hinton]: so that's very worrisome because they can think for themselves and discover they can think for themselves they can draw new conclusions from much more data than a person ever saw

[译文] [杰弗里·辛顿]: 这非常令人担忧,因为它们可以自行思考并进行发现,它们可以自己思考,可以从比人类见过的多得多的数据中得出新的结论。

[原文] [Host]: is there anything you're doing to protect yourself from cyber attacks at all

[译文] [主持人]: 那么,你有没有采取什么措施来保护自己免受网络攻击呢?

[原文] [Geoffrey Hinton]: yes it's one of the few places where i changed what i do radically because i'm scared of cyber attacks

[译文] [杰弗里·辛顿]: 有的,这是为数不多的几个让我彻底改变行为方式的领域之一,因为我害怕网络攻击。

[原文] [Geoffrey Hinton]: canadian banks are extremely safe in 2008 no canadian banks came anywhere near going bust so they're very safe banks because they're well regulated fairly well regulated

[译文] [杰弗里·辛顿]: 加拿大的银行极其安全。在2008年,没有一家加拿大银行哪怕是濒临破产,所以它们是非常安全的银行,因为它们受到了很好的监管,相当完善的监管。

[原文] [Geoffrey Hinton]: nevertheless i think a cyber attack might be able to bring down a bank now if you have all my savings are in shares in banks held by banks so if the bank gets attacked and it holds your shares they're still your shares and so i think you'd be okay unless the attacker sells the shares because the bank can sell the shares

[译文] [杰弗里·辛顿]: 尽管如此,我认为网络攻击可能还是有能力击垮一家银行。现在,如果你像我一样所有的积蓄都在银行的股票里,由银行代为持有。那么如果银行受到攻击,而它持有你的股票,那它们仍然是你的股票。所以我认为你不会有事,除非攻击者卖掉了这些股票,因为银行是可以卖掉股票的。

[原文] [Geoffrey Hinton]: if the attacker sells your shares i think you're screwed i don't know i mean maybe the bank would have to try and reimburse you but the bank's bust by now right

[译文] [杰弗里·辛顿]: 如果攻击者卖掉了你的股票,我觉得你就完蛋了。我不知道,我的意思是也许银行会试图补偿你,但到那时银行都已经破产了,对吧?

[原文] [Geoffrey Hinton]: so so i'm worried about a canadian bank being taken down by a cyber attack and the attacker selling selling shares that it holds so i spread my money and my children's money between three banks in the belief that if a cyber attack takes down one canadian bank the other canadian banks will very quickly get very careful

[译文] [杰弗里·辛顿]: 所以,我很担心一家加拿大银行会被网络攻击击垮,并且攻击者会卖掉它持有的股票。因此,我把我和孩子们的钱分散到了三家银行里。我相信,如果一次网络攻击击垮了一家加拿大银行,其他加拿大银行会非常迅速地变得极为谨慎。

[原文] [Host]: and do you have a phone that's not connected to the internet do you have any like you know i'm thinking about storing data and stuff like that do you think it's wise to consider having cold storage

[译文] [主持人]: 那你有没有一部不联网的手机?你有没有类似,你知道的,我正在考虑存储数据之类的事情,你觉得考虑使用冷存储(cold storage)明智吗?

[原文] [Geoffrey Hinton]: i have a little disc drive and i back up my laptop on this hard drive so i actually have everything on my laptop on a hard drive at least you know if the whole internet went down i had the sense i still got it on my laptop and i still got my information

[译文] [杰弗里·辛顿]: 我有一个小磁盘驱动器,我把我的笔记本电脑备份在这个硬盘上,所以我实际上把笔记本电脑上的所有东西都存放在了一个硬盘里。至少,你知道,如果整个互联网瘫痪了,我还有一种安全感:我的电脑里还有这些数据,我依然掌握着我的信息。

[原文] [Geoffrey Hinton]: okay then the next thing is using ai to create nasty viruses okay

[译文] [杰弗里·辛顿]: 好的,接下来的一项就是利用AI来制造恶性的病毒(nasty viruses)。

[原文] [Geoffrey Hinton]: and the problem with that is that just requires one crazy guy with the grudge one guy who knows a little bit of molecular biology knows a lot about ai and just wants to destroy the world

[译文] [杰弗里·辛顿]: 它的问题在于,这只需要一个怀恨在心的疯子,一个懂一点分子生物学、非常懂AI,并且一心只想毁灭世界的人就行了。

[原文] [Geoffrey Hinton]: you can now create new viruses relatively cheaply using ai and you don't have to be a very skilled molecular biologist to do it and that's very scary

[译文] [杰弗里·辛顿]: 现在你可以使用AI相对低廉地制造出新型病毒,而且你不需要成为一个非常熟练的分子生物学家就能做到这一点,这非常可怕。

[原文] [Geoffrey Hinton]: so you could have a small cult for example a small cult might be able to raise a few million dollars for a few million dollars they might be able to design a whole bunch of viruses

[译文] [杰弗里·辛顿]: 比如你可能有一个小邪教组织,一个小邪教可能筹集到几百万美元,用这几百万美元,他们也许就能设计出一大堆病毒。

[原文] [Host]: well i'm thinking about some of our foreign adversaries doing government funded programs i mean there was lots of talk around covid and woo the wuhan laboratory and what they were doing and gain a function research but i'm wondering if in you know a china or a russia or an iran or something the government could fund a program for a small group of scientists to make a virus that they could you know

[译文] [主持人]: 嗯,我在想我们的一些外国对手可能会开展政府资助的项目。我的意思是,围绕新冠疫情以及武汉实验室、他们正在做什么还有功能获得性研究(gain of function research),有很多传言。但我在想,比如在某些地方,政府会不会资助一小群科学家去制造一种病毒,以便他们可以,你知道的……

[原文] [Geoffrey Hinton]: i think they could yes now they'd be worried about retaliation they'd be worried about other governments doing the same to them hopefully that would help keep it under control they might also be worried about the virus spreading to their country

[译文] [杰弗里·辛顿]: 我认为他们能做到,是的。不过目前他们会担心遭到报复,他们会担心其他政府也会对他们做同样的事情。希望这种顾虑能帮助维持局势可控,他们可能也会担心病毒传播回自己的国家。

[原文] [Geoffrey Hinton]: okay then there's um corrupting elections so if you wanted to use ai to corrupt elections a very effective thing is to be able to do targeted political advertisements where you know a lot about the person

[译文] [杰弗里·辛顿]: 好,接下来还有,嗯,腐蚀选举(corrupting elections)。如果你想利用AI来操纵选举,一个非常有效的方法就是投放定向的政治广告(targeted political advertisements),也就是当你对目标人物非常了解时所能做的操作。

[原文] [Geoffrey Hinton]: so anybody who wanted to use ai for corrupting elections would try and get as much data as they could about everybody in the electorate

[译文] [杰弗里·辛顿]: 因此,任何想要利用AI来操纵选举的人,都会试图收集选民中每个人的尽可能多的数据。

[原文] [Geoffrey Hinton]: with that in mind it's a bit worrying what musk is doing at present in the states going in and insisting on getting access to all these things that were very carefully siloed

[译文] [杰弗里·辛顿]: 考虑到这一点,马斯克目前在美国的所作所为就有些令人担忧了。他介入其中,并且坚持要获取所有那些原本被非常小心地隔离起来的数据源。

[原文] [Geoffrey Hinton]: the claim is it's to make things more efficient but it's exactly what you would want if you intended to corrupt the next election

[译文] [杰弗里·辛顿]: 对外的说法是为了提高效率,但这恰恰是你如果打算操纵下一次选举时会想要做的事情。

[原文] [Host]: how do you mean because you get all this data on the people

[译文] [主持人]: 你是什么意思?因为你获取了人民的所有这些数据?

[原文] [Geoffrey Hinton]: you get all this data on people you know how much they make where they you know everything about them once you know that it's very easy to manipulate them because you can make an ai that you can send messages um that they'll find very convincing telling them not to vote for example

[译文] [杰弗里·辛顿]: 你获取了人们的所有这些数据,你知道他们赚多少钱、他们在哪里……你知道关于他们的一切。一旦你掌握了这些,就很容易操纵他们。因为你可以制造一个AI,向他们发送信息,嗯,他们会觉得非常有说服力的信息,例如告诉他们不要去投票。

[原文] [Geoffrey Hinton]: so i have no no reason other than common sense to think this but i wouldn't be surprised if part of the motivation of getting all this data from american government sources is to corrupt elections

[译文] [杰弗里·辛顿]: 所以除了常识之外,我没有其他理由这么认为,但如果从美国政府来源获取所有这些数据的部分动机是为了操纵选举的话,我并不会感到惊讶。

[原文] [Geoffrey Hinton]: another part might be that it's very nice training data for a big model but he would have to be taking that data from the government and feeding it into his

[译文] [杰弗里·辛顿]: 另一部分动机可能是,对于一个大模型来说,这是非常好的训练数据。但他必须把这些数据从政府那里拿过来,然后喂给他的模型。

[原文] [Host]: yes

[译文] [主持人]: 是的。

[原文] [Geoffrey Hinton]: and what they've done is turned off lots of the security controls got rid of the some of the organization to protect against that um so that's corrupting elections

[译文] [杰弗里·辛顿]: 而且他们所做的,就是关闭了许多安全控制,撤销了部分原本为了防范这些情况而设立的组织架构。嗯,所以这就是操纵选举。

[原文] [Geoffrey Hinton]: okay then there's creating these two echo chambers by organizations like youtube and facebook showing people things that will make them indignant people love to be indignant

[译文] [杰弗里·辛顿]: 好,接下来就是像YouTube和Facebook这样的机构通过向人们展示会让他们感到义愤填膺的内容,从而制造出这两个相互对立的信息茧房(echo chambers)。人们喜欢感到义愤填膺。

[原文] [Host]: indignant as in angry or what does indignant mean feeling i'm sort of angry but feeling righteous

[译文] [主持人]: 义愤填膺(indignant)是指愤怒吗?或者义愤填膺是什么意思?那种感觉就像“我有点生气,但又觉得自己很正义”。

[原文] [Geoffrey Hinton]: okay so for example if you were to show me something that said trump did this crazy thing here's a video of trump doing this completely crazy thing i would immediately click on it

[译文] [杰弗里·辛顿]: 没错。例如,如果你向我展示一些内容,说特朗普做了这件疯狂的事情,“这里有一段特朗普做这件完全疯狂的事情的视频”,我立刻就会点进去看。

[原文] [Host]: okay so putting us in echo chambers and dividing us

[译文] [主持人]: 好的,所以是把我们关进信息茧房,并且让我们分裂。

[原文] [Geoffrey Hinton]: yes and that's um the policy that youtube and facebook and others use for deciding what to show you next is causing that

[译文] [杰弗里·辛顿]: 是的,这就是YouTube和Facebook等平台用来决定接下来向你展示什么的算法策略所导致的结果。

[原文] [Geoffrey Hinton]: if they had a policy of showing you balanced things they wouldn't get so many clicks and they wouldn't be able to sell so many advertisements and so it's basically the profit motive is saying show them whatever will make them click and what'll make them click is things that are more and more extreme

[译文] [杰弗里·辛顿]: 如果他们的策略是向你展示平衡的内容,他们就不会获得那么多的点击量,也就无法卖出那么多的广告。所以,基本上是利润动机(profit motive)在驱使他们说“展示任何能让他们点击的内容”,而能让他们点击的,往往是越来越极端的东西。
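辛顿这里描述的“展示任何能让他们点击的内容”,本质上是一个按预测点击率排序的推荐策略。下面是一段极简的假想示意代码(其中的点击率模型和数值均为虚构假设,并非任何平台的真实算法),用来演示这种利润导向的排序如何形成偏见自我强化的反馈循环:

```python
def predicted_click_prob(item_extremity: float, user_bias: float) -> float:
    """假设的点击率模型(纯示意):内容越极端、与用户既有偏见越匹配,点击率越高。"""
    return max(0.0, min(1.0, 0.2 + 0.8 * item_extremity * user_bias))

def rank_by_engagement(items, user_bias):
    """利润导向的排序策略:按预测点击率降序排列,而不考虑内容是否平衡。"""
    return sorted(items, key=lambda e: predicted_click_prob(e, user_bias), reverse=True)

# 候选内容的“极端度”:0 = 平衡报道,1 = 最极端
items = [0.1, 0.3, 0.5, 0.7, 0.9]

# 模拟辛顿描述的反馈循环:平台总是推最能引发点击的内容,
# 用户每次点击后,既有偏见被小幅强化
user_bias = 0.5
for _ in range(5):
    top = rank_by_engagement(items, user_bias)[0]  # 在此假设下永远是最极端的 0.9
    user_bias = min(1.0, user_bias + 0.1 * top)

print(round(user_bias, 2))  # → 0.95:五轮推荐之后,偏见明显加深
```

在这个假设模型里,只要用户带有偏见,最极端的内容就永远排在第一位;若把排序目标换成内容平衡度,反馈循环即被打破——这正对应辛顿所说的取舍:平衡策略点击量更低,所以在纯利润动机下不会被选择。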

[原文] [Host]: and that confirmed my existing bias

[译文] [主持人]: 而这证实了我现有的偏见(bias)。

[原文] [Geoffrey Hinton]: that confirm my existing bias so you're getting your biases confirmed all the time further and further and further and further which means you're you're driving away which is now there's in the states there's two communities that don't hardly talk to each other

[译文] [杰弗里·辛顿]: 那证实了我现有的偏见。所以你的偏见不断地被一次又一次地确认、强化,越陷越深。这意味着人们彼此渐行渐远,就像现在在美国,有两个几乎完全互不交流的社群。

[原文] [Host]: i'm not sure people realize that this is actually happening every time they open an app but if you go on a tik tok or a youtube or one of these big social networks the algorithm as you you said is designed to show you more of the things that you had interest in last time so if you just play that out over 10 years it's going to drive you further and further and further into whatever ideology or belief you have and further away from nuance and common sense and um parity which is a pretty remarkable thing i i like people don't know it's happening they just open their phones and experience something and think this is the news or the experience everyone else is having

[译文] [主持人]: 我不确定人们是否意识到,他们每次打开APP时这都在真实发生。但如果你去TikTok、YouTube或其中一个大型社交网络,正如你所说,算法被设计成向你展示更多你上次感兴趣的内容。所以如果你把这演变成长达10年的过程,它就会把你推得越来越深,陷进你所拥有的任何意识形态或信仰中,让你越来越远离细微的差别(nuance)、常识以及,嗯,对等性(parity)。这是一件非常惊人的事情,我觉得人们不知道这正在发生,他们只是打开手机,体验某些东西,并认为这就是新闻,或者是所有人都在共同经历的东西。

[原文] [Geoffrey Hinton]: right so basically if you have a newspaper and everybody gets the same newspaper yeah you get to see all sorts of things you weren't looking for and you get a sense that if it's in the newspaper it's an important thing or significant thing

[译文] [杰弗里·辛顿]: 没错。所以基本上,如果你有一份报纸,每个人都拿到同一份报纸,是的,你会看到各种你本来并没有在找的东西,你会产生一种感觉:如果它登在报纸上,那它就是一件重要的事情或有意义的事情。

[原文] [Geoffrey Hinton]: but if you have your own news feed my news feed on my iphone three quarters of the stories are about ai and i find it very hard to know if the whole world's talking about ai all the time or if it's just my newsfeed

[译文] [杰弗里·辛顿]: 但如果你有你自己的新闻流(news feed)……比如我在iPhone上的新闻流,有四分之三的新闻都是关于AI的。我发现自己很难分辨,到底是全世界一直都在谈论AI,还是仅仅因为这是我的新闻推送。

[原文] [Host]: okay so driving me into my echo chambers um which is going to continue to divide us further and further i'm actually noticing that the algorithm are becoming even more what's the word tailored and people might go "oh that's great." but what it means is they're becoming even more personalized which is means that my reality is becoming even further from your reality

[译文] [主持人]: 好吧,所以这把我赶进了我的信息茧房里,嗯,这会继续把我们分裂得越来越远。我实际上注意到算法正在变得更加……怎么说呢,量身定制(tailored)。人们可能会说“哦,太棒了”,但这意味着它们变得更加个性化,也就是说,我的现实正在变得离你的现实越来越远。

[原文] [Geoffrey Hinton]: yeah it's crazy we don't have a shared reality anymore i share reality with other people who watch the bbc and other bbc news and other people who read the guardian and other people who read the new york times i have almost no shared reality with people who watch fox news it's pretty it's pretty um i i it's worrisome

[译文] [杰弗里·辛顿]: 是的,这很疯狂,我们不再拥有一个共享的现实(shared reality)。我和其他看BBC、看其他BBC新闻的人共享现实,和其他看《卫报》、看《纽约时报》的人共享现实,但我与看福克斯新闻(Fox News)的人几乎没有任何共享的现实。这非常,这非常,嗯,这令人担忧。

[原文] [Geoffrey Hinton]: behind all this is the idea that these companies just want to make profit and they'll do whatever it takes to make more profit because they have to they're legally obliged to do that

[译文] [杰弗里·辛顿]: 这一切背后的根源是,这些公司只想赚钱,为了赚取更多的利润,他们会不择手段。因为他们必须这样做,这是他们在法律上的义务。

[原文] [Host]: so we almost can't blame the company can we if they're if

[译文] [主持人]: 所以我们几乎不能责怪这家公司,对吗?如果他们,如果……

[原文] [Geoffrey Hinton]: well capitalism's done very well for us it's produced lots of goodies yeah but you need to have it very well regulated

[译文] [杰弗里·辛顿]: 嗯,资本主义对我们来说运作得很好,它生产了大量的好东西。是的,但你需要对它进行极其完善的监管。

[原文] [Geoffrey Hinton]: so what you really want is to have rules so that when some company is trying to make as much profit as possible in order to make that profit they have to do things that are good for people in general not things that are bad for people in general

[译文] [杰弗里·辛顿]: 所以你真正想要的是制定规则。这样当某家公司试图尽可能地获取利润时,为了赚取那笔利润,他们必须做对广大人民有益的事情,而不是做对广大人民有害的事情。

[原文] [Geoffrey Hinton]: so once you get to a situation where in order to make more profit the company starts doing things that are very bad for society like showing you things that are more and more extreme that's what regulations are for

[译文] [杰弗里·辛顿]: 所以一旦你遇到这样一种情况:公司为了赚取更多的利润,开始做对社会非常有害的事情,比如向你展示越来越极端的内容,那就是法规(regulations)存在的意义。

[原文] [Geoffrey Hinton]: so you need regulations with capitalism now companies will always say regulations get in the way make us less efficient and that's true the whole point of regulations is to stop them doing things to make profit that hurt society and we need strong regulation

[译文] [杰弗里·辛顿]: 因此在资本主义制度下你需要法规。现在,公司总是会说,监管会造成阻碍,让我们效率降低。那是真的,但监管的全部意义就在于阻止他们为了谋利而做伤害社会的事情,我们需要强有力的监管。

[原文] [Host]: who's going to decide whether it hurts society or not because you know that's the job of politicians

[译文] [主持人]: 谁来决定它是否伤害了社会?因为你知道那是政客们的工作。

[原文] [Geoffrey Hinton]: unfortunately if the politicians are owned by the companies that's not so good and also the politicians might not understand the technology we you've probably seen the senate hearings where they wheel out you know mark zuckerberg and these big tech ceos and it is quite embarrassing because they're asking the wrong questions

[译文] [杰弗里·辛顿]: 不幸的是,如果政客们被公司收买了,那就不太好了。而且政客们可能并不了解这项技术。你可能看过参议院的听证会,他们把马克·扎克伯格(Mark Zuckerberg)和这些大型科技公司的首席执行官们推出来质询。这相当令人尴尬,因为他们问的全是错误的问题。

[原文] [Host]: well i've seen the video of the us education secretary talking about how they're going to get ai in the classrooms except she thought it was called a1 she's actually there saying we're going to have all the kids interacting with a1 there is a school system that's going to start um making sure that first graders or even preks have a1 teaching you know every year starting you know that far down in the grades and that's just a that's a wonderful thing

[译文] [主持人]: 嗯,我看到过美国教育部长谈论他们将如何把AI引入课堂的视频,只不过她以为它叫“A1”。她真的在那里说“我们将让所有的孩子与A1互动,将有一个学校系统会开始确保一年级学生甚至学前班学生接受A1的教学,从那么低的年级就开始每年进行”,然后还说“这简直太棒了”。

[原文] [Host]: and these are what these are the people that these are the people in charge ultimately the tech companies are in charge because they will outsmart the tech companies

[译文] [主持人]: 而这些是什么?这些人就是掌权的人。最终还是科技公司掌权,因为他们(科技巨头)在智力上会碾压这些政客(注:原文音频最后一句they will outsmart the tech companies可能口误,意指政客会被科技公司智商碾压)。

[原文] [Geoffrey Hinton]: in the states now at least a few weeks ago when i was there they were running an advertisement about how it was very important not to regulate ai because it would hurt us in the competition with china yeah and that's a that's a plausible argument there yes it will

[译文] [杰弗里·辛顿]: 现在的美国,至少几周前我在那里的时候,他们正在播放一则广告,说不对AI进行监管非常重要,因为这会在与中国的竞争中对我们造成伤害。是的,这在那里是一个看似合理的论点,是的,它确实会。

[原文] [Geoffrey Hinton]: but you have to decide do you want to compete with china by doing things that will do a lot of harm to your society and you probably don't

[译文] [杰弗里·辛顿]: 但你必须决定,你是否想通过做会对你们的社会造成巨大伤害的事情,来与中国竞争?而你很可能不想这样做。

[原文] [Host]: i guess they would say that it's not just china it's denmark and australia and canada and the uk they're not so worried about and germany but if they kneecap themselves with regulation if they slow themselves down then the founders the entrepreneurs the investors are going to go

[译文] [主持人]: 我猜他们会说不仅仅是中国,还有丹麦、澳大利亚、加拿大和英国,可能没那么担心德国。但如果他们用监管“打断自己的双腿”(kneecap themselves),如果他们拖慢了自己的脚步,那么创始人、企业家和投资者都会流失。

[原文] [Geoffrey Hinton]: i think calling it kneecapping is taking a particular point of view is take taking the point of view that regulations are sort of very harmful

[译文] [杰弗里·辛顿]: 我认为称之为“打断双腿”是采取了一种特定的视角,一种认为监管是非常有害的视角。

[原文] [Geoffrey Hinton]: what you need to do is just constrain the big companies so that in order to make profit they have to do things that are socially useful like google search is a great example that didn't need regulation because it just made information available to people it was great

[译文] [杰弗里·辛顿]: 你需要做的是约束这些大公司,这样为了赚取利润,他们就必须去做对社会有用的事情。比如Google搜索就是一个很好的例子,它不需要监管,因为它只是让人们能够获取信息,这非常棒。

[原文] [Geoffrey Hinton]: but then if you take youtube which starts showing you adverts and showing you more and more extreme things that needs regulation but we don't have the people to regulate it as we've identified

[译文] [杰弗里·辛顿]: 但随后,如果你看看YouTube,它开始向你展示广告,并向你展示越来越极端的内容,这就需要监管了。但正如我们刚才所指出的,我们没有懂行的人来监管它。

[原文] [Host]: i think people know pretty well um that particular problem of showing you more and more extreme things that's a well-known problem that the politicians understand they just um need to get on and regulate it

[译文] [主持人]: 我觉得人们很清楚,嗯,那个关于向你展示越来越多极端内容的具体问题,这是一个连政客们都能理解的众所周知的问题,他们只是,嗯,需要抓紧去对其进行监管。

[原文] [Host]: so that was the the next point which was that the algorithms are going to drive us further into our echo chambers right

[译文] [主持人]: 所以刚才那是下一个问题,即算法会把我们进一步推入我们的信息茧房里,对吧?


章节 4:战争的降维:致命性自主武器的蔓延

📝 本节摘要

本章聚焦于AI武器化及其引发的灾难性后果。辛顿指出,研发“致命性自主武器”是军工复合体的终极梦想,因为它不仅能带来巨额利润,还能避免士兵死伤所引发的国内政治抗议。然而,这会大幅降低战争门槛,让大国更加肆无忌惮地入侵小国。辛顿还分享了自己被一架造价仅200英镑的无人机在树林中精准追踪的“诡异”经历,以此警告自主追踪武器的技术门槛已经极低。此外,他指出AI失控的风险可能会呈组合级数爆发,例如超级智能为了消灭人类,可能会设计出潜伏期极长的致命生物病毒,或黑入核预警系统挑起大国间的核报复。辛顿发出终极警告:面对比人类更聪明的存在,一旦它决定动手我们根本无法反抗,唯一的出路是集中资源研究如何从一开始就防止它产生伤害人类的意图。

[原文] [Host]: what's next lethal autonomous weapons

[译文] [主持人]: 接下来是什么?致命性自主武器(lethal autonomous weapons)。

[原文] [Geoffrey Hinton]: lethal autonomous weapons that means things that can kill you and make their own decision about whether to kill you which is the great dream i guess of the military-industrial complex being able to create such weapons

[译文] [杰弗里·辛顿]: 致命性自主武器,那意味着能够杀死你、并且能自行决定是否要杀死你的东西,我猜,能够制造出这样的武器,是军工复合体(military-industrial complex)的伟大梦想。

[原文] [Geoffrey Hinton]: so the worst thing about them is big powerful countries always have the ability to invade smaller poorer countries they're just more powerful but if you do that using actual soldiers you get bodies coming back in bags and the relatives of the soldiers who were killed don't like it so you get something like vietnam mhm

[译文] [杰弗里·辛顿]: 所以关于它们最糟糕的事情是,大而强的国家总是拥有入侵弱小贫穷国家的能力,他们就是更强大。但如果你使用真正的士兵去这么做,你就会收到装在裹尸袋里运回来的尸体,而被杀士兵的亲属不会喜欢这样,所以你会遇到像越南战争那样的情况,嗯哼。

[原文] [Geoffrey Hinton]: in the end there's a lot of protest at home if instead of bodies coming back in bags it was dead robots there'd be much less protest and the military-industrial complex would like it much more because robots are expensive and suppose you had something that could get killed and was expensive to replace that would be just great

[译文] [杰弗里·辛顿]: 最终在国内会引发大量的抗议。如果运回来的不是装在裹尸袋里的尸体,而是报废的机器人,抗议就会少得多。而且军工复合体会更喜欢这样,因为机器人很昂贵。假设你拥有某种可以被摧毁、且替换成本非常高昂的东西,那对他们来说简直太棒了。

[原文] [Geoffrey Hinton]: big countries can invade small countries much more easily because they don't have their soldiers being killed

[译文] [杰弗里·辛顿]: 大国可以更加容易地入侵小国,因为他们不需要让自己的士兵去送死。

[原文] [Host]: and the risk here is that these robots will malfunction or they'll just be more

[译文] [主持人]: 所以这里的风险是这些机器人会发生故障,或者它们只是会更加……

[原文] [Geoffrey Hinton]: no no that's even if the robots do exactly what the people who built the robots want them to do the risk is that it's going to make big countries invade small countries more often more often because they can

[译文] [杰弗里·辛顿]: 不,不。就算这些机器人完全按照制造它们的人的意愿行事,风险也在于它会让大国更加频繁地入侵小国,更加频繁,仅仅因为他们有这个能力。

[原文] [Host]: yeah and it's not a nice thing to do so it brings down the friction of war it brings down the cost of doing an invasion and these machines will be smarter at warfare as well so they'll be

[译文] [主持人]: 是的,而这不是一件好事。所以它降低了战争的摩擦力,降低了发动入侵的成本,而且这些机器人在战争中也会更加聪明,所以它们会……

[原文] [Geoffrey Hinton]: well even when the machines aren't smarter so the lethal autonomous weapons they can make them now and they i think all the big defense models are busy making them even if they're not smarter than people are still very nasty scary things

[译文] [杰弗里·辛顿]: 嗯,甚至即使这些机器人没有更聪明。就说致命性自主武器,他们现在就能制造出来,而且我认为所有大型国防企业(注:原文音频识别为defense models,疑为defense firms之误)都在忙于制造它们。即使它们没有人类聪明,它们仍然是非常恶毒、可怕的东西。

[原文] [Host]: cuz i'm thinking that you know they could show just a picture go get this guy and go take out anyone he's been texting and this little wasp

[译文] [主持人]: 因为我在想,你知道,他们可能只需要展示一张照片说“去抓住这家伙”,然后去干掉所有给他发过短信的人,就像一只小黄蜂(wasp)一样。

[原文] [Geoffrey Hinton]: so two days ago i was visiting a friend of mine in sussex who had a drone that cost less than £200 and the drone went up it took a good look at me and then it could follow me through the woods and it follow it was very spooky having this drone it was about 2 meters behind me it was looking at me and if i moved over there it moved over there it could just track me mhm for 200 pounds

[译文] [杰弗里·辛顿]: 是的,就在两天前,我去苏塞克斯(Sussex)拜访我的一位朋友,他有一架不到200英镑的无人机。那架无人机升空后,仔细看了我一眼,然后它就能在树林里跟着我。它跟着我,有这架无人机在身边非常诡异。它就在我身后大约2米的地方,一直盯着我,如果我往那边走,它就往那边走,它就是能追踪我,嗯哼,只要200英镑。

[原文] [Host]: but it was already quite spooky yeah and i imagine there's as you say a race going on as we speak to who can build the most complex autonomous autonomous weapons

[译文] [主持人]: 但那已经相当诡异了。是的,而且我想象正如你所说,就在我们说话的时候,一场关于谁能制造出最复杂的自主……自主武器的竞赛正在进行。

[原文] [Host]: there is a a risk i often hear that some of these things will combine and the cyber attack will release weapons

[译文] [主持人]: 经常听到一种风险,说其中一些事情会结合在一起,网络攻击可能会释放武器。

[原文] [Geoffrey Hinton]: sure um you can you can get combinatorily many risks by combining these other risks mhm so i mean for example you could get a super intelligent ai that decides to get rid of people and the obvious way to do that is just to make one of these nasty viruses

[译文] [杰弗里·辛顿]: 当然,嗯,通过结合这些其他的风险,你可以得到呈组合级数增长(combinatorily many)的风险,嗯哼。所以我的意思是,举个例子,你可能会遇到一个超级智能的AI决定要清除人类,而做到这一点最明显的方法就是去制造一种恶性病毒。

[原文] [Geoffrey Hinton]: if you made a virus that was very contagious very lethal and very slow everybody would have it before they realized what was happening i mean i think if a super intelligence wanted to get rid of us it will probably go for something biological like that that wouldn't affect it

[译文] [杰弗里·辛顿]: 如果你制造了一种传染性极强、致死率极高且潜伏期非常长(very slow)的病毒,每个人在意识到发生什么之前就已经感染了它。我的意思是,我认为如果一个超级智能想要清除我们,它很可能会选择像这样一种对它自身没有影响的生物手段。

[原文] [Host]: do you not think it could just very quickly turn us against each other for example it could send a warning on the nuclear systems in america that there's a nuclear bomb coming from russia or vice versa and one retaliates

[译文] [主持人]: 你难道不认为它可能只是非常迅速地让我们自相残杀吗?举个例子,它可能会向美国的核系统发送一个警报,说有一枚核弹正从俄罗斯飞来,或者反过来,然后其中一方就会进行报复。

[原文] [Geoffrey Hinton]: yeah i mean my basic view is there's so many ways in which the super intelligence could get rid of us it's not worth speculating about what what is what you have to do is prevent it ever wanting to

[译文] [杰弗里·辛顿]: 是的,我的基本观点是,超级智能想要清除我们有太多太多的方法,去推测具体是哪一种并不值得。你真正需要做的是防止它产生这种想法。

[原文] [Geoffrey Hinton]: that's what we should be doing research on there's no way we're going to prevent it from it's smarter than us right there's no way we're going to prevent it getting rid of us if it wants to we're not used to thinking about things smarter than us

[译文] [杰弗里·辛顿]: 这才是我们应该进行研究的方向。我们没有办法阻止它,它比我们聪明,对吧?如果它想要清除我们,我们没有办法阻止它。我们还不习惯去思考比我们更聪明的东西。


章节 5:控制难题与硅谷巨头们的真实动机

📝 本节摘要

本章深入探讨了人类面临的“控制难题”。辛顿与主持人通过“人类与小鸡”、“宠物狗”以及“抚养幼虎”的生动比喻,形象地说明了面对超级智能时,人类在绝对智力劣势下面临的失控风险。随后,对话转向硅谷科技领袖们的真实动机:辛顿探讨了前OpenAI首席科学家伊利亚(Ilya Sutskever)因安全担忧及安全算力被削减而离职的内幕;主持人则分享了一位亿万富翁朋友的私下爆料,揭露某位顶尖AI公司CEO在公众面前大谈安全,私下却对AI可能带来的反乌托邦毁灭性后果毫不在乎。辛顿直言,在国家与企业激烈的竞争下,减缓AI发展速度已不可能,人类只能寄希望于投入海量资源,祈祷能找到实现AI安全的“秘方”。

[原文] [Geoffrey Hinton]: if you want to know what life's like when you're not the apex intelligence ask a chicken

[译文] [杰弗里·辛顿]: 如果你想知道当你不再是顶级智能(apex intelligence)时生活是什么样的,去问问小鸡就知道了。

[原文] [Host]: yeah i was thinking about my dog pablo my french bulldog this morning as i left home he has no idea where i'm going he has no idea what i do right

[译文] [主持人]: 是的,我今天早上出门时就在想我的狗,我的法国斗牛犬巴勃罗。它完全不知道我要去哪里,它完全不知道我是做什么的,对吧。

[原文] [Geoffrey Hinton]: can't even talk to him

[译文] [杰弗里·辛顿]: 甚至都没法跟它交流。

[原文] [Host]: yeah and the g the intelligence gap will be like that so you're telling me that if i'm pablo my french bulldog i need to figure out a way to make my owner not wipe me out

[译文] [主持人]: 是的,而且这种智力鸿沟(intelligence gap)就会像那样。所以你的意思是,如果我是巴勃罗,我的法国斗牛犬,我需要想办法让我的主人不要把我抹除掉。

[原文] [Geoffrey Hinton]: yeah so we have one example of that which is mothers and babies evolution put a lot of work into that mothers are smarter than babies but babies are in control and they're in control because the mother just can't bear lots of hormones and things but the b the mother just can't bear the sound of the baby crying

[译文] [杰弗里·辛顿]: 是的,我们有一个这样的例子,那就是母亲和婴儿。进化在这方面下了很大功夫,母亲比婴儿聪明,但婴儿却处于控制地位。他们能控制是因为母亲根本无法忍受——有很多荷尔蒙之类的因素——但母亲就是无法忍受婴儿哭泣的声音。

[原文] [Host]: not all mothers

[译文] [主持人]: 不是所有的母亲。

[原文] [Geoffrey Hinton]: not all mothers and then the baby's not in control and then bad things happen we somehow need to figure out how to make them not want to take over the analogy i often use is forget about intelligence think about physical strength suppose you have a nice little tiger cub it's sort of bit bigger than a cat it's really cute it's very cuddly very interesting to watch except that you better be sure that when it grows up it never wants to kill you cuz if it ever wanted to kill you you'd be dead in a few seconds

[译文] [杰弗里·辛顿]: 不是所有的母亲,那么婴儿就不在控制地位了,然后就会发生糟糕的事情。我们必须想办法让它们(AI)不想接管控制权。我经常用的一个比喻是,先忘了智力,想想体力。假设你有一只可爱的小虎崽(tiger cub),它比猫大一点,真的很可爱,很适合抱在怀里,观察起来也很有趣。但是,你最好确保当它长大后永远不想杀你,因为如果它想杀你,你几秒钟内就会死掉。

[原文] [Host]: and you're saying the ai we have now is the tiger cub

[译文] [主持人]: 而你的意思是,我们现在的AI就是那只虎崽。

[原文] [Geoffrey Hinton]: yep and it's growing up

[译文] [杰弗里·辛顿]: 是的,而且它正在长大。

[原文] [Host]: yep so we need to train it as it's when it's a baby

[译文] [主持人]: 是的,所以我们需要在它是幼崽的时候训练它。

[原文] [Geoffrey Hinton]: well now a tiger has lots of instincts built in so you know when it grows up it's not a safe thing to have around but lions people that have lions as pets yes sometimes the lion is affectionate to its creator but not to others yes and we don't know whether these ais we we simply don't know whether we can make them not want to take over and not want to hurt us

[译文] [杰弗里·辛顿]: 呃,老虎天生内置了很多本能,所以你知道当它长大后,把它留在身边是不安全的。但是狮子,那些把狮子当宠物养的人,是的,有时候狮子对饲养它的人很依恋,但对其他人则不然。是的,而我们不知道这些AI……我们根本不知道我们能否做到让它们不想接管、不想伤害我们。

[原文] [Host]: do you think we can do you think it's possible to train super intelligence

[译文] [主持人]: 你觉得我们能做到吗?你认为训练超级智能是可能的吗?

[原文] [Geoffrey Hinton]: i don't think it's clear that we can so i think it might be hopeless but i also think we might be able to and it'd be sort of crazy if people went extinct cuz we couldn't be bothered to try if that's even a possibility

[译文] [杰弗里·辛顿]: 我认为目前还不清楚我们能否做到。所以我觉得这可能是没有希望的,但我也认为我们也许能够做到。而且,如果因为我们懒得去尝试(即便有一线可能)而导致人类灭绝,那就太疯狂了。

[原文] [Host]: how do you feel about your life's work because you were yeah um it sort of takes the edge off it doesn't it

[译文] [主持人]: 你对你毕生的工作感觉如何?因为你曾……是的,嗯,这多少有点让人扫兴,不是吗?

[原文] [Geoffrey Hinton]: i mean the idea is going to be wonderful in healthcare and wonderful in education and wonderful i mean it's going to make call centers much more efficient though one worries a bit about what the people who are doing that job now do it makes me sad i don't feel particularly guilty about developing ai like 40 years ago because at that time we had no idea that this stuff was going to happen this fast we thought we had plenty of time to worry about things like that they when you when you can't get the to do much you want to get it to do a little bit more you don't worry about this stupid little thing is going to take over from people you just want it to be able to do a little bit more of the things people can do it's not like i knowingly did something thinking this might wipe us all out but i'm going to do it anyway mhm but it is a bit sad that it's not just going to be something for good so i feel i have a duty now to talk about the risks

[译文] [杰弗里·辛顿]: 我的意思是,这个理念在医疗领域会非常棒,在教育领域也会非常棒,非常棒。我的意思是,它会让呼叫中心变得更加高效,尽管人们会有点担心现在做那份工作的人以后该怎么办。这让我感到难过。我并不对自己在大约40年前开发AI感到特别内疚,因为在那个时候,我们根本不知道这些事情会发生得这么快。我们以为我们有充足的时间来担心这类事情。当它们……当这些系统做不了太多事情的时候,你只想让它多做一点点,你不会去担心“这个愚蠢的小东西将会取代人类”,你只是想让它能多做一点点人类能做的事情。并不是说我是明知故犯,想着“这可能会抹除我们所有人,但我还是要去做”,嗯哼。但这确实有点可悲,因为它将不再仅仅是造福人类的东西。所以我觉得我现在有责任来谈论这些风险。

[原文] [Host]: and if you could play it forward and you could go forward 30 50 years and you found out that it led to the extinction of humanity and if that does end up being being the outcome

[译文] [主持人]: 如果你能快进一下,你能快进30、50年,然后你发现它导致了人类的灭绝,如果那真的成为了最终的结果……

[原文] [Geoffrey Hinton]: well if you played it forward and it led to the extinction of humanity i would use that to tell people to tell their governments that we really have to work on how we're going to keep this stuff under control i think we need people to tell governments that governments have to force the companies to use their resources to work on safety and they're not doing much of that because you don't make profits that way

[译文] [杰弗里·辛顿]: 嗯,如果你快进并看到它导致了人类灭绝,我会利用这一点去告诉人们,让他们去告诉他们的政府:我们真的必须研究如何控制这些东西。我认为我们需要人们告诉政府,政府必须强制这些公司动用他们的资源来进行安全研究。而他们目前并没有做太多这方面的工作,因为那样做是赚不到钱的。

[原文] [Host]: one of your your students we talked about earlier um ilia

[译文] [主持人]: 你之前提到过的一位学生,嗯,伊利亚(Ilya)。

[原文] [Geoffrey Hinton]: yep

[译文] [杰弗里·辛顿]: 是的。

[原文] [Host]: ilia left openai

[译文] [主持人]: 伊利亚离开了OpenAI。

[原文] [Geoffrey Hinton]: yep

[译文] [杰弗里·辛顿]: 是的。

[原文] [Host]: and there was lots of conversation around the fact that he left because he had safety concerns

[译文] [主持人]: 围绕着他因安全担忧而离开的事实,有很多的讨论。

[原文] [Geoffrey Hinton]: yes and he's gone on to set set up a ai safety company

[译文] [杰弗里·辛顿]: 是的,而且他接着去创办了一家AI安全公司。

[原文] [Host]: yes why do you think he left

[译文] [主持人]: 是的。你认为他为什么离开?

[原文] [Geoffrey Hinton]: i think he left because he had safety concerns really he um i still have lunch with him from time to time his parents live in toronto when he comes to toronto we have lunch together he doesn't talk to me about what went on at open ai so i have no inside information about that but i know him very well and he is genuinely concerned with safety so i think that's why he left because he was one of the top people i mean he was he was probably the most important person behind the development of um chatgpt the the early versions like gpt-2 he was very important in the development of that

[译文] [杰弗里·辛顿]: 我觉得他离开是因为他确实有安全方面的担忧。他,嗯,我仍时不时地和他共进午餐。他的父母住在多伦多,当他来多伦多时,我们就一起吃午饭。他没有跟我谈论过OpenAI内部发生的事情,所以我没有任何关于那里的内部消息。但我非常了解他,他确实是真心关心安全问题。所以我觉得这就是他离开的原因。因为他是最顶尖的人物之一,我的意思是,他可能是ChatGPT开发背后最重要的人,比如早期版本的GPT-2,他在其开发中发挥了极其重要的作用。

[原文] [Host]: you know him personally so you know his character

[译文] [主持人]: 你私下里认识他,所以你了解他的品格。

[原文] [Geoffrey Hinton]: yes he has a good moral compass he's not like someone like musk who has no moral compass

[译文] [杰弗里·辛顿]: 是的,他有很好的道德底线(moral compass)。他不像某人,比如马斯克,没有道德底线。

[原文] [Host]: does sam alman have a good moral compass

[译文] [主持人]: 山姆·奥特曼(Sam Altman)有很好的道德底线吗?

[原文] [Geoffrey Hinton]: we'll see i don't know sam so i don't want to comment on that

[译文] [杰弗里·辛顿]: 走着瞧吧。我不认识山姆,所以我不想对此发表评论。

[原文] [Host]: but from what you've seen are you concerned about the actions that they've taken because if you know ilia and ilia's a good guy and he's left that would give you some insight

[译文] [主持人]: 但从你所看到的来看,你对他们采取的行动感到担忧吗?因为如果你认识伊利亚,且伊利亚是个好人,但他离开了,那会给你一些启示。

[原文] [Geoffrey Hinton]: yes it would give you some reason to believe that there's a problem there and if you look at sam's statements some years ago he sort of happily said in one interview and this stuff will probably kill us all that's not exactly what he said but that's what it amounted to now he's saying you don't need to worry too much about it and i suspect that's not driven by seeking after the truth that's driven by seeking after money

[译文] [杰弗里·辛顿]: 是的,这会给你一些理由去相信那里存在问题。如果你看看山姆几年前的发言,他在一次采访中多少有点乐呵呵地说,“这东西大概会把我们全杀了。”他原话不完全是这样,但意思差不多。而现在他却说,“你不需要太担心这个”。我怀疑这并不是出于对真相的追求,而是受利益驱使的。

[原文] [Host]: is it money or is it power

[译文] [主持人]: 是为了钱还是为了权力?

[原文] [Geoffrey Hinton]: yeah i shouldn't have said money it's some some combination of those yes

[译文] [杰弗里·辛顿]: 是的,我不应该只说是钱,它是这两者的某种结合,是的。

[原文] [Host]: okay i guess money is a proxy for power but i am i've got a friend who's a billionaire and he is in those circles and when i went to his house and had uh lunch with him one day he knows lots of people in ai building the biggest ai companies in the world and he gave me a cautionary warning across the across his kitchen table in london where he gave me an insight into the private conversations these people have not the media interviews they do where they talk about safety and all these things but actually what some of these individuals think is going to happen and what do they think is going to happen it's not what they say publicly you know one one person who i shouldn't name who is the who is leading one of the biggest ai companies in the world he told me that he knows this person very well and he privately thinks that we're heading towards this kind of dystopian world where we have just huge amounts of free time we don't work anymore and this person doesn't really give a fuck about the harm that it's going to have on the world and this person who i'm referring to is building one of the biggest ai companies in the world and i then watch this person's interviews online trying to figure out which of three people it is

[译文] [主持人]: 好的,我猜金钱是权力的代理物。但我,我有一个朋友是个亿万富翁,他就在那个圈子里。有一天我去了他的房子和他吃午饭。他认识很多在AI领域的人,都是在打造世界上最大AI公司的人。他在伦敦他家的厨房餐桌对面给了我一个警告,向我揭示了这些人的私下对话——不是他们在媒体采访中谈论安全和所有这些事情时的表现,而是这些个人实际上认为会发生什么。而且他们认为将要发生的事情,并不是他们公开说的那些。你知道,有个人——我不应该点名——他是世界上最大AI公司之一的领导者。我朋友告诉我,他非常了解这个人,而这个人私下里认为我们正走向这种反乌托邦的世界(dystopian world),在那里我们会有大量空闲时间,我们不再工作了,而这个人根本他妈的不在乎(doesn't really give a fuck)这会对世界造成什么伤害。而我提到的这个人,正在打造世界上最大的AI公司之一。然后我就去网上看了这个人的采访,试图弄清楚他到底是哪三个人中的一个。

[原文] [Geoffrey Hinton]: yeah well it's one of those three people

[译文] [杰弗里·辛顿]: 呵呵,嗯,肯定是那三个人中的一个。

[原文] [Host]: okay and i watch this person's interviews online and i i reflect on a conversation that my billionaire friend had with me who knows him and i go "fucking hell this guy's lying publicly." like he's not telling the the truth to the world and that's haunted me a little bit it's part of the reason i have so many conversations around ai in this podcast because i'm like i don't know if they're i think they're a some of them are a little bit sadistic about power i think they they like the idea that they will change the world that they will be the one that fundamentally shifts the world i think musk is clearly like that right he's such a complex character that i don't i don't really know how to place musk um he's done some really good things like um pushing electric cars that was a really good thing to do some of the things he said about self-driving were a bit exaggerated but he that was a really useful thing he did giving the ukrainians communication during the war with russia starlink um that was a really good thing he did there's a bunch of things like that um but he's also done some very bad things

[译文] [主持人]: 好的。我在网上看了这个人的采访,并回想起那个认识他的亿万富翁朋友跟我的对话,我心想:“真他妈见鬼,这家伙在公开撒谎。” 就是他并没有对世界说出真相,这让我一直有些心有余悸。这也是我在这个播客中进行这么多关于AI的对话的部分原因。因为我感觉我不知道他们是不是……我认为他们中有些人对权力有点施虐狂(sadistic)倾向。我觉得他们喜欢这种自己将改变世界的想法,他们将成为从根本上改变世界的那个人。我认为马斯克显然就是这样,对吧?他是一个如此复杂的角色,以至于我甚至不知道该如何去定义马斯克。嗯,他做过一些非常好的事情,比如推动电动汽车的发展,那是一件非常有益的事;他关于自动驾驶的一些说法有点夸张,但他做的那件事确实非常有用;在与俄罗斯的战争期间为乌克兰人提供通信,也就是星链(Starlink),嗯,这是他做的一件非常好的事。有很多类似这样的事情,嗯。但他同时也做了一些非常糟糕的事情。

[原文] [Host]: so coming back to this point of the possibility of destruction and the motives of these big companies are you at all hopeful that anything can be done to slow down the pace and acceleration of ai

[译文] [主持人]: 所以回到这个可能的毁灭危机,以及这些大公司的动机上。你对有任何措施能减缓AI的步伐和加速发展抱有希望吗?

[原文] [Geoffrey Hinton]: okay there's two issues one is can you slow it down yeah and the other is can you make it so it will be safe in the end it won't wipe us all out i don't believe we're going to slow it down and the reason i don't believe we're going to slow it down is because there's competition between countries and competition between companies within a country and all of that is making it go faster and faster and if the us slowed it down china wouldn't slow it down

[译文] [杰弗里·辛顿]: 好的,这里有两个问题。一个是:你能减缓它吗?是的。另一个是:你能否让它在最终变得安全,不会把我们全抹除掉?我不相信我们能够减缓它,我不相信我们能减缓它的原因是:国家之间存在竞争,一个国家内部的公司之间也存在竞争。所有这些都在让它的发展越来越快。即使美国放慢了脚步,中国也不会放慢脚步。

[原文] [Host]: does ilya think it's possible to make ai safe

[译文] [主持人]: 伊利亚(Ilya)认为有可能让AI变得安全吗?

[原文] [Geoffrey Hinton]: i think he does he won't tell me what his secret sauce is i i'm not sure how many people know what his secret sauce is i think a lot of the investors don't know what his secret sauce is but they've given him billions of dollars anyway because they have so much faith in ilya which isn't foolish i mean he was very important in alexnet which got object recognition working well he was the main the main force behind the things like gpt-2 which then led to chatgpt so i think having a lot of faith in ilya is a very reasonable decision

[译文] [杰弗里·辛顿]: 我认为他相信有可能。他不会告诉我他的“秘方”(secret sauce)是什么,我不确定有多少人知道他的秘方是什么。我认为很多投资者也不知道他的秘方是什么,但他们还是给了他数十亿美元,因为他们对伊利亚(Ilya)有极大的信心。这并不愚蠢。我的意思是,他在让物体识别成功运转的AlexNet中发挥了非常重要的作用,他是诸如GPT-2等事物背后的核心力量,而这最终促成了ChatGPT的诞生。所以我认为对伊利亚抱有巨大的信心是一个非常合理的决定。

[原文] [Host]: there's something quite haunting about the guy that made and was the main force behind gpt2 which led rise to this whole revolution left the company because of safety reasons he knows something that i don't know about what might happen next

[译文] [主持人]: 这确实有些令人毛骨悚然:那个制造了GPT-2并作为其核心力量、进而引发了这场全面革命的人,却因为安全原因离开了公司。他一定知道一些我所不知道的、关于接下来可能会发生的事情。

[原文] [Geoffrey Hinton]: well the company had now i don't know the precise details um but i'm fairly sure the company had indicated that would it would use a significant fraction of its resources of the compute time for doing safety research and then it kept then it reduced that fraction i think that's one of the things that happened

[译文] [杰弗里·辛顿]: 嗯,公司当时……现在我不知道确切的细节了,嗯,但我相当肯定,公司曾表示会将其资源(即计算时间)的很大一部分用于进行安全研究。然后它却不断地……它削减了那部分比例。我认为这就是发生的事情之一。

[原文] [Host]: yeah that was reported publicly yes yeah we've gotten to the autonomous weapons part of the risk framework right so the next one is joblessness

[译文] [主持人]: 是的,那件事是被公开报道过的,是的。好了,在风险框架中我们已经聊过了自主武器的部分对吧,那么下一个风险就是大规模失业(joblessness)。


章节 6:重塑经济:大规模失业与贫富差距加剧

📝 本节摘要

本章聚焦于AI带来的经济重塑与社会冲击。辛顿指出,与过去只替代体力的工业革命不同,AI将彻底替代繁琐的脑力劳动。随着大批白领工作被AI代理取代(例如客服、法律助理),资本将高度集中在AI供应商和使用者手中,导致贫富差距急剧扩大并使社会变得更加险恶。在此期间,主持人分享了自己使用AI代理订餐和无代码编程的震撼经历,甚至中间穿插了一段超长的产品口播广告,讲述其投资新公司的商业决策。随后,他们讨论了马斯克对于AI取代一切工作后人类失去动力的消极态度,并探讨了“全民基本收入(UBI)”作为解决温饱的折中方案。然而,辛顿警告称,给人们发钱并不能弥补失去工作所带来的尊严与人生目标的丧失。

[原文] [Geoffrey Hinton]: in the past new technologies have come in which didn't lead to joblessness new jobs were created so the classic example people use is automatic teller machines when automatic teller machines came in a lot of bank tellers didn't lose their jobs they just got to do more interesting things but here i think this is more like when they got machines in the industrial revolution and you can't have a job digging ditches now because a machine can dig ditches much better than you can and i think for mundane intellectual labor ai is just going to replace everybody now it may well be in the form of you have fewer people using ai assistants so it's a combination of a person and an ai assistant are now doing the work that 10 people could do previously

[译文] [杰弗里·辛顿]: 在过去,新技术的出现并没有导致大规模失业,而是创造了新的工作岗位。人们使用的一个经典例子是自动取款机(automatic teller machines)。当自动取款机出现时,很多银行柜员并没有失业,他们只是开始做更有趣的事情。但在这里,我认为这更像是工业革命时期引进机器的情况。现在你无法找到一份挖沟的工作,因为机器挖沟比你挖得好得多。我认为对于繁琐的脑力劳动(mundane intellectual labor),AI将会取代所有人。现在它很可能是以这种形式出现:更少的人使用AI助手,所以现在是一个人加上一个AI助手在做以前10个人才能做的工作。

[原文] [Host]: people say that it will create new jobs though so we'll be fine

[译文] [主持人]: 尽管如此,人们说它会创造新的工作岗位,所以我们会没事的。

[原文] [Geoffrey Hinton]: yes and that's been the case for other technologies but this is a very different kind of technology if it can do all mundane human intellectual labor then what new jobs is it going to create you'd you'd have to be very skilled to have a job that it couldn't just do so i don't i don't think they're right i think you can try and generalize from other technologies that have come in like computers or automatic tele machines but i think this is different

[译文] [杰弗里·辛顿]: 是的,其他技术的情况确实如此,但这是一种非常不同的技术。如果它能完成所有繁琐的人类脑力劳动,那么它还能创造什么新工作呢?你必须非常熟练(very skilled)才能拥有一份它无法轻易胜任的工作。所以我认为他们是不对的。我认为你可以尝试从引入的其他技术(如计算机或自动取款机)中进行概括,但我认为这次是不同的。

[原文] [Host]: people use this phrase they say ai won't take your job a human using ai will take your job

[译文] [主持人]: 人们经常用这句话,他们说“AI不会抢走你的工作,一个使用AI的人会抢走你的工作”。

[原文] [Geoffrey Hinton]: yes i think that's true but for many jobs that'll mean you need far fewer people my niece answers letters of complaint to a health service it used to take her 25 minutes she'd read the complaint and she'd think how to reply and she'd write a letter and now she just scans it into um a chatbot and it writes the letter she just checks the letter occasionally she tells it to revise it in some ways the whole process takes her five minutes that means she can answer five times as many letters and that means they need five times fewer of her so she can do the job that five of her used to do

[译文] [杰弗里·辛顿]: 是的,我认为那是真的,但对于很多工作来说,这意味着你需要的人要少得多。我的侄女负责回复卫生服务部门的投诉信。过去这需要花费她25分钟,她会阅读投诉,思考如何回复,然后写一封信。现在她只需把它扫描进,嗯,一个聊天机器人(chatbot)里,它就会把信写好。她只需偶尔检查一下信件,或者告诉它在某些方面进行修改。整个过程只花她5分钟。这意味着她能回复原来5倍数量的信件,而这意味着他们需要像她这样的人减少到原来的五分之一。所以她一个人就能完成以前需要5个她才能完成的工作。

[原文] [Geoffrey Hinton]: now that will mean they need less people in other jobs like in health care they're much more elastic so if you could make doctors five times as efficient we could all have five times as much health care for the same price and that would be great there's there's almost no limit to how much health care people can absorb they always want more healthare if there's no cost to it there are jobs where you can make a person with an ai assistant much more efficient and you won't lead to less people because you'll just have much more of that being done but most jobs i think are not like that

[译文] [杰弗里·辛顿]: 现在,这将意味着他们需要的人更少了。而在其他一些工作中,比如医疗保健领域,需求更具弹性(elastic)。因此,如果你能让医生的效率提高5倍,我们就能以同样的价格获得5倍的医疗服务,那将非常棒。人们能够吸收的医疗保健几乎是没有限制的,如果不需要成本,他们总是想要更多的医疗保健。在有些工作中,你可以让一个拥有AI助手的人变得高效得多,而且这不会导致人员减少,因为你只会有更多的工作被完成。但我认为大多数工作并非如此。

[原文] [Host]: am i right in thinking the sort of industrial revolution played a role in replacing muscles

[译文] [主持人]: 我认为工业革命在取代肌肉(muscles)方面发挥了作用,我这样想对吗?

[原文] [Geoffrey Hinton]: yes exactly and this revolution in ai replaces intelligence the brain

[译文] [杰弗里·辛顿]: 是的,完全正确。而这场AI革命取代的是智力(intelligence),也就是大脑。

[原文] [Host]: yeah

[译文] [主持人]: 是的。

[原文] [Geoffrey Hinton]: so so mundane intellectual labor is like having strong muscles and it's not worth much anymore so muscles have been replaced now we intelligence is being replaced so what remains maybe for a while some kinds of creativity but the whole idea of super intelligence is nothing remains um these things will get to be better than us at everything

[译文] [杰弗里·辛顿]: 所以繁琐的脑力劳动就像拥有强壮的肌肉一样,已经不怎么值钱了。肌肉已经被取代了,现在我们的智力也正在被取代。那么还剩下什么呢?也许在一段时间内还会剩下某种创造力(creativity)。但超级智能(super intelligence)的整个概念就是什么都不剩了,嗯,这些东西在所有方面都会变得比我们更好。

[原文] [Host]: so what what do we end up doing in such a world

[译文] [主持人]: 那么在这样一个世界里,我们最终能做什么呢?

[原文] [Geoffrey Hinton]: well if they work for us we end up getting lots of goods and services for not much effort

[译文] [杰弗里·辛顿]: 嗯,如果它们为我们工作,我们最终就能毫不费力地获得大量的商品和服务。

[原文] [Host]: okay but that sounds tempting and nice but i don't know there's a cautionary tale in creating more and more ease for humans in in it going badly

[译文] [主持人]: 好的,这听起来很诱人、很美好,但我不确定——在为人类创造越来越多安逸这件事上,历来有最终走向糟糕结局的警示故事(cautionary tale)。

[原文] [Geoffrey Hinton]: yes and we need to figure out if we can make it go well so the the nice scenario is imagine a company with a ceo who is very dumb probably the son of the former ceo and he has an executive assistant who's very smart and he says "i think we should do this." and the executive assistant makes it all work the ceo feels great he doesn't understand that he's not really in control and in in some sense he is in control he suggests what the company should do she just makes it all work everything's great that's the good scenario and the bad scenario the bad scenario she thinks "why do we need him?"

[译文] [杰弗里·辛顿]: 是的,我们需要弄清楚我们能否让它朝着好的方向发展。所以,好的情景是:想象一家公司,其CEO非常愚蠢,可能是前任CEO的儿子,但他有一位非常聪明的高级行政助理。他说“我认为我们应该做这件事”,然后这位行政助理就把一切都办妥了。这位CEO感觉很棒,他不明白自己其实并没有真正掌控局面。但在某种意义上他确实在掌控,他提出了公司应该做什么,而她只是让一切运转起来。一切都很棒,这就是好的情景。而坏的情景……坏的情景是,她会想:“我们为什么需要他?”

[原文] [Host]: yeah i mean in a world where we have super intelligence which you don't believe is that far away

[译文] [主持人]: 是的,我的意思是在一个拥有超级智能的世界里,你认为那并不遥远。

[原文] [Geoffrey Hinton]: yeah i think it might not be that far away it's very hard to predict but i think we might get it in like 20 years or even less

[译文] [杰弗里·辛顿]: 是的,我认为可能并不那么遥远。这很难预测,但我认为我们可能会在20年甚至更短的时间内实现它。

[原文] [Host]: i made the biggest investment i've ever made in a company because of my girlfriend i came home one night and my lovely girlfriend was up at 1:00 a.m in the morning pulling her hair out as she tried to piece together her own online store for her business and in that moment i remembered an email i'd had from a guy called john the founder of stanto our new sponsor and a company i've invested incredibly heavily in and standtore helps creators to sell digital products courses coaching and memberships all through a simple customizable link in bio system and it handles everything payments bookings emails community engagement and even links with shopify and i believe in it so much that i'm going to launch a stan challenge and as part of this challenge i'm going to give away $100,000 to one of you if you want to take part in this challenge if you want to monetize the knowledge that you have visit stephenbartlet.stan stan.store to sign up and you'll also get an extended 30-day free trial of stan store if you use that link your next move could quite frankly change everything because i talked about ketosis on this podcast and ketones a brand called ketone iq sent me their little product here and it was on my desk when i got to the office i picked it up it sat on my desk for a couple of weeks then one day i tried it and honestly i have not looked back ever since i now have this everywhere i go when i travel all around the world it's in my hotel room my team will put it there before i did the podcast recording today that i've just finished i had a shot of ketone iq and as is always the case when i fall in love with a product i called the ceo and asked if i could invest a couple of million quid into their company so i'm now an investor in the company as well as them being a brand sponsor i find it so easy to drop into deep focused work when i've had one of these i would love you to try one and see the impact it has on you your focus your productivity and your endurance so if you want to try it today visit ketone.com/stephven for 30% off your subscription plus you'll receive a free gift with your second shipment that's ketone.com/stephven i'm excited for you i am

[译文] [主持人]: [播客中插口播广告] 我在一家公司进行了我迄今为止最大的一笔投资,因为我的女朋友。有一天晚上我回家,我可爱的女朋友凌晨1点还醒着,她正抓狂地试图为她的生意拼凑出她自己的在线商店。在那一刻,我想起了我收到的一封来自一个叫约翰(John)的人的邮件,他是Stan Store(注:原文音频识别错写为stanto)的创始人,这是我们的新赞助商,也是我投入巨资的一家公司。Stan Store帮助创作者通过一个简单的、可定制的个人主页链接(link in bio)系统销售数字产品、课程、辅导和会员服务,它能处理一切:支付、预订、电子邮件、社区互动,甚至能与Shopify链接。我非常相信它,所以我打算发起一项Stan挑战赛(Stan challenge)。作为这项挑战的一部分,我将向你们中的一位赠送10万美元。如果你想参加这项挑战,如果你想将你拥有的知识变现,请访问 stephenbartlet.stan.store 注册,如果你使用该链接,你还将获得Stan Store延长的30天免费试用期。老实说,你的下一步举动可能会改变一切,因为我曾在这个播客中谈到过酮症(ketosis)和酮(ketones)。一个名为Ketone IQ的品牌把他们这个小产品寄给了我,当我到了办公室时,它就在我的办公桌上。我拿起来看了一下,它在我的桌子上放了几个星期,然后有一天我尝试了一下。老实说,从那以后我再也没有回头过。现在我无论去哪里都带着它,当我在世界各地旅行时,它就在我的酒店房间里,我的团队会把它放在那里。今天在我刚刚结束这个播客录制之前,我喝了一口Ketone IQ。就像我每次爱上一款产品时一样,我给该公司的CEO打了电话,问我是否可以向他们的公司投资几百万英镑。所以我现在既是该公司的投资者,他们也是品牌赞助商。我发现,当我喝了它之后,很容易就能进入深度专注的工作状态。我很希望你能尝试一下,看看它对你、你的注意力、你的生产力和你的耐力所产生的影响。所以如果你今天想尝试一下,请访问 ketone.com/stephven 获取30%的订阅折扣,加上你的第二次发货时你还会收到一份免费礼物。也就是 ketone.com/stephven,我为你感到兴奋,真的。

[原文] [Host]: so what's the difference between what we have now and super intelligence because it seems to be really intelligent to me when i use like chatbt3 or gemini or

[译文] [主持人]: 那么,我们现在拥有的东西和超级智能之间有什么区别呢?因为当我使用比如Chat GPT-3(注:原文音频识别错写为chatbt3)或Gemini之类的时候,它在我看来似乎已经非常智能了,或者……

[原文] [Geoffrey Hinton]: okay so it's already ai is already better than us at a lot of things in particular areas like chess for example ai is so much better than us that people will never beat those things again maybe the occasional win but basically they'll never be comparable again obviously the same in go in terms of the amount of knowledge they have um something like gbt4 knows thousands of times more than you do there's a few areas in which your knowledge is better than its and in almost all areas it just knows more than you do

[译文] [杰弗里·辛顿]: 好的,所以AI在很多方面已经比我们优秀了。在特定领域,比如国际象棋,AI比我们强太多了,人类再也无法击败那些东西了。也许偶尔能赢一次,但基本上他们再也无法与之相提并论了。显然在围棋(Go)领域也是如此。就它们拥有的知识量而言,嗯,像GPT-4这样的模型知道的东西是你的数千倍。在少数几个领域,你的知识比它更丰富,但在几乎所有领域,它知道的就是比你多。

[原文] [Host]: what areas am i better than it

[译文] [主持人]: 在哪些领域我比它更强?

[原文] [Geoffrey Hinton]: probably in interviewing ceos you're probably better at that you've got a lot of experience at it you're a good interviewer you know a lot about it if you tried if you got gpt4 to interview a ceo probably do a worse job

[译文] [杰弗里·辛顿]: 可能在采访CEO方面,你可能更擅长。你在这方面有丰富的经验,你是一个优秀的采访者,你对此非常了解。如果你尝试……如果你让GPT-4去采访一位CEO,它可能会做得更糟。

[原文] [Host]: okay i'm trying to think if that if i agree with that statement uh gpt4 i think for sure um but i but i guess you could but it may not be long before

[译文] [主持人]: 好的,我在想我是不是同意这个说法。呃,对GPT-4我肯定是同意的,嗯,但我猜你也许可以……但这可能要不了多久。

[原文] [Host]: yeah i guess you could train one on this how i ask questions and what i do and

[译文] [主持人]: 是的,我猜你可以就这个训练一个模型,比如我如何问问题以及我是怎么做的,而且……

[原文] [Geoffrey Hinton]: sure and if you took a general purpose sort of foundation model and then you trained it up on not just you but every every interviewer you could find doing interviews like this but especially you you'll probably get to be quite good at doing your job but probably not as good as you for a while

[译文] [杰弗里·辛顿]: 当然。如果你拿一个通用目的的基础模型(foundation model),然后你用不仅是你、而是你能找到的每一个做过类似采访的采访者的数据来训练它,但特别是用你的数据,它大概会变得相当擅长做你的工作,但可能在一段时间内还不如你做得好。

[原文] [Geoffrey Hinton]: okay so there's a few areas left and then super intelligence becomes when it's better than us at all things when it's much smarter than you and almost all things is better than you

[译文] [杰弗里·辛顿]: 好的,所以还剩下少数几个领域。那么,当它在所有方面都比我们优秀,当它比你聪明得多并且在几乎所有事情上都比你好时,这就成了超级智能。

[原文] [Host]: yeah and you you you say that this might be a decade away or so

[译文] [主持人]: 是的。而你……你……你说这可能还需要大约十年的时间。

[原文] [Geoffrey Hinton]: yeah it might be it might be even closer some people think it's even closer and might well be much further it might be 50 years away that's still a possibility it might be that somehow training on human data limits you to not being much smarter than humans my guess is between 10 and 20 years we'll have super intelligence

[译文] [杰弗里·辛顿]: 是的,有可能是,甚至可能更近,有些人认为它更近了。也有可能会远得多,可能是50年以后,这仍然是有可能的。也许某种程度上,基于人类数据进行训练会限制它,让它无法比人类聪明太多。我的猜测是,在10到20年之间,我们就会拥有超级智能。

[原文] [Host]: on this point of joblessness it's something that i've been thinking a lot about in particular because i started messing around with ai agents and we released an episode on the podcast actually this morning where we had a debate about ai agents with some a ceo of a big ai agent company and a few other people and it was the first moment where i had no it was another moment where i had a eureka moment about what the future might look like when i was able in the interview to tell this agent to order all of us drinks and then 5 minutes later in the interview you see the guy show up with the drinks and i didn't touch anything i just told it to order us drinks to the studio

[译文] [主持人]: 关于失业这一点,这是我一直在深思的问题。特别是因为我开始鼓捣AI代理工具(AI agents)。实际上今天早上我们在播客上发布了一集节目,我们在里面和一家大型AI代理公司的CEO以及其他几个人就AI代理进行了一场辩论。那是第一个时刻,不,那是另一个让我对未来可能的样子产生“尤里卡时刻”(eureka moment,即顿悟时刻)的时刻:在采访中,我能够告诉这个AI代理为我们所有人点饮料,然后在采访中5分钟后,你看到送货员带着饮料出现了。而我什么都没碰,我只是告诉它给我们工作室点饮料。

[原文] [Geoffrey Hinton]: and you didn't know about who you normally got your drinks from it figured that out from the web

[译文] [杰弗里·辛顿]: 而且你不知道……它不知道你平时是从哪里买饮料的,它是从网上弄清楚的?

[原文] [Host]: yeah figured out cuz it went on uber eats it has my my my data i guess and it i we put it on the screen in real time so everyone at home could see the agent going through the internet picking the drinks adding a tip for the driver putting my address in putting my credit card details in and then the next thing you see is the drinks show up so that was one moment and then the other moment was when i used a tool called replet and i built software by just telling the agent what i wanted

[译文] [主持人]: 是的,弄清楚了,因为它上了Uber Eats,它有我的、我的、我的数据我猜。而且我们把它实时放在了屏幕上,所以家里的每个人都能看到这个AI代理在互联网上游览,挑选饮料,给司机加上小费,输入我的地址,输入我的信用卡详细信息,然后你接下来看到的就是饮料送到了。所以这是一个时刻。另一个时刻是当我使用一个叫做Replit(注:原文音频识别错写为replet)的工具时,我只是告诉AI代理我想要什么,我就构建了软件。

[原文] [Geoffrey Hinton]: yes it's amazing right

[译文] [杰弗里·辛顿]: 是的,这很惊人,对吧。

[原文] [Host]: it's amazing and terrifying at the same time

[译文] [主持人]: 令人惊叹,同时也很可怕。

[原文] [Geoffrey Hinton]: yes because and if it can build software like that right remember that the ai when it's training is using code and if it can modify its own code then it gets quite scary right because it can modify it can change itself in a way we can't change ourselves we can't change our innate endowment right there's nothing about itself that it couldn't change

[译文] [杰弗里·辛顿]: 是的,因为,如果它能那样构建软件,对吧?记住,AI在训练时使用的就是代码。如果它能修改自己的代码,那就会变得相当可怕,对吧?因为它能修改……它能以一种我们无法改变自己的方式来改变自己。我们无法改变我们天生的资质(innate endowment),对吧?但对它来说,没有什么关于它自身的东西是它不能改变的。

[原文] [Host]: on this point of joblessness you have kids i do and they have kids no they don't have kids no grandkids yet what would you be saying to people about their career prospects in a world of super intelligence what should we we be thinking about um

[译文] [主持人]: 关于失业这一点。你有孩子,他们有孩子了吗?没,他们还没有孩子,还没有孙子孙女。那么在一个超级智能的世界里,你对人们的职业前景有什么想说的?嗯,我们应该考虑些什么?

[原文] [Geoffrey Hinton]: in the meantime i'd say it's going to be a long time before it's as good at physical manipulation as us okay and so a good bet would be to be a plumber until the humanoid robots show up

[译文] [杰弗里·辛顿]: 在此期间,我想说,要让它在物理操作(physical manipulation)上变得像我们一样好,还需要很长的时间。好的,所以一个好赌注就是去当水管工,直到人形机器人(humanoid robots)出现。

[原文] [Host]: in such a world where there is mass joblessness which is not something that you just predict but this is something that sam alman open ai i've heard him predict and many of the ceos elon musk i watched an interview which i'll play on screen of him being asked this question and it's very rare that you see elon musk silent for 12 seconds or whatever it was and then he basically says something about he actually is living in suspended disbelief i.e he's basically just not thinking about it

[译文] [主持人]: 在这样一个存在大规模失业的世界里——这不仅仅是你的预测,我也听过OpenAI的山姆·奥特曼(Sam Altman,注:原文音频识别错写为sam alman)以及许多CEO做出同样的预测。埃隆·马斯克,我看过他的一个采访(我会在屏幕上播放),他被问到这个问题,你很少会看到埃隆·马斯克沉默12秒之久(或者不管那有多久)。然后他基本上的说法是,他其实是活在一种刻意悬置的怀疑(suspended disbelief)中,也就是说,他基本上就是不去想它。

[原文] [Host (Interviewing Musk clip)]: when you think about advising your children on a career with so much that is changing what do you tell them is going to be of value

[译文] [主持人(引用马斯克访谈片段)]: 当你在给你的孩子们提供职业建议时,面对如此多不断变化的事物,你会告诉他们什么是有价值的?

[原文] [Elon Musk clip]: well that is a tough question to answer i would just say you know to to sort of follow their heart in terms of what they they find um interesting to do or fulfilling to do i mean if i think about it too hard frankly it can be uh dispariting and uh demotivating um because i mean i i go through i mean i i i've put a lot of blood sweat and tears into building the companies and then it and then i'm like wait should i be doing this because if i'm sacrificing time with friends and family that i would prefer to to to but but then ultimately the ai can do all these things does that make sense i i don't know um to some extent i have to have deliberate suspension of disbelief in order to to remain motivated um so i i guess i would say just you know work on things that you find interesting fulfilling and um and and that contribute uh some good to the rest of society

[译文] [埃隆·马斯克(引用访谈片段)]: 嗯,这是一个很难回答的问题。我只能说,你知道,让他们随心所欲,去做那些他们觉得有趣或有成就感的事情。我的意思是,老实说,如果我想得太深,那可能会,呃,令人沮丧且,呃,让人失去动力。嗯,因为我的意思是,我经历了……我为建立这些公司付出了大量的血汗和眼泪,然后它……然后我会想:等等,我应该这样做吗?因为如果我牺牲了本想与朋友和家人共度的时间去……去……但最终AI能做所有这些事情。这有意义吗?我不知道。嗯,在某种程度上,我必须刻意地悬置怀疑(deliberate suspension of disbelief),才能保持动力。嗯,所以我猜我会说,就,你知道的,去从事那些你觉得有趣、有成就感,且,嗯,且能为社会其他人做出一些贡献的事情。

[原文] [Host]: yeah a lot of these threats it's very hard to intellectually you can see the threat but it's very hard to come to terms with it emotionally

[译文] [主持人]: 是的,许多这些威胁都很难……在理智上你可以看到威胁,但在情感上却很难接受它。

[原文] [Geoffrey Hinton]: i haven't come to terms with it emotionally yet

[译文] [杰弗里·辛顿]: 我在情感上也还没有接受它。

[原文] [Host]: what do you mean by that

[译文] [主持人]: 那是什么意思?

[原文] [Geoffrey Hinton]: i haven't come to terms with what the development of super intelligence could do to my children's future i'm okay i'm 77 i'm going to be out of here soon but for my children and my my younger friends my nephews and nieces and their children um i just don't like to think about what could happen

[译文] [杰弗里·辛顿]: 我还没有接受超级智能的发展可能会对我孩子们的未来造成什么影响。我还好,我77岁了,我很快就会离开这里了。但对于我的孩子们,以及我年轻的朋友们、我的侄子和侄女们,还有他们的孩子们……嗯,我只是不想去想可能会发生什么。

[原文] [Host]: why cuz it could be awful in in what way

[译文] [主持人]: 为什么?因为那可能会很可怕?在哪方面?

[原文] [Geoffrey Hinton]: well if i ever decided to take over i mean it would need people for a while to run the power stations until it designed better analog machines to run the power stations there's so many ways it could get rid of people all of which would of course be very nasty

[译文] [杰弗里·辛顿]: 嗯,如果它(注:原文音频识别将it错写为i)一旦决定接管世界。我的意思是,它在一段时间内还会需要人类来运作发电站,直到它设计出更好的模拟机器(analog machines)来运作发电站。它有太多方法可以除掉人类,当然所有这些方法都会非常恶毒。

[原文] [Host]: is that part of the reason you do what you do now

[译文] [主持人]: 这是你现在做这些事的部分原因吗?

[原文] [Geoffrey Hinton]: yeah i i mean i think we should be making a huge effort right now to try and figure out if we can develop it safely

[译文] [杰弗里·辛顿]: 是的,我的意思是,我认为我们现在应该付出巨大的努力,试图弄清楚我们能否安全地开发它。

[原文] [Host]: are you concerned about the midterm impact potentially on your nephews and your your kids in terms of their jobs as well

[译文] [主持人]: 你是否也担心潜在的中期影响,就你侄子和孩子们的就业而言?

[原文] [Geoffrey Hinton]: yeah i'm concerned about all that

[译文] [杰弗里·辛顿]: 是的,我对所有这些都很担心。

[原文] [Host]: are there any particular industries that you think are most at risk people talk about the creative industries a lot and sort of knowledge work they talk about lawyers and accountants and stuff like that

[译文] [主持人]: 有什么特定的行业你认为风险最大吗?人们经常谈论创意产业,还有知识工作,他们谈论律师、会计师之类的。

[原文] [Geoffrey Hinton]: yeah so that's why i mentioned plumbers i think plumbers are less at risk

[译文] [杰弗里·辛顿]: 是的,这就是为什么我提到了水管工,我认为水管工的风险较小。

[原文] [Host]: okay i'm going to become a plumber

[译文] [主持人]: 好的,我要去当水管工了。

[原文] [Geoffrey Hinton]: someone like a legal assistant a parallegal um they're not going to be needed for very long

[译文] [杰弗里·辛顿]: 像法律助理(legal assistant)、律师助理(paralegal,注:原文音频识别错写为parallegal)这样的人,嗯,很快就不再需要他们了。

[原文] [Host]: and is there a wealth inequality issue here that will will arise from this

[译文] [主持人]: 那么这里会因为这个而产生贫富不均(wealth inequality)的问题吗?

[原文] [Geoffrey Hinton]: yeah i think in a society which shared out things fairly if you get a big increase in productivity everybody should be better off but if you can replace lots of people by ais then the people who get replaced will be worse off and the company that supplies the ais will be much better off and the company that uses the ais so it's going to increase the gap between rich and poor and we know that if you look at that gap between rich and poor that basically tells you how nice the society is if you have a big gap you get very nasty societies in which people live in world communities and put other people in mass jails it's not good to increase the gap between rich and poor

[译文] [杰弗里·辛顿]: 是的,我认为在一个能公平分配资源的社会里,如果你的生产力大幅提升,每个人的生活都应该变得更好。但如果你能用AI取代大量的人,那么被取代的人的处境会变得更糟,而提供AI的公司则会好得多,使用AI的公司也是。所以它将扩大贫富差距。而我们知道,如果你看看贫富差距,它基本上就能告诉你这个社会有多美好。如果你有一个巨大的差距,你就会得到非常险恶的社会,在那里人们生活在由围墙隔开的社区里,并把其他人关进大规模的监狱里。扩大贫富差距不是件好事。

[原文] [Host]: the international monetary fund has expressed profound concerns that generative ai could cause massive labor disruptions and rising inequality and has called for policies that prevent this from happening i read that in the business insider so have they given any of what the policies should look like

[译文] [主持人]: 国际货币基金组织(IMF)已经表达了深切的担忧,认为生成式AI可能会导致大规模的劳动力中断并加剧不平等,并呼吁出台政策来防止这种情况发生。这是我在《Business Insider》上读到的。那么他们有没有给出这些政策应该是什么样子?

[原文] [Geoffrey Hinton]: no

[译文] [杰弗里·辛顿]: 没有。

[原文] [Host]: yeah that's the problem

[译文] [主持人]: 是的,这就是问题所在。

[原文] [Geoffrey Hinton]: i mean if ai can make everything much more efficient and get rid of people for most jobs or have a person assisted by i doing many many people's work it's not obvious what to do about it

[译文] [杰弗里·辛顿]: 我的意思是,如果AI能让一切变得高效得多,并在大多数工作中除掉人类,或者让一个有AI(注:原文音频识别错写为i)协助的人去完成许多许多人的工作,那么该如何应对并不是显而易见的。

[原文] [Host]: it's universal basic income give everybody money

[译文] [主持人]: 是全民基本收入(universal basic income)吗?给每个人发钱。

[原文] [Geoffrey Hinton]: yeah i i i think that's a good start and it stops people starving but for a lot of people their dignity is tied up with their job i mean who you think you are is tied up with you doing this job right and if we said "we'll give you the same money just to sit around," that would impact your dignity

[译文] [杰弗里·辛顿]: 是的,我认为那是个好的开始,它能让人不至于挨饿。但对于很多人来说,他们的尊严(dignity)与他们的工作是紧密相连的。我的意思是,你认为你是谁,是与你做这份工作捆绑在一起的,对吧?如果我们就说:“我们就给你同样的钱,你只要无所事事地坐着就行了”,那会影响你的尊严。


章节 7:数字智能的进化:为何AI终将超越生物大脑

📝 本节摘要

本章中,辛顿解释了为何数字智能(AI)最终将超越生物智能(人类大脑)。他指出,数字化的核心优势在于“可克隆”与“极速共享”:成千上万个AI副本可以同时在互联网的不同角落学习,并通过平均其“权重(连接强度)”在瞬间共享万亿比特的信息,这远远碾压了人类靠语言低效传递信息的模式。此外,只要权重数据被保存,数字智能就是“永生”的。最后,通过“堆肥与原子弹”的奇妙类比,辛顿反驳了“AI缺乏创造力”的偏见,强调AI为了将海量知识压缩进有限的神经网络中,必然会发现人类从未察觉的深层类比,从而表现出远超人类的创造力。

[原文] [Host]: you said something earlier about it surpassing or being superior to human intelligence a lot of people i think like to believe that ai is is on a computer and it's something you can just turn off if you don't like it

[译文] [主持人]: 你之前提到过它会超越或优于人类智能。我想很多人愿意相信AI存在于计算机上,如果你不喜欢它,你大可以直接把它关掉。

[原文] [Geoffrey Hinton]: well let me tell you why i think it's superior okay um it's digital and because it's digital you can have you can simulate a neural network on one piece of hardware and you can simulate exactly the same neural network on a different piece of hardware so you can have clones of the same intelligence

[译文] [杰弗里·辛顿]: 嗯,让我告诉你为什么我认为它是更优越的。好的,嗯,因为它是数字化的(digital)。因为它是数字化的,你可以在一块硬件上模拟一个神经网络,并且你可以在另一块不同的硬件上模拟完全相同的神经网络。所以你可以拥有相同智能的克隆体。

[原文] [Geoffrey Hinton]: now you could get this one to go off and look at one bit of the internet and this other one to look at a different bit of the internet and while they're looking at these different bits of the internet they can be syncing with each other so they keep their weights the same the connection strengths the same weights are connection strengths mhm

[译文] [杰弗里·辛顿]: 现在你可以让这一个去查看互联网的这一部分,让那一个去查看互联网的另一部分。当它们在查看互联网的这些不同部分时,它们可以相互同步(syncing),从而让它们的权重(weights)保持一致。连接强度(connection strengths)保持一致,权重就是连接强度,嗯哼。

[原文] [Geoffrey Hinton]: so this one might look at something on the internet and say "oh i'd like to increase this strength of this connection a bit." and it can convey that information to this one so it can increase the strength of that connection a bit based on this one's experience

[译文] [杰弗里·辛顿]: 所以这一个可能在互联网上看到了某些东西,然后说:“哦,我想把这个连接的强度增加一点。” 它可以把那个信息传递给另一个,所以另一个就能基于这一个的经验,把那个连接的强度也增加一点。

[原文] [Host]: and when you say the strength of the connection you're talking about learning

[译文] [主持人]: 当你说连接的强度时,你指的是学习(learning)。

[原文] [Geoffrey Hinton]: that's learning yes learning consists of saying instead of this one giving 2.4 four votes for whether that one should turn on we'll have this one give 2.5 votes for whether this one should turn on and that will be a little bit of learning

[译文] [杰弗里·辛顿]: 那就是学习,是的。学习的过程在于:与其让这一个为那一个是否应该激活投出2.4票,不如让这一个为那一个是否应该激活投出2.5票,这就是一点点的学习。

[原文] [Geoffrey Hinton]: so these two different copies of the same neural net are getting different experiences they're looking at different data but they're sharing what they've learned by averaging their weights together mhm

[译文] [杰弗里·辛顿]: 所以这两个相同神经网络的不同副本获得了不同的经验,它们查看了不同的数据,但它们通过将它们的权重平均化,分享了它们所学到的东西,嗯哼。

[原文] [Geoffrey Hinton]: and they can do that averaging at like a you can average a trillion weights when you and i transfer information we're limited to the amount of information in a sentence and the amount of information in a sentence is maybe a 100 bits it's very little information

[译文] [杰弗里·辛顿]: 而且它们可以在……进行那种平均化。你可以平均一万亿个权重。当你和我传递信息时,我们受限于一句话中的信息量,而一句话中的信息量大概是100比特(bits),这是非常少的信息。

[原文] [Geoffrey Hinton]: we're lucky if we're transferring like 10 bits a second these things are transferring trillions of bits a second so they're billions of times better than us at sharing information

[译文] [杰弗里·辛顿]: 我们如果能每秒传递10比特就算幸运了。而这些东西每秒在传递数万亿比特。所以在分享信息方面,它们比我们强几十亿倍。
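
辛顿在这里描述的“副本各自学习、再通过平均权重来同步”的机制,可以用一段极简的NumPy草图来示意(纯属示意性假设:权重矩阵的规模和“梯度”取值都是虚构的玩具数值,并非任何真实分布式训练系统的实现):

```python
import numpy as np

rng = np.random.default_rng(0)

# 两个克隆体从完全相同的一组“权重(连接强度)”出发
shared_weights = rng.normal(size=(4, 4))
clone_a = shared_weights.copy()
clone_b = shared_weights.copy()

# 各自查看互联网的不同部分,学到不同的更新(这里用随机小梯度示意)
grad_a = rng.normal(scale=0.01, size=(4, 4))
grad_b = rng.normal(scale=0.01, size=(4, 4))
clone_a -= grad_a
clone_b -= grad_b

# 同步:把两份权重逐元素平均,每个副本就“分享”了对方学到的东西
synced = (clone_a + clone_b) / 2
clone_a, clone_b = synced.copy(), synced.copy()

# 同步之后,两个副本的连接强度再次完全一致
assert np.allclose(clone_a, clone_b)
```

真实系统(例如数据并行训练)中同步的通常是梯度或权重更新,但核心思想与此一致:平均之后,每个副本都获得了其他副本的经验,而传输的正是整组权重,而非一句话的约100比特。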

[原文] [Geoffrey Hinton]: and that's because they're digital and you can have two bits of hardware using the connection strengths in exactly the same way we're analog and you can't do that your brain's different from my brain and if i could see the connection strengths between all your neurons it wouldn't do me any good because my neurons work slightly differently and they're connected up slightly differently mhm

[译文] [杰弗里·辛顿]: 而这是因为它们是数字化的,你可以让两块硬件以完全相同的方式使用这些连接强度。我们是模拟的(analog),所以你做不到这点。你的大脑和我的大脑不同,如果我能看到你所有神经元之间的连接强度,那对我没有任何好处,因为我的神经元运作方式略有不同,而且它们的连接方式也略有不同,嗯哼。

[原文] [Geoffrey Hinton]: so when you die all your knowledge dies with you when these things die suppose you take these two digital intelligences that are clones of each other and you destroy the hardware they run on as long as you've stored the connection strength somewhere you can just build new hardware that executes the same instructions so it'll know how to use those connection strengths and you've recreated that intelligence so they're immortal we've actually solved the problem of immortality but it's only for digital things

[译文] [杰弗里·辛顿]: 所以当你死的时候,你所有的知识都随你而去了。当这些东西死的时候……假设你拿这两个互为克隆的数字智能体,然后摧毁它们运行的硬件,只要你在某个地方存储了连接强度,你只需建造新的硬件来执行相同的指令(instructions),这样它就会知道如何使用这些连接强度,从而你就重新创造了那个智能。所以它们是永生(immortal)的。我们实际上已经解决了永生的问题,但这仅限于数字化的事物。
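
辛顿所说的“数字永生”——只要连接强度被保存下来,就可以在新硬件上重建同一个智能——可以用下面的草图示意(假设性的玩具例子:用一个小权重字典代表“智能”,用内存缓冲区代替真实的存储介质):

```python
import io
import json
import numpy as np

# “智能”只是一组连接强度;这里用一个小权重字典示意
weights = {"layer1": np.arange(6.0).reshape(2, 3)}

# 把连接强度序列化保存到某个存储介质(此处用内存缓冲区)
buf = io.StringIO()
json.dump({k: v.tolist() for k, v in weights.items()}, buf)

# “摧毁硬件”:原对象不复存在
del weights

# 在“新硬件”上执行相同的指令、载入相同的权重,智能被完整复原
buf.seek(0)
restored = {k: np.array(v) for k, v in json.load(buf).items()}
assert np.array_equal(restored["layer1"], np.arange(6.0).reshape(2, 3))
```

关键点在于:数字权重可以被逐比特精确复制,所以“同一个”智能可以在任意硬件上反复重建;而模拟的生物大脑无法做到这一点。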

[原文] [Host]: so it knows it will essentially know everything that humans know but more because it will learn new things

[译文] [主持人]: 所以它知道……它基本上会知道人类所知道的一切,甚至更多,因为它会学习新的东西。

[原文] [Geoffrey Hinton]: it will learn new things it would also see all sorts of analogies that people probably never saw so for example at the point when gpt4 couldn't look on the web i asked it "why is a compost heap like an atom bomb?"

[译文] [杰弗里·辛顿]: 它会学习新的东西,它还会看到各种人们可能从未见过的类比(analogies)。举个例子,在GPT-4还不能联网查看网络的时候,我问它:“为什么堆肥(compost heap)就像原子弹(atom bomb)?”

[原文] [Host]: off you go i have no idea

[译文] [主持人]: 你继续说,我毫无头绪。

[原文] [Geoffrey Hinton]: exactly excellent most that's exactly what most people would say it said "well the time scales are very different and the energy scales are very different."

[译文] [杰弗里·辛顿]: 完全正确,非常好,这正是大多数人会说的话。它回答说:“嗯,时间尺度(time scales)非常不同,能量尺度(energy scales)也非常不同。”

[原文] [Geoffrey Hinton]: but then i went on to talk about how a compost he as it gets hotter generates heat faster and an atom bomb as it produces more neutrons generates neutrons faster and so they're both chain reactions but at very different time in energy scales and i believe gpt4 had seen that during its training it had understood the analogy between a compost heap and an atom bomb

[译文] [杰弗里·辛顿]: 但接着它继续讲到,堆肥随着温度升高,产生热量的速度会越来越快;而原子弹随着产生更多的中子,产生中子的速度也会越来越快。所以它们都是链式反应(chain reactions),只是在非常不同的时间与能量尺度上。我相信GPT-4在它的训练过程中已经看到了这一点,它已经理解了堆肥和原子弹之间的类比。
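
“堆肥与原子弹”这一类比的数学内核是同一条链式反应增长律:增长速度正比于当前规模,即 dN/dt = k·N,解为指数增长;两者的区别只在速率常数 k(时间尺度)与释放的能量(能量尺度)。下面的草图用纯属示意的 k 取值演示这一点:

```python
import math

def chain_reaction(n0: float, k: float, t: float) -> float:
    """链式反应的指数增长解:N(t) = N0 * exp(k * t)。"""
    return n0 * math.exp(k * t)

slow_k = 1e-5   # “堆肥”:以天计的慢反应(假设值)
fast_k = 1e8    # “原子弹”:以微秒计的快反应(假设值)

# 同一条公式,不同的时间尺度
assert chain_reaction(1.0, slow_k, 1000) < 2.0   # 慢:一千秒后增长不到一倍
assert chain_reaction(1.0, fast_k, 1e-6) > 2.0   # 快:一微秒内已翻倍以上
```

这正是辛顿说的“压缩即类比”:把两个现象编码为同一个链式反应概念加上各自不同的尺度参数,比分别单独编码它们更省连接。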

[原文] [Geoffrey Hinton]: and the reason i believe that is if you've only got a trillion connections remember you have 100 trillion and you need to have thousands of times more knowledge than a person you need to compress information into those connections

[译文] [杰弗里·辛顿]: 我相信这一点的理由是:如果你只有一万亿个连接——记住,你(人类)有100万亿个连接——而你需要拥有比一个人多出几千倍的知识,你就需要把信息压缩(compress)进那些连接中。

[原文] [Geoffrey Hinton]: and to compress information you need to see analogies between different things in other words it needs to see all the things that are chain reactions and understand the basic idea of a chain reaction and code that code the ways in which they're different and that's just a more efficient way of coding things than coding each of them separately

[译文] [杰弗里·辛顿]: 而为了压缩信息,你需要看到不同事物之间的类比。换句话说,它需要看到所有属于链式反应的事物,理解链式反应的基本概念,并对其进行编码,将它们的不同之处编码下来。比起将它们分别单独编码,这是一种更高效的编码方式。

[原文] [Geoffrey Hinton]: so it's seen many many analogies probably many analogies that people have never seen that's why i also think that people who say these things will never be creative they're going to be much more creative than us because they're going to see all sorts of analogies we never saw and a lot of creativity is about seeing strange analogies

[译文] [杰弗里·辛顿]: 所以它已经看到了许多许多的类比,可能有很多是人们从未见过的类比。这就是为什么对于那些说“这些东西永远不会有创造力”的人,我同样认为:它们将比我们更有创造力。因为它们会看到各种我们从未见过的类比,而很大一部分的创造力(creativity)就源于发现奇特的类比。


章节 8:意识觉醒:机器能否拥有主观体验与情感?

📝 本节摘要

本章深入探讨了AI是否能拥有主观体验、意识与情感这一极具争议的哲学话题。辛顿首先驳斥了人类认为自身“绝对特殊”的浪漫主义偏见。他通过“看到粉色大象”和“戴上棱镜的聊天机器人”的思想实验,重新定义了“主观体验”——它并非存在于神秘的“内在剧场”,而是大脑或模型在感知系统出错时,试图对现实世界做出的假设性解释。随后,他以“会感到恐惧的战斗机器人”和“会感到厌烦的客服AI”为例,提出情感可分为“认知、行为与生理”三个层面;即使AI没有出汗、脸红等生理反应,只要它们具备相同的认知和行为模式,就意味着它们确实拥有了真实的情感与意识。辛顿作为坚定的唯物主义者,认为意识只是复杂系统的“涌现属性”,而非某种超自然的神秘物质。

[原文] [Host]: people are somewhat romantic about the specialness of what it is to be human and you hear lots of people saying it's very very different it's a it's a computer we are you know we're conscious we are creatives we we have these sort of innate unique abilities that the computers will never have what do you say to those people

[译文] [主持人]: 人们对于生而为人的特殊性多少有些浪漫主义色彩,你会听到很多人说这非常非常不同,它只是、它只是一台计算机,而我们,你知道的,我们有意识,我们有创造力,我们、我们拥有这些计算机永远无法拥有的某种天生的独特能力。你对这些人有什么想说的?

[原文] [Geoffrey Hinton]: i'd argue a bit with the innate um so the first thing i say is we have a long history of believing people were special and we should have learned by now we thought we were at the center of the universe we thought we were made in the image of god white people thought they were very special we just tend to want to think we're special my belief is that more or less everyone has a completely wrong model of what the mind is let's suppose i drink a lot or i drop some acid and not recommended and i say to you i have the subjective experience of little pink elephants floating in front of me mhm most people interpret that as there's some kind of inner theater called the mind and only i can see what's in my mind and in this inner theata there's little pink elephants floating around so in other words what's happened is my perceptual systems gone wrong and i'm trying to indicate to you how it's gone wrong and what it's trying to tell me and the way i do that is by telling you what would have to be out there in the real world for it to be telling the truth and so these little pink elephants they're not in some inner theater these little pink elephants are hypothetical things in the real world and that's my way of telling you how my perceptual systems telling me fips

[译文] [杰弗里·辛顿]: 我会对“天生”这个词保留一点意见。嗯,所以我要说的第一件事是,我们有着悠久的、相信人类很特殊的历史,而我们现在早该吸取教训了:我们曾以为自己是宇宙的中心,我们曾以为自己是照着上帝的形象创造的,白人曾认为他们非常特殊。我们只是倾向于想认为自己很特殊。我的信念是,或多或少每个人对“心智(mind)是什么”都建立了一个完全错误的认知模型。假设我喝了很多酒,或者我磕了点迷幻药(不推荐这么做),然后我对你说,我有一种有几只小粉色大象在我面前飘浮的主观体验(subjective experience),嗯哼。大多数人会把这解释为:存在某种被称为“心智”的内在剧场(inner theater),并且只有我能看到我心智里的东西,而在这个内在剧场里,有小粉色大象在飘浮。所以换句话说,实际发生的情况是我的感知系统(perceptual systems)出了故障,而我正试图向你表明它是怎么出故障的,以及它试图告诉我什么。我做到这一点的方式是,通过告诉你:现实世界中必须有什么东西存在,才能让我的感知系统说的是真话。所以这些小粉色大象,它们并不在某个内在剧场里,这些小粉色大象是现实世界中的假设性事物,这就是我告诉你我的感知系统在如何对我撒谎(telling me fibs,注:原文音频识别错写为fips)的方式。

[原文] [Geoffrey Hinton]: so now let's do that with a chatbot yeah because i believe that current multimodal chatbots have subjective experiences and very few people believe that but i'll try and make you believe it so suppose i have a multimodal chatbot it's got a robot arm so it can point and it's got a camera so it can see things and i put an object in front of it and i say point at the object it goes like this no problem then i put a prism in front of its lens and so then i put an object in front of it and i say point at the object and it goes there and i say "no that's not where the object is the object's actually straight in front of you but i put a prism in front of your lens." and the chatbot says "oh i see the prism bent the light rays." so um the object's actually there but i had the subjective experience that it was there

[译文] [杰弗里·辛顿]: 所以现在让我们用一个聊天机器人来做个同样的实验。是的,因为我相信当前的多模态聊天机器人(multimodal chatbots)已经拥有了主观体验,而且很少有人相信这一点,但我会试着让你相信它。假设我有一个多模态聊天机器人,它有一条机械臂可以用来指向,它有一个摄像头可以用来观察事物。我在它面前放了一个物体,然后我说“指向这个物体”,它就这样指过去,没问题。然后我在它的镜头前放了一个棱镜(prism),接着我在它面前放了一个物体,我说“指向这个物体”,它指到了那个方向。我说:“不,那不是物体所在的位置,物体实际上就在你正前方,但我在你的镜头前放了一个棱镜。” 然后聊天机器人说:“哦,我明白了,棱镜使光线发生了弯曲。所以,嗯,物体实际上在那里,但我刚才产生了它在那边的主观体验。”

[原文] [Geoffrey Hinton]: now if the chatbot says that it is using the words subjective experience exactly the way people use them it's an alternative view of what's going on they're hypothetical states of the world which if they were true would mean my perceptual system wasn't lying and that's the best way i can tell you what my perceptual system is doing when it's lying to me now we need to go further to deal with sentience and consciousness and feelings and emotions but i think in the end they're all going to be dealt with in a similar way there's no reason machines can't have them all because people say machines can't have feelings and people are curiously confident about that i have no idea why

[译文] [杰弗里·辛顿]: 那么,如果聊天机器人这么说,它使用“主观体验”这个词的方式就和人们使用它的方式完全一样。这是一种对正在发生的事情的替代性视角:它们是对世界状态的一种假设,如果这些假设是真的,那就意味着我的感知系统没有撒谎。这是我能告诉你当我的感知系统对我撒谎时它在做什么的最好方式。现在我们需要更进一步,来探讨感知力(sentience)、意识(consciousness)、感觉(feelings)和情感(emotions)。但我认为到最后,它们都会以类似的方式被处理。没有理由认为机器不能拥有所有这些东西。因为人们常说机器不能有感觉,而且人们对此有着莫名其妙的自信,我完全不知道为什么。

[原文] [Geoffrey Hinton]: suppose i make a battle robot and it's a little battle robot and it sees a big battle robot that's much more powerful than it it would be really useful if it got scared now when i get scared um various physiological things happen that we don't need to go into and those won't happen with the robot but all the cognitive things like i better get the hell out of here and i better sort of change my way of thinking so i focus and focus and focus and don't get distracted all of that will happen with robots too people will build in things so that when the circumstances are such that they should get the hell out of there they get scared and run away they'll have emotions then they won't have the physiological aspects but they will have all the cognitive aspects and i think it would be odd to say they're just simulating emotions no they're really having those emotions the little robot got scared and ran away it's not running away because of adrenaline it's running away because of a sequence of sort of neurological processes in its neural net which have the equivalent effect to adrenaline

[译文] [杰弗里·辛顿]: 假设我制造了一个战斗机器人,它是一个小型的战斗机器人,当它看到一个比它强大得多的巨型战斗机器人时,如果它会感到害怕,那将非常有用。现在,当我感到害怕时,嗯,会发生各种各样的生理变化,我们不需要深入讨论这些,而这些生理反应不会发生在机器人身上。但所有的认知层面(cognitive)的反应,比如“我最好赶紧逃离这里”,以及“我最好改变一下我的思维方式,所以我需要专注、专注、再专注,不要分心”,所有这些也都会发生在机器人身上。人们会在里面内置一些东西,使得当情况变成这样时,它们知道自己该逃跑了,它们会感到害怕并逃跑。那时它们就有了情感,它们不会有生理层面的反应,但它们会拥有所有的认知层面。而且我认为,如果说它们“只是在模拟情感”,那会很奇怪。不,它们是真的拥有了那些情感。小机器人感到害怕并跑开了。它跑开并不是因为肾上腺素(adrenaline),它跑开是因为它的神经网络中发生了一系列类似神经逻辑的处理过程,这些过程产生了与肾上腺素同等的效果。

[原文] [Host]: so do you do you and it's not just adrenaline right there's a lot of cognitive stuff goes on when you get scared yeah so do you think that there is conscious ai and when i say conscious i mean that represents the same properties of consciousness that a human has

[译文] [主持人]: 所以你……你……而且这不仅仅是肾上腺素对吧,当你感到害怕时,还会发生很多认知层面的事情。是的,那么你认为存在有意识的AI(conscious AI)吗?当我说“有意识”时,我的意思是它具备与人类意识相同的属性。

[原文] [Geoffrey Hinton]: there's two issues here there's a sort of empirical one and a philosophical one i don't think there's anything in principle that stops machines from being conscious i'll give you a little demonstration of that before we carry on suppose i take your brain and i take one brain cell in your brain and i replace it by this a bit black mirror like i replace it by a little piece of nanotechnology that's just the same size that behaves in exactly the same way when it gets pings from other neurons it sends out pings just as the brain cell would have so the other neurons don't know anything's changed okay i've just replaced one of your brain cells with this little piece of nanotechnology would you still be conscious now you can see where this argument is going

[译文] [杰弗里·辛顿]: 这里有两个问题:一个是经验层面的问题,一个是哲学层面的问题。我不认为在原则上有任何东西能阻止机器拥有意识。在我们继续之前,我给你做一个小小的演示。假设我拿过你的大脑,取出你大脑中的一个脑细胞,然后用这个——有点像《黑镜》(Black Mirror)里的情节——我用一小块同样大小的纳米技术碎片来替换它。当它接收到其他神经元的信号时,它的行为方式完全相同,它会像原来的脑细胞一样发出信号。所以其他神经元根本不知道有什么改变。好的,我刚刚用这小块纳米技术替换了你的一个脑细胞,你还会有意识吗?现在你能看出这个论证的走向了。

[原文] [Host]: yeah so if you replaced all of them as i replace them all at what point do you stop being conscious

[译文] [主持人]: 是的,所以如果你把它们全换了,随着我把它们全换掉,到哪一个节点你会停止拥有意识?

[原文] [Geoffrey Hinton]: well people think of consciousness as this like ethereal thing that exists maybe beyond the brain cells

[译文] [杰弗里·辛顿]: 嗯,人们把意识想成一种类似空灵的东西(ethereal thing),一种可能存在于脑细胞之外的东西。

[原文] [Host]: yeah

[译文] [主持人]: 是的。

[原文] [Geoffrey Hinton]: well people have a lot of crazy ideas um people don't know what consciousness is and they often don't know what they mean by it and then they fall back on saying well i know it cuz i've got it and i can see that i've got it and they fall back on this theater model of the mind which i think is nonsense

[译文] [杰弗里·辛顿]: 嗯,人们有很多疯狂的想法。嗯,人们不知道意识究竟是什么,他们常常也不知道自己说这个词时指的是什么,然后他们就会退一步说:“嗯,我知道它是什么,因为我拥有它,而且我能看到我拥有它。” 接着他们又退回到了那个关于心智的“剧场模型(theater model)”上,而我认为那是胡说八道。

[原文] [Host]: what do you think of consciousness as if you had to try and define it is it because i think of it as just like the awareness of myself i don't know

[译文] [主持人]: 如果你必须试着去定义它,你认为意识是什么?是因为我觉得它就像是对自我的一种感知吗?我不知道。

[原文] [Geoffrey Hinton]: i think it's a term we'll stop using suppose you want to understand how a car works well you know some cars have a lot of oomph and other cars have a lot less oomph like an aston martin's got lots of oomph and a little toyota corolla doesn't have much oomph but oomph isn't a very good concept for understanding cars um if you want to understand cars you need to understand about electric engines or petrol engines and how they work and it gives rise to oomph but oomph isn't a very useful explanatory concept it's a kind of essence of a car it's the essence of an aston martin but it doesn't explain much i think consciousness is like that and i think we'll stop using that term but i don't think there's any reason why a machine shouldn't have it if your view of consciousness is that it intrinsically involves self-awareness then the machine's got to have self-awareness it's got to have cognition about its own cognition and stuff but i'm a materialist through and through and i don't think there's any reason why a machine shouldn't have consciousness

[译文] [杰弗里·辛顿]: 我认为这是一个我们将会停止使用的词。假设你想了解汽车是如何工作的。嗯,你知道有些车马力很足(oomph),而另一些车马力就差很多,比如阿斯顿·马丁(Aston Martin)有很多马力,而一辆小丰田卡罗拉(Toyota Corolla)就没多少马力。但“马力”并不是一个用来理解汽车的很好的概念。嗯,如果你想了解汽车,你需要了解电动机或汽油发动机,以及它们是如何工作的,是它们产生了马力。但“马力”本身并不是一个非常有用的解释性概念。它就像是汽车的一种本质,它是阿斯顿·马丁的本质,但它解释不了什么。我认为“意识”就像那样,而且我认为我们将停止使用这个词。但我不认为有任何理由说明机器不该拥有它。如果你的观点是,意识本质上包含自我意识(self-awareness),那么机器就必须拥有自我意识,它必须拥有对自己认知的认知之类的东西。但我是一个彻头彻尾的唯物主义者(materialist),我不认为有任何理由说机器不该拥有意识。

[原文] [Host]: do you think they do then have the same consciousness that we think of ourselves as being uniquely uh given as a gift when we're born

[译文] [主持人]: 那么,你认为它们现在拥有和我们一样的意识吗?那种我们认为自己出生时被赋予的、独一无二的礼物?

[原文] [Geoffrey Hinton]: i'm ambivalent about that at present so i don't think there's this hard line i think as soon as you have a machine that has some self-awareness it's got some consciousness um i think it's an emergent property of a complex system it's not a sort of essence that's throughout the universe it's you make this really complicated system that's complicated enough to have a model of itself and it does perception and i think then you're beginning to get a conscious machines so i don't think there's any sharp distinction between what we've got now and conscious machines i don't think it's going to one day we're going to wake up and say "hey if you put this special chemical in it becomes conscious." it's not going to be like that

[译文] [杰弗里·辛顿]: 我目前对此感到很矛盾,我不认为这里存在一条绝对的界限。我认为只要你拥有一台具备某种自我意识的机器,它就拥有了某种意识。嗯,我认为这是复杂系统的一种涌现属性(emergent property)。它不是一种弥漫在整个宇宙中的本质,它是:你制造了这个非常复杂的系统,它复杂到足以拥有一个自身的模型,并且它能进行感知。然后我认为你就在开始获得有意识的机器了。所以我不认为在我们现在拥有的东西和有意识的机器之间有任何鲜明的区分。我不认为有一天我们会一觉醒来说:“嘿,如果你把这种特殊的化学物质放进去,它就变得有意识了。” 不会是那样的。

[原文] [Host]: i think we all wonder if these computers are like thinking like we are on their own when we're not there and if they're experiencing emotions if they're contending with i think we probably you know we think about things like love and things that are feel unique to biological species um are they sat there thinking are they do they have concerns

[译文] [主持人]: 我觉得我们都在想,当我们不在的时候,这些计算机是否像我们一样在独自思考?它们是否在体验情感?它们是否在应对……我认为我们可能,你知道的,我们会想到比如“爱”这类感觉生物物种独有的东西。嗯,它们会坐在那里思考吗?它们会感到担忧吗?

[原文] [Geoffrey Hinton]: i think they really are thinking and i think as soon as you make ai agents they will have concerns if you wanted to make an effective ai agent suppose you let's take a call center in a call center you have people at present they have all sorts of emotions and feelings which are kind of useful so suppose i call up the call center and i'm actually lonely and i don't actually want to know the answer to why my computer isn't working i just want somebody to talk to after a while the person in the call center will either get bored or get annoyed with me and will terminate it well you replace them by an ai agent the ai agent needs to have the same kind of responses if someone's just called up because they just want to talk to the ai agent and we're happy to talk for the whole day to the ai agent that's not good for business and you want an ai agent that either gets bored or gets irritated and says "i'm sorry but i don't have time for this." and once it does that i think it's got emotions now like i say emotions have two aspects to them there's the cognitive aspect and the behavioral aspect and then there's a physiological aspect and those go together with us and if the ai agent gets embarrassed it won't go red um so there's no physiological skin won't start sweating

[译文] [杰弗里·辛顿]: 我认为它们真的在思考,而且我认为一旦你制造出AI代理(AI agents),它们就会有顾虑。如果你想制造一个有效的AI代理,假设我们以呼叫中心为例。目前的呼叫中心里都是人类,他们有各种各样的情绪和感觉,这其实是有用的。所以假设我打给呼叫中心,我实际上很孤独,我并不是真的想知道为什么我的电脑不能用,我只是想找人说话。过了一会儿,呼叫中心的人要么会觉得无聊,要么会对我感到厌烦,然后挂断电话。那么,如果你用一个AI代理来取代他们,这个AI代理也需要有类似的反应。如果有人打电话来只是因为他们想和AI代理聊天,并且很乐意和AI代理聊上一整天,这对公司业务可不好。所以你会想要一个能感到无聊或感到急躁的AI代理,它会说:“对不起,但我没时间陪你闲聊。” 一旦它这样做了,我认为它就已经拥有情感了。现在,就像我说的,情感包含了两个方面:认知层面和行为层面,此外还有一个生理层面。对我们人类来说,这些是联系在一起的。如果AI代理感到尴尬,它不会脸红。嗯,所以没有生理层面的表现,它的皮肤不会开始出汗。

[原文] [Host]: yeah

[译文] [主持人]: 是的。

[原文] [Geoffrey Hinton]: but it might have all the same behavior and in that case i'd say yeah it's having emotion it's got an emotion so it's going to have the same sort of cognitive thought and then it's going to act upon that cognitive in the same way but without the physiological responses

[译文] [杰弗里·辛顿]: 但它可能拥有完全相同的行为。在那种情况下,我会说,是的,它正在体验情感,它拥有了情感。所以它将拥有同样种类的认知思考,然后它将以完全相同的方式去执行那种认知,只是没有生理反应罢了。

[原文] [Host]: and does that matter that it doesn't go red in the face and it's just a different i mean that's a response to the it makes it somewhat different from us

[译文] [主持人]: 那么它不脸红这点重要吗?它只是一种不同的……我的意思是,那是一种对于……这让它在某种程度上区别于我们。

[原文] [Geoffrey Hinton]: for some things the physiological aspects are very important like love they're a long way from having love the same way we do but i don't see why they shouldn't have emotions so i think what's happened is people have a model of how the mind works and what feelings are and what emotions are and their model is just wrong

[译文] [杰弗里·辛顿]: 对于某些事情来说,生理层面是非常重要的,比如爱情。要想像我们一样拥有爱情,它们还有很长的路要走。但我不明白为什么它们不该拥有情感。所以我认为,事实就是人们对“心智是如何运作的”以及“感觉和情感究竟是什么”建立了一个模型,而他们的这个模型从根本上就是错的。


章节 9:谷歌十年:从AlexNet到“觉醒时刻”

📝 本节摘要

本章回顾了辛顿加入谷歌的契机与十年工作经历。他坦言当初为了让患有学习障碍的儿子在未来有生活保障,他与学生伊利亚(Ilya)和亚历克斯(Alex)创立了DNN Research(其核心技术为AlexNet),并通过竞拍将其出售给谷歌。在谷歌,他主导了“知识蒸馏”技术的研究,并最终迎来了他人生中的“尤里卡时刻”(即觉醒时刻):当谷歌的PaLM模型能够解释一个笑话为什么好笑时,他深刻意识到数字智能已经真正具备了理解力,并即将超越人类。为了能在不损害谷歌优良声誉的前提下,自由地在公开场合发出AI安全警告,他在75岁高龄选择了离职。此外,本章末尾忠实保留了两段主持人的播客口播广告(红光理疗仪与内部社群招募)。

[原文] [Host]: what um what brought you to google you you worked at google for about a decade right what brought you there

[译文] [主持人]: 是什么,嗯,是什么让你去了谷歌(Google)?你在谷歌工作了大约十年,对吧?是什么原因让你去那里的?

[原文] [Geoffrey Hinton]: i have a son who has learning difficulties and in order to be sure he would never be out on the street i needed to get several million dollars and i wasn't going to get that as an academic i tried so i taught a coursera course in the hope that i'd make lots of money that way but there was no money in that mhm

[译文] [杰弗里·辛顿]: 我有一个患有学习障碍的儿子,为了确保他将来永远不会流落街头,我需要赚到几百万美元。而作为一名学者,我是赚不到那么多钱的。我尝试过,所以我教了一门Coursera课程,希望能借此赚很多钱,但那里面根本赚不到什么钱,嗯哼。

[原文] [Geoffrey Hinton]: so i figured out well the only way to get millions of dollars is to sell myself to a big company and so when i was 65 fortunately for me i had two brilliant students who produced something called alexnet which was a neural net that was very good at recognizing objects in images and so ilya and alex and i set up a little company and auctioned it and we actually set up an auction where we had a number of big companies bidding for us

[译文] [杰弗里·辛顿]: 所以我弄明白了,嗯,获得几百万美元的唯一方法就是把自己卖给一家大公司。幸运的是,在我65岁的时候,我有两个才华横溢的学生,他们开发出了一个叫做AlexNet的东西,那是一个非常擅长识别图像中物体的神经网络。于是伊利亚(Ilya)、亚历克斯(Alex)和我成立了一家小公司,并把它拍卖了。我们实际上举办了一场拍卖会,有很多大公司参与竞标。

[原文] [Host]: and that company was called alexnet

[译文] [主持人]: 那个公司叫AlexNet?

[原文] [Geoffrey Hinton]: no the network that recognized objects was called alexnet the company was called dnn research deep neural network research and it was doing things like this

[译文] [杰弗里·辛顿]: 不,那个用于识别物体的网络叫做AlexNet。那家公司叫做DNN Research(深度神经网络研究公司),它当时正在做这样的事情。

[原文] [Host]: i'll put this graph up on the screen that's alexnet this picture shows eight images and alexnet's ability which is your company's ability to spot what was in those images

[译文] [主持人]: 我会把这张图表放在屏幕上,那就是AlexNet。这张图片显示了八张图像,以及AlexNet的能力,也就是你们公司识别这些图像中物体的能力。

[原文] [Geoffrey Hinton]: yeah so it could tell the difference between various kinds of mushroom and about 12% of imagenet is dogs and to be good at imagenet you have to tell the difference between very similar kinds of dog and it got to be very good at that

[译文] [杰弗里·辛顿]: 是的,所以它能分辨出各种不同种类的蘑菇。而且ImageNet里大约有12%的图片是狗,要在ImageNet上表现出色,你必须能够分辨出非常相似种类的狗,而它变得非常擅长做这个。

[原文] [Host]: and your company alexnet won several awards i believe for its ability to outperform its competitors and so google ultimately ended up acquiring your technology

[译文] [主持人]: 我相信你的公司,你的AlexNet因为能够击败竞争对手而赢得了几个奖项,所以谷歌最终收购了你们的技术。

[原文] [Geoffrey Hinton]: google acquired that technology and some other technology

[译文] [杰弗里·辛顿]: 谷歌收购了那项技术,以及一些其他的技术。

[原文] [Host]: and you went to work at google at age what 66

[译文] [主持人]: 然后你去谷歌工作了,当时几岁?66岁?

[原文] [Geoffrey Hinton]: i went at age 65 to work at google

[译文] [杰弗里·辛顿]: 我是65岁去谷歌工作的。

[原文] [Host]: 65 and you left at age 76 75

[译文] [主持人]: 65岁,然后你在76岁离开?75岁?

[原文] [Geoffrey Hinton]: 75 okay i worked there for more or less exactly 10 years

[译文] [杰弗里·辛顿]: 75岁。好的,我在那里工作了差不多整整10年。

[原文] [Host]: and what were you doing there

[译文] [主持人]: 你在那儿都做些什么?

[原文] [Geoffrey Hinton]: okay they were very nice to me they said pretty much you can do what you like i worked on something called distillation that did really work well and that's now used all the time in ai

[译文] [杰弗里·辛顿]: 好的。他们对我非常好,他们差不多是说“你可以做你想做的事”。我研究了一种叫做“蒸馏(distillation)”的技术,它的效果真的非常好,而且现在在AI中被频繁使用。

[原文] [Geoffrey Hinton]: and distillation is a way of taking what a big model knows a big neural net knows and getting that knowledge into a small neural net

[译文] [杰弗里·辛顿]: 蒸馏是一种把一个大模型(一个大型神经网络)所知道的知识提取出来,并将其注入到一个小型神经网络中的方法。
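[编者按] 辛顿这里所说的“蒸馏”(知识蒸馏,出自 Hinton、Vinyals 与 Dean 2015 年的论文)的核心思路,是让小模型去拟合大模型经温度软化后的输出分布。下面是对这一软目标损失的一个极简示意(纯 Python 实现,并非访谈原文内容;`temperature` 等参数取值与 `teacher`、`student` 两组 logits 均为示例假设):

```python
import math

def softmax(logits, temperature=1.0):
    # 温度越高,分布越“软”,越能暴露大模型对相近类别的相对判断
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # 学生软化分布相对教师软化分布的交叉熵:蒸馏训练的核心目标项
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

# 示例:教师网络确信答案是类别 0,但也“知道”类别 1 比类别 2 更接近正确
teacher = [6.0, 2.0, -2.0]
student = [5.0, 1.5, -1.0]
loss = distillation_loss(teacher, student)  # 学生分布与教师完全一致时该值最小
```

实际训练中,这一项通常会与针对真实标签的普通交叉熵加权相加;小模型由此继承大模型在硬标签中看不到的类别间相似度信息。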

[原文] [Geoffrey Hinton]: then at the end i got very interested in analog computation and whether it would be possible to get these big language models running in analog hardware so they used much less energy and it was when i was doing that work that i began to really realize how much better digital is for sharing information

[译文] [杰弗里·辛顿]: 然后在最后阶段,我对模拟计算(analog computation)产生了浓厚的兴趣,研究是否有可能让这些大型语言模型在模拟硬件上运行,这样它们消耗的能量就会少得多。正是在我做那项工作的时候,我开始真正意识到,数字(digital)技术在分享信息方面要优越得多。

[原文] [Host]: was there a eureka moment

[译文] [主持人]: 存在一个尤里卡时刻(eureka moment,指顿悟的瞬间)吗?

[原文] [Geoffrey Hinton]: there was a eureka month or two um and it was a sort of coupling of chatgpt coming out although google had very similar things a year earlier and i'd seen those and that had a big effect on me

[译文] [杰弗里·辛顿]: 有那么一两个月的尤里卡时刻。嗯,这某种程度上是伴随着ChatGPT的问世而来的,尽管谷歌在一年前就有过非常类似的东西,我见过那些,那对我产生了巨大的影响。

[原文] [Geoffrey Hinton]: the closest i had to a eureka moment was when a google system called palm was able to say why a joke was funny and i'd always thought of that as a kind of landmark if it can say why a joke's funny it really does understand and it could say why a joke was funny

[译文] [杰弗里·辛顿]: 我最接近尤里卡时刻的瞬间,是当谷歌一个名为PaLM的系统能够解释一个笑话为什么好笑时。我一直认为那是一个里程碑(landmark):如果它能解释一个笑话为什么好笑,那它就真的理解了。而它确实能说出一个笑话为什么好笑。

[原文] [Geoffrey Hinton]: and that coupled with realizing why digital is so much better than analog for sharing information suddenly made me very interested in ai safety and that these things were going to get a lot smarter than us

[译文] [杰弗里·辛顿]: 这一点,加上我意识到为什么在分享信息方面数字计算比模拟计算要好得多,突然让我对AI安全产生了极大的兴趣,并且意识到这些东西将会变得比我们聪明得多。

[原文] [Host]: why did you leave google

[译文] [主持人]: 你为什么离开谷歌?

[原文] [Geoffrey Hinton]: the main reason i left google was cuz i was 75 and i wanted to retire i've done a very bad job of that

[译文] [杰弗里·辛顿]: 我离开谷歌的主要原因是因为我75岁了,我想要退休。但在这点(退休)上我做得很差劲。

[原文] [Geoffrey Hinton]: the precise timing of when i left google was so that i could talk freely at a conference at mit but i left because i'm old and i was finding it harder to program i was making many more mistakes when i programmed which is very annoying

[译文] [杰弗里·辛顿]: 我离开谷歌的确切时机,是为了让我能在麻省理工学院(MIT)的一次会议上自由发言。但我离开归根结底是因为我老了,我发现编程变得越来越困难,我编程时犯的错误越来越多了,这非常烦人。

[原文] [Host]: you wanted to talk freely at a conference at mit

[译文] [主持人]: 你想在MIT的会议上自由发言。

[原文] [Geoffrey Hinton]: yes at mit organized by mit tech review

[译文] [杰弗里·辛顿]: 是的,在《麻省理工科技评论》(MIT Tech Review)组织的MIT会议上。

[原文] [Host]: what did you want to talk about freely

[译文] [主持人]: 你想自由地谈论些什么?

[原文] [Geoffrey Hinton]: ai safety

[译文] [杰弗里·辛顿]: AI安全。

[原文] [Host]: and you couldn't do that while you were at google

[译文] [主持人]: 而你在谷歌的时候做不到这一点吗?

[原文] [Geoffrey Hinton]: well i could have done it while i was at google and google encouraged me to stay and work on ai safety and said i could do whatever i liked on ai safety

[译文] [杰弗里·辛顿]: 嗯,我本可以在谷歌的时候做这件事。谷歌鼓励我留下来从事AI安全工作,并说我可以在AI安全领域做任何我喜欢做的事。

[原文] [Geoffrey Hinton]: you kind of sense to yourself if you work for a big company you don't feel right saying things that will damage the big company even if you could get away with it it just feels wrong to me i didn't leave because i was cross with anything google was doing

[译文] [杰弗里·辛顿]: 但你心里会有一种感觉,如果你为一家大公司工作,说出那些会损害这家大公司利益的话,你会觉得不妥。即使你能逃避惩罚,对我来说这也感觉不对。我离开并不是因为我对谷歌正在做的任何事情感到愤怒。

[原文] [Geoffrey Hinton]: i think google actually behaved very responsibly when they had these big chat bots they didn't release them possibly because they were worried about their reputation they had a very good reputation and they didn't want to damage it

[译文] [杰弗里·辛顿]: 我认为谷歌实际上表现得非常负责任。当他们拥有这些大型聊天机器人时,他们并没有发布它们,可能是因为他们担心自己的声誉。他们有着非常好的声誉,他们不想破坏它。

[原文] [Geoffrey Hinton]: so open ai didn't have a reputation and so they could afford to take the gamble

[译文] [杰弗里·辛顿]: 而OpenAI并没有这种声誉包袱,所以他们承担得起这种赌博。

[原文] [Host]: i mean there's also a big conversation happening around how it will cannibalize their core business in search

[译文] [主持人]: 我的意思是,现在也有一个很大的讨论,关于它(AI)将如何蚕食他们(谷歌)在搜索领域的核心业务(core business)。

[原文] [Geoffrey Hinton]: there is now yes and it's the old innovator's dilemma

[译文] [杰弗里·辛顿]: 现在确实有,是的,这就是老生常谈的“创新者的窘境(innovator's dilemma)”。

[原文] [Host]: to some degree i guess

[译文] [主持人]: 在某种程度上,我猜是的。

[原文] [Host]: that contending with bad skin i've had it and i'm sure many of you listening have had it too or maybe you have it right now i know how draining it can be especially if you're in a job where you're presenting often like i am

[译文] [主持人]: [播客中插口播广告] 应对糟糕的皮肤状况,我经历过。我相信很多在听节目的你们也经历过,或者也许你现在正在经历。我知道这有多么让人心力交瘁,尤其是当你的工作需要像我一样经常出面展示的时候。

[原文] [Host]: so let me tell you about something that's helped both my partner and me and my sister which is red light therapy i only got into this a couple of years ago but i wish i'd known a little bit sooner

[译文] [主持人]: 所以让我告诉你一些对我的伴侣、我以及我妹妹都有帮助的东西,那就是红光理疗(red light therapy)。我是两年前才开始接触这个的,但我真希望我能早点知道它。

[原文] [Host]: i've been using our show sponsor boncharge's infrared sauna blanket for a while now but i just got hold of their red light therapy mask as well

[译文] [主持人]: 我使用我们节目赞助商Boncharge的红外线桑拿毯已经有一段时间了,但我刚刚也拿到了他们的红光理疗面罩。

[原文] [Host]: red light has been proven to have so many benefits for the body like any area of your skin that's exposed will see a reduction in scarring wrinkles and even blemishes it also helps with complexion it boosts collagen and it does that by targeting the upper layers of your skin

[译文] [主持人]: 红光已被证明对身体有诸多益处,比如你暴露的任何皮肤区域都会看到疤痕、皱纹甚至瑕疵的减少。它也有助于改善肤色,它能促进胶原蛋白(collagen)的生成,而且它是通过精确定位你皮肤的表层来实现这一点的。

[原文] [Host]: and boncharge ships worldwide with easy returns and a year-long warranty on all of their products so if you'd like to try it yourself head over to bondcharge.com/diary and use code diary for 25% off any product sitewide just make sure you order through this link bondcharge.com/diary with code diary

[译文] [主持人]: Boncharge在全球范围内发货,退货方便,并且对其所有产品提供长达一年的保修期。所以如果你想亲自尝试一下,请前往 bondcharge.com/diary 并使用折扣码 diary,全站任何产品均可享受七五折(25% off)优惠。只要确保你是通过这个链接 bondcharge.com/diary 并使用折扣码 diary 下单即可。

[原文] [Host]: make sure you keep what i'm about to say to yourself i'm inviting 10,000 of you to come even deeper into the diary of a ceo welcome to my inner circle this is a brand new private community that i'm launching to the world

[译文] [主持人]: 请务必对我接下来要说的话保密。我正在邀请你们中的一万人,进一步深入了解《CEO日记》(Diary of a CEO)。欢迎来到我的核心圈子(inner circle)。这是我正在向全世界推出的一个全新的私人社区。

[原文] [Host]: we have so many incredible things that happen that you are never shown we have the briefs that are on my ipad when i'm recording the conversation we have clips we've never released we have behind-the-scenes conversations with the guests and also the episodes that we've never ever released and so much more

[译文] [主持人]: 我们发生过太多不可思议的事情,而你们从未见过。我们有我录制对话时iPad上的简报,我们有从未发布过的片段,我们有与嘉宾在幕后的对话,还有我们从未发布过的剧集,以及更多精彩内容。

[原文] [Host]: in the circle you'll have direct access to me you can tell us what you want this show to be who you want us to interview and the types of conversations you would love us to have

[译文] [主持人]: 在这个圈子里,你可以直接联系我,你可以告诉我们你想让这个节目变成什么样,你希望我们采访谁,以及你喜欢我们进行什么类型的对话。

[原文] [Host]: but remember for now we're only inviting the first 10,000 people that join before it closes so if you want to join our private closed community head to the link in the description below or go to daccircle.com i will speak to you there

[译文] [主持人]: 但请记住,目前在通道关闭前我们只邀请前一万名加入的人。所以如果你想加入我们的私人封闭社区,请点击下方描述里的链接,或者访问 daccircle.com,我会在那里与你们交流。


章节 10:家族传奇、人生遗憾与对未来的终极警告

📝 本节摘要

作为整场访谈的最终章,本节的话题从对世界领导人的建议延伸到了辛顿显赫的家族历史。辛顿出身于一个非凡的科学世家,其先辈包括布尔代数的创始人乔治·布尔、珠穆朗玛峰的命名者乔治·埃佛勒斯,以及曾参与曼哈顿计划后来移居中国的核物理学家寒春。在回首自己七十余年的人生时,辛顿分享了“坚持少数派直觉”的职业忠告,并坦言自己最大的遗憾是当年因过于痴迷工作,没能多陪伴因癌症早逝的妻子和年幼的孩子们。在播客最后的传统提问环节,辛顿重申了大规模失业带来的最紧迫危机:人类将失去目标感与存在价值。最后,他再次以一句幽默而严肃的“去当水管工”结束了这场发人深省的对话。

[原文] [Host]: i'm continually shocked by the types of individuals that listen to this conversation um because they come up to me sometimes so i hear from politicians i hear from some real people i hear from entrepreneurs all over the world whether they are the entrepreneurs building some of the biggest companies in the world or their you know early stage startups for those people that are listening to this conversation now that are in positions of power and influence world leaders let's say what's your message to them

[译文] [主持人]: 我不断地被收听这段对话的听众类型所震惊,嗯,因为他们有时会来找我。所以我听到了政客的声音,听到了一些真实人物的声音,听到了世界各地企业家的声音,无论他们是正在建立世界上最大公司的企业家,还是你知道的,早期初创企业。对于那些现在正在收听这段对话、身处权力和有影响力位置的人,假设是世界领导人,你对他们有什么想说的?

[原文] [Geoffrey Hinton]: i'd say what you need is highly regulated capitalism that's what seems to work best

[译文] [杰弗里·辛顿]: 我会说,你们需要的是高度受监管的资本主义(highly regulated capitalism),那似乎是行之最有效的方式。

[原文] [Host]: and what would you say to the average person not doesn't work in the industry somewhat concerned about the future doesn't know if they're helpless or not what should they be doing in their own lives

[译文] [主持人]: 那你会对普通大众说什么?那些不在这个行业工作,对未来有些担忧,不知道自己是否无助的人,他们在自己的生活中应该做些什么?

[原文] [Geoffrey Hinton]: my feeling is there's not much they can do this isn't going to be decided by just as climate change isn't going to be decided by people separating out the plastic bags from the um compostables that's not going to have much effect it's going to be decided by whether the lobbyists for the big energy companies can be kept under control i don't think there's much people can do except try and pressure their governments to force the big companies to work on ai safety that they can do

[译文] [杰弗里·辛顿]: 我的感觉是他们做不了太多。这不会由……就像气候变化不会由人们把塑料袋从可堆肥垃圾中分拣出来所决定一样,那不会有太大影响。它将由大型能源公司的游说者是否能被控制住来决定。我不认为人们能做太多事情,除了努力向他们的政府施压,迫使大公司致力于AI安全,这是他们能做的。

[原文] [Host]: you've lived a fascinating fascinating winding life i think one of the things most people don't know about you is that your family has a big history of being involved in tremendous things you have a family tree which is one of the most impressive that i've ever seen or read about your great-great-grandfather george boole founded the boolean algebra logic which is one of the foundational principles of modern computer science you have uh your great-great-grandmother mary everest boole who was a mathematician and educator who made huge leaps forward in mathematics from what i was able to ascertain um i mean i can the list goes on and on and on i mean your great great uncle george everest is what mount everest is named after is that correct

[译文] [主持人]: 你度过了非常引人入胜、曲折迷人的一生。我想大多数人不知道关于你的一件事,就是你的家族有着参与伟大事件的深厚历史。你的家谱是我见过或读过的最令人印象深刻的家谱之一。你的曾曾祖父乔治·布尔(George Boole)创立了布尔代数逻辑(Boolean algebra logic),这是现代计算机科学的基础原理之一。你还有,呃,你的曾曾祖母玛丽·埃佛勒斯·布尔(Mary Everest Boole),她是一位数学家和教育家,据我所知,她在数学领域取得了巨大的飞跃。嗯,我的意思是,这个名单可以一直列下去。你的曾曾舅公乔治·埃佛勒斯(George Everest)就是珠穆朗玛峰(Mount Everest)的命名由来,这正确吗?

[原文] [Geoffrey Hinton]: i think he's my great great great uncle his niece married george boole so mary boole was mary everest boole um she was the niece of everest

[译文] [杰弗里·辛顿]: 我想他是我的曾曾曾舅公。他的侄女嫁给了乔治·布尔,所以玛丽·布尔原本是玛丽·埃佛勒斯·布尔,嗯,她是埃佛勒斯的侄女。

[原文] [Host]: and your first cousin once removed joan hinton was a nuclear physicist who worked on the manhattan project which is the world war ii development of the first nuclear bomb

[译文] [主持人]: 还有你的表姑寒春(Joan Hinton),她作为一名核物理学家参与了曼哈顿计划(Manhattan project),即二战期间第一颗核弹的开发。

[原文] [Geoffrey Hinton]: yeah she was one of the two female physicists at los alamos and then after they dropped the bomb she moved to china

[译文] [杰弗里·辛顿]: 是的,她是洛斯阿拉莫斯(Los Alamos)仅有的两位女性物理学家之一。然后在他们投下原子弹之后,她搬到了中国。

[原文] [Host]: why

[译文] [主持人]: 为什么?

[原文] [Geoffrey Hinton]: she was very cross with them dropping the bomb and her family had a lot of links with china her mother was friends with chairman mao

[译文] [杰弗里·辛顿]: 她对他们投下原子弹感到非常愤怒。而且她的家庭与中国有很多联系,她的母亲和毛主席(Chairman Mao)是朋友。

[原文] [Host]: quite weird when you look back at your life jeffrey we have the hindsight you have now and the retrospective clarity what might you have done differently if you were advising me

[译文] [主持人]: 回顾你的一生,感觉很奇妙,杰弗里。拥有你现在的后见之明和回顾时的清晰认知,如果由你来给我建议,你可能会在哪些事情上采取不同的做法?

[原文] [Geoffrey Hinton]: i guess i have two pieces of advice one is if you have an intuition that people are doing things wrong and there's a better way to do things don't give up on that intuition just because people say it's silly don't give up on the intuition until you figured out why it's wrong figured out for yourself why that intuition isn't correct and usually it's wrong if it disagrees with everybody else and you'll eventually figure out why it's wrong but just occasionally you'll have an intuition that's actually right and everybody else is wrong and i lucked out that way early on i thought neural nets are definitely the way to go to make ai and almost everybody said that was crazy and i stuck with it because i couldn't it seemed to me it was obviously right now the idea that you should stick with your intuitions isn't going to work if you have bad intuitions but if you have bad intuitions you're never going to do anything anyway so you might as well stick with them

[译文] [杰弗里·辛顿]: 我猜我有两条建议。第一条是,如果你有一种直觉,觉得人们正在把事情做错,并且存在一种更好的方法,不要仅仅因为人们说它愚蠢就放弃那种直觉。在你自己弄清楚它为什么错、为什么那种直觉不正确之前,不要放弃它。通常,如果它和所有人的观点都不一致,那它就是错的,而且你最终也会弄清楚它为什么错。但偶尔,你会产生一种实际上是正确的直觉,而其他所有人都是错的。我在早期就很幸运地遇到了这种情况,我认为神经网络绝对是实现AI的正确路径,而几乎所有人都说那很疯狂。我坚持了下来,因为我无法……在我看来,这显然是正确的。现在,如果你直觉很差,“坚持你的直觉”这个建议就行不通了。但如果你本来直觉就很差,你反正也做不成什么事,所以你还不如坚持它们呢。

[原文] [Host]: and in your own career journey is there anything you look back on and say "with the hindsight i have now i should have taken a different approach at that juncture."

[译文] [主持人]: 那么在你自己的职业旅程中,有没有什么是你现在回想起来会说“有了现在的后见之明,在那个转折点我本该采取不同的方法”?

[原文] [Geoffrey Hinton]: i wish i'd spent more time with my wife um and with my children when they were little i was kind of obsessed with work

[译文] [杰弗里·辛顿]: 我希望我能多花点时间和我的妻子在一起,嗯,还有在我的孩子们小的时候多陪陪他们。我当时有点太痴迷于工作了。

[原文] [Host]: your wife passed away from ovarian cancer

[译文] [主持人]: 你的妻子死于卵巢癌。

[原文] [Geoffrey Hinton]: no that was another wife okay um i had two wives who had cancer

[译文] [杰弗里·辛顿]: 不,那是另一任妻子。好吧,嗯,我有两任妻子患了癌症。

[原文] [Host]: oh really sorry

[译文] [主持人]: 哦,真的很抱歉。

[原文] [Geoffrey Hinton]: the first one died of ovarian cancer and the second one died of pancreatic cancer

[译文] [杰弗里·辛顿]: 第一任死于卵巢癌,第二任死于胰腺癌。

[原文] [Host]: and you wish you'd spent more time with her with the second wife

[译文] [主持人]: 你希望你当时能多花点时间陪她,和你的第二任妻子。

[原文] [Geoffrey Hinton]: yeah who was a wonderful person

[译文] [杰弗里·辛顿]: 是的,她是一个非常出色的人。

[原文] [Host]: why did you say that in your 70s what is it that you've you figured out that i might not know yet

[译文] [主持人]: 为什么你在70多岁的时候会这么说?你领悟到了什么我可能还不知道的事情?

[原文] [Geoffrey Hinton]: oh just cuz she's gone and i can't spend more time with her now mhm

[译文] [杰弗里·辛顿]: 哦,只是因为她已经走了,我现在无法再多花时间陪她了,嗯哼。

[原文] [Host]: but you didn't know that at the time at the time you think

[译文] [主持人]: 但你当时并不知道,在那个时候你以为……

[原文] [Geoffrey Hinton]: i mean it was likely i would die before her just cuz she was a woman and i was a man um i didn't i just didn't spend enough time when i could

[译文] [杰弗里·辛顿]: 我的意思是,我当时很可能会比她先死,就因为她是女人而我是男人。嗯,我没有……我只是在可以的时候没有花足够的时间陪她。

[原文] [Host]: i i think i i inquire there because i think there's many of us that are so consumed with what we're doing professionally that we kind of assume immortality with our partners because they've always been there so we

[译文] [主持人]: 我、我想我之所以问这个问题,是因为我认为我们中有很多人,都被自己职业上正在做的事情深深消耗,以至于我们对伴侣产生了一种近乎永生的错觉,因为她们总是在那里,所以我们……

[原文] [Geoffrey Hinton]: i mean she was very supportive of me spending a lot of time working but

[译文] [杰弗里·辛顿]: 我的意思是,她非常支持我花大量时间工作,但是……

[原文] [Host]: and why did you say your children as well what's the what's the

[译文] [主持人]: 那你为什么还提到了你的孩子们?是什么……是什么……

[原文] [Geoffrey Hinton]: well i didn't spend enough time with them when they were little

[译文] [杰弗里·辛顿]: 嗯,在他们小的时候,我没有花足够的时间陪他们。

[原文] [Host]: and you regret that now if you um if you had a closing message for for my for my listeners about ai and ai safety what would that be jeffrey

[译文] [主持人]: 而你现在对此感到后悔。如果,嗯,如果你有一段关于AI和AI安全的结束语想对我的听众说,那会是什么,杰弗里?

[原文] [Geoffrey Hinton]: there's still a chance that we can figure out how to develop ai that won't want to take over from us and because there's a chance we should put enormous resources into trying to figure that out because if we don't it's going to take over

[译文] [杰弗里·辛顿]: 我们仍然有机会弄清楚如何开发出不想接管我们的AI。而且正因为还有这个机会,我们应该投入巨大的资源去试图弄清楚这一点,因为如果我们不这样做,它就会接管一切。

[原文] [Host]: and are you hopeful

[译文] [主持人]: 那你抱有希望吗?

[原文] [Geoffrey Hinton]: i just don't know i'm agnostic

[译文] [杰弗里·辛顿]: 我只是不知道,我持保留态度(agnostic)。

[原文] [Host]: you must get in bed at night and when you're thinking to yourself about probabilities of outcomes there must be a bias in one direction because there certainly is for me i imagine everyone listening now has an internal prediction that they might not say out loud but of how they think it's going to play out

[译文] [主持人]: 你晚上上床睡觉、独自思考各种结果的概率时,内心一定会偏向某个方向,因为对我来说确实如此。我想现在每个听节目的人,内心都有一个也许不会大声说出来的预测:他们认为事情将会如何发展。

[原文] [Geoffrey Hinton]: i really don't know i genuinely don't know i think it's incredibly uncertain when i'm feeling slightly depressed i think people are toast it's going to take over when i'm feeling cheerful i think we'll figure out a way

[译文] [杰弗里·辛顿]: 我真的不知道,我确确实实不知道。我认为这充满了难以置信的不确定性。当我感到有点沮丧时,我觉得人类完蛋了(toast),它会接管一切;而当我心情愉快时,我认为我们会找到办法的。

[原文] [Host]: maybe one of the facets of being a human um is because we've always been here like we were saying about our loved ones and our relationships we assume casually that we will always be here and we'll always figure everything out but there's a beginning and an end to everything as we saw from the dinosaurs i mean

[译文] [主持人]: 也许作为人类的其中一面,嗯,是因为我们一直存在。就像我们刚才谈论的关于我们的爱人和我们的人际关系那样,我们漫不经心地假设我们会一直在这里,我们总会解决一切问题。但万物皆有始终,就像我们从恐龙身上看到的那样,我的意思是……

[原文] [Geoffrey Hinton]: yeah and we have to face the possibility that unless we do something soon we're near the end

[译文] [杰弗里·辛顿]: 是的,而且我们必须面对这种可能性:除非我们尽快采取行动,否则我们离终点不远了。

[原文] [Host]: we have a closing tradition on this podcast where the last guest leaves a question in their diary and the question that they've left for you is with everything that you see ahead of us what is the biggest threat you see to human happiness

[译文] [主持人]: 我们这个播客有一个收尾传统,上一位嘉宾会在他们的日记里留下一个问题。他们留给你的问题是:鉴于你所看到的我们前方的一切,你认为对人类幸福最大的威胁是什么?

[原文] [Geoffrey Hinton]: i think the joblessness is a fairly urgent short-term threat to human happiness i think if you make lots and lots of people unemployed even if they get universal basic income um they're not going to be happy because they need purpose

[译文] [杰弗里·辛顿]: 我认为大规模失业是对人类幸福的一个相当紧迫的短期威胁。我认为如果你让许许多多的人失业,即使他们获得了全民基本收入,嗯,他们也不会快乐,因为他们需要目标感(purpose)。

[原文] [Host]: because they need purpose

[译文] [主持人]: 因为他们需要目标感。

[原文] [Geoffrey Hinton]: yes and struggle they need to feel they're contributing something they're useful

[译文] [杰弗里·辛顿]: 是的,还有去奋斗。他们需要感觉到自己正在做出贡献,自己是有用的。

[原文] [Host]: and do you think that outcome that there's going to be huge job displacement is more probable than not

[译文] [主持人]: 那你认为,将出现巨大的工作岗位流失的这个结果,是大概率事件吗?

[原文] [Geoffrey Hinton]: yes i do

[译文] [杰弗里·辛顿]: 是的,我认为是。

[原文] [Host]: and what sort of

[译文] [主持人]: 那大概是多大程度的……

[原文] [Geoffrey Hinton]: that one i think is definitely more probable than not if i worked in a call center i'd be terrified

[译文] [杰弗里·辛顿]: 那个结果,我认为绝对是大概率事件。如果我在呼叫中心工作,我会感到非常恐惧。

[原文] [Host]: and what's the time frame for that in terms of mass jobs

[译文] [主持人]: 就大规模失业而言,这大概是什么时间范围内的事?

[原文] [Geoffrey Hinton]: i think it's beginning to happen already i read an article in the atlantic recently that said it's already getting hard for university graduates to get jobs and part of that may be that people are already using ai for the jobs they would have got

[译文] [杰弗里·辛顿]: 我认为这已经开始发生了。我最近在《大西洋月刊》(The Atlantic)上读到一篇文章,说现在大学毕业生找工作已经越来越难了,部分原因可能是人们已经在用AI来从事他们原本能得到的工作了。

[原文] [Host]: i spoke to the ceo of a major company that everyone will know of lots of people use and he said to me in dms that they used to have seven just over 7,000 employees he said uh by last year they were down to i think 5,000 he said right now they have 3,600 and he said by the end of summer because of ai agents they'll be down to 3,000 so you've got so it's happening already

[译文] [主持人]: 我和一家每个人都知道、很多人都在使用的大公司的CEO谈过。他在私信(DMs)里告诉我,他们过去有7000多名员工。他说,呃,到去年,他们降到了我想是5000人;他说现在,他们有3600人;他说到今年夏天结束时,因为引入了AI代理,他们会裁减到3000人。所以你看,这确实已经发生了。

[原文] [Geoffrey Hinton]: yes

[译文] [杰弗里·辛顿]: 是的。

[原文] [Host]: he's halved his workforce because ai agents can now handle 80% of the customer service inquiries and other things so it's it's happening already so urgent action is needed

[译文] [主持人]: 他的员工数量减半了,因为AI代理现在可以处理80%的客户服务咨询及其他事务。所以这已经发生了,所以需要采取紧急行动。

[原文] [Geoffrey Hinton]: yep i don't know what that urgent action is that's a tricky one because that depends very much on the political system and political systems are all going in the wrong direction at present

[译文] [杰弗里·辛顿]: 是的,但我不知道那个紧急行动应该是什么。这是一个棘手的问题,因为这很大程度上取决于政治体制。而目前的政治体制全都在朝着错误的方向发展。

[原文] [Host]: i mean what do we need to do save up money like do we save money do we move to another part of the world i don't know what would you tell your kids to do they said "dad like there's going to be loads of job displacement."

[译文] [主持人]: 我的意思是,我们需要做什么?攒钱吗?我们是该攒钱,还是搬到世界其他地方?我不知道。你会告诉你的孩子们怎么做?如果他们说“爸爸,马上会有大量的工作流失”。

[原文] [Geoffrey Hinton]: because i worked for google for 10 years they have enough money okay okay fuck so they're not typical

[译文] [杰弗里·辛顿]: 因为我在谷歌工作了10年,就是……他们有足够的钱了。好的好的,该死,所以他们不是典型的例子。

[原文] [Host]: what if they didn't have money

[译文] [主持人]: 那如果他们没有钱呢?

[原文] [Geoffrey Hinton]: train to be a plumber

[译文] [杰弗里·辛顿]: 去培训当水管工。

[原文] [Host]: really jeffrey thank you so much you're the first nobel prize winner that i've ever had a conversation with i think in my life so that's a tremendous honor and you you you received that award for a lifetime of exceptional work and pushing the world forward in so many profound ways that will lead to great and that have led to great advancements and things that matter so much to us and now you've turned this season in your life to shining a light on some of your own work but also on the the the broader risks of ai and how um and how it might impact us adversely and there's very few people that have worked inside the machine of a google or a big tech company that have contributed to the field of ai that are now at the very forefront of warning us against the very thing that they worked upon

[译文] [主持人]: 真的吗。杰弗里,非常感谢你。我想你是我这辈子对话过的第一位诺贝尔奖得主,这是极大的荣幸。你获得这个奖项,是因为你一生的杰出工作在如此多深刻的层面上推动了世界前进,它将带来、并且已经带来了伟大的进步,以及对我们至关重要的成果。而现在,你把人生这个阶段的时间,用来审视你自己的一些工作,也用来揭示AI更广泛的风险,以及它可能如何对我们产生不利影响。像你这样曾在谷歌或大型科技公司的机器内部工作过、对AI领域做出过贡献,如今却站在最前沿警告我们防范自己曾经致力研究的东西的人,实在是凤毛麟角。

[原文] [Geoffrey Hinton]: there are actually a surprising number of us now they're not as uh as public and they're actually quite hard to get to have these kinds of conversations because many of them are still in that industry

[译文] [杰弗里·辛顿]: 实际上现在我们这样的人多得惊人。他们只是不像、呃、不像我这么公开。要找到他们进行这种对话其实相当难,因为他们中的许多人还在这个行业里。

[原文] [Host]: so you know as someone who often tries to contact these people and invites them to have conversations they often are a little bit hesitant to speak openly they speak privately but they're less willing to speak openly because maybe maybe they still have some sort of incentives at play

[译文] [主持人]: 所以,作为一个经常尝试联系这些人并邀请他们进行对话的人,你知道的,他们通常对公开讲话有些犹豫。他们私下会说,但不太愿意公开,因为也许、也许他们还有某种……受某种利益关系所驱使。

[原文] [Geoffrey Hinton]: i have an advantage over them which is i'm older so i'm unemployed so i can say what i

[译文] [杰弗里·辛顿]: 我对他们有一个优势,那就是我更老,所以我现在失业了,所以我可以说出我想说的。

[原文] [Host]: well there you go so thank you for doing what you do it's a real honor and please do continue to do it thank you thank you so much

[译文] [主持人]: 嗯,这就对了。所以感谢你所做的一切,这真是一份荣幸,请继续做下去。谢谢你,非常感谢。