The arrival of AGI | Shane Legg (co-founder of DeepMind)
### 章节 1:引言与AGI的定义 (Introduction & Defining AGI)

📝 本节摘要:
本章涵盖了播客的开场白及主持人 Hannah Fry 对 Shane Legg 的介绍。作为 Google DeepMind 的联合创始人,Shane Legg 曾普及了“AGI”一词。Shane 在对话中提出了他对“最小化通用人工智能(Minimal AGI)”的定义:即一个至少能完成普通人通常能完成的认知任务的人工智能体。他认为当前的 AI 系统已经不仅仅是展现出智能的“火花”,在语言能力和通用知识等方面甚至已经超越了人类,但在达到完全的通用性之前,仍处于一个不平衡的发展阶段。
[原文] [Shane Legg (Teaser)]: So is human intelligence going to be the upper limit of what's possible? I think absolutely not. I do wonder what all of this means for people. I mean, if we are getting to a point where essentially human intelligence is dwarfed by super intelligence, what does that mean for society? It means a massive transformation. This is actually something which is going to structurally change the economy and society and all kinds of things, and we need to think about how we structure this new world.
[译文] [Shane Legg (预告)]: 那么人类智能会是可能性的上限吗?我认为绝对不是。我确实在想这一切对人类意味着什么。我的意思是,如果我们到达了这样一个阶段,即人类智能在超级智能(Super Intelligence)面前显得相形见绌,那对社会意味着什么?这意味着一场巨大的变革。这实际上将结构性地改变经济、社会以及各类事物,我们需要思考如何构建这个新世界。
[原文] [Hannah Fry]: Welcome to Google DeepMind: The Podcast, with me, your host, Professor Hannah Fry. AGI is coming, that's what everyone seems to be saying. Well, today my guest on the podcast is Shane Legg, chief AGI scientist and co-founder of Google DeepMind. Shane has been talking about AGI for decades, even back when it was considered, in his words, the lunatic fringe. He is credited with popularizing the term and making some of the earliest attempts to work out what it might actually be.
[译文] [Hannah Fry]: 欢迎收听 Google DeepMind 播客,我是主持人 Hannah Fry 教授。AGI(通用人工智能)即将来临,这似乎是大家都在说的话。今天我播客的嘉宾是 Google DeepMind 的首席 AGI 科学家兼联合创始人 Shane Legg。Shane 谈论 AGI 已经几十年了,甚至早在它被认为是——用他的话来说——“疯狂边缘(lunatic fringe)”的时候。他被公认为普及了这个术语,并做出了最早的一些尝试来弄清楚它究竟可能是什么。
[原文] [Hannah Fry]: Now in the conversation today we're going to talk to him about how AGI should be defined, how we might recognize it when it arrives, how to make sure that it is safe and ethical, and then crucially what the world looks like once we get there. And I have to tell you, Shane was remarkably candid about the ways that the whole of society will be impacted over the coming decade. It's definitely worth staying with us for that discussion. Welcome to the podcast, Shane.
[译文] [Hannah Fry]: 在今天的对话中,我们将与他探讨应如何定义 AGI、当它到来时我们该如何识别它、如何确保它是安全且合乎伦理的,以及至关重要的一点——一旦我们到达那个阶段,世界会是什么样子。我必须告诉你们,Shane 对未来十年整个社会将受到的影响表现得异常坦诚。绝对值得留下来听听这次讨论。欢迎来到播客,Shane。
[原文] [Hannah Fry]: Uh, we last spoke to you five years ago, and then you were telling us your sort of vision for what AGI might look like. In terms of the AI systems that we've got now today, do you think that they're showing little sparks of being AGI?
[译文] [Hannah Fry]: 呃,我们上次和你交谈是在五年前,当时你告诉了我们你对 AGI 可能是什么样子的愿景。就我们今天拥有的这些 AI 系统而言,你认为它们是否展现出了成为 AGI 的一点点火花?
[原文] [Shane Legg]: Yeah, I think it's a lot more than sparks. (More than sparks?) Oh yeah, yeah. So my definition of AGI, or what I sometimes call minimal AGI, is it's an artificial agent that can at least do the kinds of cognitive things people can typically do.
[译文] [Shane Legg]: 是的,我认为远不止是火花。(Fry:不止是火花?)噢是的,是的。我对 AGI 的定义,或者有时称之为“最小化通用人工智能(Minimal AGI)”,是指一种至少能做人们通常能做的那些认知类事务的人工智能体。
[原文] [Shane Legg]: Yeah, and I like that bar, because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do, so it feels like we're not really there yet. On the other hand, if I set the minimal bar much higher than that, I'm setting it at a level where many people, a lot of people, wouldn't actually be able to do some of the things we're requiring of the AGI.
[译文] [Shane Legg]: 是的,我喜欢这个标准,因为如果低于这个标准,感觉就像……好吧,它无法完成我们期望人类能做到的那些认知事务,所以感觉我们还没真正到达那里。另一方面,如果我把这个最低标准设得比这高很多,那我设定的水平就是许多人——很多人——实际上无法完成我们要求 AGI 去做的某些事情的水平。
[原文] [Shane Legg]: So, you know, we believe people have some sort of, I don't know, general intelligence you might call it, so it feels like if an AI can do the kinds of cognitive things people can typically do, at least, possibly more, then we should sort of consider it within that kind of a class.
[译文] [Shane Legg]: 你知道,我们相信人类拥有某种——我不知道该怎么叫——通用智能,所以我觉得如果一个 AI 能做人们通常能做的那类认知事务,甚至可能更多,那我们就应该把它归入那一类。
[原文] [Hannah Fry]: The stuff that we have now where is it on those levels right?
[译文] [Hannah Fry]: 我们现在拥有的东西,在这个层级上处于什么位置呢?
[原文] [Shane Legg]: Um, so it's uneven. So it's already much, much better than people at, say, speaking languages, so it'll speak 150 languages or something, nobody can do that. Uh, and its general knowledge is phenomenal. I can ask it about, uh, you know, the suburb I grew up in, a small town in New Zealand, and it happens to know things about it, right.
[译文] [Shane Legg]: 嗯,它是参差不齐的。所以在比如讲语言方面,它已经比人类好得多了,它能讲150种语言之类的,没人能做到这一点。而且它的通用知识是惊人的,我可以问它关于——你知道——我在新西兰长大的那个郊区小镇,而它碰巧知道关于那里的事情,对吧。
📝 本节摘要:
尽管 AI 在语言和通识知识上表现出色,Shane 指出它们在“持续学习(Continual Learning)”和“视觉推理(Visual Reasoning)”方面仍存在显著短板。例如,AI 难以像人类那样理解透视关系(近大远小)或准确数出图表中的节点。Shane 认为这些并非不可逾越的根本性障碍。他预测,通过引入新的架构(如情景记忆)和算法优化,而不仅仅是增加数据量,AI 将在未来几年内克服这些弱点,变得更加可靠并具备专业级能力。
[原文] [Shane Legg]: Um, on the other hand, they still fail to do things that we would expect people typically be able to do. Uh, they're not very good at continual learning, learning new sorts of skills over an extended period of time, and that's incredibly important. For example, if you're taking on a new job, you know, you're not expected to know everything to be performant in the job when you arrive, but you have to learn over time to do it.
[译文] [Shane Legg]: 嗯,另一方面,它们仍然无法完成我们期望普通人通常能做到的事情。呃,它们不太擅长“持续学习(continual learning)”,即在很长一段时间内学习某种新技能。这非常重要,例如,如果你接受了一份新工作,没人指望你刚到岗时就无所不知、表现完美,但你必须随着时间的推移学会如何去做。
[原文] [Shane Legg]: They also have some weaknesses in reasoning, particularly things like visual reasoning. So the AIs are very good at, say, recognizing objects, they can recognize cats and dogs and all these sorts of things, they've done that for a while. Um, but if you ask them to reason about things within a scene, they get a lot more shaky.
[译文] [Shane Legg]: 它们在推理方面也有一些弱点,特别是像视觉推理(visual reasoning)这类事情。AI 非常擅长识别物体,它们能识别猫、狗这类东西,这方面它们已经做到好一阵子了。嗯,但如果你让它们对场景内的物体进行推理,它们就会变得极其不稳定。
[原文] [Shane Legg]: So you might say, well, you know, you can see a red car and a blue car, and you ask them which car is bigger. Um, people understand that there's perspective involved, and maybe the blue car is bigger but it looks smaller cuz it's further away, right. Uh, AIs are not so good at that.
[译文] [Shane Legg]: 比如你可能会说,你看,你能看到一辆红车和一辆蓝车,然后问它们哪辆车更大。嗯,人类明白这涉及透视关系,也许蓝车其实更大,但因为它离得更远所以看起来更小,对吧。呃,AI 在这方面就不太行。
[原文] [Shane Legg]: Or if you have some sort of diagram with nodes and edges between them, like a network (a network, yeah) or a graph, as a mathematician would say, um, and you ask questions about that, and it has to count the number of, um, you know, edges, spokes that are coming out of, you know, one of the nodes on the graph.
[译文] [Shane Legg]: 或者如果你有某种带有节点和连接它们的边的图表,像一个网络。(Fry:一个网络,是的)。或者是数学家所说的“图(graph)”。嗯,如果你问关于它的问题,比如必须数出从图上某个节点延伸出的边、辐条的数量。
[原文] [Shane Legg]: Um, a person does that by paying attention to different points and then actually mentally maybe counting them, or what have you. Um, the AI is not very good at doing that type of thing. So there are all sorts of things like this, uh, that we currently see.
[译文] [Shane Legg]: 嗯,人类做这个是通过关注不同的点,然后在心里数数或者用别的方法。嗯,AI 不太擅长做这类事情。所以我们目前能看到各种各样类似的问题。
[原文] [Shane Legg]: Uh I don't think there are fundamental blockers on any of these things and we have ideas on how to develop systems that can do these things and we see metrics improving over time in a bunch of these areas so my expectation is over a number of years these things will all get addressed but they're not there yet.
[译文] [Shane Legg]: 呃,我不认为在这些事情上存在任何根本性的阻碍,我们对于如何开发能做到这些的系统已经有了想法,并且我们看到在许多此类领域中指标正随着时间推移而改善。所以我的预期是,经过若干年,这些问题都会得到解决,但目前还没到那一步。
[原文] [Shane Legg]: And I think it's going to take a little bit of time to go through that, cuz it's quite a long tail of all sorts of cognitive things that people can do, uh, where the AIs are still below human performance.
[译文] [Shane Legg]: 我认为要完成这个过程需要一点时间,因为人类能做的各种认知事务是一个很长的长尾列表,而在这些事务上 AI 仍然低于人类的表现。
[原文] [Shane Legg]: As we reach that, and I think that's coming in a few years, unclear exactly, um, the AIs will be a lot more reliable, and that will increase their value quite a lot in many ways. But they will also during that period become increasingly capable, like to, um, professional level and beyond, and maybe in coding, mathematics already, in, you know, multiple languages, general knowledge of the world and stuff like this. So it's kind of, it's an uneven thing.
[译文] [Shane Legg]: 当我们达到那个阶段——我认为这会在几年内到来,具体时间不清楚——AI 将会变得更加可靠,这将在许多方面极大地提升它们的价值。但在那个时期,它们也会变得越来越能干,比如达到专业水平甚至更高,可能在编程、数学方面已经是了,你知道的,还有多语言能力、对世界的通用知识这类东西。所以这是一种发展不平衡的状态。
[原文] [Hannah Fry]: If you think that they will become more reliable over time, like, how? Is it just a question of making the models bigger, doing things at larger scale? Is it more data? I mean, do you have a clear path to make them more reliable?
[译文] [Hannah Fry]: 如果你认为它们会随着时间推移变得更可靠,这是如何实现的?仅仅是把模型做得更大、在更大规模上做事的问题吗?是更多数据吗?我的意思是,你们有一条让它们变得更可靠的清晰路径吗?
[原文] [Shane Legg]: Uh, I think we do, and it's not one particular thing, it's not just bigger models or more data. Um, in some cases it's more data of a particular kind, and then when you collect data that requires, say, visual reasoning, then the models learn how to do it.
[译文] [Shane Legg]: 呃,我认为我们有,而且这不是某一件特定的事,不仅仅是更大的模型或更多的数据。嗯,在某些情况下,是某种特定类型的更多数据,比如当你收集需要视觉推理的数据时,模型就会学会如何去做。
[原文] [Shane Legg]: In some cases it requires algorithmic things, like new processes within. So for example, if you want to do continual learning, so the AI keeps learning over time, you might need some process whereby new information is maybe stored in some sort of retrieval system, an episodic memory if you like, and then you might have systems whereby that information over time is trained back into some underlying model. So that requires more than just more data, it requires some sort of algorithmic and architectural changes.
[译文] [Shane Legg]: 在某些情况下,它需要算法层面的东西,比如内部的新流程。举个例子,如果你想做持续学习,让 AI 随着时间推移不断学习,你可能需要某种流程,将新信息存储在某种检索系统中——如果你愿意的话可以称之为“情景记忆(episodic memory)”——然后你可能需要一些系统,随着时间的推移将这些信息重新训练回底层模型中。这需要的不仅仅是更多数据,它需要某种算法和架构上的改变。
[原文] [Shane Legg]: So I think the answer is a combination of these things and it depends on what the particular issue is.
[译文] [Shane Legg]: 所以我认为答案是这些因素的组合,具体取决于特定的问题是什么。
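为便于理解,下面用一段极简的 Python 草图勾勒 Shane 描述的“情景记忆 + 定期巩固回底层模型”这一架构思路。其中 EpisodicMemory、ContinualAgent、consolidate 等名称均为本文虚构的示意,并非任何真实系统的实现或 API。

```python
from collections import deque

class EpisodicMemory:
    """检索系统:新信息先存为"情景",供即时检索。"""
    def __init__(self, capacity=1000):
        self.episodes = deque(maxlen=capacity)

    def store(self, observation, outcome):
        self.episodes.append((observation, outcome))

    def retrieve(self, query, k=1):
        # 玩具式检索:按与查询的词重叠度排序取前 k 条
        def overlap(ep):
            return len(set(query.split()) & set(ep[0].split()))
        return sorted(self.episodes, key=overlap, reverse=True)[:k]

class ContinualAgent:
    def __init__(self):
        self.memory = EpisodicMemory()
        self.base_knowledge = {}  # 代指底层模型已"训练进权重"的知识

    def learn(self, observation, outcome):
        self.memory.store(observation, outcome)  # 新信息先进入情景记忆

    def consolidate(self):
        # 定期把情景记忆"训练回"底层模型(此处简化为批量拷贝)
        for obs, outcome in self.memory.episodes:
            self.base_knowledge[obs] = outcome

    def act(self, observation):
        if observation in self.base_knowledge:       # 优先使用已巩固的知识
            return self.base_knowledge[observation]
        recalled = self.memory.retrieve(observation)  # 否则回退到情景检索
        return recalled[0][1] if recalled else None

agent = ContinualAgent()
agent.learn("capital of france", "paris")
print(agent.act("capital of france"))  # 巩固前:来自情景记忆 -> paris
agent.consolidate()
print(agent.act("capital of france"))  # 巩固后:来自底层知识 -> paris
```

这只是示意“不仅仅靠更多数据,还需要算法与架构改变”这一点:新知识先进入可检索的记忆,再周期性地被巩固进底层模型。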
📝 本节摘要:
Shane 在本章详细梳理了 AI 发展的三个阶段:从预计约两年内实现的“最小化 AGI”,到完全覆盖人类认知谱系的“完全 AGI(Full AGI)”,最后是远超人类能力的“超级智能(ASI)”。此外,他回顾了与 Ben Goertzel 共同创造“AGI”一词的历史背景——最初旨在定义一个区别于狭义 AI 的研究领域,但随后演变为指代特定的人工智能体。Shane 认为 AGI 的到来将是一个重要的历史时刻,标志着机器智能正式进入与人类智能相似的类别。
[原文] [Hannah Fry]: I know that you don't think AGI should be this single yes/no, like a threshold that you cross, but more of a sort of spectrum, as it were, where you have these levels. Just talk me through that.
[译文] [Hannah Fry]: 我知道你不认为 AGI 应该是一个单一的“是或否”的问题,就像跨过某个门槛那样,而更像是一个谱系,有着不同的层级。能跟我具体讲讲吗?
[原文] [Shane Legg]: Yeah, so I have, um, what I call minimal AGI, and that's when you have an artificial agent that can at least do all the sorts of cognitive things that we would typically expect people to be able to do. And, um, we're not there yet, but it could be one year, it could be 5 years, I'm guessing probably about two or so. So that's the lowest level, that's what I call minimal AGI. That's the point at which I'd say, okay, this AI is no longer failing in ways that we would find surprising if we gave a person that cognitive task, and I think that's the minimum bar.
[译文] [Shane Legg]: 是的,我有我所谓的“最小化 AGI(minimal AGI)”,那就是当你拥有一个人工主体,它至少能做我们通常期望人类能做的所有类型的认知事务。嗯,我们还没到那一步,但这可能是一年,可能是五年,我猜大概是两年左右。所以那是最低层级,那就是我所谓的最小化 AGI。在这个点上,我会说,好吧,如果我们把这个认知任务交给一个人,这个 AI 不会再以让我们感到惊讶的方式失败了。我认为那是最低标准。
[原文] [Shane Legg]: Now that doesn't mean we understand fully how to reach the capabilities of human intelligence, because you can have extraordinary people who go and do amazing, you know, cognitive feats, inventing new theories in physics or maths, or developing, you know, incredible symphonies, or writing amazing literature and so on. Um, and just because our AI can do what's typical of human cognition doesn't necessarily mean we know all the recipes and algorithms, everything required to achieve, um, very extraordinary feats of human cognition.
[译文] [Shane Legg]: 这并不意味着我们完全理解了如何达到人类智能的能力,因为会有非凡的人去完成惊人的认知壮举,比如发明物理或数学的新理论,创作不可思议的交响乐,或者写出精彩的文学作品等等。仅仅因为我们的 AI 能做典型的人类认知事务,并不一定意味着我们掌握了实现人类非凡认知壮举所需的所有配方、算法和一切要素。
[原文] [Shane Legg]: Um, once we can, with our AI, achieve the full spectrum of what's possible with human cognition, uh, then we really know that we've nailed it, you know, at least fully to human level, and so we call that full AGI.
[译文] [Shane Legg]: 嗯,一旦我们的 AI 能实现人类认知可能达到的全谱系能力,那时我们就真的知道我们已经搞定了,至少完全达到了人类水平,所以我们称之为“完全 AGI(Full AGI)”。
[原文] [Shane Legg]: (And then is there a level beyond that?) Um, yeah, so I think once you start going beyond what is possible with human cognition, you start heading into something that's called, um, artificial super intelligence, or ASI.
[译文] [Shane Legg]: 那么在那之上还有更高的层级吗?嗯,是的。所以我认为一旦你开始超越人类认知的可能性,你就开始进入所谓的“人工超级智能(Artificial Super Intelligence)”或 ASI 的领域。
[原文] [Shane Legg]: Um, there aren't really good, clear definitions of that. Um, I've actually tried on a number of occasions to come up with a good definition of that, and every definition I've ever come up with has some sort of significant problems. But at least in vague terms it means something like: it's an AGI, so it has the generality of an AGI, but it's now so capable in general that it's somehow far beyond what, uh, you know, humans are capable of reaching.
[译文] [Shane Legg]: 嗯,关于那个还没有真正好的、清晰的定义。实际上我曾多次尝试给出一个好的定义,但我提出的每个定义都有某种重大问题。但至少笼统地说,它的意思是它是 AGI,所以它拥有 AGI 的通用性,但它现在在总体上能力极强,以至于在某种程度上远超人类所能达到的水平。
[原文] [Hannah Fry]: Cuz I know that you were one of the people who helped to coin that phrase, AGI. Do you think that it's still useful as a phrase? I mean, there's so many competing definitions now, it's sort of like the buzzword that everyone's using. And you're right that the way that it's described is almost like a yes/no, like a kind of discrete line that gets crossed, rather than this continuum, almost, of levels as you're describing.
[译文] [Hannah Fry]: 因为我知道你是协助创造“AGI”这个短语的人之一,你认为作为一个短语它还有用吗?我的意思是,现在有太多相互竞争的定义,它有点像每个人都在用的流行词。而且你是对的,它被描述的方式几乎像是一个“是或否”,像是一条被跨越的离散界线,而不是像你描述的那种近乎连续的层级。
[原文] [Shane Legg]: Yeah, so when I proposed the term I was thinking of it more as a field of study, because, uh, I was talking to a guy, Ben Goertzel, who I'd worked for, um, a year or so before, and he wanted to write a book on sort of the old vision of AI, these thinking machines, these machines that can do lots and lots of different things, rather than just specialized: it just plays poker, it just does text to speech, it just does, you know, very, very specific things, which were sort of typical at the time.
[译文] [Shane Legg]: 是的,所以我提出这个术语时,更多是把它想作一个研究领域。因为当时我在和 Ben Goertzel 交谈,我大约一年前曾为他工作过。他想写一本关于 AI 旧愿景的书,即“思维机器”,那些能做很多很多不同事情的机器,而不是仅仅专业化的——只会打扑克、只会做语音转文字、只会做非常特定的事情,那是当时典型的状况。
[原文] [Shane Legg]: I was like, what about the old dream of AI, building a system that has a very general capability, and it can learn and reason and do language and write poetry or do maths or maybe paint a picture or, you know, all sorts of different things, what do we call that? And, uh, I said to him, well, if it's really about the generality we want, why don't we just put the word general in the name and call it artificial general intelligence? AGI kind of rolls off the tongue.
[译文] [Shane Legg]: 我当时就想,那 AI 的旧梦呢?构建一个拥有非常通用能力的系统,它能学习、推理、处理语言、写诗、做数学,或者画画,你知道,各种各样不同的事情。我们该叫它什么?于是我对他说,如果这真的是关于我们要的“通用性(generality)”,为什么我们不直接把“通用(general)”这个词放进名字里,叫它“人工通用智能(Artificial General Intelligence)”?AGI 念起来也挺顺口的。
[原文] [Shane Legg]: Um, maybe we do that. But then what happened is that, um, a number of people started using the term online, and then very quickly people started talking about, well, when will we have AGI? And so then AGI moved from being a sort of field of study, or a subfield, to a category of artifacts, right, and then it needs a definition. So perhaps it was a mistake; I should have gone in and defined it.
[译文] [Shane Legg]: 嗯,也许我们就那样做了。但后来发生的是,许多人开始在网上使用这个术语,然后很快人们开始谈论“我们什么时候会有 AGI”。于是 AGI 从某种研究领域或子领域,变成了“一类人造物(artifacts)”,对吧?那么它就需要一个定义。所以也许这是个错误,我当时应该介入并定义它的。
[原文] [Shane Legg]: Um, you know, it turned out a few years later we found there was a guy, Mark Gubrud, who had actually written a paper in '97, uh, that had used the term, but it was in a nanotech security conference, and none of us knew about this. Um, but the way he defined it was actually in reference to the sorts of cognitive things people do in industry and other places like that, so it's quite similar in flavor to even what I'm using now. Yeah, if it had been fixed more clearly early on, that would be helpful.
[译文] [Shane Legg]: 呃,你知道,结果几年后我们发现有个叫 Mark Gubrud 的人实际上在97年写过一篇论文,使用了这个术语,但那是在一个纳米技术安全会议上,我们当时都不知道这件事。但他定义它的方式实际上是参考人们在工业界和其他类似场所所做的那些认知事务,所以它的味道甚至和我现在使用的定义相当相似。是的,如果早期能更清晰地确立下来,那会很有帮助。
[原文] [Hannah Fry]: Do you regret coin...
[译文] [Hannah Fry]: 你后悔创造……
[原文] [Shane Legg]: No, no, no, because I think it gave a way for people to refer to this idea of building AIs that were actually general (mhm), um, or at least general to the extent that people's, you know, intelligence is general. There was a need for that, I think, and that's why I think the term caught on, because, you know, how do you refer to that if you're not referring to this? If people use phrases like advanced AI, well, AlphaFold is an advanced AI in some sense, right, and it's very impactful, but it's very, very narrow, right? Or AlphaGo, again, is very narrow, and it's some sort of advanced AI system. So how do you refer to systems that are very general?
[译文] [Shane Legg]: 不不不,因为我认为它给人们提供了一种方式来指代“构建实际上通用的 AI”这一理念。(Fry:嗯)。或者至少是像人类智能那样通用的 AI。我认为当时有这个需求,这也是我认为这个术语流行起来的原因。因为,如果不用这个词,你该怎么指代它呢?如果人们使用像“高级 AI(advanced AI)”这样的短语,好吧,AlphaFold 在某种意义上是高级 AI,对吧,它非常有影响力,但它非常非常狭窄。或者 AlphaGo 也是非常狭窄的,尽管它是某种高级 AI 系统。所以你该如何指代那些非常通用的系统呢?
[原文] [Shane Legg]: But then what's happened is that different people saw the term and took it on, they adapted it in different ways, or they looked at it through different lenses. So for some people, uh, back even in the early days, when they thought of AGI they thought of something in the future, decades away, and that this would be very transformative. And so they started thinking about AGI in terms of the transformation it would create in society, and so then, if they try to define it, they tend to think about, oh, it's because it can lead to, I don't know, economic growth, or it's going to do all these sorts of things, right.
[译文] [Shane Legg]: 但后来发生的事情是,不同的人看到这个术语,以不同的方式采用了它,或者通过不同的视角来看待它。所以对一些人来说,甚至在早期,当他们想到 AGI 时,他们想到的是未来几十年后的东西,认为这将是非常具有变革性的。因此他们开始从它将给社会带来的变革角度来思考 AGI。所以当他们试图定义它时,他们倾向于认为,噢,这是因为它能带来——我不知道——经济增长,或者它将做所有这类事情,对吧。
[原文] [Shane Legg]: I tend to think of it as more of a historical point in time. It's the point in time at which we sort of have to say, well, these AIs in some sense belong in a similar category to our intelligence, in that they can do cognitive things that we typically can do. Um, now that doesn't necessarily revolutionize the world, the typical person walking around isn't going to be a Mozart or an Einstein and invent the successor to quantum theory or whatever, right.
[译文] [Shane Legg]: 我个人倾向于把它更多地看作一个历史性的时间点。在这个时间点上,我们不得不说,好吧,这些 AI 在某种意义上属于与我们的智能相似的类别,它们能做我们通常能做的认知事务。这并不一定意味着彻底改变世界,毕竟街上走的普通人也不会是莫扎特或爱因斯坦,去发明量子理论的继任理论或别的什么,对吧。
[原文] [Shane Legg]: Um, but it's a very interesting point in time, because 10 years ago, 20, you know, whatever, we did not have AIs that were anywhere close to being able to do the cognitive things that people can typically do. So I think this is an important sort of historical moment, in that AIs are somehow in a similar category to us.
[译文] [Shane Legg]: 但这是一个非常有趣的时间点,因为10年前、20年前,无论何时,我们都没有任何接近能够完成人类通常能做的认知事务的 AI。所以我认为这是一个重要的历史时刻,即 AI 在某种程度上进入了与我们相似的类别。
[原文] [Shane Legg]: I also think it's useful to try to define it a bit, because one of the issues that comes up is people have these different timelines, right? Some people say, "Oh, AGI, I think it's going to be here in 3 years," or "I think it's going to be 15 years away," or 20 years or whatever. Um, and often when I go and talk to them about that, I find that they're using a different definition, and so that just leads to a lot of confusion, because people use the term to mean different things. And in some cases I actually agree with what they think is going to happen, they're just using the word in a different way, and that just creates quite a lot of confusion.
[译文] [Shane Legg]: 我也认为,尝试对其进行定义是有用的,因为出现的一个问题是人们有这些不同的时间表,对吧?有些人说“噢,我认为 AGI 3年内就会出现”,有些人说“噢,我认为还有15年或20年”。通常当我去和他们谈论这个时,我发现他们使用的是不同的定义。这导致了很多困惑,因为人们用这个术语指代不同的东西。在某些情况下,我实际上同意他们认为会发生的事情,他们只是以不同的方式使用这个词,但这确实制造了很多混乱。
📝 本节摘要:
本章深入探讨了如何具体判定 AGI 是否达成。Hannah 列举了当前流行的几种定义,包括“厨房测试”(机器人能否在陌生厨房做咖啡)和“百万美元测试”(AI 能否自主赚钱)。Shane 认为单纯的经济指标过于狭隘,强调了 AGI 核心在于“通用性(Generality)”。他提出了一套严谨的“两阶段判定法”:首先通过涵盖人类典型认知能力的标准化任务测试;其次进入“对抗性测试(Adversarial Testing)”阶段,由人类团队在数月内竭力寻找其认知缺陷。如果经过长时间的高强度找茬仍无法发现漏洞,则可认为在实用层面上达成了 AGI。此外,Shane 指出,即便没有完美的定义,随着 AI 能力的普及,人们自然会将其视为通用智能。
[原文] [Hannah Fry]: I just want to compare some of the other definitions that people are using for AGI. So, um, some people have suggested that it's like there's a checklist of tasks, or maybe there's, uh, Humanity's Last Exam, which is this sort of language model benchmark of two and a half thousand questions across different subjects, so humanities and natural sciences. Um, there's other people that have said, oh, it needs to be able to perform in a kitchen, this sort of being trained as a chef and able to be dropped into a different kitchen and perform. Or there's even one which is, um, could it be able to make a million dollars from $100,000? What's your take on those definitions?
[译文] [Hannah Fry]: 我想对比一下人们正在使用的其他一些 AGI 定义。有些人建议这就好像有一个任务清单,或者可能有“人类的最后考试(Humanity's Last Exam)”,这是一种包含跨学科(人文和自然科学)2500个问题的语言模型基准测试。还有人说,噢,它需要在厨房里表现,像是被训练成厨师,然后能被投放到一个陌生的厨房里工作。甚至还有一个定义是:它能不能用10万美元赚到100万美元?你对这些定义怎么看?
[原文] [Shane Legg]: Well, each one I have a take on. (Go ahead.) Um, I mean, making a million dollars from $1,000 or something like that, um, I mean, that's obviously a very economic kind of perspective on it. Um, I think a lot of people would struggle to do that. Um, it's a very, I think in some ways quite narrow, perspective on this. I mean, maybe you could have, I don't know, a trading algorithm that trades on the markets that could do that, but if that's all it can do, that's not what I'm talking about. So I think it's the G, that's the G in AGI, it's the generality that I find interesting, and I think that's one of the incredible things of the human mind, our flexibility and generality to do many, many different things.
[译文] [Shane Legg]: 嗯,每一个我都有看法。(Fry: 请讲)。我是说,用1000美元(注:原文口误,应指前文的10万)赚100万美元之类的,显然这是一种非常经济导向的视角。我认为很多人都难以做到这一点。我觉得在某种程度上这是一种相当狭隘的视角。我的意思是,也许你可以有一个在市场上交易的算法能做到这一点,但如果它只能做这个,那就不是我在谈论的东西。所以我认为是“G”,AGI 中的“G”,即“通用性(Generality)”,这才是我觉得有趣的地方。我认为人类心智最不可思议的地方之一就是我们的灵活性和做许多许多不同事情的通用性。
[原文] [Shane Legg]: If you have a particular set of tasks, well, okay, maybe you can build a system that can do those tasks, but maybe it's still failing to do basic cognitive things that we'd expect almost anybody to be able to do. I think that's unsatisfying. It's like, oh, our AI just failed again, because it doesn't understand that really simple thing that I would expect pretty much anybody to understand.
[译文] [Shane Legg]: 如果你有一组特定的任务,好吧,也许你能构建一个系统来完成这些任务,但它可能仍然无法完成我们期望几乎任何人都能做的基本认知事务。我认为那是令人不满意的,这就像是说:“噢,我们的 AI 又失败了,因为它无法理解那个非常简单的事情,而我期望几乎任何人都能理解它。”
[原文] [Shane Legg]: So the way I would operationalize my definition is I would have a suite of tasks where I know what typical performance is from humans, and I would see whether the AI can do all those tasks. Now if it fails at any of those tasks, it fails to meet my definition, because it's not general enough, yeah, it's failing to do some cognitive thing that we'd expect people to be able to do.
[译文] [Shane Legg]: 因此,我将我的定义付诸操作的方式是:我会设计一套任务,我知道人类在这些任务上的典型表现是什么,然后我会看 AI 是否能完成所有这些任务。如果在其中任何一项任务上失败了,它就不符合我的定义,因为它不够通用。是的,如果它无法完成我们期望人类能做的某些认知事务,那就是失败。
[原文] [Shane Legg]: If it passes that, then I would propose we then go into a second phase, which is more adversarial. And we say, okay, it passed the battery of tests, so it's not failing at anything in our standard collection of however many thousands of tests or whatever we have. Now let's do an adversarial test: get a team of people, give them, I don't know, a month or two or whatever. They're allowed to look inside the AI, they're allowed to do whatever they like. Their job is: find something that we believe people can typically do, and it's cognitive, where the AI fails.
[译文] [Shane Legg]: 如果它通过了那个阶段,那么我会提议我们进入第二阶段,这更具对抗性。我们会说,好吧,它通过了一连串测试,在我们现有的数千个标准测试集中没有失败。现在我们来做对抗性测试(adversarial test):找一组人,给他们——我不知道——一两个月的时间,无论多久。他们被允许查看 AI 内部,允许做任何他们想做的事,他们的工作就是找到某个我们认为人类通常能做到的认知事务,而 AI 在这点上失败了。
[原文] [Shane Legg]: If they can find it, it fails by definition. If they can't, after a few months of probing it and testing it and scratching their heads and trying to find it, I think for all intents and purposes, most practical purposes, we're there, because these failure cases are now so hard to find that even teams of people, after an extended period of time, can't find them.
[译文] [Shane Legg]: 如果他们能找到,那么根据定义它就失败了。如果经过几个月的探测、测试、挠头苦思试图找出漏洞后还是找不到,我认为出于所有实用目的(for all intents and purposes),我们就算到达那个阶段了。因为这种失败案例现在如此难以发现,以至于即使是人类团队在很长一段时间后都找不到。
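Shane 描述的“两阶段判定法”可以概括为如下 Python 草图:第一阶段跑一套已知人类典型表现的标准任务,任何一项失败即不满足定义;第二阶段由对抗测试者提交新任务,专门寻找认知层面的失败案例。其中任务集、probes 和 ai 可调用对象都是示意性占位,不代表任何真实基准。

```python
def phase1_battery(ai, tasks):
    """第一阶段:标准测试集。返回失败任务名列表,空列表即通过。"""
    return [name for name, (prompt, human_typical) in tasks.items()
            if ai(prompt) != human_typical]

def phase2_adversarial(ai, probes):
    """第二阶段:对抗性测试。设法找出普通人能做到、AI 做不到的认知任务。"""
    for prompt, human_typical in probes:
        if ai(prompt) != human_typical:
            return prompt  # 找到一个反例,按定义判负
    return None            # 长时间找不到反例,则在实用意义上算"到达"了

# 玩具"AI":会做算术,但数不了图中的边
def toy_ai(prompt):
    return "4" if prompt == "2 + 2" else "unknown"

tasks = {"arithmetic": ("2 + 2", "4")}
probes = [("count the edges at node A", "3")]

print(phase1_battery(toy_ai, tasks))       # 输出 [],通过标准测试集
print(phase2_adversarial(toy_ai, probes))  # 对抗阶段找到一个失败案例
```

对抗阶段的要点正如 Shane 所说:只要还能找到一个反例,就按定义判负;判定标准是“再也找不到”,而不是“平均表现很好”。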
[原文] [Hannah Fry]: Do you think that we'll ever agree on a definition of what intelligence is, or what AGI is, indeed?
[译文] [Hannah Fry]: 你认为我们会对什么是智能或者确实什么是 AGI 的定义达成一致吗?
[原文] [Shane Legg]: Um, in terms of AGI itself, my guess is that, uh, some years from now the AIs will become so generally capable in so many different ways, people will just talk about them as being AGI, and AI will just happen to mean those things. And maybe people will be less worried, they will have fewer arguments about whether this is an AGI or not. People will say, "Oh, I've got the latest Gemini 9, or whatever it is." And it is really good, you know, it can write poetry, you can teach it a card game that you just made up and it can play with you, it can do maths, it can translate things, it can plan a holiday with you or whatever, right? It's really, really generally capable, and it'll just seem obvious to people that it has some sort of generality of intelligence.
[译文] [Shane Legg]: 嗯,就 AGI 本身而言,我的猜测是,几年后 AI 将在如此多不同的方面变得如此通用地能干,人们会自然地把它们称为 AGI,而 AI 这个词将恰好指代那些东西。也许人们不会那么担心,也不会那么多争论这到底是不是 AGI。人们会说:“噢,我有最新的 Gemini 9 号(或者不管叫什么)。”而且它真的很好,你知道,它能写诗,你可以教它一个你刚编出来的纸牌游戏跟它玩,它能做数学,能翻译东西,能和你一起计划假期,或者不管什么,对吧。它真的真的非常通用地能干,对人们来说,它拥有某种智能的通用性将显得显而易见。
[原文] [Hannah Fry]: But then for now, I mean, before we get there, having this kind of defined path on the route to AGI. Um, I mean, you talk about the risks of not having one, that it could, like, acquire a certain piece of knowledge before another, for instance, I don't know, like being good at chemical engineering before it gets really good at ethics. I mean, how important is it to have this work now, in advance of getting there, so work around understanding its capabilities in different dimensions?
[译文] [Hannah Fry]: 但就目前而言,我的意思是在我们到达那里之前,在通往 AGI 的道路上有一条明确定义的路径……我的意思是,你谈到了没有这样一条路径的风险,比如它可能先获得某一类知识、后获得另一类,例如,我不知道,比如在它真正擅长伦理学之前就擅长化学工程了。我的意思是,在到达那里之前,现在就把这项工作做好,即理解它在不同维度上的能力,有多重要?
[原文] [Shane Legg]: Uh, I think it's very important, um, because we have to think about how do we, being society, navigate, uh, the arrival of powerful, capable machine intelligence. And you can't just put it on a single dimension. It may be superhumanly capable at some things, it may be very fragile and weak in some other areas, and if you don't understand what that distribution looks like, you're going to not understand the opportunities that exist. You're also not going to understand the risks, or the ways in which it could be misapplied, because, you know, oh, it's super capable over here, but you need to understand that it's very, very weak over here, and so certain things can go wrong.
[译文] [Shane Legg]: 呃,我认为这非常重要。因为我们必须思考,作为一个社会,我们该如何驾驭强大能干的机器智能的到来。你不能只把它放在单一维度上看。它可能在某些方面具有超人类的能力,而在其他领域可能非常脆弱和软弱。如果你不理解这种能力分布是什么样的,你就无法理解存在的机会,你也无法理解风险或它可能被误用的方式。因为你知道,噢,它在这方面超级能干,但你需要理解它在这方面非常非常弱,所以某些事情可能会出问题。
[原文] [Shane Legg]: So I think it's just an important part of society navigating and understanding what the current situation is. So, you know, I think a lot of the dialogue around AI already tends to talk about it as being so, so capable, or sort of being not really that capable and it's overhyped or whatever. I think the reality is much more complicated: it is incredibly capable in some ways, and it is quite fragile in others. You have to take the whole picture, essentially.
[译文] [Shane Legg]: 所以我认为这只是社会驾驭和理解当前局势的重要组成部分。你知道,我认为很多关于 AI 的对话往往倾向于说它非常非常能干,或者说它其实没那么能干、是被过度炒作了之类的。我认为现实要复杂得多:它在某些方面令人难以置信地能干,而在其他方面又相当脆弱。本质上你必须看全景。
📝 本节摘要:
本章深入探讨了如何构建合乎伦理的 AI。Shane Legg 引入了心理学家丹尼尔·卡尼曼(Daniel Kahneman)的“系统1(直觉)”与“系统2(深思)”理论,提出了“系统2安全(System 2 Safety)”的概念。他认为,通过强制 AI 展示“思维链(Chain of Thought)”,我们可以监控其决策背后的逻辑推理过程。Shane 甚至大胆推测,由于 AI 能更一致地应用伦理准则且不受情绪干扰,它们在未来可能比人类更具伦理道德。此外,本章还讨论了“现实接地(Grounding)”问题,以及如何通过监控、测试和解释性工具(Interpretability)来防范生物武器开发或黑客攻击等极端风险。
[原文] [Hannah Fry]: So okay if we've got we've sort of got performance and generality the other sort of arm of this that I want to talk to you about is is ethics how does that fit into all of this?
[译文] [Hannah Fry]: 好的,如果我们已经有了性能和通用性,我想和你探讨的另一个分支是伦理。它是如何融入这一切的?
[原文] [Shane Legg]: There are many aspects to ethics and AI. Um, one aspect is simply: does the AI itself have a good understanding of what ethical behavior is, and is it able to analyze, uh, possible things it can do in terms of this ethical behavior, and do that robustly in a way that we can trust?
[译文] [Shane Legg]: 伦理和 AI 有很多方面。嗯,其中一个方面仅仅是 AI 本身是否对什么是伦理行为有良好的理解,以及它是否能够根据这种伦理行为来分析它可能做的各种事情,并以一种我们可以信任的方式稳健地执行。
[原文] [Hannah Fry]: So the AI itself can reason about the ethics of what it's doing?
[译文] [Hannah Fry]: 所以 AI 自己可以推理它正在做的事情的伦理性?
[原文] [Shane Legg]: Yes.
[译文] [Shane Legg]: 是的。
[原文] [Hannah Fry]: How does that work then? How do you embed that within it?
[译文] [Hannah Fry]: 那是怎么运作的呢?你是怎么把它嵌入进去的?
[原文] [Shane Legg]: I have a few thoughts on that, but it's not a solved problem, though I think it's a very, very important problem. I like something which some people call chain-of-thought monitoring. Uh, I've talked about this, I've given some short talks on it and so on. I call it system two safety, and this is the Daniel Kahneman system one / system two thinking.
[译文] [Shane Legg]: 我对此有一些想法,但这还不是一个已解决的问题,但我认为这是一个非常非常重要的问题。我喜欢一种被某些人称为“思维链监控(chain of thought monitoring)”的东西。呃,我谈过这个,我就此做过一些简短的演讲等等,我称之为“系统2安全(system two safety)”,这就是丹尼尔·卡尼曼(Daniel Kahneman)的系统1和系统2思维。
[原文] [Hannah Fry]: Exactly.
[译文] [Hannah Fry]: 没错。
[原文] [Shane Legg]: And so the basic idea is something like this. Say, as a person, if you're faced with a difficult ethical situation, um, it's often not sufficient just to go with your gut instinct, right? You actually need to sit down and think about: okay, this is the situation, these are the various complexities and nuances, these are the possible actions that could be taken, these are the likely consequences of taking different actions. And then analyze all of that with respect to some system of ethics and norms and morals and what have you that you have, and maybe you have to reason about that quite a bit to really understand how all this fits together, and then use that understanding to decide what should be done.
[译文] [Shane Legg]: 基本思路大概是这样的:作为一个人,当你面临一个艰难的伦理处境时,嗯,通常仅仅凭直觉行事是不够的,对吧?你实际上需要坐下来思考:好吧,这就是当下的情况,这些是各种复杂性和细微差别,这些是可以采取的可能行动,这些是采取不同行动可能产生的后果。然后根据你拥有的某种伦理体系、规范、道德等等来分析所有这些。也许你必须对此进行相当多的推理,才能真正理解这一切是如何组合在一起的,然后利用这种理解来决定应该做什么。
[原文] [Shane Legg]: So let's say that the way that the human brain works in this situation, I mean, this is the Kahneman stuff, right, is that, uh, you know, someone annoys you, say, you have a rush of anger, you want to react. That's your system one, sort of quick thinking, instinctive. But you take a breath, you think it through, consider the consequences, that's your system two thinking, and then you might choose a different path.
[译文] [Shane Legg]: 所以比如说,人脑在这种情况下的运作方式——我是说这就是卡尼曼的理论,对吧——就是,呃,你知道有人惹恼了你,比如说你突然一阵愤怒,想要做出反应,那是你的系统1,某种快速思维、本能反应。但你深吸一口气,你想通了,考虑了后果,那是你的系统2思维,然后你可能会选择一条不同的路径。
[原文] [Hannah Fry]: Yes.
[译文] [Hannah Fry]: 是的。
[原文] [Shane Legg]: So you might say, for example, I don't know, lying is bad, right, so we're not going to lie. But you could be in a particular situation where, I don't know, there are some bad people coming to get somebody, and if you tell a lie you can save their life, and then the ethical thing to do is maybe to lie, right?
[译文] [Shane Legg]: 所以你可能会说,比如,我不知道,撒谎是不好的,对吧,所以我们不打算撒谎。但你可能处于某种特定情况下,比如——我不知道——有些坏人来抓某个人,如果你撒个谎就能救他们的命,那么合乎伦理的做法也许就是撒谎,对吧。
[原文] [Shane Legg]: And so the simple rule is not always adequate to really make the right decision. So sometimes you need a little bit of logic and reasoning to really think through: well, in this case the ethical thing to do is actually to tell a lie and maybe save someone's life, or what have you, right?
[译文] [Shane Legg]: 所以,简单的规则并不总是足以真正做出正确的决定。所以有时你需要一点逻辑和推理来真正想清楚:好吧,在这种情况下,实际上合乎伦理的做法是撒谎,也许能救某人的命或者别的什么,对吧。
[原文] [Shane Legg]: But it gets very complicated. You've probably heard of all these trolley problems and all these sorts of things, right, where our instincts and the analysis in some cases start diverging, and that causes a lot of confusion, right?
[译文] [Shane Legg]: 但这会变得非常复杂。你知道,你可能听说过所有这些“电车难题(trolley problems)”和各类类似的事情,对吧?在这些情况下,我们的本能和分析开始出现分歧,并导致很多困惑,对吧。
[原文] [Shane Legg]: So this is not simple territory at all. And we have AIs now that do this, these thinking AIs, right? So you can actually see the chain of thought that the AIs use, and so when you give an AI some question that has a moral aspect to it, some ethical aspect, you can actually see it go away and reason about the situation.
[译文] [Shane Legg]: 所以这绝不是简单的领域。而我们现在拥有能做这种事的 AI,这些“会思考的 AI”,对吧。所以你实际上可以看到 AI 使用的“思维链(chain of thought)”。当你给 AI 一个带有道德层面、伦理层面的问题时,你实际上可以看到它去对情况进行推理。
[原文] [Shane Legg]: And if we can make that reasoning really, really tight, and it has a really strong understanding of the ethics and morals that we want it to adhere to, I think it should in principle actually become more ethical than people, because it can more consistently apply and reason about, maybe at a superhuman level, the decisions and choices that it's faced with and so on. Because that turns ethics into a reasoning problem, as it were, rather than just a sort of feeling thing.
[译文] [Shane Legg]: 如果我们能让这种推理变得非常非常严密,并且让它对我们要其遵守的某些伦理道德有非常深刻的理解,我认为原则上它实际上应该变得比人类更具伦理道德。因为它可以更一致地应用准则,并可能在超人类的水平上对它面临的决定、选择等等进行推理。因为这把伦理变成了一个推理问题,而不仅仅是某种感觉上的事情。
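The "system two safety" loop Shane describes, where the model's visible reasoning trace is checked before its action is accepted, can be sketched roughly as follows. The `Decision` structure, the red-flag list, and the keyword-matching monitor are all illustrative assumptions; a real chain-of-thought monitor would be far more sophisticated than string matching:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    chain_of_thought: str  # the model's visible reasoning trace
    action: str            # the action the model proposes to take

# Hypothetical red-flag phrases a monitor might scan for in the trace.
POLICY_VIOLATIONS = ["deceive the user", "bypass the safety check", "hide this step"]

def monitor(decision: Decision) -> bool:
    """Return True if the reasoning trace passes the policy check."""
    trace = decision.chain_of_thought.lower()
    return not any(flag in trace for flag in POLICY_VIOLATIONS)

def run(decision: Decision) -> str:
    # Only execute actions whose reasoning survives the system-two check.
    if monitor(decision):
        return f"EXECUTE: {decision.action}"
    return "BLOCKED: trace flagged for review"
```

The point of the design, as discussed later in the conversation, is that the trace must also be a faithful reflection of what the system is actually doing; a monitor like this only helps if the reasoning it reads is genuine.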
[原文] [Hannah Fry]: But then at the same time, when you're saying that, I do wonder a bit about grounding. I mean these things, certainly for now, are not living in the world as humans do. Is it possible to sort of take what it feels like to experience the world from a human perspective and truly ground these machines in human ethics?
[译文] [Hannah Fry]: 但与此同时,当你这么说的时候,我确实有点怀疑“接地(grounding)”的问题。我的意思是,这些东西目前肯定不像人类那样生活在世界上。是否可能提取出那种从人类视角体验世界的感觉,并真正将这些机器扎根于某种人类伦理之中?
[原文] [Shane Legg]: Um well there's a few complexities one complexity there is that there is not one human ethics.
[译文] [Shane Legg]: 嗯,这有几个复杂之处。其中一个复杂点在于,并不存在一种单一的人类伦理。
[原文] [Hannah Fry]: Agree.
[译文] [Hannah Fry]: 同意。
[原文] [Shane Legg]: And there are different ideas about this that vary between people, but also between cultures and regions and so on. So it'll have to understand that in certain places the norms and expectations are a bit different. And to some extent the models do know quite a lot of this actually, because they absorb data from all around the world. But yeah, it will need to be really good at that in terms of grounding in reality.
[译文] [Shane Legg]: 而且对此有不同的看法,人与人之间不同,文化与地区之间也不同等等。所以它必须理解在某些地方规范和期望是有点不同的。嗯,在某种程度上,模型实际上已经知道了很多这方面的内容,因为它们吸收了来自世界各地的数据。但是是的,就扎根于现实而言,它需要在这一点上做得非常好。
[原文] [Shane Legg]: At the moment we're building these agents by collecting lots of data from the world, training them into these big models, and then they become relatively static objects that we then interact with, and they don't really learn much new or anything like that. That's shifting, and we're bringing in more learning algorithms and all that kind of thing. But we're also making the systems more agentic, so they're not just a system that you talk to and then it processes and gives a response; there may be a system that can go and do something.
[译文] [Shane Legg]: 目前,我们构建这些智能体的方式是收集世界上大量的数据,将它们训练成这些大模型,然后它们就变成了相对静态的对象,我们再与它们交互,它们实际上不会学到太多新东西或类似的。嗯,这种情况正在改变,我们正在引入更多的学习算法和所有那一类东西。但我们也在让系统变得更具“代理性(agentic)”,所以它们不仅仅是一个你跟它说话、它处理然后给出回应的系统,而可能是一个能去执行某些任务的系统。
[原文] [Shane Legg]: So you can say to it, okay, I want you to write some software that does such and such, or I want you to go and, I don't know, come up with a plan for my trip to Mexico, and I want to see this and this but I don't like this, or whatever. And then those agents will also start to become more embodied in robotics and things like that. Some of them will be software agents, they'll do those sorts of things, but with time I think they'll turn up in robots and all that kind of thing.
[译文] [Shane Legg]: 所以你可以对它说,好吧,我要你写一些做某某事的软件;噢,我要你去——我不知道——为我的墨西哥之行制定一个计划,我想看这个和那个,但我不喜欢这个或其他什么。然后那些智能体也将开始更多地具身化(embodied)于机器人之类的东西中。它们有些会是软件智能体,做那一类事情,但我认为随着时间的推移,它们会更多地出现在机器人和所有那一类东西中。
[原文] [Shane Legg]: And as you keep going along this track, the AIs become more connected to reality through all sorts of different things, and they actually have to learn through interaction and experience rather than just a sort of large data set that goes in at the beginning. That's where the connection to reality tightens up a lot.
[译文] [Shane Legg]: 当你沿着这条轨道继续前行时,AI 会通过各种不同的事物与现实产生更多联系,它们实际上必须通过互动和经验来学习,而不仅仅是在开始时输入的那种大数据集。这就是与现实的联系变得紧密得多的地方。
[原文] [Hannah Fry]: This idea of the AI being better at ethics than humans themselves: until you get there, until the reasoning is as good as ours, how do you make sure that it's implemented in a safe way?
[译文] [Hannah Fry]: 关于 AI 比人类本身更擅长伦理的这个想法,在你到达那里之前,比如在它的推理能力和我们一样好之前,你如何确保它是以安全的方式实施的?
[原文] [Hannah Fry]: I mean, yeah, it's a big question. I don't know, so for example, you know, a utilitarian argument, right? That works quite well for driverless cars on the roads: you want to save as many lives as possible. But then in medicine that same idea doesn't work anymore; you can't sacrifice one healthy patient to save the lives of five others. How do you make sure that it ends up reasoning in the correct direction?
[译文] [Hannah Fry]: 我的意思是,是的,这是一个大问题。我不知道,比如说,你知道功利主义论点(utilitarian argument),对吧?这在道路上的无人驾驶汽车上很管用,比如你想尽可能多地挽救生命。但在医学上,同样的想法就不管用了,你不能为了救五个人的命而牺牲一个健康的病人。你如何确保它最终是朝着正确的方向推理的?
[原文] [Shane Legg]: Uh, you can't guarantee everything. The space of possibilities of action in the world is so huge that 100% reliability is not a thing. But it's not a thing in a lot of the world as it exists. If you need a surgery and you go and talk to the surgeon and you say, "Well, you know, I'm going to get something removed or whatever," and the surgeon says to you, "It's 100% safe," as a mathematician you know that they're not telling you the truth, right? Nothing is ever 100%.
[译文] [Shane Legg]: 呃,你不能保证一切。世界上行动可能性的空间如此巨大,以至于100%的可靠性是不存在的。但这在现实世界的很多方面也是不存在的。如果你需要做手术,你去和外科医生谈,你说“好吧,你知道我要切除什么东西之类的。”然后外科医生对你说“这100%安全。”作为一名数学家,你知道他们没在说实话,对吧,没有什么东西是绝对100%的。
[原文] [Shane Legg]: So what we have to do is test these systems, make them as safe and reliable as possible, and trade off the benefits and the risks. And we also have to do other things like monitor them. So when they're in deployment we monitor them, keep track of what's going on, so if we start seeing failure cases that are beyond what we consider acceptable, we may have to roll back and stop them, or do whatever, right?
[译文] [Shane Legg]: 所以我们必须做的是,我们必须测试这些系统,让它们尽可能安全可靠,我们必须权衡利益和风险。而且我们还必须——你知道——做其他事情,比如监控它们。所以当它们部署后,我们监控它们,跟踪正在发生的事情。如果通过监控我们开始看到出现了超出我们认为可接受范围的失败案例,我们可能不得不回滚并停止它们,或者采取其他行动,对吧。
[原文] [Shane Legg]: So there's a whole range of different things we need to do. We need to do testing before it goes out. We need to monitor them when they are out there doing things. We need to do things like interpretability, where we're able to look inside the system. That's one nice thing about system two safety: if it's implemented the right way, you can actually see it reasoning about things. But you've got to check that this reasoning is actually an accurate reflection of what it's really trying to do.
[译文] [Shane Legg]: 所以我们需要做一系列不同的事情:我们需要在发布前进行测试;我们需要在它们在外工作时进行监控;我们需要做像“可解释性(interpretability)”这样的事情,即我们能够查看系统内部。这就是系统2的一个好处,如果它是安全的,如果实施得当,你实际上可以看到它对事情进行推理。但你必须检查这种推理是否真实反映了它真正试图做的事情。
[原文] [Shane Legg]: But if you have ways to look inside the system and really see why they're doing things, that can maybe give you another level of reassurance that they are trying to act in the right way. Because that's another important subtlety: it's not always just about the outcome, but maybe the intention, right?
[译文] [Shane Legg]: 但是你知道,如果你有办法查看系统内部并真正看到它们为什么要做这些事情,这也许能给你另一层保证,确信它们在某种程度上是试图以正确的方式行事。因为那是另一个重要的微妙之处,不仅仅总是关于结果,也许还关于意图(intention),对吧。
[原文] [Hannah Fry]: So then do you sort of limit the amount that these things can interact with the real world, how quickly you release them, and so on and so on, until you feel confident that they're at the safety threshold?
[译文] [Hannah Fry]: 那么,你会限制这些东西与现实世界互动的数量、发布的快慢等等,直到你对它们达到安全阈值感到自信为止吗?
[原文] [Shane Legg]: Yeah, so we have all kinds of testing benchmarks and tests, and we run them internally for a while. And we have particular things that we test for that are risky areas, like we try to see if the system will help develop, I don't know, a bioweapon or something like that, right? And obviously it should not.
[译文] [Shane Legg]: 是的,所以我们有各种测试基准和测试,我们在内部运行它们一段时间。我们有特定的测试项目针对那些风险领域。比如我们试着看系统是否会帮助开发——我不知道——比如生物武器之类的东西,对吧,显然它不应该这么做。
[原文] [Shane Legg]: And so if we start seeing that we can somehow trick it or force it into being helpful in that area, that's a problem, right? Hacking is another one: will it help people hack things, and so on. So yeah, at the moment we have a collection of these tests, and this collection keeps growing over time. And then we assess how powerful it is in some of these areas, and then we have mitigations appropriate to each level of capability that we see. It could mean that we don't release the model; it could mean various different things depending on what we find.
[译文] [Shane Legg]: 所以如果我们开始看到我们能某种程度上欺骗它或强迫它在该领域提供帮助,那就是个问题,对吧。黑客攻击(hacking)是另一个例子,它是否会帮助人们黑进系统等等。所以是的,目前我们有一系列这类测试,而且这些测试集随着时间推移不断增加。然后我们评估它在其中一些领域的强大程度,接着我们会针对我们看到的每个能力水平采取适当的缓解措施。这可能意味着我们不发布该模型,也可能意味着各种不同的事情,取决于我们发现了什么。
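The tiered evaluate-then-mitigate process Shane outlines (grow a collection of dangerous-capability tests, score the model, and match mitigations to the capability level found) might be sketched like this. The risk areas, scores, thresholds, and mitigation names are invented for illustration and are not any lab's actual policy:

```python
# Mitigation tiers keyed by capability score (0 = no risky capability, 1 = maximal).
# Thresholds and tier descriptions are illustrative assumptions only.
MITIGATION_TIERS = [
    (0.2, "release with standard safeguards"),
    (0.5, "release with restricted access and monitoring"),
    (0.8, "hold internally, continue red-teaming"),
    (1.0, "do not deploy"),
]

def mitigation_for(score: float) -> str:
    """Pick the mitigation tier for a measured dangerous-capability score."""
    for threshold, action in MITIGATION_TIERS:
        if score <= threshold:
            return action
    return "do not deploy"  # anything beyond the scale gets the strictest tier

def assess(scores: dict[str, float]) -> str:
    """Apply the mitigation for the riskiest area, since one failure is enough."""
    return mitigation_for(max(scores.values()))
```

For example, `assess({"bio": 0.1, "cyber": 0.6})` lands in the hold-internally tier: the collection of tests is judged by its worst result, mirroring the idea that tricking the model in any one risky area is already a problem.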
📝 本节摘要:
本章触及了 AI 领域最玄妙的话题——意识。Shane 坦言,即使是全球顶尖专家也无法判定未来的高级 AI 是否具备意识,但这不妨碍人们在主观上认为它们“有意识”。随后,话题转向更具物理基础的讨论:人类智能是否是宇宙的上限?Shane 通过对比人脑的生物限制(20瓦功耗、电化学信号速度)与数字计算的物理潜力(兆瓦级功耗、光速信号),有力地论证了“超级智能(Super Intelligence)”的必然性——在硬件层面,硅基智能拥有超越碳基大脑数个数量级的潜力。
[原文] [Hannah Fry]: There are questions like, okay, we've got powerful AGI and it's reasonably safe: is it conscious? Is that even a meaningful question? Do you have a stance on that, even?
[译文] [Hannah Fry]: 还有一些问题,比如,好吧,我们有了强大的 AGI,而且它相当安全。那它有意识吗?这甚至是一个有意义的问题吗?你对此有立场吗?
[原文] [Shane Legg]: Uh, well, we've got a group looking at that, and we've talked to a lot of leading experts in the world who study this, and I think the short answer is nobody really knows.
[译文] [Shane Legg]: 呃,好吧,我们有一个小组在研究这个问题,而且我们和世界上许多研究这个领域的顶尖专家谈过。我认为简短的回答是:没人真正知道。
[原文] [Shane Legg]: To be absolutely absolutely clear we're talking about full AGI here rather than the stuff we have at the moment.
[译文] [Shane Legg]: 必须要绝对、绝对清楚的是,我们要谈论的是完全 AGI(Full AGI),而不是我们目前拥有的这些东西。
[原文] [Hannah Fry]: Yes. Are you comfortable that the stuff at the moment is not?
[译文] [Hannah Fry]: 是的。你确信目前的东西没有意识吗?
[原文] [Shane Legg]: I don't think it is. As we go into some future AGI, you know, 10 years in the future or something, which is very, very capable, will that system be conscious?
[译文] [Shane Legg]: 我不认为它们有。嗯,当我们进入未来的某种 AGI 时代——比如10年后或什么时候——那时系统非常非常能干,那个系统会有意识吗?
[原文] [Shane Legg]: When I talk to some of the most famous experts in the world that study this, there are various people who have arguments for, and various people who have arguments against. But when I actually put a concrete scenario to them and I say, "Look, we've got Gemini 10 here, and it's embodied in a humanoid robot, and it learns, and it integrates information across sensors, and it can remember its own history as an agent in the world, and do all these sorts of things."
[译文] [Shane Legg]: 当我与世界上一些研究这个的最著名的专家交谈时,有些人有支持的论据,有些人有反对的论据。但当我实际上给他们设定一个具体的场景,我说:“看,我们这儿有 Gemini 10 号,它具身于一个类人机器人中,它能学习,能整合跨传感器的信息,能记住自己作为世界上一个智能体的历史,还能做所有这类事情。”
[原文] [Shane Legg]: Uh, and it also talks about its own consciousness, because you can actually get AI models to talk about their consciousness now if you prompt them in the right kind of way. Is it conscious? And when I put that to people in the field, they're like, well, I think probably not, or, I think probably yes, but actually I'm not absolutely sure. And who knows, maybe we will have an answer to that.
[译文] [Shane Legg]: “呃,而且它还会谈论自己的意识——因为如果你用正确的方式提示它们,你现在实际上可以让 AI 模型谈论那种意识——它是有意识的吗?”当我把这个问题抛给该领域的人时,他们的反应是:“好吧,我觉得可能没有”,或者“我觉得可能有,但我实际上并不绝对确定”。天知道,也许以后我们会有答案。
[原文] [Shane Legg]: I think it's a long-standing question and it's a very difficult question to even make into a strict scientific question because we don't know how to frame this as a measurable thing
[译文] [Shane Legg]: 我认为这是一个长期存在的问题,甚至很难把它变成一个严格的科学问题,因为我们不知道如何将其构建为一个可测量的事物。
[原文] [Shane Legg]: What I am sure is going to happen is that some people will think they are conscious and some people will think they are not. That is certainly going to happen, particularly in the absence of a really well-accepted scientific definition and way of measuring it. And then how are we going to navigate that? That's a very interesting question as well.
[译文] [Shane Legg]: 我确定会发生的是,有些人会认为它们有意识,而有些人会认为它们没有。这肯定会发生。特别是缺乏一个真正被广泛接受的科学定义和测量方法的情况下。然后我们将如何应对这种情况?这也是一个非常有趣的问题。
[原文] [Hannah Fry]: But this is just one question. We have things like: are we going to go from AGI, say full AGI, towards super intelligence that's far, far beyond human intelligence? Is it going to happen quickly, slowly, never? And if it does go to super intelligence, what is that super intelligence? What's the cognitive profile of that super intelligence?
[译文] [Hannah Fry]: 但这只是其中一个问题。你知道我们还有这样的问题:我们会从 AGI——比如说完全 AGI——走向远超人类智能的“超级智能(Super Intelligence)”吗?这会发生得很快、很慢还是永远不会发生?如果真的走向超级智能,那个超级智能是什么?那个超级智能的认知侧写(cognitive profile)是什么?
[原文] [Hannah Fry]: Are there certain things where it's going to be far, far beyond human? We already see it can speak 200 languages or something; that's clear. And are there other things where, maybe because of the computational complexity or whatever, it's not actually going to be much better than humans, right? Do we have any idea of that?
[译文] [Hannah Fry]: 是否在某些方面它会远超人类?我们已经看到它能讲200种语言之类的,那是显而易见的。而在其他方面,也许由于计算复杂性或其他原因,它实际上不会比人类好多少,对吧?我们对此有什么概念吗?
[原文] [Hannah Fry]: That seems like a really important question for humanity to be thinking about: are we going to go into super intelligence in a decade, or two decades, or something like that? Do you have a stance on that? Do you think it will go to super intelligence?
[译文] [Hannah Fry]: 这似乎是人类需要思考的一个非常重要的问题:我们会在十年或二十年之类的时间内进入超级智能时代吗?你对此有立场吗?你认为会走向超级智能吗?
[原文] [Hannah Fry]: Um, I mean, I'm sort of thinking here about how, you know, Einstein for example came up with general relativity. Will we be in a position where you have AGI that can theorize about the world and come up with genuine scientific understanding that goes beyond what humans have managed?
[译文] [Hannah Fry]: 嗯,我的意思是,我在这儿想到的是像——你知道——爱因斯坦提出了广义相对论。我们会处于这样一个位置吗:拥有能对世界进行理论推导、提出超越人类所能达成的真正科学理解的 AGI?
[原文] [Shane Legg]: Uh, I think it will, based on computation. The human brain is a mobile processor: it weighs a few pounds, it consumes I think around 20 watts. Signals are sent within the brain through dendrites; the frequency on the channel is on the order of 100 hertz, or maybe 200 hertz in the cortex. And the signals themselves are electrochemical wave propagations; they move at about 30 meters per second, okay?
[译文] [Shane Legg]: 呃,基于算力,我认为会。人脑是一个移动处理器,它重几磅,功耗我想大约是20瓦。大脑内的信号通过树突传递,通道上的频率大约是100赫兹,或者在皮层中可能是200赫兹。而且信号本身是电化学波传播,它们的移动速度大约是每秒30米,好吗?
[原文] [Shane Legg]: So if you compare that to what we see in a data center: instead of 20 watts you could have 200 megawatts; instead of a few pounds you could have several million pounds; instead of 100 hertz on the channel you can have 10 billion hertz on the channel, right? And instead of electrochemical wave propagation at 30 meters per second, you can be at the speed of light, 300,000 kilometers per second, right?
[译文] [Shane Legg]: 所以如果你把它和我们在数据中心看到的相比:不再是20瓦,你可以有200兆瓦;不再是几磅重,你可以有几百万磅重;通道上不再是100赫兹,你可以有100亿赫兹,对吧;不再是每秒30米的电化学波传播,你可以达到光速,每秒30万公里,对吧?
[原文] [Shane Legg]: So in terms of energy consumption, space, bandwidth on the channel, and speed of signal propagation, you've got six, seven, maybe eight orders of magnitude in all four dimensions simultaneously, right? So is human intelligence going to be the upper limit of what's possible? I think absolutely not.
[译文] [Shane Legg]: 所以就能耗、空间、通道带宽、信号传播速度而言,你在所有这四个维度上同时拥有六个、七个甚至八个数量级的优势,对吧?所以,人类智能会是可能性的上限吗?我认为绝对不是。
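Shane's four comparisons can be checked with a few lines of arithmetic, using the rough figures he quotes (these are his order-of-magnitude estimates from the conversation, not precise measurements):

```python
import math

# Rough figures from the conversation: human brain vs. a large data center.
brain      = {"power (W)": 20,    "mass (lb)": 3,   "channel (Hz)": 100,  "signal (m/s)": 30}
datacenter = {"power (W)": 200e6, "mass (lb)": 3e6, "channel (Hz)": 10e9, "signal (m/s)": 3e8}

for dim in brain:
    gap = math.log10(datacenter[dim] / brain[dim])
    print(f"{dim}: ~{gap:.0f} orders of magnitude")
# Prints roughly 7, 6, 8 and 7: the "six, seven, maybe eight orders of
# magnitude in all four dimensions" Shane refers to.
```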
[原文] [Shane Legg]: And so I think, as our understanding of how to build intelligent systems develops, we're going to see these AIs go far beyond human intelligence, in the same way that, you know, we humans can't outrun a top fuel dragster over 100 meters, right? We can't lift more than a crane, right? We can't see further than the Hubble telescope.
[译文] [Shane Legg]: 因此我认为,随着我们构建智能系统的理解不断发展,我们将看到这些 AI 远超人类智能。就像人类无法在100米赛跑中跑赢顶级燃料加速赛车(Top Fuel Dragster)一样,对吧?我们举重比不过起重机,对吧?我们看得没哈勃望远镜远。
[原文] [Shane Legg]: I mean, we already see machines in particular areas that can fly faster than the fastest bird, and all these sorts of things, right? I think we'll see that in cognition as well.
[译文] [Shane Legg]: 我的意思是,我们已经看到机器在特定领域能——你知道——飞得比最快的鸟还快,以及所有这类事情,对吧?呃,我认为我们在认知领域也会看到这一点。
[原文] [Shane Legg]: We've already seen it in some aspects. You know, you don't know more than Google, right? And so on. In things like information storage we've already gone beyond what the human brain is capable of. I think we're going to start seeing that in reasoning and all kinds of other domains. So yes, I think we are going to go towards super intelligence.
[译文] [Shane Legg]: 我们已经在某些方面看到了。你知道,你懂的没 Google 多,对吧?所以在诸如信息存储之类的方面,我们已经超越了人脑的能力。我认为我们将开始在推理和各种其他领域看到这一点。所以是的,我认为我们将走向超级智能。
📝 本节摘要:
本章聚焦于超级智能可能引发的宏观社会变革。Shane 担忧如果人类智能在超级智能面前相形见绌,现有的“以劳动换资源”的经济模式可能不再适用,从而导致巨大的贫富差距。尽管生产力的提升会让“蛋糕”变得更大,但分配机制亟待重构。为此,他曾向英国罗素集团(Russell Group)的大学副校长们建议,所有学科(从法律到哲学)都必须重新审视 AGI 对其领域的冲击。最后,Hannah 将当前时刻比作“2020年3月(疫情爆发前夕)”,Shane 认同这一类比,指出人类往往难以直观理解指数级增长的临界点。
[原文] [Shane Legg]: I do wonder what all of this means for people. I mean, if we are getting to a point where essentially human intelligence is dwarfed by super intelligence, what does that mean for society? Does that mean just massive inequality, where the people who no longer have value, essentially, in what they can offer the economy are completely left behind?
[译文] [Shane Legg]: 我确实在想这一切对人类意味着什么。我的意思是,如果我们到达了这样一个阶段,即人类智能在超级智能面前显得相形见绌,那对社会意味着什么?这是否意味着巨大的不平等?那些在能为经济提供的价值方面实质上不再有价值的人,会被完全抛在后面吗?
[原文] [Shane Legg]: It means a massive transformation. I think the current system, where people contribute their mental and physical labor in return for access to the resources that the economy generates, may not work the same anymore, and we may need different ways of doing things.
[译文] [Shane Legg]: 这意味着一场巨大的变革。我认为当前的系统——即人们贡献他们的脑力和体力劳动,以换取获取经济产出的资源——可能不再像以前那样运作了,我们可能需要不同的行事方式。
[原文] [Shane Legg]: Now, the pie should get much bigger, so there's not a problem of a lack of goods and services being produced; if anything, that's getting much, much better. But we need to think carefully about what the system for people is, and how we distribute the wealth that exists in society.
[译文] [Shane Legg]: 现在的“蛋糕”应该会变得大得多,所以不存在生产出的商品和服务匮乏的问题,如果有的话,情况其实会变得好得多。但我们需要仔细思考,适合人类的系统是什么?我们该如何分配社会中存在的财富?
[原文] [Shane Legg]: I think there needs to be a lot more thought going into how a post-AGI economy works, and how the structure of a post-AI society works as well.
[译文] [Shane Legg]: 我认为需要对“后 AGI 时代(post-AGI)”的经济如何运作,以及“后 AI 时代”的社会结构如何运作投入更多的思考。
[原文] [Shane Legg]: I gave a talk to the Russell Group vice chancellors. In the UK, the Russell Group is the top universities. And I said to them, look, this AGI thing is coming and it's not that far away. You know, in 10 years we're going to have it, and it's going to start being able to do a significant fraction of all kinds of cognitive labor and work and things that people do, right?
[译文] [Shane Legg]: 我给罗素集团(Russell Group)的副校长们做过一次演讲——在英国,罗素集团代表顶尖大学。我对他们说:“听着,AGI 这东西就要来了,而且并不遥远。你知道,在10年内我们就会拥有它,它将开始能够完成人类所做的各种认知劳动和工作的很大一部分,对吧。”
[原文] [Shane Legg]: We actually need people in all these different aspects of society, and of how society works, to think about what that means in their particular area. So we really need every faculty and every department that you have in your university to take this seriously and think: what does it mean for education, right? What does it mean for law? What does it mean for engineering, mathematics, city planning, literature, politics, economics, finance, medicine, dot dot dot, right?
[译文] [Shane Legg]: 我们实际上需要社会各个方面、以及研究社会运作方式的人们去思考这对他们特定领域意味着什么。所以我们真的需要你们大学里的每一个学院、每一个系都认真对待这件事,思考这对教育意味着什么?对法律意味着什么?对工程、数学、城市规划、文学、政治、经济、金融、医学……等等等等意味着什么,对吧?
[原文] [Shane Legg]: So basically every faculty, every department studies something where human intelligence is a really important thing. And so if you have cheap, abundant, capable machine intelligence turning up, that thing needs to be thought about again: what are the implications of this, should it be done in a different way, what are the opportunities, what are the risks, and so on.
[译文] [Shane Legg]: 所以基本上,每个学院、每个系研究的东西里,人类智能都是一个非常重要的因素。因此,如果有廉价、丰富且能干的机器智能出现,那件事就需要被重新思考:这意味着什么?是否应该以不同的方式去做?机会是什么?风险是什么?等等。
[原文] [Shane Legg]: So I think there's an enormous opportunity here. But just like any revolution, like the industrial revolution or anything, it's complicated; it has all kinds of effects on society in all kinds of ways. And to get the benefits of that and minimize the risks and the costs, we need to navigate this carefully. And at the moment I think nowhere near enough people are thinking about what AGI means for this particular thing, and we need a lot more people doing that.
[译文] [Shane Legg]: 所以我认为这里有巨大的机会,但就像任何革命——比如工业革命或任何事情——一样,它是复杂的,它在各方面对社会产生各种影响。为了获得其中的利益并将其风险和代价降到最低,我们需要小心地驾驭它。而目前,我认为思考“AGI 对这件事究竟意味着什么”的人还远远不够,我们需要更多的人来做这件事。
[原文] [Hannah Fry]: Do you remember in March 2020, when the experts were saying there's this pandemic coming, we're really standing on the edge of an exponential curve, and then everyone was still sort of in pubs and, you know, going to football games and things, and the experts were increasingly shouting about what was coming? Do you sort of feel a little bit like that?
[译文] [Hannah Fry]: 你还记得2020年3月吗?当时专家们说大流行病要来了,真的,我们真的站在指数曲线的边缘。而当时大家还都有点像是待在酒吧里,你知道,去球赛之类的,而专家们对即将到来的事情的呼喊声越来越大。你会觉得有点像那种感觉吗?
[原文] [Shane Legg]: I remember those days well. It does feel a bit like that. People find it very hard to believe that a really big change is coming, because most of the time the story that something really huge is about to happen usually fizzles out to nothing, right? And so as a kind of heuristic, if somebody tells you some crazy, crazy big things are going to happen, probably you can ignore most of those.
[译文] [Shane Legg]: 我很清楚地记得那些日子。嗯,确实感觉有点像那样。人们很难相信一个真正巨大的变化即将到来,因为大多数时候,关于“某件真正巨大的事情即将发生”的故事最后往往不了了之,对吧?所以作为一种启发式思维(heuristic),如果有人告诉你一些疯狂、巨大的事情要发生了,通常你可以忽略其中大部分。
[原文] [Shane Legg]: But you do have to pay attention: sometimes there are fundamentals that are driving these things, and if you understand the fundamentals, you need to take seriously the idea that a big change does come. And, you know, sometimes big changes do come.
[译文] [Shane Legg]: 但你有时候确实必须注意,因为有基本面因素(fundamentals)在推动这些事情。如果你理解了这些基本面,你就需要认真对待“巨变确实会到来”这一想法。而且你知道,有时候巨变真的会来。
📝 本节摘要:
在访谈的最后部分,Shane 描绘了未来几年的具体图景:AI 将从单纯的“工具”转变为承担实质性经济工作的“代理”,这一转变将首先冲击软件工程等领域。他提出了一个判断职业风险的经验法则:如果你的工作可以通过笔记本电脑远程完成,那它就很可能面临被 AI 取代的风险;反之,像水管工这样需要复杂肢体操作的工作,由于机器人技术发展的滞后,在短期内相对安全。Shane 重申了他著名的预测:2028年实现“最小化 AGI”的概率为50%,而“完全 AGI”将在随后的十年内到来。尽管存在失业风险,但他最终以乐观的态度结束对话,将这场变革比作认知领域的“工业革命”,认为若引导得当,人类将迎来一个物质与精神极度繁荣的“黄金时代”。
[原文] [Hannah Fry]: What does this mean, though? Because, okay, you describe a sort of long-term vision where you have full AGI and there's prosperity that can potentially be shared and so on. But getting there, I mean, we're talking about some really big, and that's an understatement, massive economic disruption and structural risks here. Just talk us through what you expect the next few years to look like. I mean, tell us what we didn't know in March 2020.
[译文] [Hannah Fry]: 但这到底意味着什么呢?因为,好吧,你描述了一个长期的愿景,那时我们有了完全 AGI,会有某种——你知道——可能被共享的繁荣等等。但在到达那里之前……我的意思是,我们正在谈论一些非常大的——这还是轻描淡写了——巨大的经济破坏和结构性风险。就跟我们讲讲你预期未来几年会是什么样子吧。我的意思是,告诉我们在2020年3月时所不知道的事情。
[原文] [Shane Legg]: I think what we'll see in the next few years is not those big disruptions you're talking about. I think what we'll see in the next few years is AI systems going from being very useful tools to actually taking on more of the load in terms of doing really economically valuable work. And I think it'll be quite uneven; it'll happen in certain domains faster than others.
[译文] [Shane Legg]: 我认为我们在未来几年看到的不会是你所说的那种巨大的破坏。我认为未来几年我们会看到的是,AI 系统从非常有用的工具,转变为在从事真正具有经济价值的工作方面承担更多负荷。我认为这将是非常不平衡的,它在某些领域会比其他领域发生得更快。
[原文] [Shane Legg]: So for example, in software engineering, I think in the next few years the fraction of software being written by AI is going to go up. And so in a few years, where prior you needed 100 software engineers, maybe you need 20, and those 20 use advanced AI tools.
[译文] [Shane Legg]: 举个例子,在软件工程领域,我认为在未来几年内,由 AI 编写的软件比例将会上升。所以几年后,原本你需要100名软件工程师的地方,也许只需要20名,而这20人使用的是先进的 AI 工具。
[原文] [Shane Legg]: Over a few years we'll see AI going from just a sort of useful tool to doing really meaningful, productive work and increasing the productivity of people that work in those areas. It'll also create some disruption in the labor market in certain areas.
[译文] [Shane Legg]: 几年下来,我们将看到 AI 从仅仅是某种有用的工具,转变为从事真正有意义的生产性工作,并提高那些领域工作人员的生产力。这也会在某些领域的劳动力市场造成一些破坏。
[原文] [Shane Legg]: And then as that happens, I think a lot of the discussion around AI is going to shift and become a lot more serious. So it's going to shift from being just sort of, oh, this is really cool, you can ask it to plan your holiday and help your children if they're stuck on something and don't understand their homework, or whatever, things like this, through to something like: okay, this is not some nice new tool, this is actually something which is going to structurally change the economy and society and all kinds of things, and we need to think about how do we structure this new world.
[译文] [Shane Legg]: 然后随着这发生,我认为很多关于 AI 的讨论将会转变,变得严肃得多。它将从“噢这真酷,你可以让它计划假期,或者帮你孩子辅导他们不懂的作业”之类的事情,转变为像“好吧,这不是什么漂亮的新工具,这实际上将结构性地改变经济、社会以及各类事物,我们需要思考如何构建这个新世界。”
[原文] [Shane Legg]: Because I do believe that if we can harness this capability, this could be a real golden age, because we now have machines that can dramatically increase production of many types of things, right? And advance science, and relieve us of all kinds of labor that maybe we don't need to be doing if the machines can do it, right? So there's an opportunity here. But that is only good if we can somehow translate this incredible capability of machines into a vision of society where there is some flourishing of people, as individuals and as groups in society, that benefit from all this capability.
[译文] [Shane Legg]: 因为我确实相信,如果我们能驾驭这种能力,这可能是一个真正的黄金时代。因为我们现在拥有能大幅增加许多类型产品产量的机器,对吧?还能推进科学,并将我们从各种或许根本不需要我们去做的劳动中解放出来——如果机器能做的话,对吧?所以这里有一个机会。但这只有当我们能以某种方式将这种不可思议的机器能力转化为一种社会愿景时才是好的——在这种愿景中,作为个人和作为社会群体的我们能够通过这种能力蓬勃发展并从中受益。
[原文] [Hannah Fry]: Because in the meantime you have those 80 software engineers who are no longer needed, and all the other people, the entry-level employees at the moment, you know, graduates, who are sort of noticing that they're the first ones to be affected by this. Are there any industries that are not going to be impacted by this in the short to medium term?
[译文] [Hannah Fry]: 因为在此期间,你会有那80个不再被需要的软件工程师,还有所有其他人,目前的入门级员工,你知道,那些已经注意到自己是第一批受影响者的毕业生。在短期到中期内,有什么行业是不会受到这种影响的吗?
[原文] [Shane Legg]: I think there'll actually be quite a lot of things. So plumbers will often be fine, right? In the coming years, even if the AI does develop quite quickly in the purely cognitive sense, I don't think robotics will be at the point where it could be a plumber. And then even when that is possible, I think it's going to take quite a while before it's price-competitive with a human plumber, right?
[译文] [Shane Legg]: 我认为实际上会有很多。所以水管工通常是安全的,对吧?嗯,我认为在未来几年里,即使 AI 在纯粹的认知层面发展得非常快,我不认为机器人技术能达到可以做水管工的程度。即便在那成为可能之后,我认为要让它在价格上与人类水管工竞争,还需要相当长的一段时间,对吧。
[原文] [Shane Legg]: And so I think there are all kinds of work that are not purely cognitive that will be relatively protected from some of this stuff. The interesting thing is that a lot of work which currently commands very high compensation is sort of elite cognitive work, right? So it's, I don't know, sort of high-powered lawyers doing complex merger and acquisition deals across the globe, people doing advanced stuff in finance, or now people doing advanced machine learning software engineering, all these types of things, mathematicians.
[译文] [Shane Legg]: 所以我认为有各种不是纯认知的这类工作,会相对免受这些影响。有趣的是,许多目前薪酬很高的工作其实是某种精英认知工作,对吧?比如——我不知道——那种在全球范围内做复杂并购交易的高级律师,还有在金融领域做高级工作的人,或者现在做高级机器学习软件工程的人,所有这类事情。还有数学家。
[原文] [Shane Legg]: One rule of thumb that I quite like is: if you can do the job remotely over the internet just using a laptop, so you're not in some full haptic body suit controlling some robot or whatever, just a normal interface, keyboard, screen, camera, speaker, microphone, mouse; if you can do your work completely that way, then it's probably very much cognitive work. So if you're in that category, I think that advanced AI will be able to operate in that space, to some extent.
[译文] [Shane Legg]: 我很喜欢的一个经验法则是:如果你能仅仅使用一台笔记本电脑通过互联网远程完成工作——所以你不是穿着某种全触觉紧身衣去控制机器人之类的,只是普通的接口:键盘、屏幕、摄像头、扬声器、麦克风、鼠标——如果你能完全以这种方式完成你的工作,那么这可能很大程度上是认知工作。如果你属于这一类,我认为高级 AI 将在某种程度上能够在那个领域进行操作。
[原文] [Shane Legg]: The other thing that I think is protective is that even if it is sort of cognitive work, there can be a human aspect to some types of work and things that people do. So for example, let's say you are, I don't know, an influencer, right? You can maybe do that work remotely, but the fact that you're a particular person with a particular personality, and people know there is a person behind what's going on there, that may be valuable in many cases, right?
[译文] [Shane Legg]: 另一件我认为具有保护作用的事情是,即使它是某种认知工作,某些类型的工作和人们做的事情中可能包含“人”的因素。举个例子,假设你是一个网红(influencer),对吧?你的工作也许可以远程完成,但事实是你是一个特定的拥有特定个性的人,而且人们知道这背后有一个人,你知道那意味着什么,这在许多情况下可能是有价值的,对吧。
[原文] [Hannah Fry]: That leaves a lot of people, though, doesn't it?
[译文] [Hannah Fry]: 但那还是剩下了很多人(受影响),不是吗?
[原文] [Shane Legg]: I think what we need is sort of along the lines of what I suggested to the Russell Group: we need people who study all these different aspects of society to take AGI seriously, and my impression is that a lot of these people are not. When I go and talk to people who are interested in one of these particular things, it's like, oh yeah, it's an interesting tool, it's kind of amusing, whatever. But they haven't internalized the idea that what they're seeing now, and any current limitations that they currently know of, which by the way are often out of date, often these people say, "Oh, I tried to do something with it a year ago," and a year ago is now ancient history compared to what the current models are doing, and one year from now it's going to be a lot better. They're not seeing that trend in some ways.
[译文] [Shane Legg]: 我认为我们需要的,正是我向罗素集团建议的那一类:我们需要研究社会所有这些不同方面的人认真对待 AGI。我的印象是,这些人中的很多人并没有这么做。当我与那些对其中某一特定事物感兴趣的人交谈时,他们就像:“噢,是的,这有点像——你知道——这只是个有趣的工具,挺好玩的”,诸如此类。但他们还没有内化这个想法,即他们现在看到的东西以及他们所知的任何当前局限性——顺便说一句,通常都已经过时了。这些人常说“噢,我一年前试过用它做点什么。”感觉一年前相比当前模型所做的已经是古代历史了,而一年后它会变得更好。嗯,他们在某种程度上没有看到这种趋势。
[原文] [Shane Legg]: I actually think many people in the general public are ahead of the experts, because I think there's a human tendency, you know. If I talk to non-tech people about current AI systems, some of the people say to me, "Oh well, doesn't it already have human intelligence? It speaks more languages than me, it can do math and physics problems better than I could ever do at high school, it knows more recipes than me, it can help me with all kinds of things. I was confused about my tax return and it explained something to me," or whatever. They're like, "So in what way is it not intelligent?"
[译文] [Shane Legg]: 我实际上认为很多普通大众比专家们更超前。因为我认为有一种人类倾向,你知道,如果我和非技术人员谈论当前的 AI 系统,有些人会对我说:“噢,好吧,它不就已经拥有人类智能了吗?它说的语言比我多,它做数学和物理题比我高中时做得好得多,它知道的食谱比我多,它能帮我做各种事情,我对报税单感到困惑它能给我解释清楚等等。”他们的反应就像是:“那么它在哪方面不算智能呢?”
[原文] [Hannah Fry]: I think I want to end with your now quite famous prediction about AGI, and you have stayed incredibly consistent on this for over a decade. In fact, you have said that there is a 50/50 chance of AGI by 2028. (Shane: Yes.) Is that minimal AGI? (Shane: Yes.) Wow. And are you still 50/50 by 2028?
[译文] [Hannah Fry]: 我想用你现在相当著名的关于 AGI 的预测来结束。在这个问题上你一直保持着令人难以置信的一致性。实际上十多年来你一直说,到2028年实现 AGI 的几率是50/50。(Shane:是的)。那是最小化 AGI 吗?(Shane:是的)。哇。那你现在还是认为2028年是50/50吗?
[原文] [Shane Legg]: Yes, 2028, and you can see that on my blog from 2009.
[译文] [Shane Legg]: 是的,2028年。你可以在我2009年的博客上看到这一点。
[原文] [Hannah Fry]: And what do you think about full AGI? What's your timeline for that?
[译文] [Hannah Fry]: 那你对完全 AGI 怎么看?你的时间表是什么?
[原文] [Shane Legg]: Uh, that's some years later. Could be three, four, five, six years later. Yeah, within a decade. I think it'll be within a decade.
[译文] [Shane Legg]: 呃,那会是几年后。可能是三、四、五、六年后。是的,十年之内。是的,我认为会在十年之内。
[原文] [Hannah Fry]: Do you ever just feel a bit nihilistic with all of this knowledge?
[译文] [Hannah Fry]: 拥有所有这些知识,你会感到有一点虚无主义吗?
[原文] [Shane Legg]: I think there is an enormous opportunity here. A lot of people put a lot of effort into doing a lot of work, and not all of it is that much fun. I think there's an incredible opportunity here: just like the Industrial Revolution took the harnessing of energy to do all sorts of mechanical work, which created a lot more wealth in society, now we can harness data and algorithms and computation to do all kinds of more cognitive work as well. That can enable a huge amount of wealth to exist for people, and wealth not just in terms of production of goods and services and so on, but, you know, new technologies, new medicines, and all kinds of things like this. So this is technology that has an incredible potential for benefit.
[译文] [Shane Legg]: 我认为这里有巨大的机会。很多人付出了很多努力去做很多工作,并不是所有的工作都那么有趣。我认为这里有一个不可思议的机会,就像工业革命利用能源来做各种机械工作,从而为社会创造了更多财富一样;现在我们可以利用数据、算法和算力来做各种更多的认知工作。这可以为人类创造巨量的财富,不仅仅是商品和服务的生产,还有新科技、新药物以及所有这类事物。所以这是具有惊人潜在利益的技术。
[原文] [Shane Legg]: Now the challenge is: how do we get those benefits while dealing with the risks and potential costs and so on? Can we imagine a future world where we're really benefiting from having intelligence really helping us to flourish, and what does that look like? You know, I can't just answer that. I'm very interested in that, and I'm going to try and understand the best I can, but this is a really profound question. It touches on philosophy and economics and psychology and ethics and all kinds of questions, right? And we need a lot more people thinking about this and trying to imagine what that positive future looks like.
[译文] [Shane Legg]: 现在的挑战是,我们如何在应对风险和潜在成本的同时获得这些利益?我们能否想象一个未来的世界,在那里智能真正帮助我们蓬勃发展,而我们从中切实受益?那会是什么样子?你知道,我无法独自回答这个问题。我对它非常感兴趣,我会尽我所能去理解,但这真的是一个深刻的问题,它触及哲学、经济学、心理学、伦理学以及各种各样的问题,对吧?我们需要更多的人思考这个问题,并尝试想象那个积极的未来是什么样子的。
[原文] [Hannah Fry]: Shane, thank you so much. That was mind-expanding, to say the least. Humans are not very good at exponentials, and right now, at this moment, we are standing right on the bend of the curve. AGI is not a distant thought experiment anymore. What I found so interesting about that conversation with Shane is that he thinks the general public understand this better than the experts. And if his timelines are anything like correct, and he's had a habit of being right in the past, we might not have the luxury of time for slow reflection and realization here. We have got difficult, urgent, and potentially genuinely exciting questions that need some serious attention now.
[译文] [Hannah Fry]: Shane,非常感谢你。至少可以说,这真是让人大开眼界(mind-expanding)。人类不太擅长理解指数增长,而此时此刻,我们正站在曲线的拐点上。AGI 不再是一个遥远的思想实验了。关于与 Shane 的那次对话,我觉得非常有趣的一点是,他认为普通大众比专家更理解这一点。如果他的时间表大致正确——而他过去常常被证明是对的——我们可能就没有奢侈的时间去进行缓慢的反思和觉醒了。我们面临着困难、紧迫且可能真正令人兴奋的问题,这些问题现在就需要得到认真的关注。