The LLM Revolution Is Over. The Physical AI Revolution Is Coming Fast

Chapter 1: The Path to Human-Level Intelligence and the "AGI" Misnomer

📝 Section Summary

In this section, the interviewer asks where AI development currently stands. LeCun first explains his objection to the term "AGI" (artificial general intelligence), arguing that human intelligence is itself not general, which makes the term a misnomer. He confirms that machines will eventually surpass human intelligence, but says this requires several key conceptual breakthroughs and therefore will not happen within the next year or two.

[Interviewer]: My first question is: where are we on the path to AGI? Are we on the path to human-level intelligence, or to superintelligence?

[Yann LeCun]: I famously don't like the phrase AGI. And it's not because I don't think we're going to get machines that are smarter than humans. It's because I don't think human intelligence is general. So calling human-level AI "AGI" is a misnomer.

[Yann LeCun]: Unfortunately, that ship has sailed. But yes, we're going to get machines that will be smarter than humans at some point. It's not going to happen next year. It's not going to happen in two years, because we need a few conceptual breakthroughs for that, and those are things I've been working on and am still working on.



Chapter 2: The Limits of LLMs and the Need for World Models

📝 Section Summary

In this section, LeCun identifies what he sees as the central misconception in current AI development: scaling up today's large language models (LLMs) will not produce human-level intelligence. He warns that building agentic systems on top of LLMs is a recipe for disaster, because LLMs lack an understanding of the physical world, memory, and the ability to plan. Contrasting how quickly humans learn to drive with how inefficiently autonomous vehicles are trained, he argues for the necessity of world models: systems that can predict the consequences of their actions and cope with the complexity of the physical world, which current generative architectures cannot handle.

[Interviewer]: So what do most leaders misunderstand about today's AI capabilities, and why does that misunderstanding matter for policy, regulation, and the capital-allocation decisions being made right now?

[Yann LeCun]: Okay, we're not going to get to human-level intelligence or superintelligence by scaling up, or even by refining the current paradigm. There is a need for a change of paradigm.

[Yann LeCun]: I've been saying this for a number of years now, and I think we're starting to see the limits of the LLM paradigm. A lot of people this year have been talking about agentic systems, and basing agentic systems on LLMs is a recipe for disaster, because how can a system possibly plan a sequence of actions if it can't predict the consequences of its actions?

[Yann LeCun]: Right, so if you want intelligent behavior, you need a system that can anticipate what's going to happen in the world and also predict the consequences of its own actions. If it can do this, it can plan a sequence of actions to arrive at a particular objective, and that's what's missing. That's the concept of world models. You don't have that in LLMs. You're not going to get intelligent behavior without that. You're not going to get efficient learning without that. You're not going to get zero-shot task solving.

[Yann LeCun]: The first time you ask a ten-year-old to solve a simple task, they will do it without necessarily being trained. Within the first 10 hours that a 17-year-old drives a car, the 17-year-old learns to drive. We've had millions of hours of training data to train autonomous cars, and we still don't have level-5 autonomous driving. So that tells you the basic architecture is not there.

[Interviewer]: Embedded assumptions about intelligence. Much of the global AI debate seems to rest on implicit assumptions about how intelligence actually works. You've long argued that intelligence is not primarily about language but about understanding the physical and social world. What is missing in today's dominant AI models, and what kinds of architectures or learning paradigms are actually required to move closer to real intelligence? If there is one idea about intelligence, human or machine, that you wish every world leader at Davos truly understood, what would it be?

[Yann LeCun]: Okay, the real world is way more complicated than the world of language. This is paradoxical, because as humans we think of language as the epitome of human intelligence. But it turns out that predicting the next word in a text is not that complicated, and you can accumulate a lot of knowledge in an LLM, which is why they need to be so big and why you need to train them on so much data.

[Yann LeCun]: But real intelligence comes from an understanding of the real world. Unfortunately, the real world is messy. Sensory data is high-dimensional, continuous, and noisy, and generative architectures do not work with this kind of data. So the type of architecture that we use for LLMs and generative AI does not apply to the real world.



Chapter 3: The Physical AI Revolution: Understanding the Real World Beyond Language

📝 Section Summary

Continuing the discussion of world models, LeCun introduces the idea of "physical AI". He predicts that the next AI revolution will move beyond language toward understanding the real world: systems that handle high-dimensional, continuous, noisy sensor data such as video, build predictive models of how their environment evolves, and are capable of planning, reasoning, and safe, controllable behavior at their core.

[Yann LeCun]: The next revolution in AI, which is coming fast, is going to be AI systems that understand the real world. Systems that understand high-dimensional, continuous, noisy data like video and sensor data; systems that can build predictive models of how their environment is going to evolve and what their effect on the environment is.

[Yann LeCun]: Systems that can plan, that can reason at the core level; systems that are controllable and safe, so that you give them a task and they accomplish it. So we're going to see another AI revolution. We've seen the deep learning revolution, the LLM revolution. Now it's going to be the physical AI revolution, if you want.



Chapter 4: Open Research: The Core Driver of the Last Decade of AI Progress

📝 Section Summary

In this section, LeCun reflects on why AI advanced so quickly over the past decade. More than any single invention (such as the Transformer), he credits the openness of research as the biggest driver of progress. He sharply criticizes the trend of Western tech giants (such as OpenAI and Google) becoming increasingly closed, calling it a disastrous regression, while praising current Chinese open-source models as state-of-the-art and noting that they have become standard tools across the global research community.

[Interviewer]: All right, my next question is about your time at Meta. You spent 12 years leading AI research at Meta during a period of extraordinary acceleration. What do you see as the most important breakthrough that enabled AI's rapid progress over the last decade? And looking ahead, what critical scientific or research breakthroughs still need to happen for AI to live up to its long-term promise, rather than plateau or go into an AI winter?

[Yann LeCun]: Well, there's been an astonishing number of innovations, or inventions, that have propelled the field forward, but I'll tell you: the biggest factor in progress was not any particular contribution. We could cite a bunch of them, of course, like Transformers and things like that, but it's not any particular contribution. It's the fact that AI research was open. People would do a piece of research, write a paper, post it on arXiv, eventually submit it to a conference or a journal, open-source their code. And that made the field progress extremely fast, because the more people can contribute to something, the faster progress takes place.

[Yann LeCun]: And, to my despair, what's been happening over the last few years is that more and more industry research labs have been clamming up. OpenAI became "Closed AI". Anthropic was never open, in fact very closed. Google became slightly open and is now more closed. FAIR was very open, but now there is a kind of change of priorities at Meta, which may change how it operates. And I think it's disastrous, because it's going to slow down progress, particularly in the West, particularly in the US. Simultaneously, the more open industry research labs are in China. The best open-source models at the moment come from China; they're really good, and so everybody in the research community is using Chinese models.

[Yann LeCun]: My former colleagues at Meta are working on a kind of new version, the successor to Llama if you want, which may turn out to be good. Is it going to be open? Not entirely clear. So I think that's a huge mistake; we're slowing down progress because of that.



Chapter 5: The AMI Project and the JEPA Architecture: Non-Generative Predictive Models

📝 Section Summary

In this section, LeCun corrects a misconception: Advanced Machine Intelligence (AMI, pronounced "Ami", French for "friend") is not a new company but a bottom-up research program he drove inside Meta. He lays out its technical blueprint: using JEPA (Joint Embedding Predictive Architecture) to make predictions in an abstract representation space rather than at the pixel level. Through analogies to digital twins and quantum field theory, he argues that understanding complex systems (such as industrial processes or living cells) requires abstract phenomenological models, a capability current generative models lack.

[Interviewer]: Okay, your new venture, Advanced Machine Intelligence. This is a perfect transition to your next chapter. You've recently launched this company, Advanced Machine Intelligence. Public reporting suggests AMI is focused on building a fundamentally new generation of AI systems based on world models: systems that learn from video, physical interaction, and spatial data rather than language alone. Can you share more about the problem AMI is trying to solve that today's leading systems cannot? And realistically, how long do you think it will take to develop the architectures required for robust world models?

[Yann LeCun]: Right. So, Advanced Machine Intelligence — we actually pronounce it "Ami", which means "friend" in French. This is actually the name of the research project that I was driving at Meta. I was actually an individual contributor at FAIR; I was the manager of nobody. People worked on that project because they wanted to work on it and wanted to work with me, not because I was their boss, which is the best situation in a research environment. So: not top-down, but bottom-up. That's the way research should take place. A lot of people don't understand this, but that's really the way it should work.

[Yann LeCun]: So we've had this project for quite a long time at FAIR. Advanced machine intelligence is the name we gave to this idea of building an AI system that can learn from sensory data, from video; a system that learns world models: given the state of the world at time t and an action the system imagines taking, can you predict the state of the world at time t+1 that will result from that action? If you have such a world model, you can plan a sequence of actions to accomplish a task.
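The planning loop LeCun describes can be sketched in a few lines. This is a toy illustration only, not AMI's actual method: the hard-coded one-dimensional `world_model` stands in for a learned predictor, and the planner simply enumerates candidate action sequences and rolls each one out in imagination before acting.

```python
import itertools

def world_model(state, action):
    """Predict the state at time t+1 from the state and action at time t.
    A stand-in for a learned model: here the 'world' is just a position line."""
    return state + action

def plan(start, goal, horizon=4, actions=(-1, 0, 1)):
    """Enumerate action sequences, roll each one out through the world model,
    and return the sequence whose predicted final state is closest to the goal."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        state = start
        for a in seq:
            state = world_model(state, a)   # imagined rollout, no real action taken
        cost = abs(state - goal)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

seq, cost = plan(start=0, goal=3)
print(seq, cost)
```

With a horizon of 4 and unit moves, the planner finds a sequence of actions whose imagined outcome lands exactly on the goal; real systems replace the exhaustive search with gradient-based or sampling-based optimization.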

[Yann LeCun]: Okay, this is the blueprint. I wrote a big vision paper, 60 pages. You can read just the beginning and get an idea of it, or you can listen to a talk I've given on this, which I put online in 2022, where I explained where, in my opinion, AI research should go. We've been building it since then and making a lot of progress.

[Yann LeCun]: So we now have systems that we can train completely self-supervised on unlabeled videos. Those systems understand video, represent it really well, can predict missing parts in a video, and they have also acquired a certain amount of common sense. If you show them a video where something impossible happens, they tell you it's impossible. If you throw a ball in the air and the ball stops or disappears, the prediction error goes through the roof, because the system says: no, this is completely incompatible with what I've observed during my training.
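The "prediction error goes through the roof" behavior can be mimicked with a deliberately crude stand-in: a constant-velocity predictor watching a one-dimensional "ball". Everything here is an illustrative assumption — the trained video models LeCun describes are neural networks, not hand-written extrapolators — but the detection principle is the same: an impossible event is one the predictor cannot explain.

```python
def predict_next(prev, curr):
    """Constant-velocity predictor: a crude stand-in for a learned video model."""
    return curr + (curr - prev)

def max_prediction_error(trajectory):
    """Largest gap between predicted and observed positions along a trajectory."""
    errors = [abs(predict_next(trajectory[i - 1], trajectory[i]) - trajectory[i + 1])
              for i in range(1, len(trajectory) - 1)]
    return max(errors)

physical   = [0, 1, 2, 3, 4, 5]   # ball moving at constant speed
impossible = [0, 1, 2, 3, 3, 3]   # ball suddenly freezes mid-flight
print(max_prediction_error(physical), max_prediction_error(impossible))  # prints: 0 1
```

The physically plausible trajectory is perfectly predictable (error 0), while the frozen ball produces a spike in prediction error — the toy analogue of the system flagging a video as "incompatible with my training".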

[Yann LeCun]: So we have the elements of that, and it's based on a non-generative architecture called JEPA, Joint Embedding Predictive Architecture, which makes predictions in a representation space. And there's a trick: it's complicated to train a system that is not generative — to basically tell it to extract as much information as possible about the input, to represent as much of the input as possible, but also to predict in that space. It's crucial. I'm not going to go into why, but I think it's a really crucial aspect, and it's a complete departure from what most of the industry is working on.
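A minimal numerical caricature of the JEPA idea — predicting in representation space rather than pixel space — might look like the sketch below. The encoder, predictor, and numbers are all invented for illustration; real JEPA models are trained neural networks with extra machinery to prevent the representations from collapsing. The point the sketch makes is that two frames can differ a lot pixel-by-pixel while their abstract states are perfectly predictable.

```python
def encoder(frame):
    """Map a raw observation to an abstract representation.
    Here: the mean pixel value, deliberately discarding unpredictable detail."""
    return sum(frame) / len(frame)

def predictor(context_embedding, action):
    """Predict the *embedding* of the next observation, not its pixels."""
    return context_embedding + action

def jepa_loss(context_frame, action, target_frame):
    """JEPA-style loss: prediction error measured in representation space."""
    pred = predictor(encoder(context_frame), action)
    return (pred - encoder(target_frame)) ** 2

# Two frames whose pixel-level noise differs but whose abstract state moves by +1.
frame_t  = [0.5, 1.5, 1.0]   # mean 1.0
frame_t1 = [2.5, 1.5, 2.0]   # mean 2.0, noisy at the pixel level

pixel_gap = sum((a - b) ** 2 for a, b in zip(frame_t, frame_t1))  # large in pixel space
print(jepa_loss(frame_t, 1.0, frame_t1), pixel_gap)
```

The pixel-space gap is large (the frames look different), yet the representation-space prediction is exact — the abstraction has thrown away exactly the detail a generative, pixel-level model would waste capacity trying to predict.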

[Yann LeCun]: So that's the plan for AMI: develop this architecture. We already have prototypes that work, but we want to generalize the methodology so that it applies to any modality, any data, any sensor data. Then we can build, from data, phenomenological models of complex systems that we can perhaps control optimally — be it an industrial process of any kind, a manufacturing process, a chemical plant, a turbojet engine, a whole airplane, perhaps chemical reactions, a cell, a living cell.

[Yann LeCun]: Everything in the world is complicated because it is an emergent collective phenomenon of really complex systems, and we can only build phenomenological models of those things. This is the idea of the digital twin, which I'm sure you've heard of, right? People try to model a physical system accurately so that they can simulate it. The problem is that if you simulate a system too accurately, you can't predict anything.

[Yann LeCun]: I could explain everything that takes place in this room right now in terms of quantum field theory or something like that, but that would be completely impractical. It would explain everything that takes place in this room, including all of our thought processes and everything; we could simulate everyone's brain. But of course that's completely impractical.

[Yann LeCun]: The way we can understand what's taking place in this room right now is through psychology, maybe a bit of sociology, things like that, maybe even economics — but not at the level of quantum field theory or particle physics or atomic physics or molecules or proteins or organelles or cells or organisms. This is a much higher level. So the idea that you have to develop an abstract representation of a phenomenon to allow you to make these predictions is absolutely crucial, and generative models don't do that.



Chapter 6: Open vs. Closed: Avoiding the Concentration of AI Power

📝 Section Summary

In this section, LeCun explores why openness is decisive for AI's future. Drawing on the history of the internet — just as Linux displaced proprietary systems from Sun and HP — he argues that AI, as an infrastructure platform, is bound to become open source. Ensuring that AI covers the world's multilingual and cultural data requires an open-source community rather than any single private company. He also forcefully rejects "AI will destroy humanity" narratives, arguing that the biggest real risk today is a handful of companies monopolizing AI and thereby controlling humanity's digital information diet. Preserving democracy and cultural diversity requires a diverse population of AI systems, just as we need a diverse press.

[Interviewer]: Open versus closed AI. You've been one of the strongest advocates for open research and open models. Even as AI power becomes increasingly concentrated among a smaller number of companies and governments, what risk do you see if frontier AI becomes primarily closed, proprietary, and geopolitically siloed? Is openness ultimately a competitive advantage, or a public good that must be actively protected? And where, if anywhere, should openness stop?

[Yann LeCun]: I think AI is fast becoming a platform, and historically, platforms have always become open source. This reminds me of the debates people were having in the 90s about the internet. The infrastructure of the internet was distributed and open, but you had to buy a server from Sun Microsystems or HP and then run a proprietary operating system on it, with proprietary web servers, and so on. All of this was completely wiped out.

[Yann LeCun]: The entire internet runs on Linux, and the entire software stack of the internet is open source, from low-level protocols to operating systems to web servers to the applications on top of them. If it's not open source, it will just not be adopted. I think a similar phenomenon is bound to occur for AI, and I think it should be promoted — particularly by countries that are neither China nor the US.

[Yann LeCun]: Because we want AI systems — particularly LLMs, if we stick with the current paradigm — to become the repository of all human knowledge, and no private company, as big as it may be, can do this by itself. You need access to multilingual data, to cultural data that is local. You need contributions from governments, from local people, to fine-tune the system, and you're not going to get that with proprietary systems.

[Yann LeCun]: So what I've been advocating for a few years is the idea of a consortium in which various regions of the world would contribute to training a global open-source LLM that could constitute the repository of all human knowledge. And this is absolutely crucial, because the biggest risk of AI — people are talking about AI taking over the world and killing us all, and we had a debate on this two years ago — that's BS, if you pardon my French.

[Yann LeCun]: The most important risk of AI is that, in the near future, our entire digital diet will be mediated by AI systems. If those AI systems come from a handful of proprietary companies on the west coast of the US, or from China, we're in big trouble for the health of democracy, cultural diversity, linguistic diversity, value systems. So we need a highly diverse population of AI assistants, for the same reason we need diversity in the press, and that can only happen with open source.



Chapter 7: Sorting AI Risks: Doomsday Narratives, Economic Impact, and Alignment

📝 Section Summary

In this section, LeCun rebuts AI doomsday narratives, arguing that the biggest real risk is not AI destroying humanity but centralized control of AI enabling control of information. On economic impact, citing economists Philippe Aghion and Erik Brynjolfsson, he predicts AI will raise productivity by roughly 6% per year rather than cause mass unemployment, because the pace at which a technology spreads is limited by how fast people can learn to use it. On alignment, he criticizes LLM-centric framings, noting that LLMs can never be guaranteed safe, whereas future objective-driven AI will address safety fundamentally by enforcing guardrail constraints at inference time.

[Interviewer]: You've pushed back on apocalyptic AI narratives, arguing that they can distract from the more immediate concerns for leaders in this room. What are the real AI risks in the next 5 to 10 years that deserve serious attention? Which of these do you see as most pressing, and which are overrated: concentration of power among companies or governments, human misuse of AI systems, economic displacement in terms of jobs, or other systemic risks we're underestimating?

[Yann LeCun]: Yeah, I think capture and centralized control of AI is the biggest danger, because it will mediate our entire information diet, as I just said. So you don't want that, and I think people around the world would just refuse it, and so we need to build an open infrastructure that provides an alternative.

[Yann LeCun]: So, the other risks. Human misuse — yeah, that's a problem. But like everything in the world, it can be misused, and there are going to be countermeasures for it. I'm not overly worried about it. Some people — some of my friends — are, but I think it's just yet another risk, not a particularly existential one.

[Yann LeCun]: Economic displacement. I'm not an economist — I'm actually having dinner with two very prominent economists tonight, and I'm just going to parrot them: Philippe Aghion, the Nobel Prize winner, and Erik Brynjolfsson from Stanford. And a lot of people in economics are predicting that AI, over time, is going to improve productivity by something like 6% per year. This is not going to be a hard takeoff or anything like that, but 6% per year is actually big. It's nothing to sneeze at.
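Why "6% per year is actually big" is just compounding arithmetic, easy to check:

```python
# At 6% annual productivity growth (the economists' estimate quoted above),
# how many years until aggregate productivity doubles?
years, level = 0, 1.0
while level < 2.0:
    level *= 1.06   # compound one year of 6% growth
    years += 1
print(years)  # prints 12
```

Twelve years to double (consistent with the rule of 72: 72 / 6 = 12) — a gradual but enormous shift, which is the contrast LeCun draws with a sudden "takeoff".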

[Yann LeCun]: It's not yet measurable, but that's what they're predicting. And it's not going to create major unemployment — mass unemployment — because what limits the speed at which a technology disseminates through the economy is how fast people can learn to use it. So it's sort of a built-in regulatory mechanism.

[Interviewer]: Is alignment the right frame? Many policymakers are focused on AI alignment — but alignment to whose values, and enforcement by whom? Is alignment ultimately a technical challenge, or a political and institutional one? And are we asking too much of engineers to solve what are fundamentally governance questions?

[Yann LeCun]: Okay, so the problem of alignment is a very interesting one, because a lot of people think of it in terms of LLMs — like, how do I align my LLM so it doesn't produce ridiculously insulting answers, or things that are tasteless. And that's the wrong way to think about it, because AI architectures are going to change a lot. They're going to be different.

[Yann LeCun]: The type of blueprint I described earlier, which I call objective-driven AI, consists of systems that are given an objective, and the only thing they can do is fulfill that objective. And you can make that subject to guardrails, which have to be satisfied at inference time. So this is very different from the way we coerce or train LLMs to behave properly.
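The structural difference LeCun is pointing at — optimizing an objective *subject to* constraints checked at inference time, rather than hoping training instilled good behavior — can be sketched as constrained selection. The candidate actions, cost function, and guardrail below are hypothetical toy values, not any real system's API:

```python
def choose_action(candidates, objective_cost, guardrails):
    """Objective-driven selection: optimize the task objective, but only over
    actions that satisfy every guardrail, enforced at inference time."""
    feasible = [a for a in candidates if all(g(a) for g in guardrails)]
    if not feasible:
        raise RuntimeError("no action satisfies the guardrails")
    return min(feasible, key=objective_cost)

# Hypothetical robot speeds: the objective rewards speed, the guardrail caps it.
candidates = [0.5, 1.0, 2.0, 5.0]
objective  = lambda speed: -speed            # faster finishes the task sooner
guardrails = [lambda speed: speed <= 2.0]    # hard safety limit, checked every time

print(choose_action(candidates, objective, guardrails))  # picks 2.0, never 5.0
```

Because the guardrail is a hard filter applied at decision time, the unsafe action can never be selected — by construction rather than by statistical tendency, which is the contrast with fine-tuned LLM behavior.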

[Yann LeCun]: We can never be sure that an LLM will behave properly, because the data we train it on is a very small subset of all the prompts that people can feed it. So we can never guarantee the safety or the behavior of an LLM. So if you imagine that future AI systems with human-like intelligence will be LLMs — which of course is not going to happen — you say, "Oh my god, that's going to be dangerous." It's the wrong approach.



Chapter 8: The Future of Work and Education: Learning How to Learn

📝 Section Summary

Asked how AI will reshape work and what advice he has for young people, LeCun stresses the value of fundamentals. He advises students to choose foundational subjects with a long shelf life, such as quantum mechanics, over specific technologies like mobile app programming that may soon be obsolete. Because technology is accelerating, career changes are inevitable; the crucial skills are mastering fundamental principles (such as the link between statistical physics and machine learning) and learning how to learn.

[Interviewer]: So, is it me? I have a lightning round at the very end I want to get to. AI, labor, and human agency. AI is already reshaping work, but not always in the ways people expect. Where do you see AI augmenting human intelligence rather than replacing it? Where do you think society is underestimating the transition costs? Are we asking the wrong questions about job loss, in your opinion? And what is your advice to all the young people, educators, and workforce leaders in our audience about how best to prepare for an AI-rich future?

[Yann LeCun]: Okay, so I'm going to answer the second question first. I think technological progress is clearly accelerating, and what that means is that everyone who is studying right now is going to have to change jobs, because technology evolves so quickly.

[Yann LeCun]: So what students need to learn are fundamentals — things that have a long shelf life and won't be out of fashion in five or ten years. Very fundamental things. I tell students: if you have the choice between taking a course in, I don't know, mobile app programming or quantum mechanics, take quantum mechanics, even if you're a computer scientist. Because the methods you will learn doing this will allow you to learn to learn, and you'll also have basic techniques you can reuse in all kinds of different contexts. How would you know in advance that all the underlying mathematics of machine learning basically comes from statistical physics? Which is why there are so many physicists doing AI these days. So: learn fundamentals, learn to learn, and then be ready to change expertise, to change jobs. That was the second question — and I forgot the first one.

[Interviewer]: Yeah, for time we're going to move on. So my last question is going to be: what does 2035 look like?



Chapter 9: Looking to 2035 and the Lightning Round: AI as an Amplifier of Human Intelligence

📝 Section Summary

In the closing lightning round, LeCun recommends Frans de Waal's book on animal intelligence, again stressing that intelligence is not the same as language. For 2035, he sketches a "successful" future in which AI not only understands the physical world but also becomes an everyday assistant through devices like smart glasses. He compares the relationship between humans and superintelligence to that between a leader and a staff of smarter advisors: AI's core role is to amplify human intelligence and help us make more rational decisions, not simply to replace or threaten us.

[Interviewer]: What does 2035 look like — what would success and failure look like? I'll ask you what 2035 looks like, but before I do that, I have five lightning questions I want you to answer real quick. What is the most overrated idea in AI right now? What's the most underrated research direction?

[Yann LeCun]: Models.

[Interviewer]: One book or thinker outside AI that most shaped how you think about intelligence?

[Yann LeCun]: Okay: Frans de Waal, who unfortunately died recently. He wrote a book, "Are We Smart Enough to Know How Smart Animals Are?" We think of intelligence as related to language. It's not. Animals are really intelligent, and that's the kind of intelligence we currently cannot reproduce with machines. Read that book.

[原文] [Interviewer]: Okay Which leaders scientific corporate like David Rubenstein um he's right over there uh or political do you think will most shape AI's trajectory over the next decade

[译文] [采访者]: 好的。您认为哪些领导者——无论是科学界的、企业界的(比如 David Rubenstein,他就在那边),还是政界的——将在未来十年最深刻地塑造 AI 的轨迹?

[原文] [Yann LeCun]: i'm not sure how to answer this Okay here is an answer It will be on Okay All right All right

[译文] [Yann LeCun]: 我不确定该怎么回答这个问题。好吧,这里有一个答案:它会继续发展(It will be on)。好的,行吧。

[原文] [Interviewer]: What do you think is most missing in how Davos covers AI and I don't mean I mean everything in Davos I mean given how intense Davos is already like if you add something you know we're all going to die during the week or something like you know it's fine Okay All right So last question Who's who's enjoyed Yan so far who Who's glad he didn't retire to an island after he he left the last job and that he's on on the case all right All right You should have talked to my wife Yeah Yeah Okay All right

[译文] [采访者]: 您认为达沃斯在报道 AI 方面最缺失的是什么?我不是说……我是说达沃斯的一切;我的意思是,考虑到达沃斯已经如此高强度了,如果你再加点什么,比如"我们这周都要死掉了"之类的……你知道,没关系。好的。那么最后一个问题:到目前为止,谁喜欢 Yann?谁很高兴他离开上一份工作后没有退休去小岛,而是继续在这个领域奋斗?(观众鼓掌)你们应该去跟我妻子聊聊。是的,是的,好的。

[原文] [Interviewer]: Last question The long view You have a rare depth of perspective on how AI evolves over decades If we look ahead 10 to 15 years to roughly 2035 how may might our economies institutions and even forums like Davos look meaningfully different from today because of AI what would success look like and what would failure look like

[译文] [采访者]: 最后一个问题,长远视角。对于 AI 在数十年间如何演变,您有着罕见的深度视角。如果我们展望未来 10 到 15 年,大约到 2035 年,因为 AI 的缘故,我们的经济、机构甚至像达沃斯这样的论坛,看起来会与今天有什么本质的不同?什么是成功,什么是失败?

[原文] [Yann LeCun]: so success would look would involve AI systems that understand the physical world but also perhaps reach something like humanlike intelligence and of course in certain domains is going to be more intelligent than humans because we know computers can do a lot of things better than humans So that would be success

[译文] [Yann LeCun]: 所以,成功将意味着 AI 系统不仅理解物理世界,而且可能达到类似人类水平的智能,当然在某些领域它们会比人类更聪明,因为我们知道计算机在很多事情上能做得比人类更好。所以那就是成功。

[原文] [Yann LeCun]: I imagine that will occur with some non-negligible likelihood within the next 10 years It's not going to happen next year It's not going to take two years Unlike some of my more optimistic colleagues there's still a lot of work to do It's not going to be an event like it's so a lot of people are in their mind that you know there's gonna be one secret to AGI whatever they call it and the next day computers are going to take over the world This is ridiculous It never happens this way

[译文] [Yann LeCun]: 我想象这在未来 10 年内发生的可能性是不可忽略的。它不会在明年发生,也不会在两年内发生。与我的一些更乐观的同事不同,我认为还有很多工作要做。它不会是一个单一的"事件"。很多人脑子里认为,你知道,AGI(或者不管他们怎么称呼它)会有一个秘密诀窍,然后第二天计算机就会接管世界。这太荒谬了。事情从来不是这样发生的。

[原文] [Yann LeCun]: There's going to be a bunch of conceptual breakthroughs which are going to be in obscure research papers that nobody is going to pay attention to until five years later when someone demonstrates how powerful they are Okay that's what happened with deep learning to some extent That's what happened with transformers uh and also with LLN So we're going to we're going to see this So you know read the papers that the scientific community pays attention to or or does not yet pay attention to because they're going to cause a revolution over the next 5 years

[译文] [Yann LeCun]: 将会有一系列概念上的突破,它们会出现在那些晦涩难懂的研究论文中,一开始没人会注意,直到五年后有人展示了它们有多么强大。好的,这在某种程度上就是深度学习所经历的,也是 Transformer 所经历的,呃,以及 LLM(原文口误为 LLN)所经历的。所以我们将看到这一幕。所以,去读那些科学界正在关注或者尚未关注的论文吧,因为它们将在未来 5 年引发一场革命。

[原文] [Yann LeCun]: And so what is AI going to to look like you know what is it going to look look like 5 10 years from now we'll have those assistants work you know assisting us at all times perhaps in our smart glasses at least that's the vision at meta or other wearable devices those systems are going to be assisting us a amplifying our intelligence um perhaps allowing us to make more rational decisions and intelligence is the commodity that is the most required in the world right so the purpose of increasing the total amount of intelligence on the planet I think is A very good one That's intrinsically good

[译文] [Yann LeCun]: 那么 AI 会是什么样子?你知道,5 到 10 年后它会是什么样子?我们将拥有这些助手,随时随地协助我们,也许是通过智能眼镜——至少这是 Meta 的愿景——或者其他可穿戴设备。这些系统将协助我们,放大我们的智能,呃,也许让我们做出更理性的决定。而智能是世界上最紧缺的商品,对吧?所以我认为,增加地球上的智能总量这个目标是非常好的,这本质上是好事。

[原文] [Yann LeCun]: That's going to be under our control Our relationship with super intelligence systems is going to be the same relationship as a business academic or uh political leader with their staff Politicians certainly are surrounded by staff of people who are smarter than them right certainly true for professors too actually Right Our purpose is to yeah make our students smarter than us And in business it's the same thing too In research certainly um you know the best that can happen to you is work with uh with people who are smarter than you

[译文] [Yann LeCun]: 这将会在我们的控制之下。我们与超级智能系统的关系,将与商业、学术或政治领袖与其幕僚的关系一样。政客们身边肯定围绕着一群比他们更聪明的工作人员,对吧?这对教授来说实际上也是如此,对吧,我们的目标就是让学生比我们更聪明。在商业中也是一样。在研究中当然更是如此,呃,你能遇到的最好的事情就是和比你更聪明的人一起工作。

[原文] [Interviewer]: So in the last minute let me ask you this five years ago and on this stage we had Mera Wilche um hold up a book that was written by Chat GPT2 and I think it was one of the few times chat GPT2 was even mentioned in Davos that that year Five years ago A lot of pe a lot of things that people were talking about in AI they predicted what's happening now was 90 years away The last 5 years has moved really fast Maybe not for you What are the next five years going to look like is it going to feel even faster and how can we all prepare um to thrive as a species and a society over this great change

[译文] [采访者]: 所以在最后一分钟,让我问您这个问题。五年前,在这个舞台上,Mera Wilche(注:听录疑有误,或为时任 OpenAI CTO 的 Mira Murati)曾举起一本由 Chat GPT-2(注:应指 GPT-2)写的书,我想那是那一年 GPT-2 在达沃斯被提及的少数几次之一。五年前,人们在 AI 领域讨论的很多事情,当时被预测要到 90 年后才会实现,而它们现在正在发生。过去 5 年发展得非常快,也许对您来说并不算快。接下来的五年会是什么样子?会感觉更快吗?我们所有人该如何准备,以便作为一个物种和一个社会在这场巨大的变革中繁荣发展?

[原文] [Yann LeCun]: so it looks very different depending on whether you are you know in the trenches trying to kind of make uh science and technology progress whether it's conceptual breakthroughs that you realize not right away that they really are breakthroughs until you kind of start to make them work and and things like that uh but from the outside from the public what they see are discontinuous change right so the public saw chad GPT which was you know GPT3 whatever as a discontinuous change it wasn't the technology was developed over years before that a lot of labs had similar systems internally it's just that it became visible uh at that time before that uh the brand the DARPA grand challenge right uh that opened the eyes of the public to the possibility of self-driving cars Ladies and gentlemen Yan Lun

[译文] [Yann LeCun]: 这取决于你的视角。如果你是在战壕里试图推动科学和技术进步的人,情况看起来会非常不同,不管是概念上的突破——你可能不会立刻意识到它们真的是突破,直到你开始让它们运作起来等等;但从外部,从公众的角度来看,他们看到的是非连续性的变化(discontinuous change),对吧。所以公众把 ChatGPT(也就是 GPT-3 之类的)看作是一种非连续性的突变,其实不是。这项技术在此之前已经发展了很多年,许多实验室内部都有类似的系统,只是在那个时刻它变得可见了。再往前,DARPA 大挑战赛(Grand Challenge)也是如此,对吧,它让公众看到了自动驾驶汽车的可能性。(主持人:女士们先生们,Yann LeCun。)

