章节 1:开场与引言:踏入“AI帝国”的调查之路
📝 本节摘要:
本节为访谈的开篇。主持人引入话题,并就AI行业的“优胜劣汰”规律提出疑问,认为利用AI加速研究的文明可能成为更高级的文明。科技记者Karen Hao直接反驳,指出当下巨头正在利用这种神话制造恐慌以榨取大众利益,并深刻揭示了AI行业的“帝国”本质,包括劳动力剥削、知识产权攫取和环境危机。随后,Karen分享了她从麻省理工学院机械工程系跨界进入科技新闻领域的旅程,以及她历时多年、采访超两百人以撰写《AI帝国》一书的心路历程。为了照顾普通听众,主持人提议在后续交流中尽量避开深奥的技术术语,将对话保持通俗易懂。
[原文] [Speaker A]: So much of what's happening today in the AI industry is extremely inhumane. But, this is me playing devil's advocate: logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.
[译文] [主持人]: 如今在AI(人工智能)行业发生的许多事情极其不人道,但我在这里充当一下“魔鬼代言人”,从逻辑上讲,利用AI加速其研究的文明有可能成为更高级的文明。
[原文] [Speaker B]: No, it's not. This is a prediction that you're making, right, that Zuckerberg's making. And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit, and extract and exploit.
[译文] [嘉宾]: 不,并不是这样的,这是一个你正在做出的预测,对吧,马克·扎克伯格(Zuckerberg)也在做出的预测,而且你知道他们所有人的共同特征是什么吗?他们从这个神话中赚取了巨额利润,你知道我掌握了所有这些内部文件,证明他们是在故意试图在公众中制造这种感觉,以便他们能够榨取、剥削、再榨取、再剥削。
[原文] [Speaker A]: So what do we do about it?
[译文] [主持人]: 那我们该怎么做?
[原文] [Speaker B]: We need to break up the empires of AI. You know, I've been covering the tech industry for over 8 years, interviewed over 250 people, including former or current OpenAI employees and executives, and I can tell you that there are many parallels between the empires of AI and the empires of old. Right, like, they claimed the intellectual property of artists, writers and creators in the pursuit of training these models. Second, they exploit an extraordinary amount of labor, which breaks the career ladder, because someone gets laid off and then they work to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs if that model then develops that skill. And when they talk about how there are going to be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there. And then there's the environmental and public health crisis that these companies have created, and how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way, and will censor researchers that are inconvenient to the empire's agenda. But what I'm saying is not that these technologies don't have utility; it's that the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that.
[译文] [嘉宾]: 我们需要打破AI帝国的垄断,你知道我报道科技行业已经超过8年了,采访了超过250人,包括前任或现任OpenAI员工和高管,我可以告诉你,这些AI帝国和古老的帝国之间有许多相似之处,比如他们为了训练这些模型而声称拥有艺术家、作家和创作者的知识产权(Intellectual Property);其次,他们剥削了海量劳动力,这打破了职业上升通道,因为有人被裁员后,却要去训练那些能替代他们刚失去的工作岗位的模型,如果该模型掌握了这项技能,就会导致更多的裁员;当他们谈论说会创造出我们甚至无法想象的新工作时,其实很多被创造出来的工作比原本的工作还要糟糕得多;此外,这些公司还造成了环境和公共卫生危机,他们还能豪掷数亿美元试图扼杀任何可能阻挡他们的立法,并会审查那些对帝国议程造成不便的研究人员;但我要说的并不是这些技术没有实用价值(Utility),而是目前这些技术的生产过程正在对人们造成大量伤害,但我们的研究表明,同样的能力完全可以通过另一种没有这些意外后果的方式开发出来,所以让我们来谈谈所有这些吧。
[原文] [Speaker A]: This is super interesting to me. My team has given me this report to show me how many of you that watch this show subscribe, and some of you have told us, according to this, that you are unsubscribed from the channel randomly. So, a favor to ask all of you: please could you check right now if you've hit the subscribe button, if you are a regular viewer of the show and you like what we do here. We're approaching quite a significant landmark on this show in terms of subscriber numbers, so if there was one simple, free thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to hit that subscribe button, and to double check if you've hit it. It's the only thing I'll ever ask of you. Do we have a deal? If you do it, I'll tell you what I'll do: I'll make sure every single week, every single month, we fight harder and harder and harder to bring you the guests and conversations that you want to hear. I've stayed true to that promise since the very beginning of The Diary of a CEO, and I will not let you down. Please help us, really appreciate it. Let's get on with the show. Karen, you've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is, what is the research and the journey you went on in order to write this book we're going to talk about, and the subjects within it, today?
[译文] [主持人]: 这对我来说非常有趣,我的团队给了我这份报告,向我展示了看这个节目的你们中有多少人订阅了,根据报告,你们中有些人告诉我们,你们被随机取消订阅了频道,所以想请你们帮个忙,如果您是本节目的常客并且喜欢我们在这里做的内容,能否现在检查一下是否点击了订阅按钮;我们这个节目的订阅人数正在接近一个相当重要的里程碑,所以如果有一件简单的免费事情你可以做来帮助我们、我的团队以及这里的每个人保持这个节目免费并让它年复一年、周复一周地改进,那就是按下那个订阅按钮并再次检查你是否按下了;这是我唯一会要求你们做的事,我们达成共识了吗?如果你做了,我会告诉你我将怎么做,我将确保每一周、每一个月我们都会越来越努力地为你带来你想听到的嘉宾和对话;自《CEO日记》开播以来,我一直坚守着这个承诺,我不会让你们失望的,请帮助我们,非常感激,让我们回到节目中;卡伦(Karen),你写了摆在我面前的这本名为《AI帝国:萨姆·奥特曼的OpenAI中的梦想与梦魇》(Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI)的书,我想我的第一个问题是,为了写这本书、以及今天我们要谈论的其中主题,你进行了怎样的研究,经历了怎样的旅程?
[原文] [Speaker B]: I took a strange route into journalism. I studied mechanical engineering at MIT, and so when I graduated I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley, and I basically received an education in what Silicon Valley is about. Because a few months into joining a very mission-driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable. And this was, in hindsight, a very pivotal moment for me, because I thought, if this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need to be solved are not profitable problems, like climate change, then what are we actually doing here? Like, how did we get to a point where innovation is not actually necessarily working in the public benefit, and sometimes even undermining the public benefit in pursuit of profit? In that moment I had a bit of a crisis where I thought, well, I just spent 4 years trying to set myself up for this career that I now don't think I am cut out for. And I thought, well, I might as well just try something totally different. I've always liked writing, and that's how, after 2 years, I landed a role at MIT Technology Review covering AI full-time. And that gave me a space to then explore all of these questions of who gets to decide what technologies we build, how does money and ideology also drive the production of those technologies, and how do we ultimately make sure that we actually reimagine the innovation ecosystem to work for a broad base of people all around the world. And so that is kind of how I then set off on this journey of ultimately writing a book. I didn't realize that I was working towards writing a book, but starting in 2018, when I took that job, was essentially the moment in which I began researching the story that I document in it.
[译文] [嘉宾]: 我踏入新闻业的路径有些奇特,我在麻省理工学院(MIT)学习机械工程,所以毕业后我搬到了旧金山(San Francisco),加入了一家科技初创公司(Startup),我成为了硅谷(Silicon Valley)的一部分,并基本上接受了一场关于硅谷究竟是怎么回事的教育;因为在加入一家非常由使命驱动的初创公司(该公司专注于开发有助于对抗气候变化的技术)几个月后,董事会解雇了CEO,原因是公司没有盈利;事后看来,这对我来说是一个非常关键的时刻,因为我在想,如果这个中心最终的目的是开发能够盈利的技术,而世界上我认为需要解决的许多问题(比如气候变化)却不是有利可图的问题,那么我们实际上在这里做什么呢?比如,我们是如何走到这样一步的——创新其实不一定是为了公共利益服务,有时甚至在追求利润的过程中破坏公共利益?在那一刻,我经历了一点危机,我想,好吧,我刚花了4年时间试图为自己的职业生涯做准备,但我现在认为自己并不适合它;我想,好吧,我不如就尝试一些完全不同的事情,我一直喜欢写作,这也是为什么两年后,我获得了在《麻省理工科技评论》(MIT Technology Review)全职报道AI的职位;这给了我一个空间来探索所有这些问题:谁有权决定我们构建什么样的技术?金钱和意识形态(Ideology)又是如何驱动这些技术的生产的?我们最终如何确保我们真正重新构想创新生态系统,使其能为全世界广泛的大众服务?这大概就是我踏上写书之旅的过程,我一开始并没有意识到我正朝着写书的方向努力,但从2018年我接下那份工作开始,本质上就是我开始研究所记录的这个故事的时刻。
[原文] [Speaker A]: A very timely time to start working in artificial intelligence. For anyone that doesn't know, this is pre the OpenAI ChatGPT launch moment that shook the world. But in writing this book you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, etc.?
[译文] [主持人]: 这是一个开始在人工智能领域工作的非常及时的时间点,对于不知道的人来说,这是在OpenAI发布ChatGPT这一震撼世界的时刻之前;但是在写这本书时,你采访了很多人,去了很多地方,你能给我大概讲讲你采访了多少人,它带你去了世界各地的哪些地方等等吗?
[原文] [Speaker B]: I interviewed over 250 people, so over 300 interviews; over 90 of those people were former or current OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today. But I didn't want to write a corporate book. I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. These companies tell us that AI is going to benefit everyone, and that's their mission, but you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well. And that's where you start to really understand the true reality of how this industry is unfolding around us.
[译文] [嘉宾]: 我采访了超过250人,进行了超过300次采访,其中超过90人是前任或现任的OpenAI员工和高管;所以这本书涵盖了OpenAI前十年的内幕故事,以及它是如何走到今天这一步的;但我不想写一本关于企业的书,我强烈感觉到,为了帮助人们理解AI行业的影响,我们也必须走到远超硅谷的地方;这些公司告诉我们AI将使所有人受益,那是他们的使命,但当你去了那些看起来一点也不像硅谷、语言和硅谷毫无关系、历史和文化也完全不同的地方时,你真的开始看到那种修辞正在瓦解;在那里,你才真正开始理解这个行业在我们周围展开的真实面貌。
[原文] [Speaker A]: Karen, I often try and steer conversations, but in this situation I feel like it's probably my responsibility to follow. So with that in mind, I'm going to ask you: where does this journey begin, and where should we be starting if we're talking about the subjects of Empire of AI, AI generally, artificial intelligence? And also, I'd say one thing I'm really keen to do in this conversation, which I often see is left out of conversations, is let's assume that our viewers know nothing about AI. Yeah. So they don't know what scaling laws are, or GPUs, or compute, or whatever. And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can. Yes. Where should we start?
[译文] [主持人]: 卡伦,我通常会试着主导对话,但在这种情况下,我觉得我的责任可能是跟随;考虑到这一点,我想问你,这段旅程是从哪里开始的?如果我们谈论《AI帝国》的主题,谈论一般意义上的AI(人工智能),我们应该从哪里开始?还有,我想说在这场对话中我非常渴望做的一件事——这是我经常看到在其他对话中被遗漏的——那就是让我们假设我们的观众对AI一无所知,是的,所以他们不知道什么是“缩放定律”(Scaling Laws),什么是GPU,什么是计算力(Compute)等等;让我们试着在语言上保持尽可能简单,或者解释所有复杂的术语,以便我们尽可能带上更多的人一起探讨,是的,我们应该从哪里开始?
章节 2:定义之争:没有固定终点的人工智能与AGI
📝 本节摘要:
本节追溯了“人工智能”概念的历史源头。嘉宾卡伦(Karen)回顾了1956年达特茅斯会议,指出AI从一开始就是一个人为设定的模糊概念,因为科学界对“人类智能”本身并无共识。她敏锐地剖析了当今科技巨头如何利用“通用人工智能(AGI)”缺乏明确定义的漏洞,针对不同受众(如政府、消费者、投资人)随意变换AGI的叙事与愿景,以获取免于监管的自由和巨额资本。随后,节目揭露了萨姆·奥特曼(Sam Altman)在2015年发表“AI生存风险论”博客的潜在动机——通过刻意模仿埃隆·马斯克(Elon Musk)的话术来迎合其核心恐惧,从而为日后说服马斯克注资并共同创立OpenAI做铺垫。
[原文] [Guest]: I think we should start with when AI started as a field. So this was back in 1956, and there was a group of scientists that gathered at Dartmouth to start a new discipline, a scientific discipline, to try and chase an ambition. And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence.
[译文] [嘉宾]: 我认为我们应该从AI作为一个领域开始的时候讲起,那是在1956年,一群科学家聚集在达特茅斯学院(Dartmouth University),为了开创一门新的学科,一门试图追逐某种野心的科学学科,具体来说,达特茅斯学院的一位助理教授约翰·麦卡锡(John McCarthy)决定将这门学科命名为人工智能(Artificial Intelligence)。
[原文] [Guest]: This was not the first name that he tried; the previous year he tried to name it Automata Studies. And the reason why some of his colleagues were concerned about this name was because it pegged the idea of this discipline to recreating human intelligence, and back then, as is true today, we have no scientific consensus around what human intelligence is. There's no definition from psychology, biology, neurology. And in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives; it's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people.
[译文] [嘉宾]: 这并不是他尝试的第一个名字,前一年他试图将其命名为自动机研究(Automata Studies),而他的一些同事对这个名字感到担忧的原因是,它将这门学科的理念与重塑人类智能(Human Intelligence)挂钩了,而在当时,就像今天一样,我们在什么是人类智能这个问题上没有科学共识,心理学、生物学、神经学都没有给出定义,事实上,历史上每一次试图量化和对人类智能进行排名的尝试都是由邪恶的动机驱动的,是由一种试图在科学上证明某些人群劣于其他人群的欲望所驱动的。
[原文] [Guest]: There are no goalposts for this field, and there are no goalposts for the industry. When they say that they are ultimately trying to recreate AI systems that would be as smart as humans, how do we even define what that means, and when are we going to get there if we don't know how to define the destination? And what that effectively means is that these companies can just use the term "artificial general intelligence," which is now the term to refer to this ambitious goal to recreate human intelligence, however they want to, and they can define and redefine it based on what is convenient for them.
[译文] [嘉宾]: 这个领域没有球门柱(固定目标),这个行业也没有球门柱,当他们说他们最终试图重塑出能和人类一样聪明的AI系统时,我们甚至该如何定义那意味着什么?如果我们不知道如何定义终点,我们又将在何时到达那里?这实际上意味着这些公司可以随意使用通用人工智能(Artificial General Intelligence, AGI)这个术语——这个现在被用来指代重塑人类智能这一宏大目标的术语——他们想怎么用就怎么用,而且他们可以根据对他们有利的情况来定义和重新定义它。
[原文] [Guest]: So in OpenAI's history, it has defined and redefined it many times. When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty. When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have. When he was talking with Microsoft, you know, in the deal that OpenAI and Microsoft struck where Microsoft invested in the company, it was defined as a system that will generate a hundred billion in revenue. And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work.
[译文] [嘉宾]: 因此在OpenAI的历史上,它已经多次定义并重新定义了它:当萨姆·奥特曼(Sam Altman)与国会交谈时,AGI是一个将治愈癌症、解决气候变化、消除贫困的系统;当他与他试图推销产品的消费者交谈时,它是你将拥有的最神奇的数字助手;当他与微软(Microsoft)交谈时,你知道在OpenAI和微软达成微软向该公司投资的交易中,它被定义为一个将产生千亿美元收入的系统;而在OpenAI自己的网站上,他们将其定义为在大多数具有经济价值的工作中超越人类的高度自主的系统。
[原文] [Guest]: This is not a coherent vision of one technology. These are very different definitions that are spoken out loud to the audience that needs to be mobilized: to ward off regulation, or to get more consumer buy-in into the industry's quest, or to get more capital, more resources, for continuing on this journey with ambiguous definitions.
[译文] [嘉宾]: 这并不像是一种技术的连贯愿景,这些是非常不同的定义,是对那些需要被动员起来以抵御监管,或者为了让更多消费者认同该行业的追求,或者是为了获得更多资本和更多资源以继续这段带有模糊定义的旅程的受众大声说出的。
[原文] [Host]: I mean, speaking about different definitions through time: in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example an engineered virus, but AI is probably the most likely way to destroy everything in general."
[译文] [主持人]: 我是说,谈到随着时间推移出现的不同定义,在2015年,在OpenAI正式宣布之前,萨姆·奥特曼写的一篇博客文章中,他明确概述了生存风险(Existential Risk),他说:“超人类机器智能的发展可能是对人类继续生存的最大威胁,我认为还有其他更肯定会发生的威胁,例如工程病毒(Engineered Virus),但总的来说,AI可能是最有可能摧毁一切的方式。”
[原文] [Guest]: When Altman is writing for the public or speaking to the public, he does not just have the public as the audience in mind; there are other people that he is trying to motivate or mobilize when he says these things. And in that particular moment, Altman was trying to convince Elon Musk to join him in co-founding OpenAI, and Musk in particular was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose.
[译文] [嘉宾]: 当奥特曼为公众写作或向公众演讲时,他脑子里想的不仅仅是把公众当做受众,当他说这些事情的时候,他还在试图激励或动员其他人,而在那个特定的时刻,奥特曼正试图说服埃隆·马斯克(Elon Musk)加入他,共同创立OpenAI,而马斯克当时尤其是把他所有的时间都花在了对他所认为的AI可能构成的巨大生存威胁敲响警钟上。
[原文] [Guest]: And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk was saying. Identical.
[译文] [嘉宾]: 所以在那篇博客文章中,如果你把奥特曼使用的语言与当时马斯克使用的语言并排对比来看,它完全镜像反映了马斯克当时所说的一切,一模一样。
[原文] [Host]: I mean, 10 years ago Musk was going on podcasts, saying, tweeting, whatever, that the greatest existential risk to humanity was AI. Yeah.
[译文] [主持人]: 我是说,10年前马斯克就上播客、发推特什么的,说人类面临的最大生存风险就是AI,是的。
[原文] [Guest]: And so, you know, his parenthetical, "there are other things that might actually be more likely to happen, like engineered viruses": it's because up until then, Altman had been talking just about engineered viruses. And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to kind of resolve the contradiction between what he's now elevating as his new central fear, to be the same as Musk's new central fear, and what he had previously been saying. So that's why he's like, "I think this is it now, even though before I said this."
[译文] [嘉宾]: 所以你知道,就像他的补充说明一样,他说还有其他可能更可能发生的事情,比如工程病毒,这是因为在那之前,奥特曼一直只在谈论工程病毒,所以现在既然他需要转变态度去跟只有马斯克这一个人的受众对话,他就需要某种程度上解决他现在提升为新的核心恐惧(与马斯克新的核心恐惧相同)的事情与他之前所说的话之间的矛盾,所以这就是为什么他表现得像:虽然我以前那样说,但我认为现在是这样了。
[原文] [Host]: And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.
[译文] [主持人]: 那你的意思是萨姆·奥特曼操纵了马斯克吗?因为埃隆确实最终捐赠了一大笔钱给,嗯,OpenAI,并且据我所知是和萨姆·奥特曼共同创立了它。
章节 3:OpenAI的创立内幕与马斯克的出局
📝 本节摘要:
本节揭秘了OpenAI早期的权力斗争与埃隆·马斯克(Elon Musk)出局的幕后故事。当OpenAI决定从非营利组织向营利性实体转型时,时任首席科学家伊利亚(Ilya Sutskever)和首席技术官格雷格·布罗克曼(Greg Brockman)最初选定马斯克担任新实体的CEO。然而,萨姆·奥特曼私下向格雷格游说,强调马斯克作为公众人物的不可控性及掌控超级智能技术的潜在危险。这一举动成功促使两人倒戈支持奥特曼,导致马斯克愤而退出,这也解释了为何马斯克至今对奥特曼抱有极强的个人恩怨与被“背叛”的愤怒。
[原文] [Host]: And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.
[译文] [主持人]: 那你的意思是萨姆·奥特曼(Sam Altman)操纵了马斯克吗?因为埃隆确实最终捐赠了一大笔钱给,嗯,OpenAI,并且据我所知是和萨姆·奥特曼共同创立了它。
[原文] [Guest]: Elon Musk did end up co-founding it with Altman, and certainly, from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor. And of course, then Musk leaves. And through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit. And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.
[译文] [嘉宾]: 埃隆·马斯克确实最终与奥特曼共同创立了它,而且显然从马斯克的角度来看,他确实感觉被操纵了,因为他觉得奥特曼在精心设计他的语言,以使马斯克信任他作为这项事业的合作伙伴,当然,然后马斯克离开了,嗯,通过现在马斯克和奥特曼正在进行的诉讼中披露的一些文件,情况已经变得很清楚,在某种程度上马斯克实际上是被稍微排挤出去的,所以这就是为什么他现在对奥特曼留下了这种非常强烈的个人恩怨,声称奥特曼不知怎么地欺骗了他参与其中。
[原文] [Host]: So in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats. At the same time, in 2015, Musk is doing some very famous speeches; at the time, at MIT, he said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is that Sam Altman was just mirroring the language that Elon was using, to get Elon involved in OpenAI. And later, it appears, and again there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.
[译文] [主持人]: 所以在2015年,萨姆·奥特曼写了这些博客文章,说这是,你知道,最大的生存威胁(Existential Threats)之一;与此同时在2015年,马斯克在麻省理工学院(MIT)做了一些当时非常著名的演讲,他说AI是最大的生存威胁,并将开发AI比作召唤恶魔(Summoning the demon);而你在这里所说的是,你的意思是萨姆·奥特曼只是在模仿埃隆当时使用的语言,以让埃隆参与到OpenAI中,而后来似乎——同样,现在正在进行一场法律诉讼——萨姆可能在某种程度上把埃隆排挤出去了。
[原文] [Guest]: Yeah. So we know from the lawsuit, and the documents that have come out in the lawsuit, that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, when they were deciding whether or not to maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit, they decided, okay, we need to create a for-profit entity. But the question was, who should be the CEO of this for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the nonprofit. And in the emails it became clear that Ilya and Greg first chose Musk to be the CEO. But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his; they had known each other for many years through the Silicon Valley scene. And he said, "Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity? Because, you know, he's a famous guy, he has a lot of pressures in the world, he could be threatened, he could act erratically, he could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man?" And that convinced Greg, and Greg then convinced Ilya: you know, I think there's a point here, do we really want to give this much power to Musk? And that is why Musk then leaves, because then the two switch their allegiances. They say, "Actually, we want Altman to be the CEO." And then Musk is like, "If I'm not CEO, I'm out."
[译文] [嘉宾]: 是的,所以我们从诉讼以及诉讼中披露的文件了解到,当时的OpenAI首席科学家伊利亚(Ilya Sutskever)和当时的首席技术官格雷格·布罗克曼(Greg Brockman),当他们在决定是否维持OpenAI作为一个非营利组织(Nonprofit)时——因为它最初是作为一个非营利组织成立的——他们决定,好吧,我们需要创建一个营利性实体(For-profit Entity),但问题是,谁应该成为这个营利性实体的CEO?应该是马斯克还是应该是奥特曼?因为他们是非营利组织的两位联合主席;在电子邮件中情况很明显,伊利亚和格雷格首先选择了马斯克作为CEO,但通过我的报道我发现,奥特曼随后亲自向格雷格·布罗克曼游说,格雷格是他的朋友,他们通过硅谷的圈子已经认识彼此很多年了,他说:“你不觉得让马斯克担任这家公司、这个新的营利性实体的CEO会有一点危险吗?因为你知道他是个名人,他在世界上承受着很多压力,他可能会受到威胁,他可能会行为古怪,他可能是不可预测的,我们真的希望一项在未来可能超级强大的技术最终落入这个人手中吗?”这说服了格雷格,然后格雷格说服了伊利亚,你知道,我觉得这里有道理,我们真的想给马斯克这么多权力吗?这就是为什么马斯克后来离开了,因为后来他们两人转变了立场(Allegiances),他们说:“其实我们想让奥特曼当CEO。”然后马斯克就说:“如果我不当CEO,我就退出。”
[原文] [Host]: So it sounds like Sam, again, managed to persuade someone to do something. Mhm.
[译文] [主持人]: 所以听起来萨姆再次设法说服了某人去做某事,嗯。
章节 4:萨姆·奥特曼:极化的人物与“造梦者”争议
📝 本节摘要:
本节聚焦于OpenAI现任CEO萨姆·奥特曼(Sam Altman)充满争议的个人特质。嘉宾指出,受访者对奥特曼的评价呈现出极端的两极分化:支持者视其为比肩乔布斯的当代伟大科技领袖,而反对者则认为他是一个满嘴谎言的操纵者。这种分歧的根本在于人们是否认同他关于未来的愿景。接着,嘉宾以Anthropic的CEO达里奥·阿莫迪(Dario Amodei)以及OpenAI前首席科学家伊利亚(Ilya Sutskever)为例,讲述了这两位核心高管是如何在合作中感到被奥特曼“操纵”,最终因理念不合而选择分道扬镳的。本节末尾还引用了伊利亚在2019年的一段令人毛骨悚然的比喻:未来极其强大的AI对待人类,或许就像人类对待动物一样——修建高速公路时,人类从不会去征求沿途动物的同意。
[原文] [Host]: I guess this begs the question: what do you think of Sam Altman?
[译文] [主持人]: 我想这就引出了一个问题,你觉得萨姆·奥特曼(Sam Altman)这个人怎么样?
[原文] [Guest]: I think he's a very controversial figure.
[译文] [嘉宾]: 我认为他是一个极具争议(Controversial)的人物。
[原文] [Host]: You did an interesting pause. It's a pause where someone tries to select their words.
[译文] [主持人]: 你做了一个有趣的停顿,这是一种人们试图斟酌用词时的停顿。
[原文] [Guest]: Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him. Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative and an abuser and a liar. And what I realized, because I interviewed so many people, is it really comes down to what that person's vision of the future is and what their goals are. So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side, because this man is really persuasive. He's incredible at telling stories, he's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen. But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you fundamentally don't agree with it.
[译文] [嘉宾]: 嗯,这就是这些采访中最有趣的地方,人们对奥特曼的看法极其两极分化(Polarized),没有人对他抱有中间态度的感情:他们要么认为他是这一代最伟大的科技领袖,类似于现代的史蒂夫·乔布斯(Steve Jobs);要么认为他真的非常善于操纵(Manipulative),是一个虐待者(Abuser)和骗子(Liar);而因为我采访了那么多人,我意识到这归根结底取决于那个人的未来愿景是什么以及他们的目标是什么;所以如果你与奥特曼的未来愿景(Vision of the future)保持一致,你会认为他是你有史以来最伟大的资产,因为这个人真的很有说服力,他在讲故事方面令人难以置信,他在动员资本、招募人才以及获取实现该未来所需的所有投入方面都令人难以置信;但如果你不同意他的未来愿景,那么你就会开始觉得你被他操纵去支持他的愿景,哪怕你从根本上并不同意它。
[原文] [Guest]: And this is the story especially of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI.
[译文] [嘉宾]: 而这尤其是达里奥·阿莫迪(Dario Amodei)的故事,他是Anthropic的CEO,最初也是OpenAI的高管。
[原文] [Host]: So for people that don't know, Dario now runs Anthropic, which is the maker of Claude; a lot of people probably are more familiar with Claude. Yeah. And it's one of the biggest competitors to OpenAI.
[译文] [主持人]: 给不知道的听众解释一下,达里奥现在运营着Anthropic,也就是Claude的创造者,很多人可能对Claude更熟悉,是的,而且它是OpenAI最大的竞争对手之一。
[原文] [Guest]: And Amodei, at the time when he was an executive at OpenAI, he thought that Altman was on the same page as him, and then over time began to feel that Altman was actually on exactly the opposite page, and felt that Altman had used Amodei's intelligence, capabilities, skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with. And so that's why people end up with this bad taste in their mouths. And so, you know, I've been covering the tech industry for over eight years and covered many companies; I've covered Meta, Google, Microsoft, in addition to OpenAI. And Altman is the only figure that I've seen this degree of polarization with, where people cannot decide whether he's the greatest or the worst.
[译文] [嘉宾]: 阿莫迪在当时作为OpenAI高管时,他认为奥特曼与他是在同一战线的,但随着时间的推移,他开始觉得奥特曼实际上与他完全背道而驰,并觉得奥特曼利用了阿莫迪的智力、能力和技能去构建事物,并带来了一个他实际上根本不同意的未来愿景;所以这就是为什么人们最终心里会感到很不是滋味;你知道,我报道科技行业已经超过八年了,报道过许多公司,除了OpenAI之外,我还报道过Meta、谷歌(Google)、微软(Microsoft),而OpenAI和奥特曼是我见过的唯一一个两极分化达到如此程度的人物,人们无法决定他是最伟大的还是最糟糕的。
[原文] [Host]: You mentioned Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives. So I was looking at all of the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time. And Dario, who was the former VP of research at OpenAI and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote: "I think at the extreme end is the Nick Bostrom style of fear, that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%."
[译文] [主持人]: 你在那提到了达里奥,我发现非常有意思的是去观察人们的语录是如何随着他们的动机随时间而演变的,所以我查看了他们所有在播客、博客文章中公开说过的话,看看它是如何随时间演变的;作为前OpenAI研究副总裁、现已跳槽到Anthropic(他们正在采取略微不同的方法开发AI)的达里奥,在2017年他还在OpenAI时曾说过——这是一段引用——“我认为在极端情况下,就是尼克·博斯特罗姆(Nick Bostrom)式的恐惧,即AGI可能会摧毁人类。我在原则上看不出有任何理由说明这不可能发生。我认为在人类文明的尺度上,事情发生相当灾难性错误的几率可能在10%到25%之间。”
[原文] [Host]: And also you mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first question I'd ask is, why did Ilya leave?
[译文] [主持人]: 另外你提到了伊利亚(Ilya),他是OpenAI的联合创始人,后来离开了,我想问的第一个问题是,为什么伊利亚会离开?
[原文] [Guest]: It's a great question. So he was instrumental in trying to get Sam Altman fired, and he's another one of the people who, over time, began to feel like he was being manipulated by Altman towards contributing to something that he didn't believe in. And, you know, because I interviewed a lot of people: Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both things. He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.
[译文] [嘉宾]: 这是一个好问题,所以他在试图解雇萨姆·奥特曼的过程中发挥了关键作用,他也是随着时间推移开始感觉自己被奥特曼操纵、去为一个他不相信的事物做贡献的那些人之一;因为你知道,我采访了很多人,特别是伊利亚有两个他深切关心的支柱(Pillars),一个是确保我们达到所谓的AGI,另一个是确保我们安全地达到它;而他觉得奥特曼在积极地破坏这两件事,他觉得奥特曼在公司内部制造了一个非常混乱的环境,让团队互相对立内耗(Pitting teams against each other),他对不同的人说不同的话。
[原文] [Host]: Have you ever spoken to him?
[译文] [主持人]: 你曾经和他交谈过吗?
[原文] [Guest]: I have. So I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review, and back in 2019 he has a quote where he says, "The future is going to be good for AIs regardless; it would be nice if it was also good for humans as well. It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful, and I think a good analogy would be the way that humans treat animals. It's not that we hate animals; I think humans love animals, and I have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important to us. And I think by default that's the kind of relationship that's going to be between us and AIs which are truly autonomous and operating on their own behalf."
[译文] [嘉宾]: 我有过,所以我在2019年采访了他,为了我给《麻省理工科技评论》(MIT Technology Review)写的一篇关于OpenAI的专题报道;在2019年他有一段语录,他说:“无论如何,未来对AI们来说都将是美好的,如果它对人类也同样美好那就太好了。这并不是说它会主动讨厌人类或想伤害人类,而是它将变得如此强大;我认为一个很好的比喻就是人类对待动物的方式,并不是说我们讨厌动物,我认为人类爱动物,我也对它们有很多喜爱,但是当到了要在两座城市之间修建一条高速公路的时候,我们并不会去征求动物的同意,我们直接就建了,因为这对我们很重要;我认为在默认情况下,这就是我们与那些真正自治并为自己利益运作的AI之间将会存在的关系。”
章节 5:AI帝国的核心特征:数据掠夺、劳力剥削与知识垄断
📝 本节摘要:
本节探讨了“什么是智能”这一底层问题。嘉宾指出,诸如Ilya等AI巨头高管坚信人脑只是巨大的统计模型,并将此假设作为构建AGI并最终取代人类的理论基础。嘉宾强烈质疑了这种试图“复制人类”的技术发展方向,认为科技的初衷应是促进人类繁荣(如加速医疗研发),而非替代人类。随后,嘉宾正式抛出了“AI帝国”的核心概念,将其与古代帝国类比,深刻揭示了其三大特征:第一,对数据、知识产权和土地等资源的疯狂掠夺;第二,对劳动力的极度剥削与对劳工权利的自动化侵蚀;第三,对知识生产的垄断以及对科学界的“捕获”。
[原文] [Host]: And that was in 2019, the year that you interviewed him. One of the things that I feel like we should take a step back to examine is going back to this idea of what even is artificial intelligence, and what do we mean by intelligence. A huge part of the views of the different people, and the quotes that you're reading, derives from a specific belief that they each have on this question of what is intelligence, what constitutes intelligence.
[译文] [主持人]: 那是在2019年,也就是你采访他的那一年;我觉得我们应该退一步来审视的一件事,是回到这个概念——到底什么是人工智能,我们所说的智能(Intelligence)是什么意思?而你读到的这些不同人的观点和语录中,很大一部分源于他们各自对“什么是智能、什么构成了智能”这个问题的特定信念。
[原文] [Guest]: For Ilya, he has throughout his research career felt that ultimately our brains are giant statistical models. This is not something that, you know, we actually know, but this is his own hypothesis, also the hypothesis of his mentor Geoffrey Hinton, who also was on this podcast. This is why they have such a strong conviction in the idea of building AI systems that are statistical models, and that this particular approach is going to lead to intelligent systems as we are intelligent. It's a hypothesis that they have; it's not one that has been proven by science, and some people vehemently disagree with them on this particular thing. But if you step into their shoes and take on that hypothesis, and assume that it's true that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger and bigger until they become the size of the human brain, that's why they say that making this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence, is relevant in their framework.
[译文] [嘉宾]: 对于伊利亚(Ilya)来说,他在整个研究生涯中都认为,我们的大脑归根结底只是巨大的统计模型(Statistical Models);这不是什么我们确切知道的事实,但这是他自己的假设,也是他导师杰弗里·辛顿(Geoffrey Hinton,他也上过这个播客)的假设;这就是为什么他们对构建作为统计模型的AI系统这一理念有如此强烈的信念,并认为这种特定的方法将导向像我们一样聪明的智能系统;这是他们拥有的一个假设,并不是一个已被科学证明的假设,有些人在这一点上强烈不同意他们;但是如果你站在他们的立场上,接受那个假设,并假设它是真的——即我们的大脑实际上是统计引擎(Statistical Engines),而且他们正在构建的这些系统也是统计引擎,他们把它们造得越来越大,直到变成人类大脑的大小——这就是为什么他们说做这种比较(即系统将变得等同于人类智能,然后也许会超越人类智能)在他们的框架中是相关的。
[原文] [Guest]: And Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems; it's a mouthful. But he gave this keynote where he shows this chart of the size of brains and the intelligence of a species, and it's roughly linear: the bigger the size of the brain, the more intelligent the species. And so for him, he thinks he's building a digital brain, because he thinks brains are just statistical engines. So from that logic it's like, okay, if we then build a bigger statistical engine than the human brain, then based on this chart it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to. But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case, and some of the biggest critics say it's very reductive to think of our brains as simply just statistical engines.
[译文] [嘉宾]: 而且,嗯,伊利亚曾经在一个每年举行、非常著名的AI研究会议上发表过演讲,这个会议叫神经信息处理系统大会(Neural Information Processing Systems),名字有点长,但他在发表主题演讲时展示了一张图表,关于大脑大小与物种智力之间的关系,它大致呈线性——大脑的尺寸越大,物种就越聪明;所以对他来说,他认为他正在构建一个数字大脑(Digital Brain),因为他认为大脑只是统计引擎;所以按照那个逻辑,这就好比,好吧,如果我们随后构建一个比人类大脑更大的统计引擎,那么基于这张图表,它将变得更聪明,然后我们就会遭受我们让动物遭受的同等待遇;但是非常重要的一点是要理解,这些只是AI研究社区内特定个人的科学假设,关于事实是否真是如此,存在非常非常多的争论,一些最严厉的批评者说,把我们的大脑仅仅看作统计引擎是非常还原论的(Reductive)。
[原文] [Host]: why why does it matter to know the mechanism is it not just important to know the outcome which is that it's going to be able to make a video for me or agents are going to be able to do the work that I do does it really really matter for us to know the mechanism behind it
[译文] [主持人]: 为什么……为什么了解机制(Mechanism)很重要?难道了解结果(即它将能够为我制作视频,或者智能体将能够做我所做的工作)不才是重要的吗?我们了解其背后的机制,真的、真的很重要吗?
[原文] [Guest]: yes and no so it matters because these companies they are driving their future actions based on this hypothesis so they have decided we think that this hypothesis is true like we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence and that's then having global consequences like in order to continue doing that they're hoovering up more and more data they're building more and more data centers they are having uh they're you know exploiting more and more labor in order to continue on this path
[译文] [嘉宾]: 既重要也不重要;说它重要,是因为这些公司正在基于这个假设来驱动他们未来的行动;所以他们已经决定,我们认为这个假设是真的,比如我们就应该继续构建越来越大的统计模型,以追求通用人工智能(AGI),而这随后正在产生全球性的后果;比如为了继续这样做,他们正在吸纳越来越多的数据,他们正在建设越来越多的数据中心,他们正在产生……呃,他们,你知道,正在剥削越来越多的劳动力,以便在这条道路上继续走下去。
[原文] [Guest]: here's a question that I think is important to ask is why are we trying to build AI systems that are duplicative of humans we're kind of having this conversation right now where we've just taken the premise of this industry as a good thing like they said that we should be building AGI so we say that we should be building AGI i would like to ask like why are we doing that why is it that we are building a technology that is ultimately designed to replace and automate people away that is not the enterprise of technology like we should be building technology and the purpose of technology throughout history has been to improve human flourishing not to replace people and so this is like a a critical part of my critique of these companies and and these scientists that have just adopted this goal and have relentlessly pursued it and have had enormous capital and enormous resources to pursue it is is this the right goal what like why are we doing this why can't we just build AI systems that do things like accelerate drug discovery and improve people's health care outcomes which are systems that have nothing to do with the statistical engines that they're trying to build to duplicate the human brain
[译文] [嘉宾]: 这里有一个我认为很重要的问题要问,那就是为什么我们试图构建复制人类(Duplicative of humans)的AI系统?我们现在进行这场对话时,某种程度上已经把这个行业的前提当成了一件好事,就像他们说我们应该构建AGI,所以我们也说我们应该构建AGI;我想问的是,比如我们为什么要那样做?为什么我们正在构建一种最终旨在取代并自动化淘汰人类的技术?那不是科技的事业(Enterprise of technology);我们应该构建技术,而纵观历史,技术的目的始终是促进人类繁荣(Improve human flourishing),而不是取代人类;所以这就像是我对这些公司以及这些只是采纳了这个目标并无情地追求它、拥有庞大资本和庞大资源去追求它的科学家们进行批判的一个关键部分;这是正确的目标吗?比如我们为什么要这样做?为什么我们不能只构建那些能做例如加速药物发现和改善人们医疗保健结果的AI系统?这些系统与他们试图为了复制人类大脑而构建的统计引擎毫无关系。
[原文] [Host]: so why are they doing it i mean you've interviewed all these people i think it's what 300 people in total 80 or 90 of them from OpenAI the maker of ChatGPT why do you think they're doing it
[译文] [主持人]: 那他们为什么在做这件事?我是说你采访了所有这些人,我想总共大概是300人吧,其中80或90人来自ChatGPT的制造者OpenAI,你认为他们为什么在做这件事?
[原文] [Guest]: i think it's because they're driven by an imperial agenda and that is why I call these companies empires of AI
[译文] [嘉宾]: 我认为这是因为他们被一种帝国主义议程(Imperial agenda)所驱动,这就是为什么我把这些公司称为“AI帝国”(Empires of AI)。
[原文] [Host]: what do you mean by an imperial agenda what does that term mean
[译文] [主持人]: 你说的帝国主义议程是什么意思?这个词是什么意思?
[原文] [Guest]: empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do and the scale that they operate and what motivates them to do what they do and there are many parallels that you see between what I call the empires of AI and the empires of old
[译文] [嘉宾]: 帝国是我唯一能找到的隐喻(Metaphor),可以完全概括这些公司所做事情的所有维度、他们运作的规模,以及是什么动机驱使他们做他们所做的事;而且你可以看到,我所称的“AI帝国”与古老的帝国之间存在许多相似之处。
[原文] [Guest]: they lay claim to resources that are not their own in the pursuit of training these models that's the data of individuals the intellectual property of artists writers and creators they're land grabbing in order to build these supercomputer facilities for training the next generation models
[译文] [嘉宾]: 他们为了训练这些模型,对不属于他们的资源提出主权要求(Lay claim to),那是个人数据、艺术家、作家和创作者的知识产权(Intellectual property);他们强占土地(Land grabbing),以便建立这些超级计算机设施,用于训练下一代模型。
[原文] [Guest]: second they exploit an extraordinary amount of labor they contract hundreds of thousands of workers all around the world including in the US to ultimately make these technologies we can talk about that more and they also design their tools to be labor automating so that when the technologies are deployed it also affects labor rights because it erodes away labor rights and this is a political choice that they have made
[译文] [嘉宾]: 其次,他们剥削了异常巨量的劳动力,他们在全球范围内(包括在美国)雇佣了数十万名合同工,最终来制造这些技术,我们可以就此多谈谈;而且他们还把他们的工具设计成自动化劳动力的(Labor automating),因此当这些技术被部署时,它也会影响劳工权利,因为它侵蚀了劳工权利,而这是他们做出的一种政治选择。
[原文] [Guest]: third they monopolize knowledge production and so they project this idea that they're the only ones that really understand how the technology works and so if the public doesn't like it it's because they don't actually know enough about this technology they do this to the public they do this to policy makers and they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI
[译文] [嘉宾]: 第三,他们垄断了知识生产(Monopolize knowledge production),因此他们投射出这样一种观念:他们是唯一真正了解这项技术如何运作的人;因此,如果公众不喜欢它,那是因为他们实际上对这项技术了解得不够;他们对公众这样做,他们对政策制定者(Policy makers)这样做;而且他们还捕获(Captured)了大多数致力于理解AI局限性和能力的科学家。
章节 6:制造恐慌与审查异见:巨头如何操纵公众认知
📝 本节摘要:
本节继续深入探讨AI巨头如何操纵公众认知并打压异见。嘉宾指出,AI公司通过资助多数研究人员来软性控制学术议程,并无情解雇了揭露模型风险的谷歌伦理AI团队负责人(如蒂姆尼特·格布鲁)。更令人胆寒的是,OpenAI甚至动用法律传票恐吓那些质疑其营利性转型的非营利组织,试图“垂钓”信息并挖出所谓幕后资助的马斯克。最后,嘉宾总结了AI帝国的又一核心特征——伪造“正邪对立”的叙事:他们总是将自己包装成抵御“邪恶帝国”(如早期的谷歌或别国)的救世主,以此名正言顺地要求公众让渡资源与劳动力。
[原文] [Host]: you think they're gaslighting the public in a way
[译文] [主持人]: 你认为他们在某种程度上是在对公众进行煤气灯操纵(Gaslighting)吗?
[原文] [Guest]: they are yeah so if most of the climate scientists in the world were bankrolled by fossil fuel companies do you think we would get an accurate picture of the climate crisis no and in the same way the AI industry employs and bankrolls most of the AI researchers in the world so they set the agenda on AI research in soft ways simply by funneling money to their priorities so that only certain types of AI research are produced but they also will censor researchers when they do not like what the researcher has found and so I talk about the case of Dr Timnit Gebru in my book who was the ethical AI team co-lead at Google she was literally hired to critique the types of AI systems that Google was building she then co-wrote a critical research paper that was showing how large language models specifically were leading to certain types of harmful outcomes and in an attempt to try and stop this research from being published Google ended up firing Gebru and then fired her other co-lead Margaret Mitchell and so they control and quash the research that is inconvenient to the empire's agenda
[译文] [嘉宾]: 是的,他们确实是。所以,如果世界上大多数气候科学家都是由化石燃料公司资助的,你认为我们会得到关于气候危机的准确情况吗?不会。同样地,AI行业雇佣并资助了世界上大多数的AI研究人员,所以他们通过将资金引导到他们的优先事项上,以温和的方式设定了AI研究的议程,使得只有特定类型的AI研究被生产出来。但当他们不喜欢研究人员的发现时,他们也会审查这些研究人员。因此,我在书中谈到了蒂姆尼特·格布鲁博士(Dr. Timnit Gebru)的案例,她曾是谷歌(Google)伦理AI(Ethical AI)团队的联合负责人,她本来就是被聘来批判谷歌所构建的那类AI系统的。她随后参与撰写了一篇批判性研究论文,指出特别是大型语言模型(Large Language Models)会导致某些类型的有害后果。为了试图阻止这项研究的发表,谷歌最终解雇了格布鲁,随后又解雇了她的另一位联合负责人玛格丽特·米切尔(Margaret Mitchell)。因此,他们控制并镇压了那些对帝国议程造成不便的研究。
[原文] [Host]: did you have an example where this is happening to journalists as well that are asking questions of their team members i think I was watching a video of yours where there was a young man that was saying he had someone show up at his door knocked on his door and asked for information emails text messages and this person was from one of the big AI companies
[译文] [主持人]: 你有没有关于这种情况也发生在记者身上的例子?那些向他们的团队成员提出质疑的记者?我想我曾看过你的一个视频,里面有个年轻人说有人出现在他家门口,敲他的门,索要信息、电子邮件、短信,而这个人正是来自其中一家大型AI公司。
[原文] [Guest]: this was OpenAI they started subpoenaing some of its critics yeah um as part of what appears to be a campaign of intimidation but also what appeared to be a campaign of fishing for more information to figure out to map out the network of critics further but this was a man who runs a small watchdog nonprofit and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit ultimately OpenAI was successful in that conversion but during the period where it was sort of existential for OpenAI to complete this conversion there were a lot of civil society groups and watchdog groups like the Midas Project who were trying to prevent the process from happening in the dead of night they were trying to get more transparency they were trying to have more public debate about this because it's unprecedented and it was then that um there was a knock on his door and he was served papers
[译文] [嘉宾]: 那是OpenAI开始向它的一些批评者发出传票(Subpoenaing),是的,嗯,作为一场似乎是恐吓运动的一部分,同时也似乎是一场试图钓取更多信息、以弄清楚并进一步描绘出批评者网络的运动。但这个人运营着一个小型看门狗非营利组织(Watchdog Nonprofit),在那段时间里他们做了大量工作,试图对OpenAI企图从非营利组织向营利性实体转变提出质疑。最终OpenAI在这次转变中取得了成功,但在OpenAI完成这次转变可谓生死攸关(Existential)的那段时期,有许多公民社会团体和像Midas Project这样的看门狗组织试图阻止这个过程在暗中悄然发生。他们试图获得更多透明度,他们试图就此展开更多公开辩论,因为这是史无前例的(Unprecedented)。正是在那个时候,嗯,有人敲了他的门,他收到了法律传票(Served papers)。
[原文] [Host]: what did the papers say
[译文] [主持人]: 文件上写了什么?
[原文] [Guest]: the papers asked him to reproduce every single piece of communication that he had had that might have involved Musk so this was like this strange paranoia that OpenAI had that Musk was somehow funding these people to block the conversion none of them were actually funded by Musk so in this particular case to their request he simply answered you know I don't have any documents because this doesn't exist
[译文] [嘉宾]: 文件要求他复制他所拥有的、可能涉及马斯克(Musk)的每一份通讯记录。所以这就像是OpenAI有一种奇怪的偏执(Paranoia),认为马斯克不知怎么地在资助这些人来阻止这场转变。实际上他们中没有一个人是由马斯克资助的。所以在这个特定的案例中,对于他们的要求,他只是简单地回答说,你知道,我没有这些文件,因为这根本不存在。
[原文] [Host]: so going back to this point of empires you were saying that one of the factors of an empire is a land grab and then the next one was labor exploitation the third one controlling knowledge production
[译文] [主持人]: 那么回到关于帝国的这一点,你刚才说帝国的一个特征是强占土地(Land grab),然后下一个是劳动力剥削,第三个是控制知识生产。
[原文] [Guest]: and one of the other ones that's really important to understand about the AI empires in particular is empires always have this narrative that they say to the public like we're the good empire and we need to be an empire in the first place because there are also bad empires in the world and if you allow us to take all the resources and use all of the labor then we promise we will bring you progress and modernity for everyone we will bring you to this utopic state akin to an AI heaven but if the evil empire does it first we will descend into a hell and the evil empire in this case most often it's China but actually in the early days OpenAI evoked Google as the evil empire so all of their decisions were about we need to do it first because otherwise Google this evil corporation that's driven by profit will beat us the benevolent nonprofit like this is a critical contest of who wins
[译文] [嘉宾]: 而另一个对于理解AI帝国来说非常重要的特征是,帝国总是有一种他们向公众讲述的叙事(Narrative),比如:“我们是好帝国,而且我们首先需要成为一个帝国,因为世界上还有坏帝国;如果你允许我们拿走所有资源并使用所有劳动力,那么我们承诺将为所有人带来进步和现代性,我们将带你进入这个类似于AI天堂的乌托邦状态;但如果邪恶帝国先做到了,我们就会堕入地狱。”而在这个案例中,邪恶帝国通常是指中国,但实际上在早期,OpenAI将谷歌描绘成邪恶帝国。所以他们所有的决定都在围绕着:“我们需要先做成它,因为否则,被利润驱动的邪恶公司谷歌(与作为仁慈非营利组织的我们相比)就会赢”,这就像是一场谁能赢的关键竞赛。
章节 7:奥特曼罢免风暴的幕后细节
📝 本节摘要:
本节首先揭露了AI巨头高管们(如萨姆·奥特曼和达里奥·阿莫迪)如何通过刻意构建“极好或极坏”的极端神话,来为自己垄断技术开发寻找正当理由。随后,嘉宾分享了她在撰写本书时遭到OpenAI公关团队封杀与“胡萝卜加大棒”式拉黑的真实经历,指出科技巨头常利用“采访渠道(Access)”来控制媒体叙事。紧接着,话题转向了震惊硅谷的“奥特曼罢免事件”。嘉宾通过多方一手采访还原了内幕:奥特曼在ChatGPT爆火后引发了公司内部的极度混乱,他不仅未加平息,反而挑起团队内斗。同时,他向董事会隐瞒了“OpenAI初创基金”实为其个人控制的真相。为了防止奥特曼利用其极强的说服力反扑,包括伊利亚在内的高管与独立董事们决定进行一场保密的“闪电罢免”,却因未提前知会最大金主微软而导致局面失控,最终使奥特曼在几天后重新夺权。
[原文] [Host]: do you think the people building these AI companies believe that the outcome is going to be all good now do you think they think that it's going to be it's going to serve everyone it's going to be the age of abundance everything's going to go up well what do you think they believe what do you think Sam believes
[译文] [主持人]: 你认为构建这些AI公司的那些人相信最终的结果会全都是好的吗?你认为他们觉得它将服务于所有人、将成为一个丰饶时代(Age of abundance)、一切都会好转吗?你认为他们相信什么?你认为萨姆(Sam)相信什么?
[原文] [Guest]: so this is so funny such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly it goes hand in hand like they need that part of the myth in order to then say and that's why we need to be in control of the technology because that's the only way that it's going to go really really well
[译文] [嘉宾]: 所以,这非常有意思,他们围绕AI行业创造的神话(Mythology)的一个核心部分,就包含着“它可能会变得非常糟糕”的信念,这两者是并存的,就像他们需要这部分神话,然后才能说:“这就是为什么我们需要控制这项技术,因为那是让它真正、真正变好的唯一途径。”
[原文] [Guest]: and Altman has said publicly you know the worst case lights out for everyone but best case we cure cancer we solve climate change and there's abundance
[译文] [嘉宾]: 而奥特曼曾公开说过,你知道,最坏的情况是所有人都“熄灯”(灭亡),但最好的情况是我们治愈癌症、解决气候变化,并迎来丰饶时代。
[原文] [Guest]: and Dario Amodei same kind of rhetoric it was like worst case catastrophic or existential harm for humanity best case mass human flourishing so this is like two sides of the same coin like they have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development where there should not be broad participation in developing this technology they must be the ones controlling it at every step of the way
[译文] [嘉宾]: 达里奥·阿莫迪(Dario Amodei)也有同样的修辞,比如最坏的情况是对人类造成灾难性或生存性的伤害,最好的情况是人类大规模繁荣;所以这就像是同一枚硬币的两面,他们必须使用这两种叙事,以便继续为一种极度反民主的AI开发方法辩护,在这种方法中,不应该有广泛的参与来开发这项技术,必须由他们在每一步都掌控着它。
[原文] [Host]: Sam Altman did a tweet saying "There are some books coming out about OpenAI and me we only participated in two of them one by Keach Hagey focused on me and one by Ashlee Vance on OpenAI" um he went on to say no book will get everything right especially when some people are so intent on twisting things but these two authors are trying to you quote-tweeted that tweet from Sam Altman and you said the unnamed book Empire of AI is mine do you believe that tweet from Sam Altman was in reference to your book
[译文] [主持人]: 萨姆·奥特曼发了一条推特(Tweet)说:"有一些关于OpenAI和我的书即将出版,我们只参与了其中两本,一本是基奇·哈吉(Keach Hagey)写的,以我为中心,另一本是阿什利·万斯(Ashlee Vance)写的,关于OpenAI。"嗯,他接着说,没有哪本书能把所有事情都写对,特别是当有些人如此执意于扭曲事实的时候,但这两位作者在努力。你引用转发了萨姆·奥特曼的那条推特,你说那本没被点名的书《AI帝国》(Empire of AI)就是我的。你认为萨姆·奥特曼的那条推特是在指你的书吗?
[原文] [Guest]: 100% because there's only three books coming out about him and he had caught wind that my book was coming out he knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said I'm working on a book now will you participate in it
[译文] [嘉宾]: 100%是的,因为关于他即将出版的书只有三本,而且他已经听到风声说我的书要出版了。他知道我的书要出版,是因为我从写作过程的一开始就联系了OpenAI,并说:"我现在正在写一本书,你们愿意参与吗?"
[原文] [Guest]: and actually initially they said yes even though so my history with OpenAI I profiled the company for MIT Technology Review i embedded within the office for 3 days in 2019 my profile comes out in 2020 the leadership are very unhappy and in my book I actually quote an email that I received that Sam Altman sent to the company about my profile saying "Yeah this is not great." And from then on the company's stance to me was "We are not going to participate in anything that you do we are not going to respond to any of the questions that you send"
[译文] [嘉宾]: 实际上他们最初答应了,尽管……所以我与OpenAI的历史是这样的,我曾为《麻省理工科技评论》对该公司进行过专题报道,2019年我在他们办公室嵌入式观察了3天,我的报道在2020年发表,领导层非常不高兴。在我的书中,我实际上引用了我收到的一封萨姆·奥特曼发给全公司的关于我那篇报道的电子邮件,上面写着:"是的,这不太好。"从那时起,公司对我的立场就变成了:"我们不会参与你做的任何事情,我们也不会回应你提出的任何问题。"
[原文] [Guest]: and this was you know this was things that they explicitly articulated it wasn't like me inferring um so I had a colleague at MIT Technology Review that also covered AI and at one point OpenAI sent him this press release being like "We would love for you to cover this story." And he was like "I'm really busy will you send it to Karen?" And they were like "Oh no we have a history you understand?"
[译文] [嘉宾]: 而且这是,你知道的,这是他们明确表达出来的,并不是我的推测。嗯,所以我在《麻省理工科技评论》有一位同样报道AI的同事,有一次OpenAI给他发了这份新闻稿,意思是“我们很希望你能报道这个故事”。然后他说:“我真的很忙,你们能发给卡伦(Karen)吗?”然后他们说:“哦不,我们之间有过节,你懂的。”
[原文] [Guest]: And so for three years they refused to talk to me but then I ended up at the Wall Street Journal where they felt a bit compelled because it was the Journal to reopen the lines of communication and so I started having you know more dialogue with them every time I wrote a piece I would always send them here's my request for comment i would always ask them like will you sit for interviews and we did get to a more productive relationship and then I embarked on the book
[译文] [嘉宾]: 所以,所以整整三年他们都拒绝和我说话,但后来我去了《华尔街日报》(Wall Street Journal),在那里,由于那是《华尔街日报》,他们可能觉得有点被迫需要重新开放沟通渠道。所以我,我开始,你知道,与他们有更多对话,每次我写一篇文章,我总是发给他们:“这是我的评论请求(Request for comment)”,我总是问他们:“你们愿意接受采访吗?”我们确实建立了一种更有成效的关系,然后我开始写这本书。
[原文] [Guest]: so I I left the journal to focus on the book full-time and I told them right away I'm working on this book i want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book and so they were like we can arrange interviews for you you can come back to the office we'll set up some conversations
[译文] [嘉宾]: 所以我离开了《华尔街日报》,全职专注于这本书,我立刻告诉他们:“我正在写这本书,我想继续这种富有成效的对话,确保我在书中反映OpenAI的观点。”然后他们说:“我们可以为你安排采访,你可以回办公室来,我们会安排一些对话。”
[原文] [Guest]: and then as we were going back and forth on this the board fired Sam Altman and that's when things started going kind of south because the company started becoming very sensitive to scrutiny and so then they started kicking the can down the road down the road down the road and I kept saying "Hey when are we rescheduling this what's going on?"
[译文] [嘉宾]: 然后,就在我们就这件事来回沟通时,董事会解雇了萨姆·奥特曼,也就是从那时候起,事情开始变得有些糟糕,因为公司开始对审查变得非常敏感。所以他们就开始推脱,一拖再拖、一拖再拖,我一直问:“嘿,我们什么时候重新安排这个?发生什么事了?”
[原文] [Guest]: And then I get an email saying "We are not going to participate at all you are not coming to the office you're not doing interviews." and I had actually already booked my tickets so I was already going to fly to San Francisco to have the the interviews and so then I told them I was like "That's fine i will still engage in the process where I'll give you extensive requests for comment i'll ask through my reporting I'll keep you updated on all the things that I'm finding so that you can choose to still comment."
[译文] [嘉宾]: 然后我收到一封电子邮件,上面写着:“我们完全不会参与,你不能来办公室,你也不能做采访。”而我实际上已经订好了机票,我已经准备飞往旧金山进行这些采访了。然后我告诉他们,我说:“没关系,我仍然会参与这个过程,我会给你们提供广泛的评论请求,我会在我的报道过程中向你们通报我发现的所有事情,以便你们仍然可以选择发表评论。”
[原文] [Guest]: I gave them 40 pages of requests for comment and I gave them over a month to respond to all of that so when the tweet came out we were doing all this back and forth and that's when Altman tweeted this and they never responded to a single one of the 40 pages
[译文] [嘉宾]: 我给了他们长达40页的评论请求,并给了他们一个多月的时间来回复所有这些内容。所以那条推特出来的时候,正是我们在做所有这些来回沟通、试图……就是在那时奥特曼发了这条推特。而他们从来没有回复那40页中的任何一个字。
[原文] [Host]: Sam Altman does a lot of interviews you know he's doing a lot of interviews all the time he's done every podcast i've seen him on everything from Tucker Carlson to Theo Von to Joe Rogan um podcasts all over the world i wonder why he won't do mine
[译文] [主持人]: 萨姆·奥特曼做过很多采访,你知道,他一直在做很多采访,他上过所有的播客,我见过他上过从塔克·卡尔森(Tucker Carlson)到西奥·冯(Theo Von),再到乔·罗根(Joe Rogan)的节目,嗯,世界各地的播客。我想知道为什么他不肯上我的节目。
[原文] [Guest]: well maybe i don't know why I don't know i think I'm fair with everyone i just ask questions I genuinely care about i don't come in with huge preconceptions at least when I meet people for the first time but I've heard through the grapevine um that he doesn't want to do mine
[译文] [嘉宾]: 嗯,也许……我不知道为什么。我,我,我不知道,我认为我对每个人都很公平,我只问,我只问我真正关心的问题,我不会带着巨大的先入之见来,或者至少初次见面对人如此,但我通过小道消息听到,嗯,他不想上我的节目。
[原文] [Host]: i mean going back to what you were saying earlier that with this the way that OpenAI and these companies control research you asked do they also do this with journalists i mean yes the answer is yes and apparently they they also do it with anyone who has you know a broad mass communications platform it's not just about the conversation that you're going to have with them it's about who you also choose to platform
[译文] [主持人]: 我是说,回到你之前说的,关于OpenAI和这些公司控制研究的方式,你问他们是否也对记者这样做?我是说,是的,答案是肯定的,显然他们也对任何拥有,你知道的,广泛大众传播平台的人这样做。这不仅仅关乎你将与他们进行的对话,还关乎你选择为谁提供平台。
[原文] [Guest]: and there's this huge problem in technology journalism where companies know that a really big carrot that they can give to technology journalists is access yeah yeah yeah and they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to
[译文] [嘉宾]: 科技新闻界存在着一个巨大的问题,公司知道他们能给科技记者的一根非常大的“胡萝卜”(诱饵)就是采访渠道(Access),是的,是的,是的,如果他们听到风声说你正在和他们不希望你接触的人交谈,他们会毫不犹豫地收回那个采访渠道。
[原文] [Host]: this is so true and I don't think the average person really truly understands this yeah so this kind of sounds like theory as you say it but I'm not going to name names here because I don't think it's important but there is a particular person in AI who um whose team have basically dangled the carrot of them coming here for like 18 months and I'm like you don't you don't have to dangle the carrot i'm going to speak to whoever I want to regardless of the carrot or not
[译文] [主持人]: 这太真实了,我不认为普通人真的能完全理解这一点,是的。所以这听起来就像你说的理论,但我不想在这里指名道姓,因为我觉得那不重要,但在AI领域有一个特定的人,嗯,他的团队基本上把“他们要来这里(上节目)”作为诱饵(Dangled the carrot)吊了我大概18个月,而我心里想,你们不需要抛出这个诱饵,无论有没有这个诱饵,我想和谁谈就和谁谈。
[原文] [Host]: and when this person comes if they want to come I'll give them a fair shot i'll ask them all genuinely curious questions about what they're doing their incentives i won't gotcha them i don't have a history of ever gotcha-ing anybody even if I dislike even if I have a difference of opinion I'll ask the question but they dangle carrots and they say "Well if you know he's thinking about it let's think about a date."
[译文] [主持人]: 而且当这个人来的时候,如果他们想来,我,我会给他们一个公平的机会,我会问他们所有我真正好奇的问题,关于他们在做什么、他们的动机。我不会给他们设套(Gotcha them),我从来没有给任何人设套的黑历史,即使我不喜欢……即使我有不同的意见,我也会提出问题。但他们抛出诱饵,他们说:“好吧,你知道,他,他正在考虑,让我们想想看哪个日期合适。”
[原文] [Host]: And the strategy is and I think they think those people don't understand this is if we just dangle it for long enough then they will um perform in the way that we want them to and they'll be pleasant about us they won't be critical they won't give a platform to our critics and I think a lot of their game is just dangle the carrot forever
[译文] [主持人]: 他们的策略是——而且我认为他们觉得那些人并不明白这一点——只要我们吊胃口吊得足够久,那么他们就会,嗯,按照我们希望的方式行事,他们会对我们很友好,他们不会有批判性,他们不会把平台给我们的批评者。我认为他们很多时候的把戏就是永远吊着那个诱饵。
[原文] [Guest]: yes yeah that's like the optimal outcome is if we just dangle it if we just tell them yeah look we're just looking at the schedule it just doesn't work
[译文] [嘉宾]: 是的,是的,这就是最理想的结果,如果我们就只是吊着它,如果我们就只是告诉他们:“是的,看,我们只是在尽量安排日程,只是时间排不开。”
[原文] [Host]: i think in the modern world you just have to go there and give your opinion and allow the clash of ideas in the public forum let the viewers decide for themselves what they think um but this is a Yeah this is such a huge part of their machinery is the way that they use these tactics to massage the public image of these companies and make sure that information that they don't want out and even opinions that they don't want out there don't go out there
[译文] [主持人]: 我认为在现代社会,你只需要去那里表达你的观点,允许公众论坛上思想的碰撞,让观众自己决定他们的想法。嗯,但这确实是,是的,这是他们机器(Machinery)中如此庞大的一部分,他们使用这些策略来粉饰(Massage)这些公司的公众形象,并确保他们不希望泄露的信息,甚至是不希望出现的观点,都不会流传出去。
[原文] [Guest]: mhm and so this is this is you know I feel very lucky now that OpenAI shut the door early on me at the time I didn't feel lucky i felt like I had screwed myself over access is a nice carrot for a journalist right but like you're supposed to report the truth and you're always supposed to report in the interest of the public like that is the point of journalism and in that moment I was like relatively junior in my career i was like did I misunderstand what journalism is about like should I have actually been playing the access game mhm but it was too late i had the door shut to me
[译文] [嘉宾]: 嗯,所以这就是,你知道的,我现在觉得非常幸运,OpenAI很早就对我关上了大门。当时我并不觉得幸运,我感觉我把事情搞砸了。对于记者来说,有采访渠道(Access)固然诱人,对吧?但你本该报道真相,你总是应该为了公众利益而报道,这是新闻的重点。在那一刻,我当时在职业生涯中还比较资浅,我当时在想:我是不是误解了新闻业到底是关于什么的?比如,我是不是本来就应该玩这种建立关系渠道的游戏(Access game)?嗯,但为时已晚,那扇大门已经对我关闭了。
[原文] [Guest]: and so I had to build my career understanding that the front door was never going to be open and that actually really strengthened my own ability to just tell it like it is like objective yeah and just report what I see are the facts being presented to me irrespective of whether the company likes it or not and most often the company really does not like it but I can continue to do the work they don't need to open the front door for me i was still able to do more than 300 interviews
[译文] [嘉宾]: 所以我不得不在深知正门永远不会敞开的理解下建立我的职业生涯,而这实际上真正增强了我自己如实讲述的能力,比如客观(Objective),是的,只报道我看到的呈现在我面前的事实,不管公司喜不喜欢。而且大多数时候,公司真的非常不喜欢。但我可以继续做这份工作,他们不需要为我打开前门,我仍然能够完成超过300次采访。
[原文] [Host]: so Sam Altman gets kicked off the OpenAI executive team did you find out why that happened
[译文] [主持人]: 那么萨姆·奥特曼被踢出OpenAI高管团队,你查明那是为什么发生的吗?
[原文] [Guest]: yeah there's a scene-by-scene recounting from I can't remember the exact number of sources so I don't want to misquote myself but it was around six or seven people that were directly involved or had spoken to people directly involved in the decision-making process
[译文] [嘉宾]: 是的,书里有一个逐个场景(Scene by scene)的复盘;我不记得具体的信源数量了,所以我不想错误引用自己的话,但信源大约有六到七个人,他们要么直接参与了决策过程,要么与直接参与者交谈过。
[原文] [Guest]: so Ilya Sutskever has these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company he then approaches a board member Helen Toner
[译文] [嘉宾]: 所以伊利亚·苏茨克维(Ilya Sutskever)看到了这些严重的担忧,关于奥特曼的行为如何导致不良的研究结果以及公司糟糕的决策,然后他接触了一位董事会成员,海伦·托纳(Helen Toner)。
[原文] [Host]: Ilya for anyone that doesn't know is the co-founder of OpenAI we mentioned earlier
[译文] [主持人]: 给不知道的人解释一下,伊利亚就是我们之前提到的OpenAI的联合创始人。
[原文] [Guest]: yes and he kind of does a bit of a sounding board thing with Helen just because Ilya is freaking out he's been sitting on these concerns for a while and he's like if I tell this to someone this could also be really bad for me if Altman finds out and so he asks for a meeting with Toner and in that first meeting he barely says a thing he's just dancing around trying to figure out hey is this someone that I can maybe trust to divulge more information
[译文] [嘉宾]: 是的,他在某种程度上把海伦当作一块试金石(Sounding board),纯粹是因为伊利亚被吓坏了(Freaking out),他就像,他已经对这些担忧按兵不动一段时间了,他心想,如果我把这件事告诉别人,一旦奥特曼发现,这对我来说也可能非常糟糕。所以他要求和托纳开个会,在那第一次会议中,他表现得,他几乎什么也没说,他只是在兜圈子(Dancing around),试图弄清楚:嘿,这个人是不是也许我可以信任并向她透露更多信息的人?
[原文] [Host]: and Toner's role and responsibilities at OpenAI were she was a board member just a board member yeah and specifically an independent board member
[译文] [主持人]: 而托纳在OpenAI的角色和职责是,她是一名董事会成员,只是董事会成员?是的,并且具体来说是一名独立董事会成员(Independent board member)。
[原文] [Guest]: so OpenAI when it was a nonprofit the board was split between people who had a financial stake in the company and then people who were fully independent and this was meant to be a structure that would balance the decision-making to be in the benefit of the public interest rather than to be in the benefit of the for-profit entity that OpenAI then created
[译文] [嘉宾]: 所以OpenAI在它还是一个非营利组织时,董事会是由两拨人组成的:在公司拥有经济利益相关(Financial stake)的人,以及完全独立的人。这本应是一种旨在平衡决策的结构,使其符合公共利益,而不是符合OpenAI后来创建的营利性实体的利益。
[原文] [Guest]: and Ilya as a non-independent board member was approaching Toner as an independent board member to try and see whether or not she was potentially seeing or hearing the same things that he was about the effect that Altman was having on the company
[译文] [嘉宾]: 而伊利亚作为一名非独立董事会成员,正在接触托纳这位独立董事会成员,试图看看她是否可能看到或听到了与他相同的关于奥特曼对公司产生的影响的事情。
[原文] [Guest]: this then sets off a series of conversations first between Ilya and Helen and then between Mira Murati and some of the board members Mira Murati was at that point the chief technology officer of OpenAI where these two senior leaders essentially through these conversations and through documentation that they're pulling together like email Slack messages and so forth they convey to the independent board members three independent board members we are very concerned about Altman's leadership like he is creating too much instability at the company and it is like he is the root of the problem
[译文] [嘉宾]: 这随后引发了一系列的对话,首先是在伊利亚和海伦之间,然后是在米拉·穆拉蒂(Mira Murati)和一些董事会成员之间。米拉·穆拉蒂在当时是OpenAI的首席技术官(CTO),这两位高级领导本质上通过这些对话以及他们汇总的文件(如电子邮件、Slack消息等等),向独立董事们——三位独立董事——传达:"我们对奥特曼的领导力非常担忧,比如他正在公司制造太多的不稳定性(Instability),而且似乎他就是问题的根源。"
[原文] [Guest]: they were trying to say to these independent board members like the problem will not be fixed unless Altman is removed because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore and they're competing rather than collaborating on what's supposed to be this really really important technology
[译文] [嘉宾]: 并不是……他们试图对这些独立董事说,比如:“除非把奥特曼移出,否则问题是解决不了的”,因为他正在让团队互相对立,并制造了这种人们无法再相互信任的环境,他们原本应该在这项非常非常重要的技术上进行合作,但现在却在相互竞争。
[原文] [Host]: when you say instability that's quite a vague term that could mean lots of things like instability could mean pushing people to work harder right what do you mean by instability in as specific terms as you can possibly say them
[译文] [主持人]: 当你提到“不稳定性”时,这是一个非常模糊的术语,可能意味着很多事情,比如不稳定性可能意味着逼迫人们更努力地工作,对吧?你在尽可能具体的术语下所说的“不稳定性”是什么意思?
[原文] [Guest]: when ChatGPT came out in the world OpenAI was wholly unprepared they didn't think that they were launching a gangbusters product yeah they thought they were releasing a research preview that would help them get the data flywheel going collect a bunch of data from users that would then inform what they thought would be the gangbusters product which was a chatbot using GPT-4 and ChatGPT was using GPT-3.5
[译文] [嘉宾]: 当ChatGPT问世时,OpenAI是完全没有准备好的。他们并不认为自己推出了一款爆款产品(Gangbusters product),是的,他们以为自己只是发布了一个研究预览版,这将帮助他们启动数据飞轮(Data flywheel),从用户那里收集大量数据,以此来指导他们心目中真正的爆款产品——一个使用GPT-4的聊天机器人,而ChatGPT当时使用的是GPT-3.5。
[原文] [Guest]: and because of that there were servers crashing all the time because they had to scale their infrastructure you know faster than any company in history and there were um there were all of these outages they were trying to also hire faster than any company in history to try and have more personnel there and they were then sometimes hiring people that they were like "Actually we made a mistake we shouldn't have hired you." So they were firing people left and right and people were just disappearing off of Slack and that's how their colleagues would learn that they were no longer at the company
[译文] [嘉宾]: 正因如此,当时服务器一直在崩溃,因为他们没有……他们不得不以前所未有的速度扩展他们的基础设施(Infrastructure),而且出现了,嗯,所有这些宕机事件。他们还试图以史上最快的速度招聘,试图在那里配备更多人员,然后有时他们招了人,却又觉得:“其实我们犯了个错,我们不该雇佣你。”所以他们到处裁人,人们就直接从Slack上消失了,他们的同事也是通过这种方式得知他们已经不在公司了。
[原文] [Guest]: and so it was yes like many fast growing companies a very chaotic environment and a particularly chaotic environment because it was extra fast like they had to accelerate more than any other startup and on top of that Mira Murati and Ilya Sutskever felt that Altman was making it worse like he was not actually effectively ameliorating the circumstances of the chaos he was actually sowing more chaos getting these teams to be more divided
[译文] [嘉宾]: 所以,是的,就像许多快速发展的公司一样,这是一个非常混乱的环境,而且是一个特别混乱的环境,因为它发展得格外快,比如他们必须比任何其他初创公司加速得都快。最重要的是,米拉·穆拉蒂和伊利亚·苏茨克维觉得奥特曼让情况变得更糟了,比如他并没有真正有效地改善混乱的局面,他实际上在播撒更多的混乱,让这些团队变得更加分裂。
[原文] [Guest]: and this is where it's important to understand that the executives and the independent board members they're all operating under this idea that they're building AGI and that AGI could either be devastating or utopic to humanity and so it's not yes it's like any other company and no it's not like any other company you cannot have like in their view you cannot have this degree of chaos as the pressure cooker for creating a technology that they in their conception could make or break the world
[译文] [嘉宾]: 在这里非常重要的一点是要理解,高管和独立董事会成员,他们都在这样一种观念下运作,即他们正在构建AGI,而AGI可能对人类来说要么是毁灭性的,要么是乌托邦式的。所以答案既是“是的,它和其他任何公司一样”,又是“不,它和其他任何公司都不一样”:在他们看来,你不能让这种程度的混乱,成为孕育一项在他们的构想中足以成就或毁灭世界的技术的高压锅(Pressure cooker)。
[原文] [Guest]: and so that is basically what the independent board members also begin to reflect on they have these conversations amongst themselves where they're like "Well based on what we're hearing about Altman's behavior like if this was an Instacart would that warrant firing him?" And they concluded "Maybe not but this is not Instacart." And that's why they were like "Well crap maybe this actually does rise to the bar where we should consider replacing him because we are ultimately building a technology that we think could have transformative impacts either in the positive or negative direction"
[译文] [嘉宾]: 所以这基本上也是独立董事们开始反思的地方。他们在私底下有这些对话,他们说:“好吧,基于我们听到的关于奥特曼行为的说法,如果这是一家Instacart(生鲜代购平台),这足以解雇他吗?”他们得出的结论是:“也许不足以,但这可不是Instacart。”这就是为什么他们觉得:“好吧,糟糕,也许这实际上确实达到了我们应该考虑替换他的门槛,因为我们最终正在构建一项我们认为可能在积极或消极方向上产生变革性影响的技术。”
[原文] [Guest]: and so that is what happens it's like these two executives and then the independent board members also they were hearing other feedback as well from their connections within the company with other people in the industry at one point Adam D'Angelo who is one of the independent board members and the CEO of Quora uh which is you know a tech startup in the valley he is at a party in San Francisco and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured its OpenAI startup fund which was this fund that the company had created to start investing in other startups mhm
[译文] [嘉宾]: 所以这就是发生的事情。就像这两位高管,然后独立董事们,他们也从他们在公司内部的关系、与行业内其他人的联系中听到了其他反馈。有一次,亚当·德安吉洛(Adam D'Angelo,独立董事之一,也是硅谷科技初创公司Quora的CEO)在旧金山的一个派对上,他开始听到一些谣言,说OpenAI构建其“OpenAI初创基金”(OpenAI startup fund)的方式有些奇怪,这是公司创建的一个开始投资其他初创公司的基金,嗯。
[原文] [Guest]: and he realizes they'd never really seen documentation about how the startup fund had been set up from Altman and finally they get the documents and it turns out that the OpenAI startup fund is not OpenAI's startup fund it's Altman's startup fund
[译文] [嘉宾]: 然后他意识到,他们从来没有真正看到过奥特曼提供的关于初创基金是如何设立的文件。最终他们拿到了文件,结果发现,OpenAI初创基金并不是OpenAI的初创基金,而是奥特曼个人的初创基金。
[原文] [Guest]: and this was something like one of several experiences that the independent board members were also having where they're like there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done
[译文] [嘉宾]: 而这就像是独立董事会成员正在经历的几个体验之一,他们觉得有些事情不对劲:在奥特曼描绘正在做的事情与实际正在做的事情之间,不断存在着种种不一致(Inconsistencies)。
[原文] [Guest]: and so when these two executives approach the board or the independent board members then they're like "Okay this lines up with also the experiences that we've been having." And at that point they then have this series of very intense discussions where they're meeting almost every day talking about should we actually really consider removing Altman and in the end they conclude yes we should and if we're going to do it we need to do it quickly because they were very concerned that the moment that Altman found out his persuasive abilities would make it impossible to do
[译文] [嘉宾]: 所以当这两位高管找到董事会或独立董事成员时,他们就觉得:“好吧,这也与我们一直以来的经历相符。”此后,他们进行了一系列非常激烈的讨论,几乎每天都在开会,讨论我们是否真的应该考虑罢免奥特曼。最终他们得出结论:是的,我们应该。而且如果要做,就必须尽快行动,因为他们非常担心,一旦奥特曼察觉,他那极强的说服力(Persuasive abilities)将使罢免变得不可能。
[原文] [Guest]: and so they end up firing Altman without telling anyone you know they don't talk to any stakeholders to get them on the same page microsoft gets a call right before they execute the action saying "We're going to fire Altman." And Microsoft for anyone that doesn't know are a lead investor in OpenAI at the time
[译文] [嘉宾]: 所以他们最终在没有告诉任何人的情况下解雇了奥特曼。你知道,他们没有与任何利益相关者(Stakeholders)交谈以让他们达成共识。微软(Microsoft)在他们执行行动之前才接到了一个电话,说:“我们要解雇奥特曼。”而给不知道的人解释一下,微软在当时是OpenAI的领投方(Lead investor)。
[原文] [Host]: yes one of the only investors in OpenAI at the time
[译文] [主持人]: 是的,也是当时OpenAI仅有的几家投资者之一。
[原文] [Guest]: and that is what then devolves the whole thing because every single person that is affected by this decision is now extremely angry that they were not involved and that is what then creates this campaign to bring Altman back and then Altman is reinstalled as CEO days later
[译文] [嘉宾]: 而这就是后来导致整件事失控(Devolves)的原因,因为每一个受这个决定影响的人,现在都对自己未能参与其中感到极度愤怒,而这也正是后来引发那场“迎回奥特曼”运动的原因,随后几天内,奥特曼便官复原职(Reinstalled),重新出任CEO。
章节 8:硅谷的弥赛亚情结:分道扬镳的创始人与技术神话
📝 本节摘要:
本节探讨了OpenAI早期核心团队为何会彻底分崩离析并各自自立门户。嘉宾指出,几乎所有与萨姆·奥特曼共事过的高管(如马斯克、达里奥、伊利亚、米拉)最终都与他发生冲突并离开,创办了自己的AI公司,因为这些科技大佬都试图以自己的意志来塑造AI。面对“召唤恶魔”与极高人类毁灭概率的追问,嘉宾借用科幻巨著《沙丘》的比喻,深刻揭露了这些AI领袖的“弥赛亚情结”——他们最初通过制造恐惧神话来聚拢权力与资本,但最终在日复一日的自我洗脑中陷入认知失调,甚至连自己都对这些神话信以为真了。
[原文] [Host]: how does a CEO of a major company get fired by the board because board members there's a quote in your book on page 357 where you say about Ilya saying "I don't think Sam is the guy who should have the finger on the button for AGI." Now I asked myself this question you know I work with lots of people here we have 150 people that work in this business and those people know me best they see me on camera they see me off camera so if they said that we don't think Steven is the right person to host the Diary yeah it would take a lot for them to say that they must have seen some shit off camera for them to go we don't think he's the right person to be on camera yeah or for whatever reason and in the case of AI which is much more consequential than a podcast that is you know filmed in my old kitchen um it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said this isn't the guy to lead this consequential thing Mira Murati then also said I don't think Altman is the right guy and then they both left later so then Altman comes back and lo and behold Ilya never comes back so his concerns about the fact that Altman finding out would be bad for him manifested he ended up not coming back and Mira Murati then left shortly thereafter quite a lot of these people leave don't they OpenAI
[译文] [主持人]: 一家大公司的CEO怎么会被董事会解雇呢?因为董事会成员……你的书第357页有一段引用,你提到伊利亚(Ilya)说:“我不认为萨姆是那个应该把手指放在AGI按钮上的人。”现在我问自己这个问题,你知道,我在这里和很多人一起工作,我们公司有150名员工,那些人最了解我,他们看得到镜头前的我,也看得到镜头后的我。所以如果他们说,我们认为史蒂文(Steven)不是主持这个节目的合适人选……是的,要让他们说出这样的话是需要很大决心的,他们一定是在镜头后看到了一些糟糕透顶的事情(seen some shit),才会得出“我们认为他不适合出现在镜头前”的结论,是的,或者出于任何原因。而在AI这个比在我旧厨房里录制的播客要重要得多的领域,嗯,想到一家企业的联合创始人去找董事会说“这家伙不适合领导这个极其重要的事业”,几乎让人浑身发冷。米拉·穆拉蒂(Mira Murati)后来也说,我不认为奥特曼是合适的人选,然后他们后来都离开了。所以后来奥特曼回来了,果然,伊利亚再也没有回来。他担忧的“奥特曼发现后会对他不利”的事情变成了现实,他最终没有回来,而米拉·穆拉蒂随后不久也离开了。有相当多的人离开了OpenAI,不是吗?
[原文] [Guest]: they do so if you consider one of the origin stories of OpenAI is this dinner that happened at the Rosewood Hotel which is a very swanky hotel um right in the heart of Silicon Valley that uh was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area and there was this dinner that was there where Altman was intending to recruit the OG team that would start OpenAI so he's kind of telling everyone you might have a chance to meet Musk because Musk is going to come to this dinner and he cold emails Ilya and gets Ilya to then come and Ilya specifically wants to come because he wants to meet Musk and he also emails all these other people including Greg Brockman Dario Amodei these are all people that ended up working at OpenAI and almost all of them not every one of them but almost all of them end up working at OpenAI and leaving almost all of them end up leaving specifically after they clash with Altman
[译文] [嘉宾]: 他们确实离开了。如果你回想一下OpenAI的起源故事之一,那是发生在瑰丽酒店(Rosewood Hotel)的一场晚宴,这是一家非常时髦的酒店,嗯,就在硅谷的中心地带,这是埃隆·马斯克(Elon Musk)每次从洛杉矶来到湾区时最喜欢的酒店之一。在那里举办了一场晚宴,奥特曼打算在那招募创立OpenAI的元老(OG)团队。所以他某种程度上在告诉大家,你们可能有机会见到马斯克,因为马斯克要来参加这场晚宴。他向伊利亚发了冷邮件(Cold emails)让他来,而伊利亚特别想来,因为他想见马斯克。他还给所有其他这些人发了邮件,包括格雷格·布罗克曼(Greg Brockman)、达里奥·阿莫迪(Dario Amodei)。这些人几乎全部(不是每一个,但几乎全部)最终都在OpenAI工作过,然后又离开了,而且几乎所有人都是在与奥特曼发生冲突之后离开的。
[原文] [Host]: and Ilya he left and launched a company called Safe Super Intelligence which is I mean that's an indirect if I've ever heard one do you know what I mean do you know what I mean if someone like co-founded this podcast with me and then they left and started a podcast called Safe Podcasting I'd take that as a slight I'd have people knocking on their door and asking for their texts
[译文] [主持人]: 还有伊利亚,他离开并创办了一家名为“安全超级智能”(Safe Super Intelligence)的公司,我是说,如果我听过什么含沙射影的话,这就是了。你懂我的意思吗?你懂我的意思吗?如果有人和我共同创办了这个播客,然后他们离开并开了一个名为“安全播客”(Safe Podcasting)的节目,我会把它看作是一种轻蔑。我、我会派人去敲他们的门,并索要他们的短信。
[原文] [Guest]: one of the things that is happening here is it is not a coincidence that every single tech billionaire has their own AI company mhm they want to create AI in their own image and that's why they keep not getting along and in fact it's not just don't get along they end up hating each other after working together mhm and then splinter off into their own organizations so after Musk leaves he starts xAI after Dario leaves he starts Anthropic after Ilya leaves he starts Safe Super Intelligence after Mira leaves she starts Thinking Machines Lab they want to have control over their own vision of this technology and the best way that they have derived from their experiences of trying to put their vision into the arena is by creating a competitor and then competing with OpenAI and with all the other companies out there
[译文] [嘉宾]: 正在发生的其中一件事是,每一个科技亿万富翁都有自己的AI公司,这绝非巧合,嗯。他们想以自己的形象来创造AI,这就是为什么他们一直合不来,事实上不仅仅是合不来,他们在一起工作后最终会互相仇恨,嗯,然后分裂(Splinter off)成他们自己的组织。所以在马斯克离开后,他创办了xAI;达里奥离开后,他创办了Anthropic;伊利亚离开后,他创办了Safe Super Intelligence;米拉离开后,她创办了Thinking Machines Lab。他们想要掌控自己对这项技术的愿景,而他们从试图将自己的愿景投入竞技场的经验中得出的最佳方式,就是创建一个竞争对手,然后与OpenAI以及外面所有其他的公司竞争。
[原文] [Host]: do you think some of these AI CEOs realize that they are quite literally summoning the demon as Elon said 10 years ago but they don't really care because being the person that summoned the demon makes you consequential and powerful and historical even if the outcome is potentially horrific even if there's like a 20% outcome of it being horrific i remember I think it was Dario he's the one that said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization 25% is a one in 4 chance if you put bullets in a four-chamber revolver and said Steven the upside is you could become a multi-gazillionaire and be remembered forever the downside is that there would be a bullet in your head there is no chance that I would take that bet with a 25% potential chance of things going catastrophically wrong
[译文] [主持人]: 你认为这些AI公司的CEO们有没有意识到,他们实际上真的就像埃隆10年前说的那样在“召唤恶魔”(Summoning the demon),但他们并不真正在乎,因为成为那个召唤恶魔的人能让你变得举足轻重、强大并在历史上留名?哪怕结果可能是可怕的,哪怕有20%的可能会是可怕的。我记得,我想是达里奥,正是他说过“在人类文明的尺度上,事情发生灾难性错误的几率可能在10%到25%之间”。25%是四分之一的几率,如果你把子弹装进一把四发左轮手枪里,并说:“史蒂文,好的一面是你可能成为超级亿万富翁并永远被铭记,坏的一面是你的脑袋里会挨一枪。”在有25%事情发生灾难性错误的潜在几率下,我绝不可能打那个赌。
[原文] [Guest]: so I have a very long answer to this because do they know if they're summoning the demon it really depends on what we define as summoning the demon and in this particular case to go back to what we were saying before there's a mythology that the AI industry uses where summoning the demon is an integral part of convincing everyone that therefore they can be the only ones that are developing this technology
[译文] [嘉宾]: 所以对这个问题我有一个很长的答案,因为他们是否知道自己在召唤恶魔,这真的取决于我们如何定义召唤恶魔。在这个特定的例子中,回到我们之前所说的话,AI行业正在使用一种神话(Mythology),在这种神话中,“召唤恶魔”是说服所有人的一个不可或缺的部分,借此让他们相信只有他们自己才是唯一能开发这项技术的人。
[原文] [Host]: i got it so on one end you got to say if we don't China will and that's terrible yeah but if we let anyone else do it other than me then we're fucked as well
[译文] [主持人]: 我明白了,所以一方面你得说,如果我们不做,中国就会做,那太可怕了;是的,但如果我们让除了我之外的任何人去做,那我们也完蛋了。
[原文] [Guest]: exactly so that means that I have to do it and you have to give me money and support exactly so when they're saying these things we should understand it not as like a genuine prediction based on what they're seeing because first of all we don't predict the future we make it we should understand this as an act of speech to persuade other people into believing that they should cede more power more resources to these individuals and so do they know that they're summoning the demon i mean they are purposely trying to create this feeling within the public that they are because it is a crucial part of their power but do they if we were to define just do they realize that the things that they are doing are having already really harmful impacts all around the world on vulnerable people vulnerable communities vulnerable countries that's where I'm like maybe yes maybe no
[译文] [嘉宾]: 完全正确。所以那意味着必须由我来做,而你必须给我钱和支持。完全正确,所以当他们在说这些事情时,我们应该理解为这不是一种基于他们所见所闻的真实预测(因为首先,我们不预测未来,我们创造未来)。我们应该将其理解为一种言语行为(Act of speech),旨在说服其他人相信他们应该向这些个人让渡(Cede)更多的权力、更多的资源。所以他们知道自己正在召唤恶魔吗?我的意思是,他们是故意试图在公众中制造出“他们正在召唤恶魔”这种感觉,因为这是他们权力的关键部分。但是,如果我们仅将其定义为,他们是否意识到他们正在做的事情已经对全世界的弱势群体、弱势社区、弱势国家产生了极其有害的影响?在这一点上,我的态度是:也许知道,也许不知道。
[原文] [Guest]: and they don't really care because in the frame of mind like I sometimes use the analogy that the AI world is like Dune for anyone that doesn't know Dune science fiction epic written by Frank Herbert and it's set in this intergalactic era where there are all these houses and they're fighting each other for spice so it's a call back to colonialism and empire and they all are trying to control the spice but one of the features of this story is that there are these myths that are seeded on the different planets a religious myth basically about the coming of the Messiah that are used as ways to control the people
[译文] [嘉宾]: 并且他们并不真正在乎,因为在这种思维框架下,就像我有时使用的那个比喻:AI的世界就像《沙丘》(Dune)。给不知道《沙丘》的人解释一下,这是由弗兰克·赫伯特(Frank Herbert)撰写的科幻史诗,背景设定在这个星际时代,那里有所有这些家族,他们为了香料(Spice)互相厮杀,这本质上是对殖民主义和帝国的呼应,他们都在试图控制香料。但是这个故事的一个特征是,在不同的星球上播种着这些神话,关于……一个关于弥赛亚(Messiah,救世主)降临的宗教神话,这些神话被用作控制人民的方式。
[原文] [Guest]: and Paul Atreides when he arrives at the planet Arrakis uh with the intention of um trying to then fight against the empire and um avenge his father's death he steps into a myth that has been seeded on this planet that says that one day there will be a Messiah that comes and saves the planet so he steps into the role of the Messiah and leans into this idea in order to better control the people and rally them behind him as a leader to help with this quest he knows that it's a myth in the beginning but because he lives and breathes and embodies it it kind of starts to blur in his mind whether this is really a myth or whether he's really the messiah
[译文] [嘉宾]: 而保罗·厄崔迪(Paul Atreides)当他到达厄拉科斯星球(Arrakis)时,呃,带着试图对抗帝国并为他父亲的死复仇的意图,他步入了一个早已在这个星球上播种下的神话中,这个神话说总有一天会有一位弥赛亚降临并拯救这个星球。所以他步入了弥赛亚的角色,并顺应了这个理念,以便更好地控制人民并将他们团结在自己这位领袖的周围,来帮助完成这项任务。一开始他知道这是一个神话,但是因为他日复一日地生活在其中、呼吸着它、体现着它,在他的脑海中,这到底真的是一个神话,还是他自己真的就是那位弥赛亚,边界开始变得模糊了。
[原文] [Guest]: and this is what I think happens in the AI world on one hand there are all these executives that actively engage in mythmaking because you know I have all these internal documents that I write about in the book where they are very keenly aware of how to bring the public along with them by showing them dazzling demonstrations of the technology by using crafting a mission that will sound really good uh and and and make people give more leniency to their companies so they know they're doing the mythmaking and also I think many of them lose themselves in the myth because they have to live and breathe and embody it day in and day out
[译文] [嘉宾]: 而我认为这就是在AI世界中发生的事情。一方面,所有这些高管都在积极参与神话制造(Mythmaking),因为你知道,我掌握了所有这些我在书中写到的内部文件,他们非常敏锐地意识到,如何通过向公众展示这项技术令人眼花缭乱的演示,通过打造一个听起来非常美好的使命,呃,让公众与他们统一战线,并让人们对他们的公司给予更多的宽容。所以他们知道自己正在制造神话。同时,我认为他们中的许多人也在神话中迷失了自我,因为他们必须日复一日地生活在其中、呼吸着它并体现着它。
[原文] [Guest]: and so when you know Dario says he thinks that 10 to 25% of the future could be catastrophic or whatever the probability is 10 to 25% he is actively engaging in the mythmaking but also he's losing himself in the myth like I think if you were to ask him "Do you genuinely believe that?" He would be like "Yes I genuinely believe that." Because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to then continue doing the things that he's doing and this is the whole psychology of cognitive dissonance right where the brain struggles to hold two conflicting worldviews at the same time so it's incentivized or it endeavors to dismiss one
[译文] [嘉宾]: 所以当你听到,你知道,达里奥说他认为未来有10%到25%的可能是灾难性的,或者不管这个概率是10%还是25%,他都是在积极参与这种神话制造,但他同时也迷失在了这个神话中。比如我认为如果你问他:“你真的相信那个吗?”他会说:“是的,我真的相信。”因为有些界限已经模糊了:到底他什么时候说出某事只是为了制造说辞,什么时候他实际上真的相信了那些他为了继续做手头的事而必须相信的东西。这就是认知失调(Cognitive dissonance)的整个心理学,对吧?当大脑挣扎着试图同时持有两种相互冲突的世界观时,它就会倾向于、或者说会努力去摒弃其中一个。
章节 9:智能的假象:对“规模法则(Scaling)”的质疑
📝 本节摘要:
本节中,主持人从“不发展AI就会落后于中国”的防御性逻辑出发,指出扩大模型规模(Scaling)似乎能带来更强的智能。嘉宾对此进行了强烈的反驳,揭开了“规模法则”的神话。她指出,AI并不具备类似人类的“通用智能”,其能力的提升仅仅是因为巨头们针对特定高利润领域(如金融、法律、医疗等)投入了海量数据和人工进行训练。模型表现出的所谓“锯齿状智能(Jagged intelligence)”实则是特定任务能力的人工堆砌。最终,科技领袖们不断炒作“系统会变得越来越聪明”的预测,仅仅是为了从这个神话中赚取巨额利润。
[原文] [Guest]: so if you you know if you wanted to be a healthy person but also a smoker um and I pointed out that smoking is bad for you the first words out of your mouth are going to be yes but smoking helps me with stress yeah but I only do it when I think I don't know
[译文] [嘉宾]: 所以如果你,你知道,如果你想成为一个健康的人但同时也是一个吸烟者,嗯,当我指出吸烟对你有害时,你脱口而出的第一句话会是:“是的,但吸烟能帮我缓解压力。”或者“是的,但我只有在思考时才抽,我不知道。”
[原文] [Host]: I kind of see that at the moment because these companies have to raise extortionate like huge amounts of money to fund their AI research and they're building out all of these data centers so when they're out in the public they're always fundraising all of these major companies are fundraising all the time at the moment so you can't be fundraising and saying "I'm going to destroy your children's future potentially there's 25% chance that your children aren't going to have a great life." Which might be the truth i mean that is actually what they say Dario this is what famously Dario Amodei does he's like he does that but the others Sam's not doing that as much anymore
[译文] [主持人]: 我现在多少能看明白一点了,因为这些公司必须筹集极高的、比如巨额的资金来资助他们的AI研究,而且他们正在建设所有这些数据中心,所以当他们出现在公众视野时,他们总是在融资,所有这些大公司眼下都无时无刻不在融资;所以你不能一边融资一边说:“我可能会毁掉你孩子们的未来,有25%的几率你的孩子们将无法拥有美好的生活。”尽管这可能就是真相,我的意思是,这其实就是他们说的,达里奥(Dario),这正是达里奥·阿莫迪(Dario Amodei)出名的地方,他就是这么做的,但其他人,比如萨姆(Sam)现在不再这么做了。
[原文] [Guest]: yes and it's because you know it goes back to like each of them kind of distinguish themselves a little bit as as the brand that they need to project
[译文] [嘉宾]: 是的,而且这是因为,你知道,这又回到了他们每个人都在某种程度上稍作区分,以塑造他们需要投射的品牌形象(Brand)。
[原文] [Host]: do you think any of them are more have a stronger moral compass than others cuz I think Dario often gets the credit for having more of a you know more of a backbone and being more conscious of implications he does get a lot of credit for that he's from Claude and Anthropic for anyone that doesn't know
[译文] [主持人]: 你认为他们中有人比其他人拥有更强的道德准则(Moral compass)吗?因为我认为达里奥经常因为拥有,你知道的,更多的骨气(Backbone)并对潜在影响(Implications)有更清醒的认识而受到赞誉。他确实因此获得了很多赞誉,给不知道的人解释一下,他来自打造了Claude的Anthropic公司。
[原文] [Guest]: I don't think the answer to that question truly matters because to me even if you were to swap all the CEOs for someone that people would say is better at running these companies it doesn't fix the problem that I identify in the book which is that there is a system of power that has been constructed where these companies and the people running these companies get to make decisions that affect billions of people's lives around the world and those billions of people do not get any say in how it goes those people they can go to the polls right so if the public are sufficiently educated they can go to the polls and pick a leader that says they're going to legislate or pass laws or try and pass laws yes but at the speed and pace at which these companies operate and at the sheer scale and size they're able to also spend extraordinary amounts of money hundreds of millions in this upcoming midterms to try and kill every possible piece of legislation that gets in their way and craft legislation that would codify their advantage and so to me I think sometimes as a society we obsess a little bit with are these leaders good or bad people and to me the bigger question is is the governance structure that we've created a sound one that allows broad participation or an anti-democratic one that has consolidated this decision-making power in the hands of the few because no person is perfect I don't care who is at the top of these companies they're not going to have the ability to make decisions on behalf of so many people around the world who live and talk and um have a culture and history that are fundamentally different from them without things going wrong and so that is why throughout history we've moved from empires to democracy it's because empire as a structure is inherently unsound it does not actually maximize the chances of most people in the world being able to live dignified lives
[译文] [嘉宾]: 我认为这个问题、这个问题的答案并不真正重要。因为对我来说,即使你把所有的CEO都换成人们认为更擅长运营这些公司的人,它也解决不了我在书中指出的那个问题:那就是目前已经构建了一个权力系统,在这个系统中,这些公司以及运营这些公司的人有权做出影响全球数十亿人生活的决定,而这数十亿人在事情的走向中没有任何发言权。这些人可以去投票,对吧?所以如果公众受过充分的教育,他们可以去投票选出一个声称要立法、通过法律或试图通过法律的领导人。是的,但以这些公司运作的速度和节奏,以及它们纯粹的规模和体量,它们也能够在这场即将到来的中期选举中豪掷数以亿计的资金,试图扼杀任何可能阻碍它们的立法,并制定将它们的优势合法化(Codify their advantage)的立法。所以对我来说,我认为有时作为一个社会,我们过于纠结于这些领导者是好人还是坏人;对我来说,更大的问题是,我们所创造的治理结构(Governance structure)是一个健全的、允许广泛参与的结构,还是一个反民主的、将决策权集中在少数人手中的结构。因为没有人是完美的。我不在乎谁在这些公司的最高层,他们都没有能力代表世界上那么多生活、语言、嗯、并且文化和历史与他们截然不同的人做出决定,且不让事情出乱子。这就是为什么纵观历史,我们从帝国走向了民主,因为帝国作为一种结构本质上是不健全的(Unsound),它实际上无法最大化世界上大多数人过上有尊严生活的机会。
[原文] [Host]: i'm going to try and take on their point of view so this is me playing devil's advocate okay but Karen if the US don't continue to accelerate their research with AI at some point China's model is going to become so smart and intelligent that we're basically going to have to rent it off them and we're going to be you know they'll get the scientific discoveries they'll discover the new era of autonomous weapons and we will be their backyard and like logically that argument does appear to be pretty true
[译文] [主持人]: 我将尝试站在他们的角度来看,所以我在这里扮演“魔鬼代言人”。好的,卡伦(Karen),如果美国不继续加速他们的AI研究,在某个时候,中国的模型将会变得如此聪明和智能,以至于我们基本上不得不向他们租用它;而且我们将陷入,你知道的,他们将获得科学发现,他们将开启自动武器(Autonomous weapons)的新纪元,而我们将沦为他们的后院。从逻辑上讲,这个论点似乎相当真实。
[原文] [Guest]: no it's not
[译文] [嘉宾]: 不,并不是这样。
[原文] [Host]: if we scale up if we just imagine any rate of change with this intelligence at some point we're going to come to a weapon that could theoretically disable um all of the United States electricity their weapons systems it would know exactly how to disable the United States from a cyber perspective because it would be that smart all you've got to imagine is any rate of improvement of any period any sort of long period of time so this is a theory that might be true and if it's true I mean yeah any theory might be true but but if but but you know again going to this point of like even if it's a small percentage it's worth paying attention to on the other side of the foot this is a theory that people talk about it could be the case that the most intelligent civilization is going to be the superior civilization logically that's a pretty sound thing to say
[译文] [主持人]: 如果我们扩大规模(Scale up),如果我们仅仅想象这种智能在任何变化速率下的发展,在某个时候,我们将会得到一种理论上能够瘫痪,嗯,美国所有电力、他们的武器系统的武器。它将确切地知道如何从网络的角度瘫痪美国,因为它将会有那么聪明。你只需想象在任何时期、任何一段较长时期内的任何改进速度。所以这是一个可能是真实的理论,如果它是真实的——我是说,是的,任何理论都可能是真实的——但如果,但你知道,再次回到这一点,即哪怕它只有很小的百分比概率,也值得我们关注。另一方面,这是人们谈论的一种理论,即最智能的文明将成为更高级的文明,从逻辑上讲,这说法相当合理。
[原文] [Guest]: no so there's a lot of a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument and let's knock them down one by one so the first one is that these systems are intelligent and that just scaling them is going to bring us more intelligence
[译文] [嘉宾]: 不。所以,在这个论点中,有很多基本要素必须是真实的,才能使之成为一个站得住脚的论点。让我们把它们逐一击破。第一个前提是:这些系统是具有智能的,并且仅仅扩大它们的规模(Scaling)就会为我们带来更多的智能。
[原文] [Host]: so far so true
[译文] [主持人]: 至少目前来看是真的。
[原文] [Guest]: no it's actually not because first of all again we don't actually know if these systems are like intelligence is not the right analogy almost it's sort of like is a calculator intelligent a calculator can do math problems faster than a human does that make it intelligent it has a narrow intelligence because it's solving a narrow problem which is like 1 plus 1 equals 2 and these systems they actually also are quite narrowly intelligent in the sense that even though these companies say that they're everything machines that can do anything for anyone they actually can only do some things for some people this is like the jagged frontier of these AI models like some of the capabilities are quite good other capabilities are not that good you know why that happens is because the company can only focus on advancing certain types of capabilities it can't literally focus on advancing all types of capabilities they have to actually set their mind to advancing a certain capability by gathering the data that is needed for that capability by uh you know getting a bunch of human contractors to annotate and train the model to do that exact thing and so scaling these models is actually a perpendicular question to are we actually getting more cyber capabilities specifically and more military capabilities specifically
[译文] [嘉宾]: 不,实际并非如此。因为首先,重申一下,我们实际上并不知道这些系统是否……把它们称为“智能(Intelligence)”几乎不是一个恰当的比喻。这有点像:计算器做数学题比人类快,这能让它变得“智能”吗?它拥有的是狭隘的智能(Narrow intelligence),因为它在解决一个狭隘的问题,比如1加1等于2。而这些系统,实际上它们也是相当“狭隘智能”的,因为尽管这些公司声称它们是能为任何人做任何事的“全能机器(Everything machines)”,但它们实际上只能为某些人做某些事。这就像这些AI模型的锯齿状前沿(Jagged frontier):一些能力相当好,而另一些能力却不太好。你知道为什么会发生这种情况吗?因为公司只能专注于推进特定类型的能力,它不可能真的专注于推进所有类型的能力。它们必须下定决心去推进某种特定的能力:收集该能力所需的数据,然后,呃,你知道,雇佣大量人类合同工来对模型进行标注(Annotate)和训练,让它专门做那一件事。因此,“扩大这些模型的规模”与“我们是否具体获得了更多的网络战能力和军事能力”,实际上是两个相互独立、互不相干(Perpendicular,正交)的问题。
[原文] [Host]: i would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time
[译文] [主持人]: 我倒认为,大多数AI领域的顶尖人士都相信,这种智能还将随着规模扩展(Scale)在相当长一段时间内持续提升。
[原文] [Guest]: a lot of them do like Geoffrey Hinton does and again it's back to his hypothesis about how human intelligence works and what the appropriate model of the brain is his hypothesis throughout his career has been the brain is a statistical engine but that's his hypothesis and that is not universally agreed upon especially among people that are not in the AI world when you talk with neuroscientists and psychologists people who actually study human intelligence in the human brain that is where you start to get a lot of debate and disagreement about this particular view that Hinton has and so this is kind of like one of the things is like AI is already being used in the military and has been used in the military for a long time but specifically accelerating large language models isn't just the only path for getting military capabilities like the companies would have to choose to specifically pick military capabilities to accelerate not just like general intelligence it's like you know what I'm saying like they create this myth that they are actually pushing the frontier of all of the capabilities of the model but that's not what's actually happening internally and I had hundreds of pages of documents on like how they were specifically training models they pick what capabilities they want to advance and you know how they pick them it's based on which industries and countries would be able to pay them the most money for their services so they pick finance law medicine healthcare commerce it's not actually intelligent like a baby where the more that the baby grows up they start having these general abilities
[译文] [嘉宾]: 他们中有很多人确实如此,就像杰弗里·辛顿(Geoffrey Hinton)一样。这又回到了他关于人类智能如何运作以及什么是大脑合适模型的假设上。他整个职业生涯的假设一直是:大脑是一个统计引擎。但这只是他的假设,并不是普遍共识,特别是在非AI领域的人群中。当你与神经科学家和心理学家,即那些真正研究人类智能和人类大脑的人交谈时,你就会开始听到关于辛顿这个特定观点的大量辩论和分歧。因此,这就是其中一件事,比如AI已经被用于军事领域,并且已经在军事领域使用了很长时间,但专门加速大型语言模型并不是获得军事能力的唯一路径;这些公司必须刻意挑选特定的军事能力来加速,而不是仅仅提升通用智能。你知道我在说什么吗?比如他们创造了这个神话,说他们实际上正在推进模型所有能力的前沿,但这并不是内部真正发生的事情。我掌握了数百页的文件,关于他们究竟是如何具体训练模型的:他们挑选他们想要推进的能力;而且你知道他们是怎么挑选的吗?是基于哪些行业、哪些国家能够为他们的服务支付最多的钱。所以他们挑选了金融、法律、医学、医疗保健、商业。它并不是真正具有智能的,不像一个婴儿,随着婴儿的成长,他们开始拥有这种普遍的、通用的能力。
[原文] [Host]: i think I have jagged intelligence i'll be honest i wasn't going to say it but I think I know a little I know a little bit about uh No I know a lot about a little bit yeah
[译文] [主持人]: 我觉得我也有锯齿状智能(Jagged intelligence)。老实说,我本来不想说的,但我觉得我对……呃,不,我对很少的一点东西懂得很多,是的。
[原文] [Guest]: but you also have the capability to learn and acquire knowledge by yourself and you also have the ability to choose what you're going to learn and acquire by yourself it's not easy and it takes a lot more time than these models and seemingly less compute but you can learn how to drive in one place and then immediately know how to drive in another place these models cannot do that
[译文] [嘉宾]: 但是如果你……但你也有独立学习和获取知识的能力,你也有能力自己选择你要学习和获取什么知识。这并不容易,它比这些模型需要花费更多的时间,虽然它似乎需要更少的计算力(Compute);而且,你可以学会如何在一个地方开车,然后立刻就知道如何在另一个地方开车,这些模型做不到这一点。
[原文] [Host]: every time a self-driving car is shifted to another location it has to completely retrain on that location it's like all the self-driving cars i mean we're sitting in Austin right now and there's all these self-driving cars that are driving through Austin but when one of them learns they all learn which is well it's just because it's an operating system that has an AI model as part of it and you're training the AI model and then you deploy that AI model across all the self-driving cars a big advantage because if one Optimus robot learns one thing in one factory they all learn it and imagine that imagine if humans if we all learned what all the other humans learned that would give us such an unbelievable competitive advantage
[译文] [主持人]: 每次自动驾驶汽车被转移到另一个地点,它都必须在那个地点完全重新训练。就好像所有的自动驾驶汽车一样,我的意思是,我们现在正坐在奥斯汀(Austin),这里有所有这些驶过奥斯汀的自动驾驶汽车,但当其中一辆学会了,它们就全学会了。这是……嗯,只是因为它是一个将AI模型作为其一部分的操作系统(Operating system),你在训练这个AI模型,然后你将这个AI模型部署到所有的自动驾驶车辆上。这是一个巨大的优势,因为如果一个擎天柱机器人(Optimus robot)在一个工厂学会了一件事,它们就全学会了。想象一下,想象一下如果我们人类学会了所有其他人类所学到的东西,那将会……那将赋予我们多么令人难以置信的竞争优势。
[原文] [Guest]: i mean one of the ways we did that is through communication they could not because they could be learning the wrong thing which has also happened again and again with these technologies is that all of them then learn the wrong thing and they all have the same failure mode i mean part of the resilience of human society is that we do have different expertises and we also have different failure modes
[译文] [嘉宾]: 我的意思是,我们做到这一点的方法之一是通过沟通交流。它们不能,因为它们可能会学到错误的东西,这在这些技术中也是一而再、再而三发生的事情,即它们接着全都学到了错误的东西,并且它们全都有同样的故障模式(Failure mode)。我的意思是,人类社会韧性(Resilience)的一部分在于我们确实拥有不同的专业知识,而且我们也拥有不同的故障模式。
[原文] [Host]: i think sometimes we hold AI models to a higher standard than we hold humans to and in a weird way because I'd hear on stage we're in Austin at the moment and I'd hear people go ah but you know them AI models they hallucinate sometimes i'm like "Have you met a human?" Like I hallucinate all the time i can barely spell or do math so
[译文] [主持人]: 我认为有时我们对AI模型的要求比对人类的要求更高,这很奇怪,因为我,我……我在台上听到——我们现在在奥斯汀——我听到人们说:“啊,但是你知道那些AI模型有时会产生幻觉(Hallucinate)。”我就想:“你见过人类吗?”比如我,我一直都在产生幻觉,我几乎不会拼写或者做数学题,所以。
[原文] [Guest]: yes but it's it's once again like using this analogy that was specifically picked in the early days of the field as a way to market these technologies like we're repeatedly using the intelligence analogy and relating these machines to human intelligence as a a way to try and gauge whether or not it is good or worthy or capable in society
[译文] [嘉宾]: 是的,但这再一次就像使用了这个在该领域早期被专门挑选出来的比喻,作为营销这些技术的一种方式。比如,我们反复使用这种“智能”的比喻,并将这些机器与人类智能联系起来,以此作为试图衡量它在社会中是否良好、有价值或有能力的一种方式。
[原文] [Host]: i think the output is the thing that really is the most consequential which is like okay it might have a different brain and a different system but does it arrive at the same capability like is it able to do surgery on someone's brain is it able to drive a car like my car drives itself in Los Angeles I don't touch the steering wheel and I can drive for many many hours and here in Austin I just saw the ones the other day where they've removed the steering wheel and the pedals the new cyber cabs so I go it doesn't really matter if it's using a different system if it's navigating through the world as a car it has a better safety record than human beings um then as far as I'm concerned intelligence or not it's like yes you know
[译文] [主持人]: 我认为输出结果才是真正……最重要的东西,这就像:好吧,它可能有一个不同的大脑和一个不同的系统,但它达到了同样的能力吗?比如,它能为别人的大脑做手术吗?它能开车吗?就像我的车在洛杉矶能自动驾驶,我不碰方向盘,能开很多很多个小时。而在这里,在奥斯汀,前几天我刚看到那些去掉了方向盘和踏板的新型赛博出租车(Cyber cabs)。所以我认为它是否使用不同的系统并不重要,如果它作为一辆车在这个世界上导航行驶,它的安全记录比人类要好,嗯,那么在我看来,不管是不是智能,它就像是,你知道的。
[原文] [Guest]: but that was not the original argument that you made which was like these systems are just generally going to become more intelligent across different things this is a prediction that you're making right and this is a prediction that all the AI um Ilya's making Dario's making Elon's making Zuckerberg's making Sam's making Demis is making and do you know what the common feature of all of them is they profit enormously off of this myth
[译文] [嘉宾]: 但那不是你最初提出的论点,你最初的论点是说,这些系统会在各个不同领域普遍变得更加智能。这是一个你正在做出的预测,对吧?这是一个所有AI领袖都在做的预测:伊利亚(Ilya Sutskever)在做,达里奥(Dario Amodei)在做,埃隆在做,扎克伯格在做,萨姆(Sam Altman)在做,德米斯(Demis Hassabis)在做,而且你知道他们所有人的共同特征是什么吗?他们从这个神话中赚取了巨额利润。
章节 10:职场大洗牌与“数据标注”的隐形血汗工厂
📝 本节摘要:
本节探讨了AI对就业市场的真实影响。面对马斯克关于“自动驾驶和机器人将取代所有人工工作(如外科医生)”的预测,嘉宾指出AI实际上是依赖海量“数据标注”人工“喂”出来的统计引擎。随后,主持人朗读了支付巨头Klarna CEO的私信,澄清其公司因AI而裁员的真实业务考量。嘉宾借此尖锐指出,当前的失业潮不仅是因为AI能力的提升,还因为许多高管趁机借AI之名进行裁员。更残酷的现实是,那些被裁掉的白领甚至好莱坞精英,为了糊口不得不去从事枯燥的“数据标注”计件工作,亲手训练那些即将淘汰更多人的模型。这种自动化浪潮正在彻底摧毁传统的职业上升通道。
[原文] [Host]: elon has recently spearheaded the construction of Colossus a massive supercomputer in Memphis housing 100,000 GPUs specifically to scale up their AI models faster than their competitors it appears that they've all converged around this idea that you can brute force your way to greater more generalized intelligence they've converged around the idea that you can brute force your way into models that they can sell to people for automating certain tasks that are financially lucrative and I heard Elon say that if you're a surgeon there's just no point he was like don't train to be a surgeon he says in a couple of years time Optimus and AI generally are going to be better than any surgeon that's ever lived yeah you know do you think these things are true
[译文] [主持人]: 埃隆最近牵头在孟菲斯建设了“巨像”(Colossus),这台容纳了10万个GPU的巨型超级计算机专门用来比竞争对手更快地扩展他们的AI模型。似乎他们都已经达成了一个共识,即你可以通过“大力出奇迹”(Brute force)的方式获得更强大的、更通用的人工智能。他们达成共识的理念是,你可以通过暴力计算构建出模型,然后把它们卖给别人去自动化执行某些利润丰厚的任务。我听到埃隆说,如果你是个外科医生,那就毫无意义了,他的原话大概是别去受训当外科医生了,他说几年后,擎天柱(Optimus)机器人和一般意义上的AI将会比史上任何外科医生都要出色。是的,你知道,你认为这些话是真的吗?
[原文] [Guest]: well you know I'm pretty sure it was Hinton that famously slash infamously said there would be no need for radiologists anymore and he set a deadline that we've already passed i don't remember how many years radiology is doing great as a profession
[译文] [嘉宾]: 嗯,你知道,我很确定是辛顿(Hinton)发表过那个著名的、或者说臭名昭著的言论,说“以后再也不需要放射科医生了”,而他设定的期限我们早就已经过了。我不记得具体过了多少年,但放射科作为一个职业目前发展得非常好。
[原文] [Host]: do you think it will be in 5 years
[译文] [主持人]: 你觉得会在5年内实现吗?
[原文] [Guest]: okay so this this once again goes back to this question of like why do we build technology and why should we specifically be building AI okay and for me like the whole project of technology development advancement is not to advance technology for technologies sake it's to help people and there have been lots of research that has shown that actually the best outcomes for people in a healthcare setting is for the radiologist to have the AI model in their hands and for the for the human expert to use the AI model as a tool as an input into their judgment and it is that combination that leads to the most accurate and early diagnoses of certain types of cancer that then help improve the prognosis of the patient
[译文] [嘉宾]: 好的,所以这、这再一次回到了这个问题上:我们为什么要开发技术?我们为什么特别要开发AI?好的,对我来说,技术开发进步的整个项目,不是为了技术而推进技术,而是为了帮助人类。而且已经有大量研究表明,实际上,在医疗保健环境中,对患者最好的结果是让放射科医生手中拥有AI模型,让人类专家将AI模型作为工具、作为他们判断的一个输入信息(Input);正是这种结合,才带来了对某些类型癌症最准确、最及时的早期诊断,从而有助于改善患者的预后(Prognosis)。
[原文] [Host]: do you believe that in the coming years all the cars pretty much all the cars on the road will be driving themselves
[译文] [主持人]: 你相信在未来几年里,所有的车,或者说路上几乎所有的车,都会自动驾驶吗?
[原文] [Guest]: no you don't you don't think so mm-m
[译文] [嘉宾]: 不相信。(主持人插话:你不相信?嗯。)
[原文] [Host]: how come
[译文] [主持人]: 为什么?
[原文] [Guest]: because of the way the technology works because because these are statistical I mean currently the way that AI models are primarily developed they're statistical engines you have what's called a neural network which is a piece of software that has a bunch of densely connected nodes and like parameters
[译文] [嘉宾]: 因为这项技术运作的方式,因为、因为这些是统计模型(Statistical)。我的意思是,目前AI模型主要开发的方式,它们都是统计引擎(Statistical engines),你有一个所谓的神经网络(Neural network),这是一种拥有大量密集连接节点(Nodes)和比如参数(Parameters)的软件。
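为便于理解嘉宾所说的“密集连接的节点与参数”,下面用纯Python写一个极简的示意性草图(层的大小、随机初始化方式均为任意假设,仅作说明,并非任何真实模型的实现):

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """一层“密集连接”的节点:每个输出节点与所有输入节点相连,
    每条连接对应一个可学习的权重,这些权重就是所谓的“参数”。"""
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)]  # 末位为偏置项
            for _ in range(n_out)]

def forward(layer, inputs):
    """前向传播:对输入加权求和,再经过非线性激活(sigmoid)。"""
    outputs = []
    for weights in layer:
        z = weights[-1] + sum(w * x for w, x in zip(weights[:-1], inputs))
        outputs.append(1 / (1 + math.exp(-z)))  # 压缩到 0 到 1 之间
    return outputs

# 一个 3 -> 4 -> 2 的微型网络:参数总数 = 4*(3+1) + 2*(4+1) = 26
hidden = make_layer(3, 4)
out = make_layer(4, 2)
result = forward(out, forward(hidden, [0.5, -0.2, 0.8]))
print(result)  # 两个介于 0 和 1 之间的“概率式”输出
```

真实模型与此结构相同,只是参数规模从 26 个扩大到了数十亿个,并通过海量数据来调整这些权重。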
[原文] [Host]: is this what they call parameters
[译文] [主持人]: 这就是他们所说的参数吗?
[原文] [Guest]: yeah pretty much and you're just pumping a bunch of data into it and then it's analyzing the data and finding all these correlations in the data finding all these patterns and then it's through those patterns that the machine is then able to act autonomously right and so the way that they're training a self-driving car is they're recording all this footage and then they have tens of thousands or hundreds of thousands of human contractors that draw literally around every single vehicle in the footage every single pedestrian every single traffic light every single lane marking and label it exactly as such so that then it's fed into an AI model that can identify all of these different components and then it's connected to another piece of software that is not AI that's saying okay if the AI model recognizes the pedestrian we do not run over the pedestrian if the AI model recognizes a red traffic light we stop and so the thing about statistical engines is that it's based on probabilities it's not based on deterministic logic so systems make errors all the time and it is technically impossible to get them to stop making errors humans make errors way more than systems
[译文] [嘉宾]: 是的,基本上是这样。你只是把大量数据注入进去,然后它分析数据,在数据中找到所有这些相关性(Correlations),找到所有这些模式(Patterns),然后正是通过这些模式,机器接着就能自主行动了,对吧?所以他们训练一辆自动驾驶汽车的方式是,他们记录下所有这些录像,然后他们有成千上万、甚至数十万的人类合同工(Human contractors),逐字意义上地在录像中的每一辆车、每一个行人、每一个红绿灯、每一个车道标线上画框,并准确地贴上标签(Label it);以便之后将这些喂给一个能识别所有这些不同组件的AI模型;然后它再连接到另一个不是AI的软件上,那个软件的指令是:“好的,如果AI模型识别出行人,我们就不碾压行人;如果AI模型识别出红灯,我们就停车。”所以统计引擎的特点就是它是基于概率的(Probabilities),它不是基于确定性逻辑(Deterministic logic)的。因此系统一直都在犯错,而且在技术上不可能让它们停止犯错。虽然人类犯错的频率比系统高得多。
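嘉宾描述的“概率感知 + 确定性规则”两层结构可以用一个极简草图来示意(其中的类别名、置信度数值和阈值都是假设的,仅用于说明,并非任何真实自动驾驶系统的代码):

```python
def control(detections, threshold=0.5):
    """非 AI 的确定性规则层:根据感知结果决定驾驶动作。
    detections 是假设的 AI 感知输出,形如 {类别: 置信度},
    注意它给出的是“概率”,不是确定的事实。"""
    if detections.get("pedestrian", 0.0) >= threshold:
        return "brake"   # 识别出行人 -> 刹车,不碾压行人
    if detections.get("red_light", 0.0) >= threshold:
        return "stop"    # 识别出红灯 -> 停车
    return "proceed"

print(control({"pedestrian": 0.92}))  # brake
print(control({"red_light": 0.81}))   # stop
print(control({"pedestrian": 0.30}))  # proceed:置信度低于阈值即被“忽略”
```

最后一个例子正说明了嘉宾的论点:规则层本身是确定性的,但它依赖的感知输入是概率性的,一次低置信度的漏检就可能导致错误,而这类错误无法被彻底根除。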
[原文] [Host]: in this case like the safety record is like isn't it like 10 times more safe to be driven in a Tesla with autonomous driving than it is to for a human to drive
[译文] [主持人]: 在这种情况下,比如安全记录,如果乘坐自动驾驶的特斯拉(Tesla),是不是比人类驾驶要安全10倍?
[原文] [Guest]: it depends on the place it depends on whether the Tesla was trained to specifically navigate the place that you're driving get drunk? because if it's in Mumbai in some place in Vietnam no it would not be safer i would much rather be driven by someone that has been driving in that place their whole life i'm not arguing against the fact that in certain places where the car has been explicitly trained to drive it has a better safety record than the humans that are driving in that place but you specifically asked if I think that most cars in the world in the United States cuz we're here will be driving themselves i don't actually think that it's imminently on the horizon 10 years no I don't think so
[译文] [嘉宾]: 这取决于地点,取决于这辆特斯拉是否被专门训练过在你正在驾驶的地方导航。你喝醉了吧?因为如果是在孟买(Mumbai)或者越南(Vietnam)的某个地方,不,那绝对不会更安全。我宁愿让一个一辈子都在那个地方开车的人来载我。我不是在反驳这样的事实,即在某些汽车被明确训练过在那里行驶的特定地点,它确实比那个地方的人类司机有更好的安全记录。但你刚才具体问我,是否认为世界上、美国的大多数汽车(因为我们现在在美国)都将自动驾驶,我其实并不认为这是即将到来的事情,哪怕10年内,不,我也不这么认为。
[原文] [Host]: i sat with Dara from Uber and he's pretty convinced that his 9 million couriers will be replaced by autonomous vehicles i mean how long have self-driving cars been invested in thus far it's been more than 10 years and what percentage of cars right now are autonomous on the US roads i mean so part of it is it's actually not a technical problem right like part of it is also a social problem like do people even trust getting into these vehicles part of it is also a legal problem which is if the self-driving car kills someone which it has happened yeah it has happened who is responsible so in the case in LA it was both Tesla and the driver because the driver dropped their phone they looked down and this was a couple of years ago I believe um and they went to grab their phone and they hit someone and so it went to court and they were held both responsible both the driver and Tesla um in terms of Tesla pretty much everyone that gets the car it comes with autonomy now for pretty much most people I believe partial autonomy yeah it's called full self-driving at the moment where it's like I mean yes it is called full self-driving full self-driving supervised where you have to be looking in the right direction but yeah so it's partial autonomy and here in Austin it's full autonomy cuz there's no steering wheel on the new car um so you can't drive it anyway but it is you know the Model Y is the undisputed bestselling car in the world across all brands well I guess my point here is like these predictions where they say AI is going to completely change transportation and driving it's going to completely change lawyers aren't going to have jobs accountants aren't going to have jobs um do you believe that they are true do you believe that there's going to be mass job displacement
[译文] [主持人]: 我曾和优步(Uber)的达拉(Dara Khosrowshahi)坐在一起,他非常确信他的900万名快递员将被自动驾驶车辆取代。我的意思是,自动驾驶汽车迄今为止已经被投资了多久?已经超过10年了。而现在美国道路上自动驾驶的汽车比例是多少?我的意思是,这部分实际上不仅是一个技术问题对吧?一部分也是一个社会问题,比如人们到底敢不敢坐进这些车里;一部分也是一个法律问题,即如果汽车、自动驾驶汽车撞死了人(这确实发生过),是的,发生过,谁来负责?所以在洛杉矶(LA)的案子中,特斯拉和司机都有责任,因为司机手机掉了,他们低头去看,我相信这是几年前的事了,嗯,他们去捡手机,然后撞了人,所以案子上了法庭,司机和特斯拉被判定共同承担责任。嗯,就特斯拉而言,几乎所有拿到这辆车的人,现在几乎大多数人都带有了自动驾驶功能,我相信是部分自动驾驶(Partial autonomy),是的。它目前被称为“全自动驾驶”(Full self-driving),我的意思是,是的,它被称为“监督下的全自动驾驶”(Full self-driving supervised),你某种程度上必须看着前方,你必须看向正确的方向。但是,是的,所以它是部分自动驾驶。而在奥斯汀这里,那是完全自动驾驶,因为新车上没有方向盘,嗯,所以你无论如何也开不了它。但它是,你知道的,Model Y是无可争议的、世界上所有品牌中最畅销的汽车。嗯,我想我在这里的观点是,像这样的预测,他们说AI将彻底改变交通和驾驶,它将彻底改变律师(律师将失去工作)、会计师(会计师将失去工作),嗯,你相信这些是真的吗?你相信会出现大规模的工作流失(Mass job displacement)吗?
[原文] [Guest]: okay so I do think that there is going to be huge impacts on employment and we're already seeing those impacts it is not simply because the AI models are just automating those jobs away it is specifically because the models are improving in certain capabilities based on what the companies that are developing them choose to improve them on and executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker irrespective of whether that might be true and there have been cases like the Klarna CEO who laid off a bunch of people thinking that he would replace everyone with AI and then it didn't actually work and he had to ask some people to come back
[译文] [嘉宾]: 好的,所以我确实认为会对就业产生巨大影响,而且我们已经看到了这些影响。这不仅仅是因为AI模型直接把这些工作自动化淘汰了,更具体地说,是因为这些模型在某些能力上正在提升(这是基于开发它们的公司选择在哪些方面提升它们),而其他公司的高管随后决定解雇或裁掉他们的工人,因为他们认为AI可以取代工人——不管这到底是不是真的。而且你知道,有过这样的案例,比如支付公司Klarna的CEO,他解雇了一批人,以为自己可以用AI取代所有人,结果实际上行不通,他又不得不请一些人回来。
[原文] [Host]: i actually DM'd him about this if you're hearing this this is because I've DM'd Sebastian and he's fine with me sharing this because I've heard his name mentioned a lot and so when we talked about AI in the past and people mention Sebastian and Klarna as the example I wanted to clarify with him what the truth was he said "It's great to hear from you um I think sometimes people struggle with two things can be true at the same time i think it might be time to come back on your podcast to your point this is the media misinterpreting my tweet we are doubling down on AI more than ever Klarna is shrinking with almost 100 employees per month due to AI we used to be 7,400 at the peak a year ago 5,500 now we're 3,300 and by the end of summer so this was last year will be 3,000 people ai handles 70% of our customer service conversations at this moment this is because we have realized that with AI the production cost of software comes down to almost zero just like manufacturing used to be all handcrafted and then the machines came code used to be all handcrafted up until a few years ago and now it is machine produced and ultimately we pay people more than ever for the unique handcrafted man-made stuff Klarna is a bank people will want to connect to humans not only machines they want us to be personable relatable even flawed so we need to make sure while we are automating replacing with AI in parallel we make sure we offer a super available human experience
[译文] [主持人]: 实际上我曾就此给他发过私信(DM)。如果你正在听这个,这是因为我给塞巴斯蒂安(Sebastian Siemiatkowski,Klarna的CEO)发了私信,而且他同意我分享这个。因为我听到他的名字被经常提起,所以当我们在过去谈论AI时,人们拿塞巴斯蒂安和Klarna作为例子,我想向他澄清真相是什么。他回复说:“很高兴收到你的消息,嗯,我认为有时候人们很难理解两件事可以同时为真。我想可能是时候再上一次你的播客了。关于你的观点,这是媒体误解了我的推特。我们比以往任何时候都更加倍押注AI,由于AI,Klarna目前以每月近100名员工的速度缩减。我们在一年前的峰值曾有7400人,后来是5500人,现在是3300人,而且到夏末(这是去年说的)将会是3000人。AI此刻处理着我们70%的客户服务对话。这是因为我们已经意识到,有了AI,软件的生产成本降到了几乎为零。就像制造业曾经全是手工制作,然后机器出现了。直到几年前,代码还全是手工编写的,而现在它是机器生成的。最终,我们为独特的、手工制作的、人造的东西支付比以往任何时候都高的薪水。Klarna是一家银行,人们希望与人类而不仅仅是机器建立联系。他们希望我们显得亲切、有共鸣,甚至是有缺陷的。所以我们需要确保,在我们用AI进行自动化取代的同时,平行地确保我们提供一种随时可用的超级人类体验。”
[原文] [Guest]: i'm really glad you read this because I think it touches on some really important nuances yeah like the impact that AI is going to have on employment so I think there's often these binary narratives it's like AI is going to come for every job mhm or people say AI is not actually working and it's not actually coming for jobs and the reality is it's coming for jobs there are definitely jobs that are being automated away because of the capabilities of the models and there's also jobs that are being lost because executives are deciding to lay off the workers even if the models don't match the capabilities because it's good enough like they would rather have the good enough model for way cheaper or they made a mistake with hiring they bloated their team and it's a great convenient thing to say exactly like there's many reasons but clearly we're already seeing impacts on the job market like the um US jobs report that came out earlier this year showed that there has been a slowdown in hiring especially across white collar professional industries
[译文] [嘉宾]: 我真的很高兴你读了这段话,因为我认为它触及了AI一些非常重要的细微差别。是的,比如AI对就业将产生的影响。所以,我认为经常有这种非黑即白的叙事(Binary narratives):一种是AI将取代所有的工作,嗯;另一种是人们说AI实际上不起作用,它其实并没有取代工作。而现实情况是,它确实在取代工作。确实有工作因为他们模型的能力而被自动化淘汰了;同时也确实有工作流失是因为高管们决定裁掉工人——即使模型的能力还达不到要求,因为“足够好”就行了。比如他们宁愿要一个“足够好”且便宜得多的模型;或者他们在招聘时犯了错,导致团队人员臃肿,而拿AI当借口是一件非常方便的事。完全正确,有很多原因,但是很明显我们已经看到了对就业市场的影响,比如,嗯,今年早些时候发布的美国就业报告显示,招聘人数有所下降,特别是在白领专业行业的招聘出现了放缓。
[原文] [Host]: and you saw Anthropic's report the new one this week the TL;DR is it matches kind of what you were saying where Anthropic looked at exactly how people were using their models and they looked at what people are saying and they said that there's been a 40% reduction in entry-level jobs in particular and then they made this graph which has gone viral over the internet the red shows where we are now in terms of capability and based on how people are currently using the models their prediction extrapolated out that the blue part will be the disrupted parts this is the things that they say AI can do right now but people don't realize it yet so if you look at it it's kind of all the stuff you would expect it's the physical real world human stuff which robots maybe can do someday like construction or agriculture that are untouched but like office and admin um like finance stuff math and notice that these are all the things that I just named that they purposely targeted finance math law media and arts
[译文] [主持人]: 你也看到了Anthropic本周新出炉的报告,太长不看版(TL;DR)是它和你说的某种程度上是一致的。Anthropic准确地查看了人们是如何使用他们模型的,他们看了人们的反馈,并说入门级岗位(Entry-level jobs)出现了特别明显的、高达40%的减少。然后他们做了一张图表,在网上疯传。红色显示了我们在能力方面目前所处的位置,基于人们目前使用模型的方式,他们的预测推断出蓝色部分将是被颠覆(Disrupted)的领域。这些是他们声称AI现在就能做、只是人们还没意识到的事情。所以如果你看看它,它涵盖了几乎所有你能想到的东西。物理意义上真实世界的人类工作(也许机器人有一天能做),比如建筑或农业,目前还未受触及。但是像办公室和行政管理,嗯,比如我说过的金融事务、数学。请注意,我刚才提到的这些全都是他们刻意瞄准的领域:金融、数学、法律、媒体和艺术。
[原文] [Host]: that's me cooked yeah
[译文] [主持人]: 那我可算完蛋了,是的。
[原文] [Host]: office and admin i mean they do focus a lot on like assistant type and managerial work so
[译文] [主持人]: 办公室和行政管理,我是说,他们确实非常关注助理类和管理类的工作,所以。
[原文] [Guest]: but the other thing that the Klarna CEO said was but people also want human experiences so it's not actually just about the capabilities of the models it's also about what people want like some things they would turn to AI for and some things they wouldn't irrespective of whether or not AI is capable of doing it but because of a preference that they want human-to-human interaction and so what we're seeing right now is yeah the thing that happens with every wave of automation which is that there is a bunch of entry-level work that gets automated away and there are also new jobs created but the jobs that are created are in one of two categories there are people that get even higher skilled jobs and what he was saying like we pay people more for the handcrafted code now and there's also the people who get way worse jobs and so there was this amazing article in New York magazine that was talking about how a lot of people are getting laid off and then they end up working in data annotation which is the labor that I've been referring to throughout this conversation that companies need in order to teach their models the next thing that the companies are trying to automate and so like a marketer gets laid off and then they go and work for a data annotation firm to train the models on the very job that they were just laid off from which will then perpetuate more layoffs if that model then develops that skill and the article was talking about how this has become a huge catchall for a lot of people that are struggling with finding job opportunities right now including award-winning directors in Hollywood that are actually secretly doing this data annotation work to put food on the table and so when they talk about there's going to be mass unemployment and then there's going to be some new jobs created that we can't even imagine I think a lot of these narratives rarely talk about first of all why are some jobs going away it's not just because of the model capabilities it's also because of executive choices and because of the rhetoric that they use if they want to just downsize um but the other thing that is rarely talked about is that a lot of the jobs that are created are way worse than the jobs that were there and it breaks the career ladder so it's the entry level and the mid tier jobs that get gouged out it's higher order jobs and then way more lower order jobs that get created and so how do people continue to progress in their careers there's no more rungs on the ladder
[译文] [嘉宾]: 但是,Klarna的CEO说的另一件事是:人们同样渴望人类体验。所以实际上不仅仅是模型能力的问题,也是关于人们想要什么的问题。有些事情他们会求助于AI,而有些事情他们不会——不管AI有没有能力做到,这纯粹是因为他们偏好人与人之间的互动。所以我们现在看到的,是的,是伴随着每一波自动化浪潮都会发生的事情:会有一堆入门级的工作被自动化淘汰,并且也会创造出新的工作岗位。但是被创造出来的工作岗位属于以下两类之一:一类是获得更高技能工作的人(就像他说的,我们现在为纯手工编写的代码支付更多报酬);另一类是获得比原来糟糕得多的工作的人。所以《纽约杂志》(New York magazine)上有一篇非常精彩的文章,谈到了许多人被裁员后,最终去做了“数据标注”(Data annotation)。数据标注就是我在整场对话中一直提到的那种劳动力,公司需要这种劳动力来教他们的模型学会公司下一步试图自动化的事情。所以,这就好比一个营销人员被裁员了,然后他们去一家数据标注公司工作,来训练模型掌握他们刚被裁掉的那个岗位的技能;而如果那个模型随后发展出了这项技能,就会导致更多的裁员。那篇文章谈到了,这已经成为目前许多正在艰难寻找工作机会的人的一个巨大收容所(Catchall),甚至包括好莱坞屡获殊荣的导演,他们居然都在秘密地做这种数据标注工作,只是为了糊口。所以,当他们谈论将会出现大规模失业,然后会创造出一些我们甚至无法想象的新工作时,我认为许多这种叙事很少谈及:首先,为什么有些工作会消失?这不仅仅是因为模型的能力,也是因为高管的选择,以及他们想要缩小规模(Downsize)时所使用的说辞。嗯,而另一件很少被谈及的事情是,很多被创造出来的工作,比原本在那里的工作要糟糕得多,而且它打破了职业上升通道(Career ladder)。因为入门级和中层的工作被掏空了,剩下的是高级工作,以及大量被创造出来的低级工作。那么人们如何继续在他们的职业生涯中晋升呢?阶梯上已经没有横木可踩了。
章节 11:在AI时代回归人类本质:职场新图景与核心竞争力
📝 本节摘要:
本节重点探讨了在AI自动化浪潮下,人类职场的新图景以及未来最有价值的核心竞争力。主持人结合自己的企业管理经验指出,面对AI智能体(AI Agents)的冲击,未来职场将主要需要三类人:拥有深厚领域专业知识的“指挥家”、极度好奇且善用AI工具的年轻“放大器”,以及拥有极强现实社交能力(IRL)的连接者。两人探讨了一个反直觉的观点:AI或许恰恰能把我们从枯燥的屏幕前解放出来,迫使社会回归线下真实的人际连接。期间,节目甚至接到了支付巨头Klarna CEO塞巴斯蒂安的现场连线来电,他分享了公司依靠AI实现“自然减员”但仍保留人类客服VIP体验的真实业务进展。最后包含了两段赞助商广告口播。
[原文] [Host]: i actually don't know the answer to this question and I've been furiously trying to find a good answer to this question because I can you know everything is theory and for my audience I would say most of my audience don't run businesses a lot of them do a lot of them aspire to but they don't run businesses so they're kind of they're also in the land of theory they're hearing lots of different things
[译文] [主持人]: 实际上我不知道这个问题的答案,而且我一直在疯狂地试图找到这个问题的良好答案,因为我能……你知道,一切都还是理论;对于我的观众,我想说我的大多数观众并不经营企业,他们中很多人经营,很多人渴望经营,但他们目前不经营企业,所以他们某种程度上也处于理论的领域中,他们听到了许多不同的声音。
[原文] [Host]: jack Dorsey does his tweet saying he's halving his headcount because of AI they don't know what's true they don't know the sort of internal economics at Jack's company and did he bloat the company during the pandemic and he's just using this as an excuse to make the share price spike seven points because his investors now think they're an AI company or whatever mh it's hard to parse through so eventually I go okay what am I doing
[译文] [主持人]: 杰克·多西(Jack Dorsey)发推特说因为AI他要裁减一半的员工;他们不知道什么是真的,他们不知道杰克公司内部的经济状况,他是不是在疫情期间让公司人员臃肿了,他只是拿这个当借口,好让股价飙升7个点,因为他的投资者现在认为他们是一家AI公司了,诸如此类,嗯。这很难分辨,所以最终我会想,好吧,那我在做什么?
[原文] [Host]: i have hundreds of team members probably 70 companies I invest in maybe five or six that I'm like the lead shareholder in what am I actually doing on a day-to-day basis right now i also consider myself to be head of recruitment but in the last month in particular I have met extremely capable candidates in terms of cultural alignment hard work those kinds of things but I've had to take a great deal of pause because when I run the experiment of can I get an AI agent to do that exact same thing the answer is increasingly yes especially in a world of open clause and so what I'm curious like now you confront this decision where you're seeing in this short-term period you could just choose the AI agent and in the long-term period there is no career ladder so who are you promoting into these senior roles like how do you resolve it for your own company
[译文] [主持人]: 我有数百名团队成员,大概投资了70家公司,也许在其中五六家我算是大股东。我现在每天实际上都在做些什么?我也把自己看作是招聘主管,但在过去的一个月里尤其明显,我遇到了在文化契合度、努力工作等方面都非常有能力的候选人;但我不得不极大地停顿下来,因为当我进行实验,问自己“我能让一个AI智能体(AI agent)做完全相同的事情吗?”时,答案越来越是肯定的,特别是在一个充满open clause(转录口误,或指OpenAI和Claude)的世界里。所以我很好奇的是,现在你面临这个决定,你看到在短期内你可以直接选择AI智能体,而在长期来看职业上升通道(Career ladder)就不复存在了,那么你该提拔谁进入这些高级职位?比如,你如何为你自己的公司解决这个问题?
[原文] [Host]: yeah it's a good question so there's kind of two ways I'm thinking about it i think really deep expertise is very very valuable because if you're now the orchestrator of potentially AI agents it's really about um having a deep understanding of the right question to ask and and that's someone who has deep expertise on something so I need my CFO because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else she needs to understand what to tell them to do in our company mhm
[译文] [主持人]: 是的,这是个好问题。所以我有两种思考方式。我认为极其深厚的专业知识(Deep expertise)是非常非常宝贵的,因为如果你现在是潜在的AI智能体的指挥家(Orchestrator),这实际上关乎于,嗯,对“提出正确的问题”有深刻的理解,而那就是在某方面拥有深厚专业知识的人。所以我需要我的首席财务官(CFO),因为如果她要指挥我们的智能体团队去做可能是财务分析或其他什么工作时,她需要了解在我们公司该吩咐它们做什么,嗯。
[原文] [Host]: and in turn financial analysts can't do that they need this the 50 odd years of experience that you know CLA has on the other end I need Cass cass is 25 cass knows everything about AI agents he's a young Japanese kid who's highly highly curious you know on the weekend he's building AI agents to solve problems in my life i need those two kinds of thinking which is highly proficient agent maxing young kids or they don't necessarily need to be young but like really lean in high curiosity that's creating a force multiplier in my business and then I need deep expertise
[译文] [主持人]: 反过来,初级财务分析师做不到这一点,他们需要这种,你知道的,克莱尔(CLA,转录拼写可能有误)拥有的那50多年的经验。另一方面,我需要卡斯(Cass)。卡斯25岁,卡斯对AI智能体无所不知。他是一个非常非常好奇的日本年轻小伙,你知道,他在周末构建AI智能体来解决我生活中的问题。我需要这两种思维:一种是高度精通、将智能体能力发挥到极致(Agent maxing)的年轻人(或者他们不一定要年轻,但必须非常投入、充满极高的好奇心),这能在我的业务中创造一种力量倍增器(Force multiplier);然后我还需要深厚的专业知识。
[原文] [Host]: now everything else outside of there is another one I've thought of another group is like people with extremely great IRL people skills because we do meet people in real life we greet you when you arrive here we greet when we go for lunch with big clients that we have whether it's Apple or LinkedIn or whoever it might be we you know we need to schmooze mhm
[译文] [主持人]: 至于这两者之外的所有其他东西……我还想到了另一类群体,那就是拥有极佳现实社交能力(IRL people skills,IRL即In Real Life)的人。因为我们确实要在现实生活中与人会面,当你到达这里时我们会迎接你,当我们和我们拥有的大客户共进午餐时我们会打招呼,不管客户是苹果(Apple)、领英(LinkedIn)还是其他任何人。我们,你知道,我们需要去应酬交际(Schmooze),嗯。
[原文] [Host]: and we have teams who you know are in person in the office so we we do a lot of stuff IRL and increasingly we're building communities even for this show we're doing community events all around the world so we need people that are good at that as well irl bringing people together in real life and organizing stuff those are the three groups of people that I'm like you know irreplaceable right now
[译文] [主持人]: 而且我们有团队,你知道,是在办公室面对面办公的。所以我们在现实生活(IRL)中做很多事情,而且我们越来越多地在建立社区,即使是为了这个节目,我们也在世界各地举办社区活动。所以我们需要擅长那些事情的人:在现实生活(IRL)中把人们聚在一起并组织活动。这就是目前我认为,你知道,不可替代的三类人。
[原文] [Host]: and if you were to take all the roles that could be done by AI agents if we were to replace them with AI agents do you think you would still have these three pools of people to hire and promote into the three critical things that you need in the long term if things carry on at the current rate of trajectory
[译文] [主持人]: 如果你要去把所有能够由AI智能体完成的职位……如果我们用AI智能体来取代它们,你认为你还能保留这三类角色的人才池,以供你长期雇佣并提拔进入这三项关键职能中吗(如果事情按照目前的发展轨迹继续下去的话)?
[原文] [Guest]: one could assert that even those roles would experience pressure if you just imagine like people think of things either statically or linearly or exponentially yeah you imagine an exponential rate of improvement which is kind of what I've seen even like a 10% compounding rate of improvement at some point I think what remains is actually the IRL irreplaceably human stuff human to human our Maslovian needs of being in person like we are now aren't going to change we need connection humans get very sick when they don't have other human beings in their life and strong deep relationships
[译文] [嘉宾]: 人们可以断言,即使是那些角色也会承受压力。就像人们在思考事物时,要么是静态地思考,要么是线性地思考,要么是呈指数级地思考,是的。如果你想象一种指数级的改进速度(这某种程度上也是我所看到的),哪怕仅仅是10%的复利改进速度,到了某个时刻,我认为留下来的实际上就是现实生活中(IRL)不可替代的人类专属事物(Human stuff),人与人之间的交流。像我们现在这样面对面的马斯洛需求(Maslovian needs)是不会改变的,我们需要连接;当人们的生活中没有其他人、没有强大而深厚的关系时,人类就会生很严重的病。
[原文] [Host]: 100% agree so that stuff is going to matter a whole lot i have this contrarian weird take that actually maybe this is the first technology that's going to deliver on the promise of making us human and connected because we're going to be rendered useless of everything else other than what humans are good at cuz all the other technology said "Oh we're going to make you more connected connecting the world." And they disconnected the world and isolated the world but maybe this is the one it's so intelligent now that it doesn't need us to fuck around in spreadsheets anymore do you see that actually happening in real time right now that it's making us more able to be in person connected with one another having deeper social community engagements
[译文] [主持人]: 100%同意。所以那些东西将变得非常非常重要。我有一个反共识的奇怪观点,那就是也许这是第一个将兑现“让我们成为人类并相互连接”这一承诺的技术,因为除了人类擅长的事情之外,我们在其他一切事情上都将变得毫无用处。因为之前所有的其他技术都在说:“哦,我们要让你们更加互联,连接世界。”结果它们切断了世界的连接,孤立了世界。但也许就是这个(AI技术),它现在如此智能,以至于它不再需要我们在电子表格里瞎折腾了。你是否看到这正在当下实时发生——它让我们更有能力面对面相处、相互连接,拥有更深层次的社交社区参与度?
[原文] [Guest]: yes yes and I'll give you some data points okay data point number one the Financial Times released a report on social media usage and what they saw is 2022 was the peak and it's plateaued ever since the generation that's plateaued the fastest and heading down is the younger generations the boomers are still off to the races right so on Facebook and stuff and then you look at the way Gen Alpha are using social media they're not posting as much they call it uh posting zero they're scrolling sometimes but they're in dark social environments like WhatsApp and Snapchat and iMessage they're not like performing to the world
[译文] [嘉宾]: 是的,是的,我给你一些数据点。好的,数据点一:《金融时报》(Financial Times)发布了一份关于社交媒体使用情况的报告,他们看到2022年是巅峰,从那以后就停滞不前(Plateaued)了。停滞最快且呈下降趋势的世代是年轻一代。婴儿潮一代(Boomers)依然在狂欢,对吧,在脸书(Facebook)之类的平台上。然后你看看阿尔法世代(Gen Alpha)使用社交媒体的方式,他们不再发那么多帖子了,他们称之为,呃,“零发帖”(Posting zero)。他们有时会滑动浏览,但他们处于“私域社交”(Dark social)环境中,比如WhatsApp、Snapchat和iMessage,他们不像是在向全世界表演。
[原文] [Guest]: they also value IRL experiences much more than any other generation they're like not getting smashed we're seeing every brand has a run club um I mean running is exploding around the world and we're seeing this real sort of almost innate realization that like technology let us down at some fundamental level like dating apps let us down social networking kind of has let us down and we're seeing I think maybe a bifurcation of society where a lot of people are going fuck this like I want to go back to what it is to be a human
[译文] [嘉宾]: 他们也比其他任何世代都更看重现实生活(IRL)的体验。他们比如不会去烂醉如泥;我们看到每个品牌都有一个跑步俱乐部(Run club),嗯,我的意思是跑步正在世界各地呈爆发式增长。我们看到了这种真实的、有点像是内在的觉醒:技术在某个基本层面上让我们失望了。比如约会软件让我们失望了,社交网络有点让我们失望了。而且我们正在看到,我认为也许是社会的分化(Bifurcation),很多人都在说:“去他的吧,我想回归作为一个人类该有的样子。”
[原文] [Host]: and I would imagine that in such a world where intelligence is so sophisticated that we no longer needed to sit at laptops like I think screen time is going to continue to fall i think you go into an office you're not going to see people sat at laptops you're going to see something completely different and I think maybe you know and then we talk about robots and Optimus robots elon says there'll be 10 billion Optimus robots elon has been wrong with timing before but he's almost never been completely wrong on the big things it's just his timing has got a bad track record um so I think he's probably right you know
[译文] [主持人]: 而且我能够想象,在这样一个智能如此复杂高级的世界里,我们不再需要坐在笔记本电脑前。比如我认为屏幕使用时间(Screen time)将继续下降。我认为你走进一个办公室,你不会看到人们坐在笔记本电脑前,你将看到完全不同的景象。而且我认为也许你知道,然后我们谈论机器人和擎天柱机器人(Optimus robots)。埃隆说将会有100亿个擎天柱机器人。埃隆以前在时间点上犯过错,但他在大方向上几乎从来没有完全出错,只是他在时间点上的预判记录不太好,嗯。所以我认为他可能是对的,你知道。
[原文] [Host]: I think I've I've got some people on the way from Boston Dynamics and these other big companies like Scale AI and they're actually bringing the robots here to show it like folding laundry doing the dishes i'm not saying that's what I would want in my home but I think factory work is going to completely change i think a lot of manual labor is going to completely change and I think we're going to be forced to do what only we can do
[译文] [主持人]: 我想我已经、我已经请到了一些来自波士顿动力(Boston Dynamics)以及其他大公司(比如Scale AI)的人在来节目路上了,而且他们实际上正在把机器人带到这里来展示它,比如叠衣服、洗碗。我不是说那就是我希望在我家里出现的东西,但我认为工厂里的工作将发生彻底的改变。我认为大量的体力劳动将发生彻底的改变。而且我认为我们将被迫去做只有我们(人类)能做的事情。
[原文] [Host]: um Sebastian who's the CEO of Cler has actually just called me hello Sebastian you're right hey how are you i'm good how are you it's been a while it has been a while since you're on the show i was just saying we do need to get you back on i I just I just had a couple of simple questions cuz you know I do a lot of interviews and um Clan has always mentioned because I think the media has said that you like double down on AI then you reversed because it didn't work out so I know I spoke to you a while ago and we exchanged a couple of DMs about it but that was more than a it was almost a year ago now so I just wanted to get an update on Cler's business AI agents and all of that if possible
[译文] [主持人]: 嗯,Klarna(转录文本拼写为Cler)的CEO塞巴斯蒂安刚好给我打电话了。你好,塞巴斯蒂安,你还好吗?嘿,你怎么样?我很好,你怎么样?有一段时间没联系了。自从你上次来节目确实有一段时间了。我刚才还在说我们确实需要让你再来一次节目。我、我只是、我只是有几个简单的问题,因为你知道我做过很多采访,嗯,Klarna(转录文本拼写为Clan)总是被提及,因为我认为媒体曾说过你们似乎加倍押注了AI,然后你们又因为行不通而反悔了。所以我知道我不久前和你谈过,我们就此交换了几条私信,但那是一年多……现在差不多快一年前的事了。所以我只是想获取一下关于Klarna业务、AI智能体以及所有相关事情的最新进展,如果可以的话。
[原文] [Guest (Sebastian via phone)]: first and foremost we were early on uh released um AI uh to support our customer service which had that uh initial uh benefit of uh more calls being dealt with by AI which customers liked because those calls or chat messages were much much faster and more qualitative then since then that has actually expanded slightly um what we did however try to communicate as well is that we believed in a world of where AI is cheap and available the value of human interaction will be regarded as higher so the future of customer service VIP is a human
[译文] [嘉宾 (塞巴斯蒂安 电话连线)]: 首先也是最重要的是,我们很早就,呃,发布了,嗯,AI,呃,来支持我们的客户服务。它带来了那种,呃,初步的,呃,好处,那就是更多的呼叫由AI处理,顾客们喜欢这样,因为那些呼叫或聊天信息处理得快得多、质量也高得多。从那以后,这实际上得到了轻微的扩展。嗯,然而我们同时试图传达的是,我们相信在一个AI变得廉价且随时可用的世界里,人际互动的价值将被看得更高。所以未来的客户服务VIP将是人类。
[原文] [Guest (Sebastian via phone)]: um we have then hence doubled down on providing more of that but at the same time the efficiency gains within the company has continued i mean we used to be about 6,000 people and and now we are less than 3,000 which is 2 3 years since we stopped recruiting and at same point in time our revenue has doubled right so you can clearly see that AI has allowed us to be do more with less people but we have avoided layoffs and instead relied on natural attrition when people kind of move on to other jobs
[译文] [嘉宾 (塞巴斯蒂安 电话连线)]: 嗯,因此我们随后加倍努力去提供更多那种(人类)体验;但与此同时,公司内部的效率提升仍在继续。我的意思是,我们过去大约有6000人,现在我们不到3000人。自我们停止招聘以来已经有两三年了,而在同一时期,我们的收入翻了一番,对吧。所以你可以清楚地看到,AI使我们能够用更少的人做更多的事。但我们避免了裁员(Layoffs),而是依赖于当人们转向其他工作时的自然减员(Natural attrition)。
[原文] [Guest (Sebastian via phone)]: i mean from my perspective we will continue to be very you know not really recruit much i mean we recruit a little bit here and there but we expect that kind of natural attrition of 10 15% per year to continue and to become fewer i think the big breakthrough was really in November December last year where even the kind of more most skeptical uh engineers who were like very well-renowned and and appreciated like the founder of Linux and stuff like that basically said that coding has now been resolved and hence is not you know uh you don't need to code anymore and that was kind of a common sentiment so I think in in coding that's definitely an engineering work that has been a tremendous shift in the last six months
[译文] [嘉宾 (塞巴斯蒂安 电话连线)]: 我的意思是,从我的角度来看,我们将继续保持非常,你知道的,不怎么进行大量招聘的状态。我的意思是,我们到处零星招一点人,但我们预计那种每年10%到15%的自然减员会继续下去,人数会变得更少。我认为巨大的突破其实是在去年11月、12月,当时连那种更、最怀疑的,呃,工程师们(那些非常著名和受人尊敬的人,比如Linux的创始人等等),基本上都说编程(Coding)现在已经被解决了,因此不再是……你知道,呃,你不再需要自己写代码了。这算是一种普遍的情绪。所以我认为,在编程领域,这绝对是工程工作在过去六个月里发生的巨大转变。
[原文] [Host]: what do all these people go do Sebastian
[译文] [主持人]: 所有这些人要去干什么呢,塞巴斯蒂安?
[原文] [Guest (Sebastian via phone)]: i am optimistic i mean I think obviously people will have a lot of opinions about this topic but I still believe that we are going to move towards a richer society now in the short term there could be more worry about what happens if people don't get a job and and so forth but I think in the longer term I I am optimistic what it means for society and humanity
[译文] [嘉宾 (塞巴斯蒂安 电话连线)]: 我是乐观的。我的意思是,我认为显然人们对这个话题会有很多意见,但我仍然相信我们将走向一个更富裕的社会。现在,在短期内,人们可能会更担心如果找不到工作等等会发生什么;但我认为,从长期来看,我、我对它对社会和人类意味着什么持乐观态度。
[原文] [Host]: thank you so much Seb i'll chat to you soon thank you for taking the time i appreciate you mate thanks all right all right byebye byebye
[译文] [主持人]: 太感谢你了,Seb(塞巴斯蒂安的简称),我回头再跟你聊。感谢你抽出时间,谢谢你兄弟,多谢。好的好的,拜拜,拜拜。
[原文] [Host (Ad Break)]: you know the little traditional SIM card that goes inside of our phones they haven't changed at all since they were invented in the '90s you have this physical piece of plastic that means you're locked into one carrier one network and the second you cross a border that carrier can start charging you whatever they want but there are alternatives and today's sponsor SY is one of them it's an eSIM app that gives you a safe and secure data connection in over 200 destinations all of their ESIMs have built-in cyber security which is great if you're traveling for work and looking at confidential material i've been using SY whenever I travel because the connection is always reliable and it saves me a ton of roaming fees it also means I don't have to deal with all of the faf that surrounds sorting out a SIM everywhere I go if you want to give it a try download the sale app from the app store now and scan the QR code on screen and if you want 15% off your first purchase use my code D O A when you get to check out that's D O A for 15% off keep that to yourself
[译文] [主持人 (广告口播)]: 你知道放在我们手机里的传统小SIM卡吗?自90年代被发明以来,它们一点也没变过。你拿着这块物理塑料片,意味着你被锁定在一个运营商(Carrier)、一个网络上,而一旦你跨越边境,那个运营商就可以开始向你收取任何他们想要的费用。但现在有了替代方案,今天的赞助商SY(实为Saily,转录有误)就是其中之一。它是一款eSIM应用程序,可以在200多个目的地为您提供安全可靠的数据连接。他们所有的eSIM都内置了网络安全功能,如果您是出差并需要查看机密材料,这就太棒了。我每次旅行都使用SY,因为连接总是很可靠,而且它为我节省了大量的漫游费。这也意味着我不需要应对走到哪都要弄一张SIM卡的繁琐麻烦(Faf)。如果你想试一试,现在就从应用商店下载这个应用,扫描屏幕上的二维码;如果你想在首次购买时享受15%的折扣,在结账时使用我的代码D-O-A,也就是D-O-A打八五折,这事儿只有你知我知。
[原文] [Host (Ad Break)]: this is something that I've made for you i've realized that the Dio audience are strivals that we want to accomplish and one of the things I've learned is that when you aim at the big big big goal it can feel incredibly psychologically uncomfortable because it's kind of like being stood at the foot of Mount Everest and looking upwards the way to accomplish your goals is by breaking them down into tiny small steps and we call this in our team the 1% and actually this philosophy is highly responsible for much of our success here so what we've done so that you at home can accomplish any big goal that you have is we've made these 1% diaries and we released these last year and they all sold out so I asked my team over and over again to bring the diaries back but also to introduce some new colors and to make some minor tweaks to the diary so now we have a better range for you so if you have a big goal in mind and you need a framework and a process and some motivation then I highly recommend you get one of these diaries before they all sell out once again and you can get yours at the diary.com and if you want the link the link is in the description below
[译文] [主持人 (广告口播)]: 这是我为你做的一样东西。我意识到Dio(指The Diary of a CEO的简写)的观众都是我们想要实现目标的奋斗者(Strivers,转录为strivals)。我学到的一件事是,当你的目标是那一个大大大的目标时,在心理上会感到难以置信的不舒服,因为这有点像站在珠穆朗玛峰脚下向上看。实现目标的方法是将它们分解成微小的步骤,我们在团队中称之为“1%”。实际上,这种哲学很大程度上促成了我们在这里取得的许多成功。所以我们所做的——为了让你在家里也能实现你拥有的任何大目标——是我们制作了这些“1%日记本”。我们去年发布了这些,它们全部售罄了。所以我一遍又一遍地要求我的团队把这些日记本带回来,但也引入了一些新的颜色,并对日记本做了一些小的微调。所以现在我们为你提供了更好的系列。所以如果你心里有一个大目标,你需要一个框架(Framework)、一个流程和一些动力,那么我强烈建议你在它们再次售罄之前买一本这样的日记本。你可以在thediary.com上买到你的日记本,如果你想要链接,链接就在下面的描述中。
章节 12:环境危机与被剥削的弱势群体
📝 本节摘要:
本节中,嘉宾反驳了“AI让人类回归线下真实连接”的乐观精英视角。她通过《纽约杂志》的报道指出,真正的现实是大量被裁的底层员工和白领为了糊口,被迫沦为“数据标注”流水线上的计件工,甚至连陪伴孩子的时间都被平台算法无情剥夺。更为触目惊心的是,AI巨头们正在全球各地(如得州阿比林、田纳西州孟菲斯)疯狂建设堪比城市规模的超级计算中心。这些数据中心不仅掠夺当地的淡水与电力资源,甚至排放大量有毒气体,导致有色人种和弱势社区的哮喘与肺癌发病率激增。AI不仅没有让大众“更像人类”,反而极大地加剧了阶级分化(“有产者”与“无产者”的鸿沟),让弱势群体的生存境遇雪上加霜。
[原文] [Host]: any thoughts
[译文] [主持人]: 你有什么想法吗?
[原文] [Guest]: well I actually had thoughts on something that you said before he called which is you were saying that the Jenzers like there's this trend that they're actually disconnecting from technology so they're becoming more in person and then there's this other class of workers that are actually leaning into the technology but then becoming more human because they're leaning into the technology because they're realizing that they should actually just be spending more time doing inerson interactions rather than staring at a spreadsheet and so they're no longer doing the typing whatever
[译文] [嘉宾]: 嗯,实际上,对于他来电之前你说的那些话,我有一些想法。你当时说Z世代(Gen Z-ers,原转录拼为Jenzers)有一种趋势:他们实际上正在断开与技术的连接,变得更多地参与面对面的现实互动;然后还有另一类人,他们实际上正在拥抱技术,但随后变得“更像人类”,因为他们在拥抱技术后意识到,自己其实应该花更多时间进行面对面的人际互动,而不是盯着电子表格,所以他们不再做打字之类的工作了。
[原文] [Guest]: i really want to go back to this New York Magazine piece that just came out because what you're describing is true for a very specific category of people which is often like the business owners and leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time but what the piece talks about is the working class like people like people who are not business owners that are then having to experience being laid off and then working for the data annotation industry which is now one of the top jobs on LinkedIn by the way um the yeah so LinkedIn had a report that showed the top 10 jobs with the highest growth in the last year and data annotation is on that list
[译文] [嘉宾]: 我真的想回到刚刚出版的那篇《纽约杂志》(New York Magazine)的文章,因为你所描述的只对一个非常特定的人群是真实的,通常是那些企业主和公司内部的领导层,他们实际上可以决定如何打发自己的时间以及最终用自己的时间做什么。但那篇文章谈论的是工人阶级(Working class),比如那些不是企业主的人,他们不得不经历被裁员,然后为数据标注(Data annotation)行业工作。顺便说一句,这现在是领英(LinkedIn)上的顶级工作之一。嗯,是的,领英有一份报告显示了去年增长最快的十大工作,而数据标注就在那个名单上。
[原文] [Host]: and for anyone that doesn't know what data annotation is
[译文] [主持人]: 给任何不知道数据标注是什么的人解释一下。
[原文] [Guest]: yeah so data annotation is the process of teaching these chat bots or or any AI system to do what they ultimately are able to do so the fact that chat GBT can chat is because there were tens of thousands or hundreds of thousands of people that were literally typing into a large language model and showing it this is how you're supposed to then respond when a user types in a prompt like this before they did that work chatgbt didn't exist like it just it would just you would prompt the model and the model would generate some text that was not in dialogue with the person it would kind of generate something that was adjacently related
[译文] [嘉宾]: 是的,所以数据标注就是教这些聊天机器人(Chat bots)或任何AI系统去做它们最终能够做的事情的过程。ChatGPT之所以能够聊天,是因为曾有成千上万甚至数十万的人,真的在亲手向一个大型语言模型输入文字,向它示范:“当用户输入像这样的提示词(Prompt)时,你应该这样回应。”在他们做这项工作之前,ChatGPT是不存在的。比如,它只是……你向模型输入提示词,模型会生成一些并非在与人对话的文本,它只会生成一些沾点边的相关内容。
[原文] [Host]: is this what they call reinforcement learning where you kind of you give it like a
[译文] [主持人]: 这是不是他们所说的强化学习(Reinforcement learning),某种程度上你给了它一个类似……
[原文] [Guest]: it's a part of the process of reinforcement learning so you do data annotation which is literally um showing lots of different um you know examples of things that you want the model to know and then reinforcement learning is getting the model to then train on those examples iteratively in a way that then gives the model some of those capabilities
[译文] [嘉宾]: 它是强化学习过程的一部分。所以你做数据标注,字面意思上就是,嗯,向模型展示大量不同的,嗯,你知道的,你希望模型知道的事情的例子;然后强化学习就是让模型接着在这些例子上进行迭代训练(Iteratively),从而赋予模型其中一些能力。
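为帮助非技术听众直观理解上文“数据标注→迭代训练”的流程形状,下面给出一个极简的Python示意。需要强调:这不是任何真实模型或训练API,`ToyModel`、`annotated_examples` 等名称均为本文虚构的演示用假设,真实的大模型训练要复杂得多。

```python
# 概念示意(假设性玩具代码,非任何真实训练API):
# 数据标注员先写好“提示词 -> 示范回答”的例子,
# 模型再在这些例子上反复(迭代)训练,从而获得对话能力。

annotated_examples = [
    {"prompt": "你好", "response": "你好!有什么可以帮你?"},
    {"prompt": "再见", "response": "再见,祝你一切顺利!"},
]

class ToyModel:
    """用查表代替真实的大语言模型,仅演示流程,不代表真实实现。"""

    def __init__(self):
        self.table = {}

    def train_step(self, example):
        # “训练”在这里被简化为:记住标注员给出的示范回答
        self.table[example["prompt"]] = example["response"]

    def generate(self, prompt):
        # 未见过的提示词只能给出占位回答,正如嘉宾所说:
        # 在标注工作完成之前,模型并不能真正“对话”
        return self.table.get(prompt, "(尚未学会的回答)")

model = ToyModel()
for epoch in range(3):  # 迭代训练:同一批标注数据反复过多轮
    for example in annotated_examples:
        model.train_step(example)

print(model.generate("你好"))  # 输出: 你好!有什么可以帮你?
```

真实流程中,“记住答案”这一步会被替换为在海量标注样本上做梯度更新(即监督微调与强化学习),但“人先示范、模型再学”的结构是一样的。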
[原文] [Guest]: and what the New York Magazine piece highlighted is many many of the people that are getting laid off now or or or are struggling to find work and these are highly educated people they're college graduates PhD graduates law degree graduates doctors um and again like award-winning directors that are that are then struggling to find employment in the economy because the economy has been very much restructured by AI they are then finding themselves being serving this industry and the industry is designed in a way that is extremely inhumane because what the companies the companies that use these data annotation services like there's these third party providers that are data annotation firms an open AI a gro um a Google they will hire these firms to then find the workers to perform the data annotation tasks that they need for these
[译文] [嘉宾]: 而《纽约杂志》那篇文章强调的是,现在许多被裁员或者正在艰难寻找工作的人——这些都是受过高等教育的人,他们是大学毕业生、博士毕业生、法学学位毕业生、医生,嗯,重申一下,还有像屡获殊荣的导演——他们由于经济已经被AI极大地重构(Restructured),进而在经济体系中艰难求职;他们随后发现自己正在为这个行业服务,而这个行业的设计方式极其不人道(Inhumane)。因为那些公司,那些使用这些数据标注服务的公司,比如有一些第三方供应商(即数据标注公司),OpenAI、Grok(转录拼为gro)、嗯,或者是谷歌(Google),他们会雇佣这些外包公司去寻找工人,来执行他们所需的数据标注任务。
[原文] [Guest]: These firms these third party firms they are incentivized to pit workers against each other because they want this data annotation to happen at speed and as cheaply as possible so that they can also compete with one another in this middle layer to get the the the bid the the contract from the the client
[译文] [嘉宾]: 这些公司,这些第三方公司,他们受到激励去让工人们互相对立内卷,因为他们希望这种数据标注能以最快的速度、尽可能便宜地完成,这样他们也可以在这个中间层互相竞争,从而赢得竞标,拿到客户的合同(Contract)。
[原文] [Guest]: and so all of these workers that were interviewed for this New York Magazine story talk about how they actually no longer have an ability to be human because they are waiting at their laptop to be pinged on Slack for when a project is going to open up for data annotation because they've tried job hunting they literally can't find anything else this is the thing that's going to help them put food on the table for their kids
[译文] [嘉宾]: 所以所有这些为《纽约杂志》报道接受采访的工人都谈到了,他们实际上不再有能力成为一个正常的人类(Be human),因为他们一直在笔记本电脑前等待着Slack上的消息通知(Pinged),等着什么时候会有数据标注的项目开放。因为他们尝试过找工作,他们真的找不到任何其他工作了,这就是唯一能帮助他们养活孩子们、把食物端上桌子的东西。
[原文] [Guest]: and there was this one woman who said like "I have so much anxiety about when the project is going to come when it's going to leave that when the project came it was right when my kid was coming off of off of school." And I just started tasking furiously because I don't know what's going to go and I need to earn as much money as possible in this window of opportunity so then my when my kid came home and tried to talk to me I screamed at my child for for distracting me
[译文] [嘉宾]: 其中有一位女士说:“对于项目什么时候会来、什么时候会结束,我感到非常焦虑。项目真的来的时候,正好赶上我孩子放学。我开始疯狂地做任务,因为我不知道机会什么时候会溜走,我必须在这个窗口期里赚尽可能多的钱。所以当我的孩子回家想跟我说话时,我竟朝孩子尖叫,怪他让我分心。”
[原文] [Guest]: and then she was like "I've become a monster and I'm not even allowed to go to the bathroom or take care of my kids let alone myself because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my life atomizing my work devaluing my expertise and then harvesting it for the perpetuation of this machine that all of these AI executives are saying is then going to come for everyone else's jobs
[译文] [嘉宾]: 然后她说:“我变成了一个怪物,我甚至不被允许去上洗手间或照顾我的孩子,更不用说照顾我自己了。因为这个正在吸收越来越多被裁工人的行业,正在将我的生活机械化(Mechanizing),将我的工作原子化(Atomizing),贬低我的专业知识,然后收割它,只为了延续这台机器,而所有这些AI高管都说这台机器之后将会取代其他所有人的工作。”
[原文] [Guest]: and so what you were saying about these this class of workers the business owners that get to become more human because there are all of these AI models now doing the tasks that they don't have to do anymore it is at the cost of the vast majority of people who are not business owners that are struggling to find work getting absorbed into the work of then providing these technologies that the business owners can use and instead of becoming more human they feel like their humanity has been squeezed and diminished and they have no ability to have control agency and dignity in their lives anymore
[译文] [嘉宾]: 所以你刚刚提到的那类工人——那些企业主,他们因为有了所有这些AI模型来做那些他们不再需要做的任务,从而得以变得“更像人类”——这是以绝大多数非企业主的人为代价的。这些人正在艰难求职,被吸收到提供这些技术(供企业主使用)的工作中。他们不但没有变得更像人类,反而觉得他们的人性(Humanity)被榨干和削弱了,他们在生活中不再有任何控制权、自主权(Agency)和尊严(Dignity)。
[原文] [Host]: i think this is a big I think this is a big question that kind of pertains to this graph here which is you know all of these people if we believe anthropics prediction of who will be disrupted these people in these industries like arts and media legal um life and social sciences architecture and engineering computer and maths business and finance and management and also office and admin these people if we believe this would have to retrain at something else and unlike the industrial revolution where you might get 10 20 years to retrain because factories take a long time to build the distribution layer that AI sits on top of is the open internet so this is why chat can go and get hundreds of millions of users in no time at all and become the fastest growing company of all time um one of my fears is that this disruption takes place at a speed where we can't transition
[译文] [主持人]: 我认为这是一个很大的……我认为这是一个很大的问题,某种程度上与这里的这张图表有关:也就是你知道,如果我们要相信Anthropic关于谁将被颠覆(Disrupted)的预测,这些行业中的所有人——比如艺术和媒体、法律、嗯,生命和社会科学、建筑和工程、计算机和数学、商业和金融与管理,以及办公和行政管理人员——如果我们要相信这一点,这些人将不得不重新接受培训去干点别的。而不同于工业革命(Industrial revolution,那时你可能有10到20年的时间去重新培训,因为建工厂需要很长时间),AI所位于的那个分发层(Distribution layer)是开放的互联网。这就是为什么ChatGPT能在极短的时间内获得数亿用户,并成为有史以来增长最快的公司。嗯,我的担忧之一是,这种颠覆发生的速度让我们根本无法过渡(Transition)。
[原文] [Guest]: and that was you know that I think you you you said that sentence in the passive voice the transition would happen at a speed but who is driving that speed um it's the companies and their race with one another
[译文] [嘉宾]: 而这就是,你知道的,我认为你、你、你用被动语态(Passive voice)说了那句话——“过渡将以某种速度发生”,但是谁在驱动这种速度呢?嗯,是这些公司,以及他们彼此之间的军备竞赛。
[原文] [Host]: yeah
[译文] [主持人]: 是的。
[原文] [Guest]: and so they are driving the transition to happen at a speed at which it would be really hard to take care of all of the people that would be bulldozed over by
[译文] [嘉宾]: 所以他们正在推动这种过渡以极快的速度发生,在这种速度下,我们很难照顾到所有那些将被它推土机般无情碾过(Bulldozed over by)的人。
[原文] [Host]: this is one of the crazy questions that no one can answer for me when I sit with these people that are AI CEOs so I go "So what happens to the people if this is if you agree that this is going to happen at super speed?" You know I spoke to that CEO of Uber Dar who said very similar things to what you're saying is you know there'll be data labeling jobs for example for the drivers but um they can't all become data labelers and there's a question around meaning and purpose and fulfillment and that comes from losing your meaning in life i s also sit here with so many people who talk about how their father lost their job in Iran or some some other country and came to the United States and had to be a a toilet cleaner on particular case was a doctor in Iran but came to the US and was a toilet cleaner and had to deal with the sense of shame that that particular person felt and the lack of dignity that that caused and how that made that person's self-esteem feel and the depression alcoholism that transpired from that um if this happens at a large scale across society there's going to be a ton of consequences like that
[译文] [主持人]: 当我和那些AI公司的CEO坐在一起时,这是那些无人能为我解答的疯狂问题之一。所以我会问:“如果这是……如果你同意这将以超级速度发生,那么人们会怎样?”你知道,我和优步(Uber)的CEO达拉交谈过,他说了和你非常相似的话,他说,你知道,例如会有给司机做的数据标注(Data labeling)工作。但是,嗯,他们不可能都成为数据标注员,而且这里有一个关于意义(Meaning)、目标(Purpose)和成就感(Fulfillment)的问题,而那源于你在生活中失去了意义。我也和许多人坐在这里谈论过,他们的父亲如何在伊朗或其他国家失去了工作,来到美国后不得不成为一名厕所清洁工。有个具体的案例,一个人在伊朗是医生,但来到美国后却做起了厕所清洁工,不得不去应对那种深深的羞耻感,以及那带来的尊严缺失,这极大影响了那个人的自尊,甚至引发了随之而来的抑郁和酗酒。嗯,如果这在全社会大规模发生,将会产生大量类似的后果。
[原文] [Guest]: i mean this is this is like the core themes of my work and the reason why I'm critical of these companies is that they are creating technologies in a way that creates the halves and have nots in an extreme form that we have it's it's exacerbating the inequality that we already see in the world like the people who have things will have way more riches they'll have way more free time they'll be allowed to be more human but the people who don't have things are even being squeezed even more
[译文] [嘉宾]: 我的意思是,这、这就像是我工作的核心主题。我之所以批评这些公司,是因为他们创造技术的方式,正在以一种极端的形式制造出“有产者”和“无产者”(Haves and have nots,富人与穷人)。这在加剧我们已经在世界上看到的不平等:那些拥有资产的人将获得多得多的财富,他们将拥有多得多的空闲时间,他们将被允许更加“像一个人”;但那些一无所有的人正在遭受更严重的压榨。
[原文] [Guest]: and it's not just from a work perspective i mean I talk in my book also about the environmental and public health crisis that these companies have created where they are building these colossal supercomput facilities there and and in in comm community like communities all around the world and they specifically pick some of the most vulnerable communities
[译文] [嘉宾]: 而且这不仅仅是从工作的角度来看的。我的意思是,我在我的书中也谈到了这些公司造成的环境和公共卫生危机,他们在世界各地的社区中建设这些庞大的超级计算设施(Supercomputer facilities),并且他们专门挑选了一些最弱势(Vulnerable)的社区。
[原文] [Guest]: we're sitting in Texas right now open AAI's largest one of its largest data center projects is being built in Abalene Texas as part of the Stargate initiative which was an effort announced at the beginning of Trump's second administration to spend $500 billion on AI computing infrastructure this facility consumes will when it's finished will consume more than a gigawatt of power which is over 20% over 20% so this is actually a little bit inaccurate now um this was something that circulated online for a while but there's updated numbers
[译文] [嘉宾]: 我们现在坐在得克萨斯州(Texas),OpenAI最大的、其最大的数据中心项目之一正在得州阿比林(Abilene,转录拼为Abalene)建设,这是“星际之门计划”(Stargate initiative)的一部分,该计划是在特朗普第二届政府初期宣布的一项耗资5000亿美元用于AI计算基础设施的努力。这个设施消耗的……当它建成时,将消耗超过一吉瓦(Gigawatt)的电力,这超过了20%……超过了20%……所以这其实现在有一点不准确了,嗯,这是在网上流传了一阵子的数据,但现在有更新的数据了。
[原文] [Host]: just for someone that can't see cuz they're listening on Spotify or something it's a picture of the size of this facility so this is not the Abene Texas one this is a meta facility
[译文] [主持人]: 为了那些看不到画面的人解释一下,因为他们可能正在Spotify或其他平台上收听,这是一张展示这个设施规模的图片。所以这不是得州阿比林的那个,这是一个Meta的设施。
[原文] [Guest]: yeah so let's first talk about opening eyes facility in Texas that one would be the size of Central Park and it would run a million computer chips and it would require the power of more than 20% of New York City
[译文] [嘉宾]: 是的,所以让我们先谈谈OpenAI在得克萨斯的设施。那个设施将有中央公园(Central Park)那么大,它将运行100万个计算机芯片,并需要超过整个纽约市(New York City)20%的电力。
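嘉宾口中“超过一吉瓦”与“超过纽约市20%的电力”的关系,可以用一段简单的Python做数量级验证。注意:下面两个数值(设施耗电1.2吉瓦、纽约市平均负荷5.5吉瓦)都是为演示而做的粗略假设,并非访谈原文给出的精确数据。

```python
# 粗略换算示意:数据中心耗电占纽约市平均电力负荷的比例
# 注意:以下两个数值均为演示用的粗略假设,仅用于说明数量级
facility_gw = 1.2   # 假设:设施满载耗电“略超一吉瓦”
nyc_avg_gw = 5.5    # 假设:纽约市全年平均电力负荷约为5.5吉瓦

share = facility_gw / nyc_avg_gw
print(f"约占纽约市平均负荷的 {share:.0%}")  # 输出: 约占纽约市平均负荷的 22%
```

在这组假设下换算结果约为22%,与访谈中“超过20%”的说法在数量级上吻合。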
[原文] [Host]: do you know one of the things which I found confusing so I'd like to like alleviate the dissonance is I thought you were saying earlier that you didn't think the job disruption promises were real
[译文] [主持人]: 你知道我发现有一件事情让我感到困惑吗?所以我想试着消除这种认知失调。我以为你之前在说,你认为那种关于工作将被颠覆的承诺并不真实。
[原文] [Guest]: no what I was saying is that when we talk about what these executives predict about the future we need to understand that they are ultimately trying to influence the public in a way that allows them to continue maintaining control over the technology
[译文] [嘉宾]: 不,我当时说的是,当我们谈论这些高管对未来的预测时,我们需要明白,他们最终试图以一种允许他们继续保持对这项技术的控制权的方式,来影响公众。
[原文] [Host]: but objectively do you think that the job disruption that they talk about where
[译文] [主持人]: 但客观地说,你认为他们谈论的工作颠覆(Job disruption)会不会……
[原文] [Guest]: Yeah yeah i mean I I mentioned real well I I don't want to comment specifically on like this chart but it's like we've already seen in job reports that there is a restructuring of the economy happening right now
[译文] [嘉宾]: 是的,是的。我的意思是,我确实提到过……嗯,我不想专门去评论这张图表,但我们在就业报告中已经看到,现在正发生着一场经济结构的重组(Restructuring)。
[原文] [Guest]: yeah but but going back to like the data center so this supercomputer facility it's a meta supercomputer facility is being built in Louisiana and it would be four times the size of the Abene Texas one and use half of the average power demand of New York City so it's one the size of Manhattan this makes it seem like almost all of Manhattan but it's it would be 1/5 the size of Manhattan
[译文] [嘉宾]: 是的,但让我们回到数据中心的话题。这个超级计算设施是Meta正在路易斯安那州(Louisiana)建设的超算设施,它的规模将是得州阿比林设施的四倍,耗电量相当于纽约市平均电力需求的一半。图上把它画得几乎有整个曼哈顿(Manhattan)那么大,但实际上它大约是曼哈顿面积的五分之一(1/5)。
[原文] [Guest]: when these facilities go into these communities what happens power utility increases grid reliability decreases the facilities also need fresh water to generate the power for powering them as well as fresh water to cool and there have been lots of documented stories of communities that are already really constrained in their freshwater resource they're under a drought when a facility comes in and then there are people the community is actually like competing with this facility for fresh water i talk about one of those communities in my book
[译文] [嘉宾]: 当这些设施进入这些社区时,会发生什么?电费上涨,电网的可靠性下降。这些设施还需要淡水来发电为其供能,并且需要淡水来冷却。已经有许多有记录的案例表明,一些淡水资源原本就非常紧张、正处于干旱中的社区,在设施进驻之后,社区里的人们实际上就像在跟这个设施争夺淡水。我在书中写到了其中一个社区。
[原文] [Guest]: and also sometimes these facilities instead of connecting to the grid they instead a a power plant pops up next to it so in Memphis Tennessee where Musk built Colossus the supercomputer for training Grock he used 35 methane gas turbines to power the facility
[译文] [嘉宾]: 此外,有时候这些设施非但不接入电网,反而在旁边凭空冒出一个发电厂。比如在田纳西州的孟菲斯(Memphis Tennessee),马斯克建了用来训练Grok大模型的超级计算机“巨像”(Colossus),他使用了35台甲烷燃气轮机(Methane gas turbines)来为该设施供电。
[原文] [Guest]: this is a working-class community a black and brown community a rural community that was not even told that they would be the hosts of this facility and they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms and that's when they discovered that these methane gas turbines were taking away their right to clean air
[译文] [嘉宾]: 这是一个工人阶级社区,一个由黑人和棕色人种组成的社区,一个甚至没人告知他们将成为该设施“宿主”的乡村社区。他们发现这件事,是因为他们在自家的客厅里真的闻到了类似于煤气泄漏的味道,就在那时他们才发现,这些甲烷燃气轮机正在剥夺他们呼吸清洁空气的权利。
[原文] [Guest]: and this is a community that's already been facing a history of environmental racism they had already had lots of struggles to access their right to clean air and now there's this huge supercomput that's landed in their midst that is pumping thousands of tons of toxins into their air exacerbating the asthmatic symptoms of the children exacerbating the respiratory illnesses of other people that it's it's one of the communities that has the highest rates of um lung cancer
[译文] [嘉宾]: 这是一个已经面临着环境种族主义(Environmental racism)历史的社区,他们已经为了获得呼吸清洁空气的权利经历过许多斗争。而现在,这个巨大的超级计算机降临在他们中间,正将数千吨的毒气排入他们的空气中,加剧了儿童的哮喘症状(Asthmatic symptoms),加剧了其他人的呼吸道疾病,这、这是肺癌(Lung cancer)发病率最高的社区之一。
[原文] [Guest]: and so and that supercomputers taking their jobs and then they also have supercomputers taking their jobs so so this is what I mean is like the halves and have nots are fundamentally being pulled apart even further
[译文] [嘉宾]: 所以,而且那些超级计算机还在抢走他们的工作(原文此处口误重复)。所以,这就是我的意思:“有产者”和“无产者”(Haves and have nots)从根本上被进一步撕裂了。
[原文] [Guest]: like if you in this version of Silicon Valley's future are in the misfortunate category of being a have not we are talking about you now getting a job that is way worse than what you had because you might be doing data annotation and you might be treated as a machine rather than as a human to extract value the value of your labor for perpetuating this labor automating machine that these people are building
[译文] [嘉宾]: 比如在硅谷版本的这个未来中,如果你不幸属于“无产者”类别,这意味着你现在找到的工作将比你以前拥有的糟糕得多。因为你可能会去做数据标注,你可能会被当作一台机器而不是一个人类来对待,以榨取价值——榨取你劳动力的价值,以此来延续这台那些人正在建造的、旨在大规模自动化取代劳动力的机器。
[原文] [Guest]: you might be competing with these facilities for freshwater resources they're also polluting your air your bills have increased so the affordability crisis is getting worse like how is that making people able to be more human
[译文] [嘉宾]: 你可能正在与这些设施争夺淡水资源;它们同时还在污染你的空气;你的账单增加了,所以负担能力危机(Affordability crisis)越来越严重。比如,这一切到底是怎么让人们能够变得“更像人类”的呢?
章节 13:打破帝国垄断:寻找“AI自行车”与民主抗争
📝 本节摘要:
本节是整个访谈的终章。嘉宾提出了一个极具启发的比喻:“AI火箭”与“AI自行车”。当前消耗海量资源、剥削劳动力的大语言模型就像昂贵的“火箭”,而像AlphaFold这样利用小规模精选数据解决特定问题(如蛋白质折叠)、耗能极低的系统则是“自行车”。针对普通人的无力感,嘉宾呼吁大众行动起来“打破帝国垄断”,通过拒绝成为“数据供体”、抗议数据中心建设、通过法律维权等方式,拒绝让AI巨头们的计划“完美运转”。最后,主持人高度评价了这本充满人文关怀的著作《AI帝国:萨姆·奥特曼OpenAI的梦想与梦魇》,并呼吁全社会展开关于AI伦理与社会影响的深刻对话。
[原文] [Guest]: yes okay so one of the analogies that I always use is AI is like the word transportation transportation can literally refer to everything from a bicycle to a rocket and we have nuanced conversations about transportation where we always say we need to transition our transportation towards more uh sustainable options we need a transition towards you know public transport electric vehicles and we don't we don't ever say everyone should get a rocket to do every to serve all of their transportation needs right
[译文] [嘉宾]: 是的,好的。所以我经常使用的一个比喻是:AI就像“交通(Transportation)”这个词,交通字面上可以指代从自行车到火箭的任何东西。我们在讨论交通时会有非常细致的对话,我们总是说我们需要将交通向更可持续的选择过渡,我们需要向公共交通、电动汽车过渡;我们从来不会说“每个人都应该买一枚火箭”来满足他们所有的交通需求,对吧?
[原文] [Guest]: so all of the models that we've been talking about I like to think of them as the rockets of AI they use an extraordinary amount of resources and they provide benefit some dramatic benefit to some people but they're also exacting an extraordinary cost on a large swath of people because of the like the costs of developing this technology why don't we build more bicycles of AI this is things like deep minds alpha fold which is a system that predicts how proteins will fold based on amino acid sequences it's really important for accelerating drug discovery for understanding human disease and it won the Nobel Prize in chemistry in 2024 and the reason why it's a bicycle of AI is because you're using small curated data sets... which means significantly less energy which means less emissions so on and so forth and you're providing enormous benefit to people
[译文] [嘉宾]: 所以我们刚才谈论的所有那些大模型,我喜欢把它们看作是“AI火箭”,它们消耗了极其庞大的资源,它们确实为某些人提供了一些戏剧性的好处,但由于开发这项技术的成本,它们也让很大一部分人付出了极其高昂的代价。我们为什么不多造一些“AI自行车”呢?这就好比DeepMind的AlphaFold,这是一个基于氨基酸序列预测蛋白质将如何折叠的系统,它对于加速药物发现、理解人类疾病非常重要,并赢得了2024年的诺贝尔化学奖。它之所以是“AI自行车”,是因为你使用的是小规模精选的数据集……这意味着开发该系统所需的计算资源大大减少,这也意味着能耗大幅降低、碳排放大幅减少等等,同时你还在为人类提供巨大的利益。
[原文] [Host]: it feels like the horse has left the stable in this regard because they've already taken people's IP they've taken media they they train on this podcast we know they do... do you think there's any chance of it going down do you think there's any chance of this sort of brute force scaling approach where you take data you take computational power energy and you you know you have um the data labelers... do you think there's any chance it's going to stop or go in a different direction other than the one it's going in now
[译文] [主持人]: 在这方面感觉就像“马已经跑出马厩了(木已成舟)”,因为他们已经拿走了人们的知识产权(IP),拿走了媒体内容,他们用这个播客的数据来训练(我们知道他们这么做了)……你认为这种情况有减少的可能吗?你认为这种需要耗费海量数据、计算力、能源以及数据标注员的“大力出奇迹式规模化(Brute force scaling)”的方法,有可能停下来或者走向一个不同于现在的方向吗?
[原文] [Guest]: here's the thing if the horse truly had left the stables they wouldn't have to train on anything anymore why is it that their appetite for data has actually expanded it's because in order to build the next generations of their technologies... they need to train again and again and again and again... i would love to reframe the question and say what should we be doing in this moment where it's not going down... I always say we need to break up the empire and we need to develop alternatives and we are already seeing a flourishing of incredible grassroots movements that are applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda 80% of Americans in the most recent poll think that the AI industry need to be regulated
[译文] [嘉宾]: 问题是这样的,如果马真的已经跑出马厩了,他们就不需要再训练任何东西了。为什么他们对数据的胃口实际上反而变大了?这是因为为了构建下一代技术,他们需要一次又一次地反复训练。我很想重新构建你的问题:在这一切并没有减少的当下,我们应该做些什么?……我总是说,我们需要“打破帝国(Break up the empire)”,我们需要开发替代方案。而且我们已经看到,令人难以置信的草根运动正在蓬勃发展,这些运动正在对帝国试图展开其议程的方式施加巨大的压力。在最近的民意调查中,80%的美国人认为AI行业需要受到监管。
[原文] [Host]: what goal should we be aiming at so if I said to my audience Janet at home because this is kind of what I see in the comments it's hopelessness it's like what can I do
[译文] [主持人]: 我们应该瞄准什么目标?如果我对家里看节目的观众珍妮特(Janet)说……因为这正是我在评论区看到的,是一种无力感,就像是:“我能做些什么?”
[原文] [Guest]: the goal is not that we completely get rid of this technology the goal is that these companies need to stop being empires and the way I define like a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them or the people who use them... they can extract and exploit and extract and exploit and get more value than what they offer... so that's like for me the north star is like we should be pushing back and holding accountable these companies when they operate in an imperial way
[译文] [嘉宾]: 目标并不是我们要彻底摆脱这项技术,目标是这些公司需要停止成为“帝国”。我区分典型企业和“帝国”的方式在于,帝国是建立在这样一种理念之上的:他们不需要与为他们工作的工人或使用他们的人提供“平等的价值交换”……他们可以不断榨取和剥削、榨取和剥削,获取比他们提供的要多得多的价值。所以对我来说,北极星(指引方向的目标)就是:当这些公司以帝国主义的方式运作时,我们应该予以反击并追究他们的责任。
[原文] [Guest]: think about all of the ways that your life intersects with the resources that the AI industry needs to perpetuate what they do and also the spaces that they would need to deploy these technologies... so you're a data donor to these companies you could withhold that data and that's what those artists and writers are doing like they're suing these companies to withhold... you probably have a data center popping up around you if you're at a school environment or a company environment you're probably having a discussion in those environments right now about what should the AI adoption policy be... and so what I would say to every one of your viewers is let's not make it go flawlessly if we don't agree with what they are doing ah okay i got you and then let's build alternatives
[译文] [嘉宾]: 想想你的生活与AI行业为了延续其作为所需的资源之间所有产生交集的方式,以及他们部署这些技术所需的空间……所以,你是这些公司的数据供体(Data donor),你可以拒绝提供这些数据,而这正是那些艺术家和作家正在做的,比如他们起诉这些公司以拒绝提供数据。你周围可能突然冒出一个数据中心;如果你在学校或公司环境中,你现在可能正在讨论AI的采用政策应该是什么……所以我对你的每一位观众想说的是:如果我们不同意他们正在做的事情,就不要让他们的计划完美运转(Flawlessly)!噢,好的,我懂你的意思了。然后,让我们去建立替代方案。
[原文] [Host]: it's strange i'm quite I think I'm I'm I've trained myself to deal with dichotomies in my head and this for me is such a dichotomy where I as a CEO and as a founder as an entrepreneur and someone that loves technology I think it's incredible it's absolutely incredible AI... but and the big but is is it possible to think that is true and also think that there are significant unintended consequences which technology in the history of technology should have taught us to take a moment to pause to talk about
[译文] [主持人]: 这很奇怪,我想我已经训练了自己去处理脑海中的二元对立(Dichotomies)。这对我来说就是一个极大的二元对立:作为一名CEO、创始人、企业家和一个热爱科技的人,我认为AI令人难以置信,它绝对是令人难以置信的……但是,这个巨大的转折是:有没有可能在认为“AI是极好的”同时,也认为“它存在重大的意外后果”?而科技发展史本应教会我们,面对这些后果,我们需要停下脚步来好好谈论一番。
[原文] [Guest]: because I think this is absolutely like you can have both of these things in your head and what I'm saying is that this tension doesn't have to be a tension because we could actually preserve the utility and benefits of these technologies but actually develop and design them in a different way that doesn't have all of these unintended consequences yes and I think there needs to be a big social conversation...
[译文] [嘉宾]: 是的,因为我认为你绝对可以同时在脑海中容纳这两种想法。我想说的是,这种张力(Tension)其实没必要成为一种张力,因为我们实际上完全可以保留这些技术的效用和益处,同时以一种不产生这些意外后果的不同方式来开发和设计它们。是的,我认为需要有一场盛大的社会对话……
[原文] [Host]: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao i'll link it below for anyone that wants to read this book i highly recommend you do it's a New York Times bestseller for good reason Karen thank you thank you so much Stephen
[译文] [主持人]: 《AI帝国:萨姆·奥特曼OpenAI的梦想与梦魇》(Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI),作者是卡伦·郝(Karen Hao)。我会把链接放在下面,强烈推荐给任何想读这本书的人,它成为《纽约时报》畅销书是有充分理由的。卡伦,谢谢你。非常感谢你,斯蒂芬。