章节 1:开篇与董事会风波的个人回顾
📝 本节摘要:
本章以一段关于算力未来价值与 AGI 权力斗争的预告片开场。随后,Lex Fridman 正式介绍了 Sam Altman 及其公司 OpenAI 的成就(GPT-4, Sora 等)。访谈正式开始后,Lex 请 Sam 回顾 2023 年 11 月发生的董事会罢免风波。Sam 形容这是他职业生涯中最痛苦但也充满爱意的经历,仿佛“活着参加了自己的葬礼”。他坦言这段经历虽然混乱且令人心碎,但也增强了公司的韧性。Sam 确认了他一直以来的预判:通往 AGI(通用人工智能)的道路注定是一场巨大的权力斗争。最后,他描述了事后陷入的“神游状态”以及重回工作正轨的过程。
[原文] [Sam Altman]: I think compute is gonna be the currency of the future. I think it'll be maybe the most precious commodity in the world. I expect that by the end of this decade. And possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable. The road to AGI should be a giant power struggle. I expect that to be the case.
[译文] [Sam Altman]: 我认为算力(compute)将会成为未来的货币。我觉得它可能会成为世界上最珍贵的商品。我预计在这个十年结束之前,甚至可能稍微早一点,我们将拥有非常强大的系统,当我们看着它们时会惊叹:“哇,这真的很了不起。”通往通用人工智能(AGI)的道路注定是一场巨大的权力斗争。我预计情况会是这样。
[原文] [Lex Fridman]: Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
[译文] [Lex Fridman]: 无论谁最先造出 AGI,都会获得巨大的权力。你相信自己能掌握那么大的权力吗?
[原文] [Lex Fridman]: The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman.
[译文] [Lex Fridman]: 接下来是与萨姆·奥尔特曼(Sam Altman)的对话,这是他第二次做客本播客。他是 OpenAI 的首席执行官,这家公司开发了 GPT-4、ChatGPT、Sora,也许有一天,它正是那家构建出 AGI 的公司。这是 Lex Fridman 播客。为了支持本节目,请查看描述中的赞助商信息。现在,亲爱的朋友们,有请萨姆·奥尔特曼。
[原文] [Lex Fridman]: Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.
[译文] [Lex Fridman]: 带我回顾一下那场始于 11 月 16 日星期四,对你来说可能是 11 月 17 日星期五的 OpenAI 董事会风波(board saga)吧。
[原文] [Sam Altman]: That was definitely the most painful professional experience of my life and chaotic, and shameful, and upsetting and a bunch of other negative things. There were great things about it too and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time.
[译文] [Sam Altman]: 那绝对是我职业生涯中最痛苦的经历,充满了混乱、羞耻、沮丧以及一大堆其他的负面情绪。其中也有很棒的部分,我真希望当时没有处于那种肾上腺素飙升的状态,导致我没能停下来好好体会那些美好。
[原文] [Sam Altman]: I came across this old tweet of mine or this tweet of mine from that time period, which was it was like kind of going to your own eulogy, watching people say all these great things about you and just like unbelievable support from people I love and care about. That was really nice. That whole weekend I kind of like felt with one big exception, I felt like a great deal of love and very little hate even though it felt like I have no idea what's happening and what's gonna happen here and this feels really bad.
[译文] [Sam Altman]: 我偶然看到了我之前发的一条旧推文,或者是那个时期发的一条推文,说那感觉就像是去参加自己的悼词宣读仪式(eulogy),看着人们说着关于你的各种溢美之词,还有那些我爱和在乎的人给予的难以置信的支持。那真的很美好。整个周末,除了一个巨大的例外,我都感觉到大量的爱,几乎没有恨,尽管当时感觉就像是“我根本不知道正在发生什么,也不知道接下来会发生什么,这种感觉真的很糟”。
[原文] [Sam Altman]: And there were definitely times I thought it was gonna be like one of the worst things to ever happen for AI safety. Well, I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was gonna be something crazy and explosive that happened, but there may be more crazy and explosive things happen. It still I think helped us build up some resilience and be ready for more challenges in the future.
[译文] [Sam Altman]: 确实有些时刻,我认为这会成为 AI 安全领域发生过的最糟糕的事情之一。不过,我也觉得很高兴它发生得相对较早。我原以为在 OpenAI 成立到我们创造出 AGI 之间的某个时间点,会发生一些疯狂且具有爆炸性的事情,当然可能还会有更多疯狂和爆炸性的事情发生。但我认为这还是帮助我们建立了一些韧性(resilience),为未来更多的挑战做好了准备。
[原文] [Lex Fridman]: But the thing you had a sense that you would experience is some kind of power struggle.
[译文] [Lex Fridman]: 但你有一种预感,你会经历某种形式的权力斗争。
[原文] [Sam Altman]: The road to AGI should be a giant power struggle. Like the world should... Well, not should. I expect that to be the case.
[译文] [Sam Altman]: 通往 AGI 的道路注定是一场巨大的权力斗争。就像这个世界应该……嗯,不是应该。我预计情况就是这样。
[原文] [Lex Fridman]: And so you have to go through that, like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate all that in order to deescalate the power struggle as much as possible, pacify it.
[译文] [Lex Fridman]: 所以你必须经历这些,就像你说的,尽可能多地迭代,去弄清楚如何建立董事会结构,如何建立组织架构,如何选择共事的人,以及如何沟通这一切,以便尽可能地让权力斗争降级,平息它。
[原文] [Sam Altman]: But at this point, it feels like something that was in the past that was really unpleasant and really difficult and painful. But we're back to work and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after. There was like this fugue state for kind of like the month after, maybe 45 days after, where I was just sort of like drifting through the days. I was so out of it. I was feeling so down.
[译文] [Sam Altman]: 但在这一刻,这感觉就像是一件已经过去的、非常令人不快、非常艰难且痛苦的事情。但我们已经恢复工作了,事情非常忙碌和紧张,所以我并没有花太多时间去想它。在事后的一段时间里,大概是一个月,或者 45 天左右,我处于一种神游状态(fugue state),我就像是在日子里随波逐流,整个人都不在状态。我当时情绪非常低落。
[原文] [Lex Fridman]: Just on a personal psychological level.
[译文] [Lex Fridman]: 就仅仅是在个人心理层面上。
[原文] [Sam Altman]: Yeah. Really painful. And hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission.
[译文] [Sam Altman]: 是的。真的很痛苦。而且在这种状态下还要继续运营 OpenAI 很难。我当时只想爬进一个山洞里,恢复一段时间。但现在,感觉就像是我们又回到了致力于使命的工作中。
[原文] [Lex Fridman]: Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research, and product development, and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future.
[译文] [Lex Fridman]: 不过,回顾过去并反思董事会结构、权力动态、公司运营方式,以及研究、产品开发和资金之间的张力等所有这些事情,仍然是有用的,这样你们——拥有构建 AGI 的极高潜力的人——未来就能以一种稍微更有序、更少戏剧性的方式去做这件事。
[原文] [Sam Altman]: Definitely learned a lot about structure and incentives and what we need out of a board. And I think that it is valuable that this happened now in some sense. I think this is probably not like the last high stress moment of OpenAI, but it was quite a high stress moment. Company very nearly got destroyed. And we think a lot about many of the other things we've gotta get right for AGI. But thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer. I think that's super important.
[译文] [Sam Altman]: 我们确实学到了很多关于结构、激励机制以及我们需要什么样的董事会的知识。而且我认为这件事发生在现在,从某种意义上说是有价值的。我想这可能不会是 OpenAI 最后一个高压时刻,但这确实是一个相当高压的时刻。公司差点就被毁了。我们思考了很多为了实现 AGI 必须做对的其他事情。但是思考如何建立一个有韧性的组织,以及如何建立一个能够经受住世界巨大压力的结构——随着我们越来越接近目标,我预计这种压力会越来越大——我认为这超级重要。
章节 2:董事会结构、成员甄选与“死亡”周末
📝 本节摘要:
在本章中,Lex 询问了董事会决策的具体细节。Sam 认为原董事会虽然出于好意,但在高压下做出了次优决定,并指出了非营利组织董事会缺乏制衡(不向股东负责)的结构性问题。随后,双方讨论了新董事会成员(如 Larry Summers)的甄选标准,Sam 提出了著名的“斜率(Slope)与 Y 轴截距(Y-intercept)”人才评估理论。对话的后半部分,Sam 详细回顾了被解雇那个周末的心理活动:从周五的震惊与接受“死亡”,到周六的谈判僵局,再到周日任命临时 CEO 时的情绪低谷,以及他在混乱中感受到的支持与爱。
[原文] [Lex Fridman]: Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and why don't we fire Sam kind of thing?
[译文] [Lex Fridman]: 你是否了解董事会的审议过程有多深入、多严谨?能不能从这种情况下的人际动态角度给我们透露一点情况?是仅仅几次谈话后事情就突然升级,变成了“我们要不要解雇 Sam”那样的事情吗?
[原文] [Sam Altman]: I think the board members were well-meaning people on the whole. And I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be we're gonna have to have a board and a team that are good at operating under pressure.
[译文] [Sam Altman]: 我认为总体而言,董事会成员都是出于好意的人。而且我相信,在压力情境下,当人们感到时间紧迫或其他压力时,做出次优决定(suboptimal decisions)是可以理解的。我认为 OpenAI 面临的挑战之一,就是我们将必须拥有一个擅长在压力下运作的董事会和团队。
[原文] [Lex Fridman]: Do you think the board had too much power?
[译文] [Lex Fridman]: 你认为董事会的权力是否过大了?
[原文] [Sam Altman]: I think boards are supposed to have a lot of power, but one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have like super voting shares or whatever. In this case, I think one of the things with our structure that we maybe should have thought about more than we did is that the board of a nonprofit has, unless you put other rules in place, like quite a lot of power, they don't really answer to anyone but themselves. And there's ways in which that's good, but what we'd really like is for the board of OpenAI to answer to the world as a whole as much as that's a practical thing.
[译文] [Sam Altman]: 我认为董事会本就应该拥有很大的权力,但我们确实看到,在大多数公司结构中,董事会通常要对股东负责。有时人们会有超级投票权之类的东西。而在这种情况下,我认为我们的结构中有一点我们可能本该思考得更多,那就是非营利组织的董事会,除非你制定了其他规则,否则他们拥有相当大的权力,而且除了他们自己,他们实际上不需要对任何人负责。这在某些方面是好的,但我们真正希望的是 OpenAI 的董事会能尽可能切实地对整个世界负责。
[原文] [Lex Fridman]: So there's a new board announced.
[译文] [Lex Fridman]: 那么现在宣布了一个新的董事会。
[原文] [Sam Altman]: Yeah.
[译文] [Sam Altman]: 是的。
[原文] [Lex Fridman]: There's, I guess, a new smaller board at first and now there's a new final board.
[译文] [Lex Fridman]: 我猜,先是有了一个新的小型董事会,现在有了一个新的最终董事会。
[原文] [Sam Altman]: Not a final board yet. We've added some, we'll add more.
[译文] [Sam Altman]: 还不是最终董事会。我们增加了一些人,以后还会增加更多。
[原文] [Lex Fridman]: Added some, okay. What is fixed in the new one that was perhaps broken in the previous one?
[译文] [Lex Fridman]: 增加了一些,好的。新的董事会修复了哪些前任董事会可能存在的问题?
[原文] [Sam Altman]: The old board sort of got smaller over the course of about a year. It was nine and then it went down to six and then we couldn't agree on who to add. And the board also, I think, didn't have a lot of experienced board members and a lot of the new board members at OpenAI just have more experience as board members. I think that'll help.
[译文] [Sam Altman]: 旧董事会在大约一年的时间里变小了。原本是九个人,后来减少到了六个,然后我们在该增加谁的问题上无法达成一致。而且我认为,那个董事会没有太多经验丰富的董事会成员,而 OpenAI 的许多新董事会成员则拥有更多担任董事的经验。我认为这会有所帮助。
[原文] [Lex Fridman]: It's been criticized some of the people that are added to the board. I heard a lot of people criticizing the addition of Larry Summers, for example. What was the process of selecting the board? What's involved in that?
[译文] [Lex Fridman]: 加入董事会的一些人受到了一些批评。比如,我听到很多人批评拉里·萨默斯(Larry Summers)的加入。挑选董事会的流程是怎样的?其中涉及了什么?
[原文] [Sam Altman]: So Bret and Larry were kind of decided in the heat of the moment over this very tense weekend and that was... I mean, that weekend was like a real rollercoaster, like a lot of ups and downs. And we were trying to agree on new board members that both sort of the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'. Bret had been suggested previous to that weekend, but he was busy and didn't wanna do it. And then we really needed help. We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful that Adam would stay, but we wanted to get to... We considered various configurations, decided we wanted to get to a board of three and had to find two new board members over the course of sort of a short period of time. So those were decided honestly without... You kind of do that on the battlefield. You don't have time to design a rigorous process then. For new board members, since we will add new board members going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness well. And so one thing that Bret says, which I really like, is that we wanna hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring nonprofit expertise, expertise at running companies, sort of good legal and governance expertise. That's kind of what we've tried to optimize for.
[译文] [Sam Altman]: Bret 和 Larry 实际上是在那个非常紧张的周末,在紧要关头(heat of the moment)决定的,那真是……我的意思是,那个周末就像真正的过山车,起起伏伏。我们在努力就新董事会成员达成一致,既要让这里的管理团队觉得合理,也要让旧董事会成员觉得合理。Larry 实际上是旧董事会成员的提议之一。Bret 在那个周末之前就被提议过,但他当时很忙,不想做。后来我们真的很需要帮助。我们也讨论了很多其他人选,但我觉得如果我要回来,我需要新的董事会成员。我认为我无法再与旧的董事会以同样的配置共事,虽然后来我们决定——我也很感激 Adam 愿意留任——但我们想……我们考虑了各种配置,决定先组建一个三人董事会,并且必须在很短的时间内找到两名新成员。所以坦白说,这些决定是在没有……那感觉就像是在战场上做决定。你当时没有时间去设计一个严谨的流程。对于未来的新董事会成员,既然我们会陆续增加,我们有一些我们认为重要的标准,希望董事会拥有不同的专业知识。这不像招聘高管,你需要他们做好一个角色,董事会需要从整体上起到治理和深思熟虑的作用。Bret 说了一点我很喜欢,就是我们要按组(slates)来聘请董事会成员,而不是一次聘请一个。我们要考虑这一群人能带来非营利组织的专业知识、运营公司的专业知识、以及良好的法律和治理专业知识。这大概就是我们试图优化的方向。
[原文] [Lex Fridman]: So is technical savvy important for the individual board members?
[译文] [Lex Fridman]: 那么对于个别董事会成员来说,技术悟性重要吗?
[原文] [Sam Altman]: Not for every board member, but certainly some you need that. That's part of what the board needs to do.
[译文] [Sam Altman]: 不是对每个成员都重要,但显然你需要一些成员具备这点。这是董事会职责的一部分。
[原文] [Lex Fridman]: So I mean, the interesting thing that people probably don't understand about OpenAI certainly is like all the details of running the business. When they think about the board given the drama, they think about you, they think about like if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation to deliberate?
[译文] [Lex Fridman]: 我的意思是,人们可能不了解 OpenAI 的一点,当然是指经营业务的所有细节。鉴于这场闹剧,当他们想到董事会时,他们会想到你,想到如果你实现了 AGI,或者你实现了一些具有难以置信影响力的产品,你构建并部署了它们,那么与董事会的对话会是什么样的?他们会想,好吧,在那样的局势下,什么样的阵容才适合进行审议?
[原文] [Sam Altman]: Look, I think you definitely need some technical experts there and then you need some people who are like, how can we deploy this in a way that will help people in the world the most and people who have a very different perspective? I think a mistake that you or I might make is to think that only the technical understanding matters. And that's definitely part of the conversation you want that board to have. But there's a lot more about how that's gonna just like impact society and people's lives that you really want represented in there too.
[译文] [Sam Altman]: 听着,我认为你肯定需要一些技术专家在里面,然后你需要一些会思考“我们要如何以一种对世人帮助最大的方式来部署它”的人,以及拥有非常不同视角的人。我认为你或我可能会犯的一个错误是认为只有技术理解力才重要。那绝对是你希望董事会进行的对话的一部分。但还有更多关于这究竟会如何影响社会和人们生活的内容,你也真的希望能在那里得到体现。
[原文] [Lex Fridman]: Are you looking at the track record of people or you're just having conversations?
[译文] [Lex Fridman]: 你是看重人们的过往业绩(track record),还是仅仅通过对话来判断?
[原文] [Sam Altman]: Track record's a big deal. You, of course, have a lot of conversations. There's some roles where I kind of totally ignore track record and just look at slope, kind of ignore the y-intercept.
[译文] [Sam Altman]: 过往业绩很重要。当然,你也会进行大量的对话。对于某些职位,我几乎完全忽略过往业绩,只看“斜率”(slope,即成长速度),有点像忽略“Y轴截距”(y-intercept,即当前起点)。
[原文] [Lex Fridman]: Thank you. Thank you for making it mathematical for the audience.
[译文] [Lex Fridman]: 谢谢。谢谢你为了听众把它数学化了。
[原文] [Sam Altman]: For a board member, I do care much more about the y-intercept. I think there is something deep to say about track record there and experiences, something's very hard to replace.
[译文] [Sam Altman]: 对于董事会成员,我确实更看重“Y轴截距”。我认为在那方面,关于过往业绩和经验有一些深刻的东西,有些东西是非常难以替代的。
[原文] [Lex Fridman]: Do you try to fit a polynomial function or exponential one to track record?
[译文] [Lex Fridman]: 你有没有试过用多项式函数或指数函数来拟合过往业绩?
[原文] [Sam Altman]: That's not that. An analogy doesn't carry that far.
[译文] [Sam Altman]: 不是那样的。这个类比没法延伸那么远。
[原文] [Lex Fridman]: All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking Ayahuasca disappearing forever or?
[译文] [Lex Fridman]: 好吧。你提到了那个周末的一些低谷时刻。对你来说,心理上的一些低谷是什么?你有没有考虑过跑去亚马逊丛林,喝点死藤水(Ayahuasca),然后永远消失,或者之类的?
[原文] [Sam Altman]: I mean, there's so many lows, like it was a very bad period of time. There were great high points too. My phone was just like sort of nonstop blowing up with nice messages from people I worked with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have. 'cause I was just like in the middle of this firefight, but that was really nice. But on the whole, it was like a very painful weekend and also just like a very... It was like a battle fought in public to a surprising degree and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. The board did this Friday afternoon. I really couldn't get much in the way of answers, but I also was just like, "Well, the board gets to do this." And so I'm gonna think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, "Well, my current job at OpenAI, it was like to like run a decently-sized company at this point."
[译文] [Sam Altman]: 我是说,有太多的低谷了,那是一段非常糟糕的时期。也有很棒的高光时刻。我的手机就像是被打爆了一样,不断收到每天共事的人、甚至十年没说过话的人发来的暖心信息。我当时没能像我本该做的那样去好好感激这一切,因为我就像身处一场交火(firefight)之中,但这真的很美好。但总的来说,那就像是一个非常痛苦的周末,而且也非常……那就像是一场在公众面前进行的战斗,其程度令人惊讶,这让我极度筋疲力尽,比我预期的要累得多。我觉得争斗通常都很累人,但这一场尤甚。董事会在周五下午采取了行动。我真的得不到什么答案,但我当时也就觉得,“好吧,董事会有权这么做。”所以我想稍微思考一下我想做什么,我会试着从中找到塞翁失马般的慰藉(blessing in disguise)。我想,“好吧,我现在在 OpenAI 的工作,在这一点上就像是在管理一家规模相当大的公司。”
[原文] [Sam Altman]: And the thing I'd always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do like a very focused AI research effort. And I got excited about. That didn't even occur to me at the time to like possibly that this was all gonna get undone. This was like Friday afternoon.
[译文] [Sam Altman]: 而我一直最喜欢的事情就是和研究人员一起工作。我就想,是啊,我可以去做一个非常专注的 AI 研究项目。我对此感到兴奋。当时我甚至根本没有想到这一切可能会被撤销。这是周五下午的情况。
[原文] [Lex Fridman]: Oh, so you've accepted the death-
[译文] [Lex Fridman]: 噢,所以你已经接受了“死亡”——
[原文] [Sam Altman]: Very quickly, very quickly. I mean, I went through like a little period of confusion and rage, but very quickly. And by Friday night, I was talking to people about what was gonna be next and I was excited about that. I think it was Friday evening for the first time that I heard from the exec team here, which is like, hey, we're gonna like fight this and we think... Well, whatever. And then I went to bed just still being like, okay, excited.
[译文] [Sam Altman]: 非常快,非常快。我是说,我经历了一小段困惑和愤怒的时期,但非常快(就接受了)。到了周五晚上,我已经在和人们讨论接下来要做什么了,而且我很兴奋。我想是在周五晚上,我第一次收到这边高管团队的消息,大意是说,嘿,我们要对此进行抗争,我们认为……嗯,不管怎样。然后我就去睡觉了,心里仍然觉得,好的,很兴奋。
[原文] [Lex Fridman]: Like onward, were you able to sleep?
[译文] [Lex Fridman]: 就像是要向前看了,你能睡得着吗?
[原文] [Sam Altman]: Not a lot. One of the weird things was there was this like period of four and a half days where sort of didn't sleep much, didn't eat much and still kind of had like a surprising amount of energy. You learn like a weird thing about adrenaline in wartime.
[译文] [Sam Altman]: 睡得不多。奇怪的事情之一是,有那么大概四天半的时间,我没怎么睡,也没怎么吃,但还是有惊人的精力。你会了解到肾上腺素在“战时状态”下的一些奇怪现象。
[原文] [Lex Fridman]: So you kind of accepted the death of this baby OpenAI?
[译文] [Lex Fridman]: 所以你有点接受了这个“孩子” OpenAI 的死亡?
[原文] [Sam Altman]: And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.
[译文] [Sam Altman]: 而且我对新事物感到兴奋。我只是觉得,好吧,这很疯狂,但无所谓了。
[原文] [Lex Fridman]: It's a very good coping mechanism.
[译文] [Lex Fridman]: 这是一个非常好的应对机制。
[原文] [Sam Altman]: And then Saturday morning, two of the board members called and said, "Hey, we didn't mean to destabilize things. We don't want to destroy a lot of value here. Can we talk about you coming back?" And I immediately didn't wanna do that, but I thought a little more and I was like, "Well, I really care about the people here, the partners, shareholders. I love this company." And so I thought about it and I was like, "Well, okay, but here's the stuff I would need." And then the most painful time of all over the course of that weekend, I kept thinking and being told... Not just me, like the whole team here kept thinking. Well, we were trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit, whatever. We kept being told like, "All right, we're almost done, we're almost done. We just need like a little bit more time."
[译文] [Sam Altman]: 然后周六早上,两名董事会成员打来电话说:“嘿,我们并非有意动摇局势。我们不想毁掉这里的大量价值。我们可以谈谈让你回来的事吗?”我第一反应是不想回去,但我多想了一下,我觉得:“嗯,我真的很在乎这里的人、合作伙伴、股东。我爱这家公司。”所以我考虑了一下,我说:“嗯,好吧,但我需要这些条件。”然后那个周末最痛苦的时刻来了,我一直在想,也一直被告知……不只是我,这里的整个团队一直在想。当时我们正试图保持 OpenAI 的稳定,而全世界都在试图把它拆散,人们试图挖角,诸如此类。我们一直被告知:“好的,我们快搞定了,快搞定了。我们只需要再多一点点时间。”
[原文] [Sam Altman]: And it was this like very confusing state. And then Sunday evening when again like every few hours, I expected that we were gonna be done and we're gonna figure out a way for me to return and things to go back to how they were, the board then appointed a new interim CEO and then I was like... I mean, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something, it felt very painful, but I felt a lot of love that whole weekend. Other than that one moment, Sunday night, I would not characterize my emotions as anger or hate, but I really just like... I felt a lot of love from people towards people. It was like painful, but it was like the dominant emotion of the weekend was love, not hate.
[译文] [Sam Altman]: 那是一种非常混乱的状态。然后到了周日晚上,这期间每隔几个小时,我都以为我们要搞定了,我们会找到一个方法让我回去,一切恢复原状,结果董事会随后任命了一位新的临时 CEO,那时我就……我的意思是,那感觉真的很糟糕。那是整件事的最低谷。你知道,我跟你说件事,那感觉非常痛苦,但在整个周末我感受到了很多的爱。除了周日晚上的那一刻,我不会把我的情绪描述为愤怒或仇恨,但我真的只是……我感受到了人们对人很多的爱。那是痛苦的,但那个周末的主导情绪是爱,而不是恨。
[原文] [Lex Fridman]: You've spoken highly of Mira Murati, that she helped, especially as you put in a tweet, "in the quiet moments when it counts." Perhaps we could take a bit of a tangent. What do you admire about Mira?
[译文] [Lex Fridman]: 你对 Mira Murati 评价很高,说她帮了大忙,尤其是你在推文中写道:“在那些关键的静默时刻(in the quiet moments when it counts)。”也许我们可以稍微离题聊一下。你欣赏 Mira 的哪些方面?
[原文] [Sam Altman]: Well, she did a great job during that weekend in a lot of chaos, but people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just sort of the normal drudgery of the day-to-day, how someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.
[译文] [Sam Altman]: 嗯,她在那个充满混乱的周末做得非常出色,但人们通常是在危机时刻看到领导者的表现,无论好坏。但我真正看重领导者的一点是,他们在平淡无奇的周二早上 9:46 表现如何,以及在那种日常的单调工作中,一个人如何在会议中表现,他们做出的决策质量如何。这就是我所说的那些静默时刻。
[原文] [Lex Fridman]: Meaning like most of the work is done on a day by day in a meeting by meeting, just be present and make great decisions.
[译文] [Lex Fridman]: 意思是大部分工作都是日复一日、一个会议接一个会议完成的,就是要在这个过程中保持在场并做出伟大的决定。
[原文] [Sam Altman]: Yeah. I mean, look, what you have wanted to spend the last 20 minutes about and I understand is like this one very dramatic weekend. But that's not really what OpenAI is about. OpenAI is really about the other seven years.
[译文] [Sam Altman]: 是的。我的意思是,听着,你过去 20 分钟想聊的——我也理解——是这一个非常充满戏剧性的周末。但这并不是 OpenAI 的真谛。OpenAI 真正关乎的是另外那七年。
章节 3:Ilya 的去向之谜与信任的代价
📝 本节摘要:
话题从严肃的董事会风波转向了关于首席科学家 Ilya Sutskever 的网络迷因(Meme)。Lex 幽默地询问 Ilya 是否被“软禁”,Sam 否认并表达了对 Ilya 的深厚敬意,澄清了“Ilya 看到了 AGI 所以感到恐惧”的传言,并分享了 Ilya 生活中鲜为人知的可爱一面。随后,对话进入了更深层的心理层面:Sam 坦承这次背叛极大地改变了他的性格。他曾是一个默认信任他人的人,但这次经历让他变得更加多疑和谨慎。Lex 指出,对于一个构建 AGI 的领导者来说,适度的不信任或许并非坏事。
[原文] [Lex Fridman]: Well, yeah, human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still that's something people totally focus on.
[译文] [Lex Fridman]: 嗯,是啊,人类文明并不全是关于纳粹德国入侵苏联,但那仍然是人们完全关注的事情。
[原文] [Sam Altman]: Very understandable.
[译文] [Sam Altman]: 非常可以理解。
[原文] [Lex Fridman]: It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments. So it's like illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?
[译文] [Lex Fridman]: 它让我们洞察人性,洞察人性的极端,也许人类文明的一些破坏和一些胜利正是发生在那些时刻。所以这是具有启示意义的。让我问问关于 Ilya(Ilya Sutskever)的事。他是不是被作为人质扣押在一个秘密核设施里?
[原文] [Sam Altman]: No.
[译文] [Sam Altman]: 没有。
[原文] [Lex Fridman]: What about a regular secret facility?
[译文] [Lex Fridman]: 那普通的秘密设施呢?
[原文] [Sam Altman]: No.
[译文] [Sam Altman]: 没有。
[原文] [Lex Fridman]: What about a nuclear non-secure facility?
[译文] [Lex Fridman]: 那非安保的核设施呢?
[原文] [Sam Altman]: Neither, not that either.
[译文] [Sam Altman]: 也不在,都不是。
[原文] [Lex Fridman]: I mean, this is becoming a meme at some point. You've known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What's your relationship with him now?
[译文] [Lex Fridman]: 我是说,这也快成个梗(meme)了。你认识 Ilya 很久了。他显然也卷入了这场董事会风波以及所有那些事情。你现在跟他的关系怎么样?
[原文] [Sam Altman]: I love Ilya. I have tremendous respect for Ilya. I don't have anything I can say about his plans right now. That's a question for him. But I really hope we work together for certainly the rest of my career. He's a little bit younger than me, maybe he works a little bit longer.
[译文] [Sam Altman]: 我爱 Ilya。我对 Ilya 怀有极大的敬意。关于他目前的计划,我无可奉告。那是他要回答的问题。但我真的希望我们能共事,肯定希望在我的职业生涯余下的时间里都能如此。他比我年轻一点点,也许他会工作得更久一点。
[原文] [Lex Fridman]: There's a meme that he saw something, like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?
[译文] [Lex Fridman]: 有个梗说他看见了什么东西,比如他可能看见了 AGI(通用人工智能),这让他内心非常担忧。Ilya 到底看见了什么?
[原文] [Sam Altman]: Ilya has not seen AGI, none of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns broadly speaking, including things like the impact this is gonna have on society very seriously. And as we continue to make significant progress, Ilya is one of the people that I've spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right to ensure that we succeed at the mission. So Ilya did not see AGI. But Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.
[译文] [Sam Altman]: Ilya 没看见 AGI,我们谁都没看见 AGI。我们还没有造出 AGI。我确实认为,我非常喜欢 Ilya 的众多原因之一,就是他非常严肃地对待 AGI 和广义上的安全问题,包括这对社会将产生的影响。随着我们不断取得重大进展,过去几年里,Ilya 是我花最多时间交流的人之一,我们探讨这一切意味着什么、我们需要做什么来确保方向正确、确保我们完成使命。所以 Ilya 并没有看见 AGI。但就他为了确保我们做对这件事所付出的思考和担忧而言,Ilya 是人类的功臣(credit to humanity)。
[原文] [Lex Fridman]: I've had a bunch of conversation with him in the past. I think when he talks about technology, he's always like doing this long-term thinking type of thing. So he is not thinking about what this is gonna be in a year. He's thinking about in 10 years.
[译文] [Lex Fridman]: 我过去和他有过很多次谈话。我觉得当他谈论技术时,他总是在进行这种长期思考。所以他想的不是一年后会怎样,而是十年后会怎样。
[原文] [Sam Altman]: Yeah.
[译文] [Sam Altman]: 是的。
[原文] [Lex Fridman]: Just thinking from first principles like, okay, if this scales, what are the fundamentals here? Where's this going? And so that's a foundation for him thinking about like all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he's been kind of quiet? Is it he's just doing some soul searching?
[译文] [Lex Fridman]: 就像从第一性原理(first principles)出发去思考,好吧,如果这个东西规模扩大,这里的基础是什么?这将走向何方?这成为他思考所有其他安全问题以及诸如此类事情的基础,这让他成为一个交谈起来非常引人入胜的人。你知道他为什么一直这么安静吗?是因为他正在进行某种灵魂探索(soul searching)吗?
[原文] [Sam Altman]: Again, I don't wanna speak for Ilya. I think that you should ask him that. He's definitely a thoughtful guy. I think I kind of think of Ilya as like always on a soul search in a really good way.
[译文] [Sam Altman]: 再说一次,我不想代表 Ilya 发言。我觉得你应该去问他。他绝对是一个深思熟虑的人。我觉得在某种程度上,我认为 Ilya 总是处于一种很好的灵魂探索状态中。
[原文] [Lex Fridman]: Yes. Yeah. Also he appreciates the power of silence. Also, I'm told he can be a silly guy, which I've never seen that side of him.
[译文] [Lex Fridman]: 是的。没错。而且他懂得沉默的力量。另外,有人告诉我他也会是个挺顽皮(silly)的人,我还没见过他的那一面。
[原文] [Sam Altman]: It's very sweet when that happens.
[译文] [Sam Altman]: 当那种情况发生时,非常可爱。
[原文] [Lex Fridman]: I've never witnessed a silly Ilya, but I look forward to that as well.
[译文] [Lex Fridman]: 我从没见过顽皮的 Ilya,但我也很期待。
[原文] [Sam Altman]: I was at a dinner party with him recently and he was playing with a puppy. And he was like in a very silly move, very endearing and I was thinking like, oh man, this is like not the side of the Ilya that the world sees the most.
[译文] [Sam Altman]: 我最近和他参加一个晚宴,他在和一只小狗玩。他当时的举动非常顽皮,非常讨人喜欢,我当时就在想,噢天哪,这可不是世人常见到的那一面 Ilya。
[原文] [Lex Fridman]: So just to wrap up this whole saga, are you feeling good about the board structure about all of this and where it's moving?
[译文] [Lex Fridman]: 那么,总结一下这整个传奇故事,你对现在的董事会结构、对这一切以及未来的走向感觉良好吗?
[原文] [Sam Altman]: I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have I think super deep things to say. It was a crazy, very painful experience. I think it was like a perfect storm of weirdness. It was like a preview for me of what's gonna happen as the stakes get higher and higher and the need that we have like robust governance structures and processes and people. I am kind of happy it happened when it did, but it was a shockingly painful thing to go through.
[译文] [Sam Altman]: 我对新董事会感觉很好。就 OpenAI 的结构而言,董事会的任务之一就是审视它,看看我们在哪里可以使其更稳健。我们想先让新董事会成员到位,但在整个过程中我们显然学到了关于结构的教训。我认为我没有什么特别深刻的话要说。那是一次疯狂、非常痛苦的经历。我觉得那就像一场诡异的完美风暴(perfect storm of weirdness)。这对我来说就像是一个预演,预示着随着赌注越来越高会发生什么,以及我们需要稳健的治理结构、流程和人员。我还算高兴它在这个时候发生了,但这确实是一段令人震惊的痛苦经历。
[原文] [Lex Fridman]: Did it make you be more hesitant in trusting people?
[译文] [Lex Fridman]: 这件事有没有让你在信任他人时更加犹豫?
[原文] [Sam Altman]: Yes.
[译文] [Sam Altman]: 是的。
[原文] [Lex Fridman]: Just on a personal level.
[译文] [Lex Fridman]: 就仅仅在个人层面上。
[原文] [Sam Altman]: Yes. I think I'm like an extremely trusting person. I've always had a life philosophy of like don't worry about all of the paranoia, don't worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed and I really don't like this. It's definitely changed how I think about just like default trust of people and planning for the bad scenarios.
[译文] [Sam Altman]: 是的。我认为我曾是一个极其信任他人的人。我一直有一种人生哲学,就是不要担心那些偏执的想法,不要担心极端情况。你会因此吃点亏,但换来的是可以卸下防备地生活。而这次经历对我来说太令人震惊了。我完全被打得措手不及,这绝对改变了一些东西,而且我真的不喜欢这种改变。这绝对改变了我对默认信任他人以及为糟糕情况做计划的看法。
[原文] [Lex Fridman]: You gotta be careful with that. Are you worried about becoming a little too cynical?
[译文] [Lex Fridman]: 你得小心这一点。你担心自己变得有点太愤世嫉俗吗?
[原文] [Sam Altman]: I'm not worried about becoming too cynical. I think I'm like the extreme opposite of a cynical person. But I'm worried about just becoming like less of a default trusting person.
[译文] [Sam Altman]: 我不担心变得太愤世嫉俗。我觉得我完全是愤世嫉俗者的反面。但我担心自己不再像以前那样默认信任他人了。
[原文] [Lex Fridman]: I'm actually not sure which mode is best to operate in for a person who's developing AGI, trusting or untrusting. It's an interesting journey you're on. But in terms of structure, see, I'm more interested on the human level. How do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.
[译文] [Lex Fridman]: 我其实不确定对于一个开发 AGI 的人来说,哪种模式运作最好,是信任还是不信任。你正处于一段有趣的旅程中。但就结构而言,你看,我更感兴趣的是人的层面。你如何让自己周围不仅围绕着那些能造出这种酷东西的人,而且还是能做出明智决定的人?因为你赚的钱越多,这东西的权力越大,人就会变得越奇怪。
[原文] [Sam Altman]: I think you could make all kinds of comments about the board members and the level of trust I should have had there or how I should have done things differently. But in terms of the team here, I think you'd have to like give me a very good grade on that one. And I have just like enormous gratitude and trust and respect for the people that I work with every day. And I think being surrounded with people like that is really important.
[译文] [Sam Altman]: 我觉得你可以对董事会成员、我本该有的信任程度,或者我本该如何以不同方式处理事情做出各种评论。但就这里的团队而言,我认为你在这一点上必须给我打个高分。我对每天共事的人怀有巨大的感激、信任和尊重。我认为被这样的人包围真的很重要。
章节 4:Elon Musk 的诉讼、开源之争与“OpenAI”的名字
📝 本节摘要:
本章中,Lex 引入了 Elon Musk 起诉 OpenAI 的话题。Sam 解释了 OpenAI 如何从最初的非营利研究实验室,为了获取算力和资金而不得不调整架构,最终形成了如今这种“令人侧目”的混合结构。他反驳了 Elon 的指控,透露 Elon 当初想要完全控制 OpenAI 或将其并入 Tesla。关于“OpenAI”这个名字,Sam 承认如果能预知未来可能会改名,但他重新定义了“Open”的含义——即向公众免费提供强大的工具,而非单纯的开源代码。面对 Elon “改名 ClosedAI 就撤诉”的挑衅,Sam 表达了失望与悲伤,他怀念那个作为“伟大的构建者”的旧 Elon,并希望双方能通过良性竞争而非诉讼来推动技术进步。
[原文] [Lex Fridman]: Our mutual friend Elon sued OpenAI. What is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong?
[译文] [Lex Fridman]: 我们共同的朋友 Elon 起诉了 OpenAI。他批评的本质是什么?他在多大程度上是有道理的?又在多大程度上是错的?
[原文] [Sam Altman]: I don't know what it's really about. We started off just thinking we were gonna be a research lab and having no idea about how this technology was gonna go. Because it was only seven or eight years ago, it's hard to go back and really remember what it was like then. But before language models were a big deal, this was before we had any idea about an API or selling access to a chat bot. It was before we had any idea we were gonna productize at all. So we're like we're just gonna try to do research and we don't really know what we're gonna do with that.
[译文] [Sam Altman]: 我不知道这到底是为了什么。我们刚开始时只是想做一个研究实验室,完全不知道这项技术会如何发展。因为那只是七八年前的事,很难回到过去真正回想起当时的情景。但在语言模型变得重要之前,在我们对 API 或出售聊天机器人访问权有任何概念之前,甚至在我们完全没想到要产品化之前,我们的想法就是:“我们只是试着做研究,我们真的不知道我们要拿它做什么。”
[原文] [Sam Altman]: I think with many new fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong. And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, "Okay, well, the structure doesn't quite work for that. How do we patch the structure?" And then you patch it again and patch it again and you end up with something that does look kind of eyebrow raising to say the least. But we got here gradually with I think reasonable decisions at each point along the way and doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are I don't know.
[译文] [Sam Altman]: 我认为对于许多根本性的新鲜事物,你都是在黑暗中摸索(fumbling through the dark)开始的,你会做出一些假设,而其中大部分结果都是错的。后来情况变得很清楚,我们需要做不同的事情,而且需要海量的资金。所以我们说:“好吧,原来的结构不太适合这个。我们怎么修补这个结构?”然后你修补一次,再修补一次,最终你得到的结构,至少可以说是让人侧目(eyebrow raising)的。但我们是逐步走到这一步的,我认为沿途的每一个决策点都是合理的。这不代表如果现在能带着预知能力(oracle)回到过去,我不会完全换种做法,但在当时你并没有预知能力。总之,至于 Elon 在这里的真正动机是什么,我不知道。
[原文] [Lex Fridman]: To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?
[译文] [Lex Fridman]: 就你记得的程度,OpenAI 在博客文章中给出的回应是什么?你能总结一下吗?
[原文] [Sam Altman]: Oh, we just said like Elon said this set of things, here's our characterization or here's this sort of not our characterization, here's like the characterization of how this went down. We tried to not make it emotional and just sort of say like here's the history.
[译文] [Sam Altman]: 噢,我们只是说,Elon 说了这一套东西,这是我们的描述,或者说这不是我们的描述,而是事情经过的真实描述。我们试图不带情绪,只是陈述:“这就是历史。”
[原文] [Lex Fridman]: I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a bunch of like a small group of researchers crazily talking about AGI when everybody's laughing at that thought.
[译文] [Lex Fridman]: 我确实认为 Elon 在这里对你刚才提到的一个观点存在一定程度的错误描述,那就是你们当时的不确定程度。你们当时就像是一小群研究人员,在所有人都嘲笑这个想法的时候,疯狂地谈论着 AGI。
[原文] [Sam Altman]: Wasn't that long ago Elon was crazily talking about launching rockets when people were laughing at that thought? So I think he'd have more empathy for this.
[译文] [Sam Altman]: 不久前 Elon 不也在人们嘲笑的时候疯狂地谈论发射火箭吗?所以我以为他对此会有更多的同理心。
[原文] [Lex Fridman]: I mean, I do think that there's personal stuff here that there was a split that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal-
[译文] [Lex Fridman]: 我的意思是,我确实认为这里面有个人恩怨,OpenAI 发生了分裂,这里很多了不起的人选择与 Elon 分道扬镳。所以有个人的——
[原文] [Sam Altman]: Elon chose to part ways.
[译文] [Sam Altman]: 是 Elon 选择分道扬镳的。
[原文] [Lex Fridman]: Can you describe that exactly, the choosing to part ways?
[译文] [Lex Fridman]: 你能确切描述一下吗,那种选择分道扬镳的情况?
[原文] [Sam Altman]: He thought OpenAI was gonna fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merged with Tesla. We didn't want to do that and he decided to leave, which that's fine.
[译文] [Sam Altman]: 他当时认为 OpenAI 会失败。他想要完全的控制权来扭转局面。而我们想继续沿着现在成为 OpenAI 的这个方向前进。他还希望 Tesla 能够建立 AGI 项目。在不同时期,他想把 OpenAI 变成一家他能控制的营利性公司,或者将其与 Tesla 合并。我们不想那样做,于是他决定离开,这没问题。
[原文] [Lex Fridman]: So what is the word open in OpenAI mean to Elon at the time? Ilya has talked about this in in the email exchanges and all this kind of stuff. What does it mean to you at the time? What does it mean to you now?
[译文] [Lex Fridman]: 那么 OpenAI 里的“Open”一词当时对 Elon 意味着什么?Ilya 在邮件往来和这类东西里谈到过这一点。当时这对你意味着什么?现在又意味着什么?
[原文] [Sam Altman]: I would definitely pick a diff... Speaking of going back with an oracle, I'd pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say it's part of our mission. We wanna put increasingly powerful tools in the hands of people for free and get them to use them. And I think that kind of open is really important to our mission.
[译文] [Sam Altman]: 我肯定会选一个不……说到带着预知能力回到过去,我会选一个不同的名字。我认为 OpenAI 正在做的所有事情中,最重要的一点就是将强大的技术作为一种公共产品(public good)免费交到人们手中。我们在免费版本上不投放广告。我们也不通过其他方式变现。我们只是说这是我们使命的一部分。我们希望将越来越强大的工具免费交到人们手中,让他们去使用。我认为这种“开放”(open)对我们的使命非常重要。
[原文] [Sam Altman]: Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this like religious battle line where nuance is hard to have, but I think nuance is the right answer.
[译文] [Sam Altman]: 至于开不开源,是的,我认为我们应该开源一些东西,而不开源另一些。这确实变成了一条像宗教战争一样的战线,很难保留细微差别(nuance),但我认为细微差别才是正确的答案。
[原文] [Lex Fridman]: So he said change your name to ClosedAI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes about the name?
[译文] [Lex Fridman]: 他说把你们的名字改成“ClosedAI”,他就撤诉。我是说,难道这要变成一个关于名字的梗(meme)战场吗?
[原文] [Sam Altman]: I think that speaks to the seriousness with which Elon means the lawsuit. I mean, that's like an astonishing thing to say, I think.
[译文] [Sam Altman]: 我觉得这说明了 Elon 对待这场诉讼的严肃程度。我是说,我认为那是句令人震惊的话。
[原文] [Lex Fridman]: Well, I don't think the lawsuit maybe, correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way.
[译文] [Lex Fridman]: 嗯,我不认为这场诉讼……如果我错了请纠正我,但我不认为这场诉讼在法律上是严肃的。它更多是为了表达关于 AGI 未来以及目前处于领先地位的这家公司的某种观点。
[原文] [Sam Altman]: Look, I mean Grok had not open sourced anything until people pointed out it was a little bit hypocritical and then he announced that Grok open source things this week. I don't think open source versus not is what this is really about for him.
[译文] [Sam Altman]: 听着,我是说 Grok 之前也没有开源任何东西,直到人们指出这有点虚伪,然后他才在本周宣布 Grok 开源。我不认为开源与否是他真正关心的问题。
[原文] [Sam Altman]: Look, I think this whole thing is like unbecoming of a builder, and I respect Elon is one of the great builders of our time. And I know he knows what it's like to have like haters attack him and it makes me extra sad he's doing the toss.
[译文] [Sam Altman]: 听着,我认为这整件事有失构建者(builder)的风范,而我尊敬 Elon,他是我们这个时代伟大的构建者之一。我知道他明白被黑粉攻击是什么滋味,这让我对他现在的所作所为感到格外难过。
[原文] [Lex Fridman]: Yeah, he is one of the greatest builders of all time, potentially the greatest builder of all time.
[译文] [Lex Fridman]: 是的,他是有史以来最伟大的构建者之一,可能是最伟大的。
[原文] [Sam Altman]: It makes me sad. And I think it makes a lot of people sad. There's a lot of people who've really looked up to him for a long time and said this. I said in some interview or something that I missed the old Elon and the number of messages I got being like that exactly encapsulates how I feel.
[译文] [Sam Altman]: 这让我很难过。我想这也让很多人感到难过。有很多人长期以来一直非常仰视他。我在某个采访还是什么地方说过,我怀念“旧的 Elon”(the old Elon),结果我收到了无数条信息说:“这完全概括了我的感受。”
[原文] [Lex Fridman]: I think he should just win. He should just make Grok beat GPT and then GPT beats Grok and it's just a competition, and it's beautiful for everybody.
[译文] [Lex Fridman]: 我觉得他就应该去赢。他应该让 Grok 打败 GPT,然后 GPT 再打败 Grok,这只是一场竞争,这对每个人来说都是美好的。
[原文] [Lex Fridman]: What are the pros and cons of open sourcing? Have you played around with this idea?
[译文] [Lex Fridman]: 开源的利弊是什么?你有考虑过这个想法吗?
[原文] [Sam Altman]: Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, I think there's huge demand for. I think there will be some open source models, there will be some closed source models. It won't be unlike other ecosystems in that way.
[译文] [Sam Altman]: 是的,我认为开源模型绝对有一席之地,特别是人们可以在本地运行的小型模型,我认为这有巨大的需求。我认为会有一些开源模型,也会有一些闭源模型。在这方面,它与其他生态系统没什么不同。
[原文] [Lex Fridman]: I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff and they were more concerned about the precedent of going from nonprofit to this capped for-profit. What precedent does that set for other startups?
[译文] [Lex Fridman]: 我听了《All-In Podcast》谈论这场诉讼之类的节目,他们更担心的是从非营利组织转变为这种有上限的营利组织(capped for profit)所树立的先例。这给其他初创公司树立了什么先例?
[原文] [Sam Altman]: I would heavily discourage any startup that was thinking about starting as a non-profit and adding like a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here.
[译文] [Sam Altman]: 我会强烈劝阻任何想要以非营利组织起步、随后再增加营利性部门的初创公司。我会强烈劝阻他们不要那样做。我不认为我们会在这里树立一个先例。
[原文] [Lex Fridman]: Where do you hope this goes with Elon? Well, this tension, this dance, what do you hope this? Like if we go one, two, three years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.
[译文] [Lex Fridman]: 你希望与 Elon 的这件事如何发展?这种紧张关系,这场博弈,你希望怎样?比如一年、两年、三年之后,你和他在个人层面的关系,比如友谊、友好的竞争,所有这些。
[原文] [Sam Altman]: Yeah. I mean, I really respect Elon. And I hope that years in the future, we have an amicable relationship.
[译文] [Sam Altman]: 是的。我是说,我真的很尊敬 Elon。我希望在未来的岁月里,我们能拥有一段友好的关系(amicable relationship)。
章节 5:Sora——世界模拟器、版权危机与“任务”经济学
📝 本节摘要:
本章从 Lex 对 Sora 的惊叹开始。Sam 指出 Sora 不仅仅是视频生成工具,它通过处理遮挡(occlusion)等物理现象,展现了对三维世界物理规律的某种理解。针对 Sora 存在的缺陷(如猫长出多余肢体),Sam 认为是技术路线和规模化的问题,终将被解决。
随后,话题转向敏感的版权与训练数据。Sam 承认使用了大量人类数据,并明确表示创作者应当获得补偿,甚至提出通过“退出机制(opt-out)”和新的经济模型来解决类似从 Napster 到 Spotify 的转型问题。
最后,Sam 提出了关于未来就业的重要理论:不应关注 AI 取代了多少百分比的“工作(jobs)”,而应关注它能完成多少百分比的“任务(tasks)”。他认为 AI 将作为一种工具,让人们在更高的抽象层面上工作,就像摄影术的诞生并没有消灭艺术,而是创造了新的艺术形式。
[原文] [Lex Fridman]: So speaking of cool shit. Sora, there's like a million questions I could ask. First of all, it's amazing, it truly is amazing on a product level, but also just on a philosophical level. So let me just technical/philosophical ask. What do you think it understands about the world more or less than GPT-4, for example, like the world model when you train on these patches versus language tokens?
[译文] [Lex Fridman]: 既然说到了很酷的东西。关于 Sora,我有一百万个问题想问。首先,它太神奇了,无论是在产品层面,还是在哲学层面,都令人惊叹。我想问一个技术兼哲学的问题。你认为它对世界的理解,比起 GPT-4 来说多了什么或少了什么?比如当你用这些视觉补丁(patches)而不是语言标记(tokens)进行训练时,其中的世界模型是怎样的?
[原文] [Sam Altman]: I think all of these models understand something more about the world model than most of us give them credit for. And because they're also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil and say this is all fake, but it's not all fake. It's just some of it works and some of it doesn't work. I remember when I started first watching Sora videos and I would see like a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, "This is pretty good." Or there's examples where the underlying physics looks so well represented over a lot of steps in a sequence. It's like oh this is like quite impressive. But, fundamentally, these models are just getting better and that will keep happening.
[译文] [Sam Altman]: 我认为所有这些模型对“世界模型”的理解,都比我们大多数人认为的要多。但因为有些东西它们显然不懂或者做不对,所以人们很容易盯着弱点看,透过面纱说“这全是假的”,但这并不全是假的。只是有些部分行得通,有些行不通。我记得我刚开始看 Sora 生成的视频时,看到一个人在某个物体前走过,遮挡(occlude)了它几秒钟,然后走开,而那个物体仍然在那儿。我就想:“这相当不错。”还有些例子,其中的底层物理规律在一个长序列的许多步骤中表现得非常到位。这真的很令人印象深刻。从根本上说,这些模型正在变得越来越好,而且这种趋势会持续下去。
[原文] [Lex Fridman]: Well, the thing you just mentioned is kind of with the occlusions is basically modeling the physics of three dimensional physics of the world sufficiently well to capture those kinds of things.
[译文] [Lex Fridman]: 嗯,你刚才提到的关于遮挡的事情,基本上就是对世界的三维物理学进行了足够好的建模,才能捕捉到这类现象。
[原文] [Sam Altman]: Yeah, so what I would say is it's doing something to deal with occlusions really well. Whether it's representing that it has like a great underlying 3D model of the world, it's a little bit more of a stretch.
[译文] [Sam Altman]: 是的,我想说的是它在处理遮挡问题上做得非常好。如果说这代表它拥有一个完美的底层 3D 世界模型,那可能有点言过其实了。
[原文] [Lex Fridman]: What are some interesting limitations of the system that you've seen? I mean, there's been some fun ones you've posted.
[译文] [Lex Fridman]: 你看到的系统有哪些有趣的局限性?我是说,你发过一些很有意思的例子。
[原文] [Sam Altman]: There's all kinds of fun. I mean, like cats sprouting a extra limit at random points in a video, like pick what you want, but there's still a lot of problem, there's a lot of weaknesses.
[译文] [Sam Altman]: 各种好玩的事都有。比如猫在视频的随机时间点突然长出了多余的肢体(注:原文 "limit" 应为口误或转录错误,实指 "limb"),随你挑,但确实还存在很多问题,很多弱点。
[原文] [Lex Fridman]: Do you think that's a fundamental flaw of the approach or is it just bigger model or better technical details or better data, more data is going to solve the cat sprouting extremes?
[译文] [Lex Fridman]: 你认为这是这种方法的根本缺陷吗?还是说更大的模型、更好的技术细节、或者更好更多的数据就能解决“猫长出多余肢体”这种极端情况?
[原文] [Sam Altman]: I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also, I think it'll get better with scale.
[译文] [Sam Altman]: 我会说两者皆有。我认为这种方法确实感觉与我们的思考和学习方式有所不同。同时我也认为,随着规模的扩大,它会变得更好。
[原文] [Lex Fridman]: Is the training to the degree you can say fully self-supervised there? Is there some manual labeling going on? What's the involvement of humans in all this?
[译文] [Lex Fridman]: 就你能透露的程度而言,这种训练是完全自监督(self-supervised)的吗?有没有人工标注的参与?人类在这一切中参与了多少?
[原文] [Sam Altman]: I mean, without saying anything specific about the Sora approach, we use lots of human data in our work.
[译文] [Sam Altman]: 我是说,如果不具体谈论 Sora 的方法,我们在工作中确实使用了大量的人类数据。
[原文] [Lex Fridman]: But not internet scale data. So lots of humans. "Lots" is a complicated word, Sam.
[译文] [Lex Fridman]: 但不是互联网规模的数据。所以“很多”人类……Sam,“很多”是个复杂的词。
[原文] [Sam Altman]: More than three people work on labeling the data for these models, yeah.
[译文] [Sam Altman]: 是的,不止三个人在为这些模型标注数据。
[原文] [Lex Fridman]: What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
[译文] [Lex Fridman]: 危险是什么?你为什么对发布这个系统感到担忧?这可能带来哪些危险?
[原文] [Sam Altman]: But you can imagine like issues with deep fakes, misinformation. We try to be a thoughtful company about what we put out into the world and it doesn't take much thought to think about the ways this can go badly.
[译文] [Sam Altman]: 你可以想象诸如深度伪造(deep fakes)、虚假信息之类的问题。我们努力成为一家对发布到世界上的东西深思熟虑的公司,而不难想象这东西会在哪些方面造成恶果。
[原文] [Lex Fridman]: Do you think training AI should be or is fair use under copyright law?
[译文] [Lex Fridman]: 你认为在版权法下,训练 AI 应该属于或者就是合理使用(fair use)吗?
[原文] [Sam Altman]: I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it? And that I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But if I'm like an artist, for example, I would like to be able to opt out of people generating art in my style and B, if they do generate art in my style, I'd like to have some economic model associated with that.
[译文] [Sam Altman]: 我认为这个问题背后的核心是:创造了有价值数据的人,是否理应获得某种方式的补偿?我认为答案是肯定的。我还不知道具体的答案是什么。人们提出了很多不同的建议。我们也尝试了一些不同的模式。但如果我是一个艺术家,比如,我会希望:A,我有权选择退出(opt out),不让别人生成我这种风格的艺术作品;B,如果他们确实生成了我这种风格的作品,我希望这背后能有某种经济模型让我获益。
[原文] [Lex Fridman]: Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind of model.
[译文] [Lex Fridman]: 是的,这就是从 CD 到 Napster 再到 Spotify 的那种转型。我们必须想出某种模式。
[原文] [Sam Altman]: The model changes, but people have gotta get paid.
[译文] [Sam Altman]: 模式会变,但人们必须得到报酬。
[原文] [Lex Fridman]: But artists and creators are worried. When they see Sora, they're like, "Holy shit."
[译文] [Lex Fridman]: 但是艺术家和创作者很担心。当他们看到 Sora 时,他们的反应是:“我靠(Holy shit)。”
[原文] [Sam Altman]: Sure. Artists were also super worried when photography came out. And then photography became a new art form and people made a lot of money taking pictures. And I think things like that will keep happening. People will use the new tools in new ways.
[译文] [Sam Altman]: 当然。当摄影术出现时,艺术家们也超级担心。后来摄影成了一种新的艺术形式,人们通过拍照赚了很多钱。我认为类似的事情会继续发生。人们会以新的方式使用新工具。
[原文] [Lex Fridman]: If we just look on YouTube or something like this, how much of that will be using Sora, like AI-generated content do you think in the next five years?
[译文] [Lex Fridman]: 如果我们看看 YouTube 之类的地方,你认为未来五年内,会有多少内容是使用 Sora 或类似 AI 生成的?
[原文] [Sam Altman]: People talk about like how many jobs is AI gonna do in five years and the framework that people have is what percentage of current jobs are just gonna be totally replaced by some AI doing the job? The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do and over what time horizon. So if you think of all of the like-five second tasks in the economy, five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do?
[译文] [Sam Altman]: 人们在讨论 AI 在五年内会做多少工作(jobs),大家的思维框架通常是:现有工作中有多大比例会被 AI 完全取代?我的思考方式不是 AI 会做百分之多少的“工作”,而是 AI 会做百分之多少的“任务(tasks)”,以及在什么时间范围内。如果你考虑经济活动中所有那些 5 秒钟的任务、5 分钟的任务、5 小时的任务,甚至 5 天的任务,其中有多少是 AI 能做的?
[原文] [Sam Altman]: And I think that's a way more interesting, impactful, important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction.
[译文] [Sam Altman]: 我认为这是一个比“AI 能做多少工作”更有趣、更有影响力、更重要的问题。因为它是一种工具,将在越来越高的复杂程度、越来越长的时间跨度上处理越来越多的任务,从而让人们在更高的抽象层面上进行操作。
[原文] [Lex Fridman]: I tend to believe that humans like to watch other humans or other human like-
[译文] [Lex Fridman]: 我倾向于相信人类喜欢看其他人类,或者其他像人类的——
[原文] [Sam Altman]: Humans really care about other humans a lot.
[译文] [Sam Altman]: 人类确实非常关心其他人类。
[原文] [Lex Fridman]: If there's a cooler thing that's better than a human, humans care about that for like two days and then they go back to humans.
[译文] [Lex Fridman]: 如果有一个比人类更好的酷东西,人类大概会关心它两天,然后就会回头去关注人类。
[原文] [Sam Altman]: That seems very deeply wired.
[译文] [Sam Altman]: 这似乎是深深刻在基因里的(deeply wired)。
章节 6:GPT-4 其实“很烂”?——从无限上下文到整合创伤记忆
📝 本节摘要:
本章开头,Sam 出人意料地评价 GPT-4“有点烂(kind of sucks)”,但他解释说这是站在未来指数级进步的视角看的——就像现在看 GPT-3 一样。他分享了自己将 GPT-4 用作“创意头脑风暴伙伴”的体验,并预言未来 AI 的上下文窗口(Context Window)将扩展到数十亿,能够容纳一个人的全部生平数据。
随后,话题转向 AI 的记忆功能与幻觉问题。Lex 幽默地询问 Sam 是否希望 AI 能让他忘记去年 11 月的董事会危机,Sam 则给出了一个动人的回答:他不想遗忘或压抑那段痛苦,而是希望像未来的 AI 一样,将创伤整合成经验与智慧,继续前行。
[原文] [Lex Fridman]: Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking back, it'll probably be this kind of historic pivotal moment with three, five, and four which had GPT.
[译文] [Lex Fridman]: 让我问问关于 GPT-4 的事。我有太多问题了。首先,这也是惊人的。回顾过去,GPT-3.5 和 GPT-4,连同 ChatGPT,可能会是一个历史性的关键时刻(注:原文 "three, five, and four which had GPT" 疑为 "3.5 and 4 with ChatGPT" 的转录错误)。
[原文] [Sam Altman]: Maybe five will be the pivotal moment. I don't know. Hard to say that looking forwards.
[译文] [Sam Altman]: 也许 GPT-5 才会是那个关键时刻。我不知道。向前展望很难断言。
[原文] [Lex Fridman]: We never know. That's the annoying thing about the future, it's hard to predict. But for me, looking back GPT-4, ChatGPT is pretty impressive, historically impressive. So allow me to ask, what's been the most impressive capabilities of GPT-4 to you and GPT-4 Turbo?
[译文] [Lex Fridman]: 我们永远不知道。这就是未来烦人的地方,很难预测。但在我看来,回顾 GPT-4 和 ChatGPT,它们相当令人印象深刻,是历史级的印象深刻。所以请允许我问一下,对你来说,GPT-4 和 GPT-4 Turbo 最令人印象深刻的能力是什么?
[原文] [Sam Altman]: I think it kind of sucks.
[译文] [Sam Altman]: 我觉得它其实有点烂(kind of sucks)。
[原文] [Lex Fridman]: Hmm. Typical human also gotten used to an awesome thing.
[译文] [Lex Fridman]: 嗯。典型的人类反应,已经习惯了好东西。
[原文] [Sam Altman]: No, I think it is an amazing thing, but relative to where we need to get to and where I believe we will get to, at the time of like GPT-3, people were like, "Oh this is amazing. This is this like marvel of technology," and it is, it was. But now we have GPT-4 and look at GPT-3 and you're like that's unimaginable horrible. I expect that the delta between five and four will be the same as between four and three. And I think it is our job to live a few years in the future and remember that the tools we have now are gonna kind of suck looking backwards at them and that's how we make sure the future is better.
[译文] [Sam Altman]: 不,我认为它确实是个惊人的东西,但相对于我们需要达到的目标以及我相信我们将达到的高度而言……就像在 GPT-3 那个时候,人们会说:“噢,这太神奇了。这是技术的奇迹。”它确实是,当时也的确如此。但现在我们有了 GPT-4,再回看 GPT-3,你会觉得那简直难以想象的糟糕。我预计 GPT-5 和 GPT-4 之间的差距,将与 GPT-4 和 GPT-3 之间的差距一样大。我认为我们的工作就是活在未来的几年里,并记住我们现在拥有的工具回过头看其实会有点烂,这正是我们要确保未来变得更好的方式。
[原文] [Lex Fridman]: What are the most glorious ways that GPT-4 sucks? Meaning-
[译文] [Lex Fridman]: GPT-4 在哪些方面“烂”得最光荣?意思是——
[原文] [Sam Altman]: What are the best things it can do?
[译文] [Sam Altman]: 它能做的最好的事情是什么?
[原文] [Lex Fridman]: What are the best things it can do in the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future?
[译文] [Lex Fridman]: 它能做的最好的事情是什么?而在这些最好事情的局限中,是什么让你说它“烂”,从而给你对未来的灵感和希望?
[原文] [Sam Altman]: One thing I've been using it for more recently is sort of a like a brainstorming partner. There's a glimmer of something amazing in there. I don't think it gets... When people talk about it, what it does, they're like, "Helps me code more productively. It helps me write more faster and better. It helps me translate from this language to another," all these like amazing things. But there's something about the kind of creative brainstorming partner. I need to come up with a name for this thing. I need to think about this problem in a different way. I'm not sure what to do here. That I think like gives a glimpse of something I hope to see more of. One of the other things that you can see a very small glimpse of is what I can help on longer horizon tasks. Break down something in multiple steps, maybe execute some of those steps, search the internet, write code, whatever. Put that together. When that works, which is not very often, it's like very magical,
[译文] [Sam Altman]: 我最近更多地把它用作某种创意头脑风暴的伙伴。那里有一丝惊人的微光。我不认为它已经完全……当人们谈论它能做什么时,他们会说:“帮我更高效地写代码。帮我写得更快更好。帮我把这种语言翻译成另一种,”所有这些惊人的事情。但在作为那种创意头脑风暴伙伴方面,有一些特别之处。比如“我需要给这个东西起个名字”、“我需要换个角度思考这个问题”、“我不确定这里该怎么做”。我认为这让我瞥见了我希望看到更多的东西。另一件你能瞥见一点点端倪的事情是,它能在更长周期的任务上提供帮助。把某件事分解成多个步骤,也许执行其中的一些步骤,搜索互联网,写代码,不管什么。把这些整合在一起。当这行得通时——虽然不是很常见——感觉非常神奇。
[原文] [Lex Fridman]: How does the context window of going from 8K to 128K tokens compare from GPT-4 to GPT-4 Turbo?
[译文] [Lex Fridman]: 从 GPT-4 到 GPT-4 Turbo,上下文窗口从 8K 增加到 128K token,这种变化如何?
[原文] [Sam Altman]: Most people don't need all the way to 128, most of the time although. If we dream into the distant future, we'll have like way distant future, we'll have like context length of several billion. You will feed in all of your information, all of your history time, and it'll just get to know you better and better and that'll be great. For now, the way people use these models, they're not doing that.
[译文] [Sam Altman]: 绝大多数人在绝大多数时候并不需要用到 128K。不过,如果我们畅想遥远的未来,在非常遥远的未来,我们将拥有数十亿长度的上下文。你将输入你所有的信息、你的全部历史,它会越来越了解你,那将非常棒。而目前,人们使用这些模型的方式还没到那一步。
[原文] [Lex Fridman]: I like that this is your I have a dream speech. One day, you'll be judged by the full context of your character or of your whole lifetime. That's interesting. So like that's part of the expansion that you're hoping for is a greater and greater context.
[译文] [Lex Fridman]: 我喜欢这段,这是你的“我有一个梦想”演讲(I have a dream speech)。终有一天,你将通过你性格的完整背景或你的一生来被评判。这很有趣。所以这是你希望看到的扩展的一部分,就是越来越大的上下文。
[原文] [Sam Altman]: I saw this internet clip once. I'm gonna get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer. Maybe it was 64K, maybe 640K, something like that. And most of it was used for the screen buffer. And he just couldn't seem genuine. This couldn't imagine that the world would eventually need gigabytes of memory in a computer or terabytes of memory in a computer. And you always do or you always do just need to follow the exponential of technology. We will find out how to use better technology. So I can't really imagine what it's like right now for context links to go out to the billion someday and they might not literally go there, but effectively it'll feel like that. But I know we'll use it and really not wanna go back once we have it.
[译文] [Sam Altman]: 我看过一个网络片段。我可能会把数字弄错,但那是比尔·盖茨在谈论某些早期计算机的内存容量。也许是 64K,也许是 640K,类似那样。其中大部分还被用于屏幕缓冲区。他看起来真的无法……无法想象这个世界最终会在一台电脑里需要千兆字节(GB)的内存,或者太字节(TB)的内存。而你总是会需要,或者说你只需要遵循技术的指数级发展。我们会找出如何使用更好技术的方法。所以我现在无法真正想象有一天上下文长度达到十亿会是什么样——也许未必字面上达到那个数,但效果上会感觉如此。但我知道我们会用到它,而且一旦拥有了,就真的不想再回去了。
[原文] [Lex Fridman]: One of the things that concerns me for knowledge task when I start with GPT is I'll usually have to do fact checking after, like check that it didn't come up with fake stuff. How do you figure that out that GPT can come up with fake stuff that sounds really convincing? So how do you ground it in truth?
[译文] [Lex Fridman]: 当我用 GPT 开始知识类任务时,让我担心的一件事是,我通常事后必须进行事实核查,检查它有没有编造假东西。你们怎么解决 GPT 会编造听起来非常令人信服的假东西这一问题?如何让它立足于真相?
[原文] [Sam Altman]: That's obviously an area of intense interest for us. I think it's gonna get a lot better with upcoming versions, but we'll have to continue to work on it and we're not gonna have it like all solved this year.
[译文] [Sam Altman]: 这显然是我们极度关注的一个领域。我认为这在即将到来的版本中会有很大改善,但我们必须继续努力,而且今年不可能把问题全都解决。
[原文] [Lex Fridman]: You've given ChatGPT the ability to have memories. You've been playing with that about previous conversations and also the ability to turn off memory, which I wish I could do that sometimes, just turn on and off depending. I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, like playing around with that idea of remembering conversations and not?
[译文] [Lex Fridman]: 你已经赋予了 ChatGPT 拥有记忆的能力,让它能记住之前的对话;还有关闭记忆的能力——有时候我也希望自己能做到这一点,根据情况开启或关闭。我猜有时候酒精能做到这一点,虽然效果不一定理想。通过摆弄这种“记住或不记住对话”的想法,你看到了什么?
[原文] [Sam Altman]: We're very early in our explorations here, but I think what people want or at least what I want for myself is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there's like a lot of other things to do, but that's where we'd like to head. You'd like to use a model and over the course of your life or use a system, it'd be many models. And over the course of your life, it gets better and better.
[译文] [Sam Altman]: 我们在这方面的探索还非常早期,但我认为人们想要的,或者至少我自己想要的,是一个能逐渐了解我、并随着时间推移对我越来越有用的模型。这是一个早期探索。我认为还有很多其他事情要做,但这正是我们想努力的方向。你希望使用一个模型,在你的一生中——或者使用一个系统,里面包含很多模型——在你的一生中,它变得越来越好。
[原文] [Lex Fridman]: 'Cause right now, it's more like remembering little factoids and preferences and so on. What about remembering, like don't you want GPT to remember all the shit you went through in November and all the drama and then you can-
[译文] [Lex Fridman]: 因为现在,它更像是记住一些小知识点和偏好之类的。那记住(更大的事)呢,比如你难道不想让 GPT 记住你在 11 月经历的所有那些破事和所有的抓马(drama),然后你可以——
[原文] [Sam Altman]: Yeah, yeah, yeah.
[译文] [Sam Altman]: 是的,是的,是的。
[原文] [Lex Fridman]: Because right now, you're clearly blocking it out a little bit.
[译文] [Lex Fridman]: 因为现在,很明显你在把那些事屏蔽掉一点。
[原文] [Sam Altman]: It's not just that I want it to remember that. I want it to integrate the lessons of that and remind me in the future what to do differently or what to watch out for. And we all gain from experience over the course of our lives, varying degrees. And I'd like my AI agent to gain with that experience too.
[译文] [Sam Altman]: 不仅仅是我想让它记住那些。我希望它能整合从中吸取的教训,并在未来提醒我应该在哪些方面做得不同,或者需要注意什么。我们在我们的一生中都会从经验中获益,程度不一。我希望我的 AI 代理也能随着这些经验一同成长。
[原文] [Sam Altman]: You mentioned earlier that I'm like blocking out the November stuff.
[译文] [Sam Altman]: 你刚才提到我在屏蔽 11 月发生的那些事。
[原文] [Lex Fridman]: I'm just teasing you.
[译文] [Lex Fridman]: 我只是在逗你。
[原文] [Sam Altman]: Well, I mean I think it was a very traumatic thing and it did immobilize me for a long period of time. Like definitely the hardest, like the hardest work thing I've had to do was just like keep working that period because I had to try to come back in here and put the pieces together while I was just like in sort of shock and pain. Nobody really cares about that. I mean, the team gave me a pass and I was not working at my normal level, but there was a period where I was just, like, it was really hard to have to do both. But I kind of woke up one morning and I was like, "This was a horrible thing to happen to me. I think I could just feel like a victim forever," or I can say, "This is like the most important work I'll ever touch in my life and I need to get back to it." And it doesn't mean that I've repressed it because sometimes I wake in the middle of the night thinking about it, but I do feel like an obligation to keep moving forward.
[译文] [Sam Altman]: 嗯,我的意思是,我认为那是一件非常创伤性的事情,它确实让我瘫痪了很长一段时间。那绝对是最艰难的……我在工作上做过的最艰难的事就是在那段时间坚持工作,因为我必须在自己处于震惊和痛苦中时,试着回到这里把一切重新拼凑起来。没人真的在意那个。我是说,团队给了我一些宽容(pass),我当时没在我的正常水平上工作,但在那段时间里,要同时兼顾两者真的很难。但我某天早上醒来,我想:“这发生在我身上确实很糟糕。我觉得我可以永远觉得自己像个受害者,”或者我也可以说,“这是我一生中将要接触的最重要的工作,我需要回到工作中去。”这不代表我压抑了它,因为有时我会在半夜醒来想起这件事,但我确实感到一种继续前行的责任。
章节 7:Q-Star、“慢思考”范式与 GPT-5 的发布玄机
📝 本节摘要:
本章聚焦于 OpenAI 的技术路线图。Lex 询问 AI 是否需要类似人类的“慢思考”能力(即为难题分配更多算力),Sam 确认这是未来的重要范式。随后,Lex 试图打探神秘项目 Q-Star 的细节,Sam 虽然守口如瓶,但幽默地否认了“秘密核设施”的传言,并承认提高推理能力是重点。
在谈到 GPT-5 时,Sam 解释了 OpenAI “迭代部署(iterative deployment)”的策略,旨在避免给世界带来“惊吓”。对于发布时间,他坦言不知道确切日期,但承诺今年会发布一个“惊人的模型”。最后,他揭示了技术突破的本质:并非单一的魔法,而是将“200 个中等大小的东西”乘在一起的系统工程。
[原文] [Lex Fridman]: Is there room there in this kind of approach to slower thinking, sequential thinking?
[译文] [Lex Fridman]: 在这种方法中,是否有空间进行更慢的思考、顺序性的思考?
[原文] [Sam Altman]: I think there will be a new paradigm for that kind of thinking.
[译文] [Sam Altman]: 我认为将会有一种新的范式来实现那种思考。
[原文] [Lex Fridman]: Will it be similar like architecturally as what we're seeing now with LLMs? Is it a layer on top of the LLMs?
[译文] [Lex Fridman]: 在架构上它会和我们现在看到的 LLM(大语言模型)相似吗?是叠加在 LLM 之上的一层吗?
[原文] [Sam Altman]: I can imagine many ways to implement that. I think that's less important than the question you were getting out, which is do we need a way to do a slower kind of thinking where the answer doesn't have to get like... I guess like spiritually, you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.
[译文] [Sam Altman]: 我能想象出很多种实现方式。我认为这不如你提出的那个核心问题重要,那就是:我们是否需要一种进行“更慢思考”的方式?在这种方式下,答案不必……我想从精神上(spiritually)你可以说,你希望 AI 能够针对更难的问题进行更深入的思考,而针对更简单的问题则回答得更快。我认为这将会很重要。
[原文] [Lex Fridman]: Is that like a human thought that we're just having, you should be able to think hard? Is that a wrong intuition?
[译文] [Lex Fridman]: 这是不是只是我们人类的一种想法,觉得“你应该能够努力思考”?这种直觉是错的吗?
[原文] [Sam Altman]: I suspect that's a reasonable intuition.
[译文] [Sam Altman]: 我怀疑这是一个合理的直觉。
[原文] [Lex Fridman]: Interesting. So it's not possible once the GPT gets like GPT-7 would just be instantaneously be able to see, here's the proof of from RSTM.
[译文] [Lex Fridman]: 有意思。所以哪怕 GPT 到了 GPT-7 这种级别,也不可能瞬间就看出来,比如费马大定理(原文口误为 RSTM,似指 Fermat's Last Theorem)的证明。
[原文] [Sam Altman]: It seems to me like you want to be able to allocate more compute to harder problems. It seems to me that a system knowing if you ask a system like that, proof from us last theorem versus... What's today's date? Unless it already knew and had memorized the answer to the proof, assuming it's gotta go figure that out, seems like that will take more compute.
[译文] [Sam Altman]: 在我看来,你会希望能够为更难的问题分配更多的算力。在我看来,如果你问这样一个系统——比如让它给出费马大定理的证明,对比问它“今天是几号”——除非它已经知道并背下了证明的答案,否则假设它必须去推导出来,这看起来确实需要消耗更多的算力。
[原文] [Lex Fridman]: This does make me think of the mysterious, the lore behind Q-Star. What's this mysterious Q-Star project? Is it also in the same nuclear facility?
[译文] [Lex Fridman]: 这确实让我联想到那个神秘的、关于 Q-Star 的传说。这个神秘的 Q-Star 项目到底是什么?它也在同一个核设施里吗?
[原文] [Sam Altman]: There is no nuclear facility.
[译文] [Sam Altman]: 没有什么核设施。
[原文] [Lex Fridman]: That's what a person with a nuclear facility always says.
[译文] [Lex Fridman]: 拥有核设施的人总是这么说。
[原文] [Sam Altman]: I would love to have a secret nuclear facility. There isn't one.
[译文] [Sam Altman]: 我倒是很想拥有一个秘密核设施。但真没有。
[原文] [Lex Fridman]: All right.
[译文] [Lex Fridman]: 好吧。
[原文] [Sam Altman]: OpenAI is not a good company to keeping secrets. It would be nice. We're like been plagued by a lot of leaks and it would be nice if we were able to have something like that.
[译文] [Sam Altman]: OpenAI 不是一家擅长保守秘密的公司。要是能那样就好了。我们深受各种泄密事件的困扰,如果我们能拥有那样的地方(指高度保密的设施)就好了。
[原文] [Lex Fridman]: Can you speak to what Q-Star is?
[译文] [Lex Fridman]: 你能说说 Q-Star 是什么吗?
[原文] [Sam Altman]: We are not ready to talk about that.
[译文] [Sam Altman]: 我们还没准备好谈论那个。
[原文] [Sam Altman]: I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet. We're very interested in it.
[译文] [Sam Altman]: 我是说,我们从事各种各样的研究。我们已经说过有一段时间了,我们认为在这些系统中实现更好的推理能力(reasoning)是我们想要追求的一个重要方向。我们还没有完全破解这个难题。我们对它非常感兴趣。
[原文] [Lex Fridman]: Is there gonna be moments Q-Star or otherwise where there's going to be leaps similar to GPT where you're like-
[译文] [Lex Fridman]: 会不会有某些时刻,无论是 Q-Star 还是其他什么,会出现类似于 GPT 那样的飞跃,让你觉得——
[原文] [Sam Altman]: That's a good question. What do I think about that? It's interesting to me it all feels pretty continuous.
[译文] [Sam Altman]: 这是一个好问题。我对此怎么看?对我来说有趣的是,这一切感觉都相当连续。
[原文] [Lex Fridman]: This is kind of a theme that you're saying is you're basically gradually going up an exponential slope. But from an outsider's perspective for me, just watching it that it does feel like there's leaps, but to you there isn't.
[译文] [Lex Fridman]: 这一直是你所说的一个主题,基本上就是在沿着指数曲线逐渐上升。但对我这个局外人来说,观察这一过程确实感觉存在飞跃,但对你来说却没有。
[原文] [Sam Altman]: I do wonder if we should have... So part of the reason that we deploy the way we do is that we think, we call it iterative deployment. Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT 1, 2, 3 and 4. And part of the reason there is, I think, AI and surprise don't go together. And also the world, people, institutions, whatever you wanna call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy and we get the world to pay attention to the progress to take AGI seriously to think about what systems, and structures, and governance we want in place before, we're like under the gun and have to make a rush decision. I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively.
[译文] [Sam Altman]: 我确实在想我们是否应该……我们采取这种部署方式的部分原因是,我们称之为迭代部署(iterative deployment)。与其在秘密中闭门造车直到做出 GPT-5,我们决定发布 GPT 1、2、3 和 4。部分原因在于,我认为 AI 和“惊喜(surprise)”并不搭。而且世界、人们、机构,无论你怎么称呼,都需要时间来适应和思考这些事情。我认为 OpenAI 做得最好的事情之一就是这个策略,我们让世界关注进展,认真对待 AGI,在被迫仓促做决定之前,思考我们需要什么样的系统、结构和治理。我认为这真的很好。但像你和其他人仍然觉得存在“飞跃”这一事实,让我思考也许我们应该以更加迭代的方式进行发布。
[原文] [Lex Fridman]: So when is GPT-5 coming out again?
[译文] [Lex Fridman]: 那么 GPT-5 到底什么时候出?
[原文] [Sam Altman]: I don't know. That's an honest answer.
[译文] [Sam Altman]: 我不知道。这是诚实的回答。
[原文] [Lex Fridman]: Oh, that's the honest answer. Is it blink twice if it's this year?
[译文] [Lex Fridman]: 噢,这是诚实的回答。如果是今年,眨两下眼?
[原文] [Sam Altman]: We will release an amazing model this year. I don't know what we'll call it.
[译文] [Sam Altman]: 我们今年会发布一个惊人的模型。我不知道我们会叫它什么。
[原文] [Lex Fridman]: So that goes to the question of like, what's the way we release this thing?
[译文] [Lex Fridman]: 所以这又回到了那个问题,我们以什么方式发布这个东西?
[原文] [Sam Altman]: We'll release, over in the coming months, many different things. I think they'll be very cool. I think before we talk about like a GPT-5 like model called that or called or not called that or a little bit worse or a little bit better than what you'd expect from a GPT-5, I know we have a lot of other important things to release first.
[译文] [Sam Altman]: 在接下来的几个月里,我们会发布很多不同的东西。我觉得它们会非常酷。我认为在谈论像 GPT-5 这样的模型——不管叫不叫这个名字,或者比你预期的 GPT-5 稍微差一点还是好一点——之前,我知道我们还有很多其他重要的东西要先发布。
[原文] [Lex Fridman]: What are some of the biggest challenges in bottlenecks to overcome for whatever it ends up being called, but let's call it GPT-5? Just interesting to ask, is it on the compute side? Is it in the technical side?
[译文] [Lex Fridman]: 不管它最终叫什么,我们暂且叫它 GPT-5,要克服的最大挑战和瓶颈是什么?只是好奇问问,是在算力方面?还是技术方面?
[原文] [Sam Altman]: It's always all of these. What's the one big unlock? Is it a bigger computer? Is it like a new secret? Is it something else? It's all of these things together. The thing that OpenAI I think does really well, this is actually an original Ilya quote that I'm gonna butcher, but it's something like we multiply 200 medium-sized things together into one giant thing.
[译文] [Sam Altman]: 总是所有这些因素。什么是那个大的解锁点(big unlock)?是更大的计算机吗?是某个新的秘密吗?还是别的什么?它是所有这些东西的结合。我认为 OpenAI 做得真正好的地方——这其实是 Ilya(Sutskever)的一句名言,我可能会复述得不太准确——大概意思是:我们将 200 个中等大小的东西乘在一起,变成了一个巨大的东西。
[原文] [Lex Fridman]: So there's this distributed constant innovation happening.
[译文] [Lex Fridman]: 所以这是一种分布式的、持续不断的创新。
[原文] [Sam Altman]: Especially on the technical side.
[译文] [Sam Altman]: 特别是在技术方面。
[原文] [Sam Altman]: There's a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.
[译文] [Sam Altman]: 只有少数人必须思考如何把整个东西整合在一起,但很多人都在努力将大部分图景保留在脑海中。
[原文] [Sam Altman]: Even if most of the time, you're operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have and I think was super valuable was I used to have like a good map of all of the frontier or most of the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that if I were only deep in one area, I wouldn't be able to have the idea for because I wouldn't have all the data and I don't really have that much anymore. I'm like super deep now. But I know that it's a valuable thing.
[译文] [Sam Altman]: 即使你大部分时间都在某一个领域的细节(in the weeds)中运作,这也会带来令人惊讶的洞察作为回报。事实上,我过去拥有一样我认为非常有价值的东西:一张涵盖科技行业所有前沿、或者说大部分前沿领域的“好地图”。我有时能看到一些连接,或者看到哪些新事物是可能的——如果我只深耕某一个领域,我就不会产生这些想法,因为我不会掌握全部的信息。不过我现在已经不太有那张地图了,我如今钻得超级深。但我知道那是一件有价值的事情。
章节 8:算力作为新货币、挑战 Google 与人类的“脚手架”
📝 本节摘要:
本章首先澄清了关于 Sam 寻求“7 万亿美元”融资的传闻,他强调算力(Compute)将成为未来的货币,而核聚变(Fusion)是解决其能源瓶颈的关键。随后,Sam 直言不想做一个更烂的 Google 副本,通过“不喜欢广告”的审美偏好,暗示了 OpenAI 与 Google 截然不同的商业路径。
在谈及 Google Gemini 的“黑色纳粹”偏见事件时,Sam 提出应公开模型的“行为规范文档”以界定 Bug 与特性。访谈最后进入了存在主义领域:Sam 预测本年代末将出现类似 AGI 的系统,他对模拟宇宙持开放态度,并认为外星文明大概率存在。以此为引,他表达了对人类的希望——AGI 不是单一的大脑,而是建立在人类集体智慧(Scaffolding)之上的新高度。
[原文] [Lex Fridman]: You tweeted about needing $7 trillion.
[译文] [Lex Fridman]: 你发推特说需要 7 万亿美元。
[原文] [Sam Altman]: I did not tweet about that. I never said like we're raising $7 trillion or blah blah blah.
[译文] [Sam Altman]: 我没发推特说那个。我从没说过比如我们要筹集 7 万亿美元或者诸如此类的话。
[原文] [Lex Fridman]: Oh, that's somebody else. Oh, but you said it, "Fuck it, maybe eight," I think.
[译文] [Lex Fridman]: 噢,那是别人说的。噢,但你好像说了,“去他的,也许是八万亿吧(Fuck it, maybe eight)”,我想是这样。
[原文] [Sam Altman]: Okay. I meme like once there's like misinformation out in the world. Look, I think compute is gonna be the currency of the future. I think it will be maybe the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute.
[译文] [Sam Altman]: 好吧。一旦世界上出现了某种错误信息,我就会玩个梗(meme)。听着,我认为算力(compute)将会成为未来的货币。我认为它可能会成为世界上最珍贵的商品。而且我认为我们应该大力投资以制造更多的算力。
[原文] [Lex Fridman]: How do you solve the energy puzzle?
[译文] [Lex Fridman]: 你如何解决能源难题?
[原文] [Sam Altman]: Nuclear.
[译文] [Sam Altman]: 核能。
[原文] [Lex Fridman]: Fusion?
[译文] [Lex Fridman]: 核聚变?
[原文] [Sam Altman]: That's what I believe. I think Helion's doing the best work, but I'm happy there's like a race for fusion right now.
[译文] [Sam Altman]: 这正是我的信念。我认为 Helion 做得最好,但我很高兴现在有一场核聚变的竞赛。
[原文] [Lex Fridman]: Google, with the help of search, has been dominating the past 20 years. So is OpenAI going to really take on this thing that Google started 20 years ago?
[译文] [Lex Fridman]: Google 借助搜索在过去 20 年里一直占据主导地位。那么 OpenAI 真的要挑战 Google 20 年前开启的这项事业吗?
[原文] [Sam Altman]: I find that boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should go... Like people should use a better product. But I think that would so understate what this can be. Google shows you like 10 blue links, like 13 ads and then 10 blue links and that's like one way to find information. But the thing that's exciting to me is not that we can go build a better copy of Google Search, but that maybe there's just some much better way to help people find and act on and synthesize information.
[译文] [Sam Altman]: 我觉得那样很无聊。我的意思是,如果问题是我们能不能造出一个比 Google 更好的搜索引擎之类的,那当然,我们应该去……人们应该使用更好的产品。但我认为那太低估了这东西的潜力。Google 给你展示 10 个蓝色链接,或者大概 13 个广告然后才是 10 个蓝色链接,那只是获取信息的一种方式。但真正让我兴奋的不是我们可以去造一个更好的 Google 搜索的副本,而是也许有一种好得多的方式来帮助人们寻找、利用和综合信息。
[原文] [Lex Fridman]: What about the ads side? Have you ever considered monetization?
[译文] [Lex Fridman]: 那广告方面呢?你有没有考虑过变现?
[原文] [Sam Altman]: I kind of hate ads just as like an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons to get it going, but it's a more mature industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they're getting are not influenced by advertisers.
[译文] [Sam Altman]: 出于一种审美选择,我有点讨厌广告。我认为互联网在起步阶段因为种种原因需要广告,但现在这是一个更成熟的行业了。世界现在更富裕了。我喜欢人们为 ChatGPT 付费,并且知道他们得到的答案不会受到广告商的影响。
[原文] [Lex Fridman]: The Gemini 1.5 came out recently. There's a lot of drama around it. And it generated Black Nazis and Black founding fathers. How do you deal with that?
[译文] [Lex Fridman]: Gemini 1.5 最近发布了。围绕它有很多抓马。它生成了黑人纳粹和黑人国父。你们怎么处理这种情况?
[原文] [Sam Altman]: I mean, we work super hard not to do things like that. One thing that we've been thinking about more and more is... It'd be nice to write out what the desired behavior of a model is, make that public take input on it. And then when a model is not behaving in a way that you want, it's at least clear about whether that's a bug the company should fix or behaving as intended and you should debate the policy.
[译文] [Sam Altman]: 我是说,我们非常努力地避免发生那样的事情。我们越来越多在思考的一件事是……如果能写出模型的预期行为规范,将其公开并听取意见,那会很好。然后当模型没有按你想要的方式行事时,至少能分清楚那是一个公司应该修复的 Bug(漏洞),还是它在按预期行事、你需要辩论的是政策本身。
[原文] [Lex Fridman]: When do you think we you and we as humanity will build AGI?
[译文] [Lex Fridman]: 你认为我们,你和作为人类的我们,什么时候能造出 AGI?
[原文] [Sam Altman]: I expect that by the end of this decade and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable.
[译文] [Sam Altman]: 我预计在这个十年结束之前(by the end of this decade),甚至可能稍微早一点,我们将拥有非常强大的系统,当我们看着它们时会惊叹:“哇,这真的很了不起。”
[原文] [Lex Fridman]: You're quite possibly would be the person to build the AGI to be able to interact with it before anyone else does. What kind of stuff would you talk about?
[译文] [Lex Fridman]: 你很可能就是那个造出 AGI 并比其他人都先与它互动的人。你会跟它聊什么?
[原文] [Sam Altman]: I don't think, like go explain to me the grand unified theory of physics, the theory of everything for physics. I'd love to ask that question.
[译文] [Sam Altman]: 我不觉得会是那种“给我解释一下物理学的大统一理论、物理学的万物理论”的场景……不过我倒是很想问这个问题。
[原文] [Lex Fridman]: Given Sora's ability to generate simulated worlds, does this increase your belief if you ever had one that we live in a simulation?
[译文] [Lex Fridman]: 鉴于 Sora 生成模拟世界的能力,这是否增加了你对“我们生活在一个模拟中”的信念(如果你曾有过的话)?
[原文] [Sam Altman]: Yes, somewhat. I don't think that's like the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat or at least open to it.
[译文] [Sam Altman]: 是的,有所增加。我不认为这是最有力的证据。但我认为我们可以生成世界这一事实,应该会让每个人对此的概率评估都增加一点,或者至少对这种可能性持更加开放的态度。
[原文] [Lex Fridman]: Do you think, as I mentioned before, there's other aliens, civilizations out there?
[译文] [Lex Fridman]: 就像我之前提到的,你认为外面有其他外星人、其他文明吗?
[原文] [Sam Altman]: I deeply want to believe that the answer is yes. I do find the Fermi paradox very, very puzzling.
[译文] [Sam Altman]: 我内心深处非常希望答案是肯定的。我确实觉得费米悖论非常、非常令人费解。
[原文] [Lex Fridman]: What gives you hope about the future of humanity?
[译文] [Lex Fridman]: 是什么让你对人类的未来充满希望?
[原文] [Sam Altman]: One thing that I wonder about is, is AGI gonna be more like some single brain, or is it more like the sort of scaffolding in society between all of us? You have not had a great deal of genetic drift from your great-great-great grandparents, and yet what you're capable of is dramatically different. But what you have is this scaffolding that we all contributed to built on top of. And so in some sense, that like we all created that and that fills me with hope for the future. That was a very collective thing.
[译文] [Sam Altman]: 我在思考的一件事是,AGI 到底会更像是一个单一的大脑,还是更像我们所有人之间这种社会的“脚手架”(scaffolding)?你和你的曾曾曾祖父母相比,并没有发生太大的基因漂变,但你的能力却截然不同。你所拥有的是我们所有人共同贡献并在此基础上构建的这个脚手架。所以在某种意义上,那是我们共同创造的,这让我对未来充满希望。那是一件非常集体化的事情。
章节 9:面对死亡、好奇心与阿瑟·克拉克的预言
📝 本节摘要:
在访谈的最后几分钟,Lex 将话题引向了终极的个人问题:死亡。鉴于 Sam 之前提到过这行工作可能带来的生命危险,Lex 询问他是否害怕死亡。Sam 坦然回答,如果生命突然终结,他最大的遗憾将是无法看到未来会发生什么,因为这是一个如此“好奇的时刻”,但他对已经拥有的人生充满感激。节目最后,Lex 引用了科幻大师阿瑟·克拉克(Arthur C. Clarke)的一句名言,为这场关于 AGI 的对话画上了充满哲理的句号:也许我们的角色不是崇拜神,而是创造神。
[原文] [Lex Fridman]: Yeah. We really are standing on the shoulders of giants. You mentioned when we were talking about theatrical, dramatic AI risks that sometimes you might be afraid for your own life. Do you think about your death? Are you afraid of it?
[译文] [Lex Fridman]: 是的。我们确实是站在巨人的肩膀上。你之前在谈到那些剧场式的、戏剧性的 AI 风险时提到,有时你可能会担心自己的生命安全。你会思考你的死亡吗?你害怕它吗?
[原文] [Sam Altman]: I mean, I like if I got shot tomorrow and I knew it today, I'd be like, "Oh, that's sad. I wanna see what's gonna happen."
[译文] [Sam Altman]: 我的意思是,如果我明天就会被枪杀,而我今天知道了,我会觉得:“噢,那真令人悲伤。我想看看接下来会发生什么。”
[原文] [Lex Fridman]: Yeah.
[译文] [Lex Fridman]: 是的。
[原文] [Sam Altman]: What a curious time. What an interesting time. But I would mostly just feel like very grateful for my life.
[译文] [Sam Altman]: 这是一个多么令人好奇的时代。这是一个多么有趣的时代。但我主要会感到对我的生命充满感激。
[原文] [Lex Fridman]: The moments that you did get... Yeah, me too. It's a pretty awesome life. I get to enjoy awesome creations of humans of which I believe ChatGPT is one of, and everything that OpenAI is doing. Sam, it's really an honor and pleasure to talk to you again.
[译文] [Lex Fridman]: 那些你确实拥有的时刻……是的,我也是。这真是一段非常棒的人生。我有幸享受人类那些了不起的创造,我相信 ChatGPT 就是其中之一,还有 OpenAI 正在做的所有事情。Sam,能再次与你交谈真的是我的荣幸和快乐。
[原文] [Sam Altman]: Great to talk to you. Thank you for having me.
[译文] [Sam Altman]: 很荣幸与你交谈。谢谢你邀请我。
[原文] [Lex Fridman]: Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Arthur C. Clarke and maybe that our role on this planet is not to worship God, but to create Him. Thank you for listening and hope to see you next time.
[译文] [Lex Fridman]: 感谢收听这场与 Sam Altman 的对话。为了支持本播客,请查看描述中的赞助商信息。现在,让我用阿瑟·克拉克(Arthur C. Clarke)的一段话留给你们思考:也许我们在这个星球上的角色不是崇拜上帝,而是创造祂。感谢收听,希望下次再见。