OpenAI’s Sam Altman Talks ChatGPT, AI Agents and S
### 章节 1:Sora 演示、智能图谱与“工具论”视角

Category: Podcasts

📝 本节摘要:
主持人 Chris Anderson 欢迎 Sam Altman 登台,并通过展示 OpenAI 最新视频生成模型 Sora 的作品以及一张区分“智能”与“意识”的图表,引出了对 GPT-4o 综合能力的讨论。针对技术进步可能引发的职业焦虑(以管理咨询为例),Sam 提出了乐观的“工具论”视角:如同历史上的技术革命,AI 将大幅提升人类的能力上限,而非单纯的替代。本章以一个关于 Charlie Brown 自我认知的“元回答”结尾,探讨了内容生成的深度与训练数据的关系。
[原文] Chris Anderson: Sam, welcome to TED. Thank you so much for coming.
[译文] Chris Anderson: Sam,欢迎来到 TED。非常感谢你的到来。
[原文] Sam Altman: Thank you. It's an honor.
[译文] Sam Altman: 谢谢。这是我的荣幸。
[原文] CA: Your company has been releasing crazy insane new models pretty much every other week it feels like.
[译文] CA: 你们公司最近感觉几乎每隔一周就在发布疯狂、不可思议的新模型。
[原文] I've been playing with a couple of them. I'd like to show you what I've been playing.
[译文] 我一直在试用其中的几个。我想向你展示一下我试用的成果。
[原文] So, Sora, this is the image and video generator. I asked Sora this: What will it look like when you share some shocking revelations here at TED?
[译文] 首先是 Sora,这是图像和视频生成器。我问了 Sora 这个问题:当你在 TED 这里分享一些令人震惊的内幕时,画面会是什么样的?
[原文] You want to see how it imagined it, you know? (Laughter) I mean, not bad, right?
[译文] 你想看看它是怎么想象那个场景的吗,你知道吧?(笑声)我是说,还不错,对吧?
[原文] How would you grade that? Five fingers on all hands.
[译文] 你会给它打几分?所有的手上都有五根手指。
[原文] SA: Very close to what I'm wearing, you know, it's good.
[译文] SA: 和我穿的衣服非常接近,你知道,这很不错。
[原文] CA: I've never seen you quite that animated.
[译文] CA: 我从来没见过你那么生动活泼。
[原文] SA: No, I'm not that animated of a person.
[译文] SA: 不,我不是那种动作幅度很大的人。
[原文] CA: So maybe a B-plus. But this one genuinely astounded me.
[译文] CA: 所以也许给个 B+ 吧。但是这一个(演示)真的让我震惊了。
[原文] When I asked it to come up with a diagram that shows the difference between intelligence and consciousness. Like how would you do that?
[译文] 当我要求它生成一张图表,展示“智能”和“意识”之间的区别时。试想如果是你,你会怎么做?
[原文] This is what it did. I mean, this is so simple, but it's incredible.
[译文] 这就是它做出来的结果。我的意思是,这太简洁了,但简直不可思议。
[原文] What is the kind of process that would allow -- like this is clearly not just image generation.
[译文] 是什么样的过程让它能够——我是说这显然不仅仅是图像生成。
[原文] It's linking into the core intelligences that your overall model has.
[译文] 它是连接到了你们整体模型中所具备的核心智能。
[原文] SA: Yeah, the new image generation model is part of GPT-4o, so it's got all of the intelligence in there.
[译文] SA: 是的,新的图像生成模型是 GPT-4o 的一部分,所以它包含了那里面所有的智能。
[原文] And I think that's one of the reasons it's been able to do these things that people really love.
[译文] 我认为这也是它能够做到这些人们真正喜欢的事情的原因之一。
[原文] CA: I mean, if I'm a management consultant and I'm playing with some of this stuff, I'm thinking, uh oh, what does my future look like?
[译文] CA: 我的意思是,假如我是一名管理顾问,我在玩这些东西的时候,我会想,“噢不,我的未来会变成什么样?”
[原文] SA: I mean, I think there are sort of two views you can take. You can say, oh, man, it's doing everything I do.
[译文] SA: 我觉得,你可以持有两种观点。你可以说,“天哪,它在做我做的所有事情。”
[原文] What's going to happen to me? Or you can say, like through every other technological revolution in history, OK, now there's this new tool.
[译文] “我会发生什么事?”或者你可以说,就像历史上的每一次技术革命一样,“好吧,现在有了这个新工具。”
[原文] I can do a lot more. What am I going to be able to do?
[译文] “我可以做更多的事情了。我将来能做到什么?”
[原文] It is true that the expectation of what we’ll have for someone in a particular job increases, but the capabilities will increase so dramatically that I think it will be easy to rise to that occasion.
[译文] 确实,我们对某个特定职位的人的期望值会提高,但能力的提升将是如此巨大,以至于我认为人们很容易应对这一挑战。
[原文] CA: So this impressed me too. I asked it to imagine Charlie Brown as thinking of himself as an AI. It came up with this.
[译文] CA: 这一点也让我印象深刻。我让它想象查理·布朗(Charlie Brown)把自己当成一个人工智能。它给出了这个结果。
[原文] I thought this was actually rather profound. What do you think? (Laughs)
[译文] 我觉得这实际上相当深刻。你怎么看?(笑)
[原文] I mean, the writing quality of some of the new models, not just here, but in detail, is really going to a new level.
[译文] 我的意思是,一些新模型的写作质量,不仅仅是在这里,而是在细节上,真的达到了一个新的水平。
[原文] SA: I mean, this is an incredible meta answer, but there's really no way to know if it is thinking that or it just saw that a lot of times in the training set.
[译文] SA: 我是说,这是一个令人难以置信的“元”回答(meta answer),但真的没办法知道它是在思考那个问题,还是它只是在训练集中看过很多次类似的内容。
[原文] And of course like if you can’t tell the difference, how much do you care?
[译文] 当然,如果你分辨不出区别,你又会有多在乎呢?
### 章节 2:知识产权争议与创作者收益模式

📝 本节摘要:
针对上一章展示的“查理·布朗”案例,Chris 尖锐地指出了潜在的“知识产权盗窃”问题,并质问 OpenAI 是否取得了授权。Sam 承认这是一个复杂领域,强调 AI 旨在辅助人类创意,而非简单复制。双方深入探讨了“风格模仿”与“直接剽窃”的界限,以及如何量化 AI 对特定艺术家作品的借鉴。Chris 提出应建立一种能让被点名的创作者获得收益的机制,Sam 对此表示赞同,并透露未来可能会探索“选择加入(opt-in)”式的收入分成模型。
[原文] CA: So that's really interesting. We don't know.
[译文] CA: 这真的很有趣。我们无从得知。
[原文] Isn't there though ... like at first glance this looks like IP theft.
[译文] 但这难道不存在……就像乍一看这像是知识产权(IP)盗窃。
[原文] Like you guys don’t have a deal with the “Peanuts” estate?
[译文] 就像你们并没有和“花生漫画”(Peanuts)的遗产管理委员会达成协议吧?
[原文] (Applause)
[译文] (掌声)
[原文] You can clap about that all you want, enjoy. (Laughter and murmuring)
[译文] 你们尽管鼓掌吧,尽情享受。(笑声和低语声)
[原文] I will say that I think the creative spirit of humanity is an incredibly important thing, and we want to build tools that lift that up, that make it so that new people can create better art, better content, write better novels that we all enjoy.
[译文] 我想说,我认为人类的创造精神是一件极其重要的事情,我们希望构建能提升这种精神的工具,让新人能够创造出更好的艺术、更好的内容、写出更好的小说供我们所有人享受。
[原文] I believe very deeply that humans will be at the center of that.
[译文] 我深信人类将处于这一过程的中心。
[原文] I also believe that we probably do need to figure out some sort of new model around the economics of creative output.
[译文] 我也相信,我们可能确实需要围绕创意产出的经济学想出某种新的模式。
[原文] I think people have been building on the creativity of others for a long time. People take inspiration for a long time.
[译文] 我认为长期以来,人们一直是在他人的创造力基础之上进行构建的。人们长期以来都在汲取灵感。
[原文] But as the access to creativity gets incredibly democratized and people are building off of each other's ideas all the time, I think there are incredible new business models that we and others are excited to explore.
[译文] 但随着创造力的获取变得极其民主化(普及化),且人们时刻都在彼此想法的基础上进行构建,我认为有令人难以置信的新商业模式值得我们和其他人去探索。
[原文] Exactly what that's going to look like, I'm not sure.
[译文] 具体会是什么样子,我不确定。
[原文] Clearly, there’s some cut and dry stuff, like you can’t copy someone else’s work.
[译文] 显然,有些事情是界限分明的,比如你不能抄袭别人的作品。
[原文] But how much inspiration can you take?
[译文] 但是你可以汲取多少灵感呢?
[原文] If you say, I want to generate art in the style of these seven people, all of whom have consented to that, how do you, like divvy up how much money goes to each one?
[译文] 如果你说,我想用这七个人的风格生成艺术作品,而且他们所有人都同意了,你该如何分配,比如每人分多少钱?
[原文] These are like big questions.
[译文] 这些都是重大的问题。
[原文] But every time throughout history we have put better and more powerful technology in the hands of creators.
[译文] 但纵观历史,每一次我们将更好、更强大的技术交到创作者手中时。
[原文] I think we collectively get better creative output and people do just more amazing stuff.
[译文] 我认为我们集体获得了更好的创意产出,人们也确实做出了更令人惊叹的东西。
[原文] CA: An even bigger question is when they haven't consented to it.
[译文] CA: 一个更大的问题是,当他们没有对此表示同意的时候。
[原文] In our opening session, Carole Cadwalladr showed, you know, "ChatGPT give a talk in the style of Carole Cadwalladr" and sure enough, it gave a talk that wasn't quite as good as the talk she gave, but it was pretty impressive.
[译文] 在我们的开幕环节,Carole Cadwalladr 展示了,你知道,“ChatGPT 用 Carole Cadwalladr 的风格做个演讲”,果不其然,它给出的演讲虽然不如她本人的好,但也相当令人印象深刻。
[原文] And she said, "OK, it's great, but I did not consent to this."
[译文] 她说,“好吧,这很棒,但我并没有同意这么做。”
[原文] How are we going to navigate this?
[译文] 我们将如何处理这个问题?
[原文] Like isn’t there a way, should it just be people who’ve consented?
[译文] 难道就没有一种方法,是不是应该只针对那些已经同意的人?
[原文] Or shouldn’t there be a model that somehow says that any named individual in a prompt whose work is then used, they should get something for that?
[译文] 或者难道不应该有一种模式,可以说提示词(prompt)中提到的任何个人,如果其作品随后被使用了,他们就应该为此获得一些东西?
[原文] SA: So right now, if you use our image-gen thing and say, I want something in the style of a living artist, it won't do that.
[译文] SA: 目前,如果你使用我们的图像生成工具说,我想要某位在世艺术家的风格,它是不会那么做的。
[原文] But if you say I want it in the style of this particular like kind of vibe, or this studio or this art movement or whatever, it will.
[译文] 但如果你说我想要某种特定的氛围(vibe),或者这个工作室,或者这个艺术运动之类的风格,它会做的。
[原文] And obviously if you’re like, you know, output a song that is like a copy of a song, it won't do that.
[译文] 显然,如果你要求,你知道,输出一首像是某首歌复制品的歌曲,它也不会做。
[原文] The question of like where that line should be and how people say like, this is too much, we sorted that out before with copyright law and kind of what fair use looks like.
[译文] 关于界限应该在哪里,以及人们如何判断“这太过分了”的问题,我们以前通过版权法和“合理使用”(fair use)的界定已经解决过这类问题。
[原文] Again, I think in the world of AI, there will be a new model that we figure out.
[译文] 同样地,我认为在 AI 的世界里,我们会探索出一个新的模式。
[原文] CA: From the point of view, I mean, creative people are some of the angriest people right now or the most scared people about AI.
[译文] CA: 从某种角度来看,我是说,创意人士是目前对 AI 最愤怒或最恐惧的人群之一。
[原文] And the difference between feeling your work is being stolen from you and your future is being stolen from you, and feeling your work is being amplified and can be amplified, those are such different feelings.
[译文] 感觉你的作品被窃取了、你的未来被窃取了,与感觉你的作品正在被放大、并且能够被放大,这两种感觉有着天壤之别。
[原文] And if we could shift to the other one, to the second one, I think that really changes how much humanity as a whole embraces all this.
[译文] 如果我们能转向另一种,即第二种感觉,我认为这将真正改变人类作为一个整体拥抱这一切的程度。
[原文] SA: Well, again, I would say some creative people are very upset.
[译文] SA: 嗯,我还是要说,有些创意人士确实非常不满。
[原文] Some creatives are like, "This is the most amazing tool ever, I'm doing incredible new work."
[译文] 也有一些创意人士觉得,“这是有史以来最神奇的工具,我正在做出不可思议的新作品。”
[原文] But you know like it’s definitely a change.
[译文] 但你也知道,这绝对是一个改变。
[原文] And I have a lot of like empathy to people who are just like, "I wish this change weren't happening. I liked the way things were before."
[译文] 我对那些觉得“我希望这种改变没有发生,我喜欢以前的样子”的人抱有极大的同情。
[原文] CA: But in principle, you can calculate from any given prompt how much ... there should be some way of being able to calculate what percentage of a subscription, revenue or whatever goes towards each answer.
[译文] CA: 但原则上,你可以根据任何给定的提示词计算出多少……应该有某种方法能够计算出订阅费、收入或其他什么的百分之多少应该归属于每一个回答。
[原文] In principle, it should be possible if one could get the rest of the rules figured out.
[译文] 原则上,如果能把其余的规则弄清楚,这应该是可能的。
[原文] It's obviously complicated. You could calculate some kind of revenue share, no?
[译文] 这显然很复杂。但你们可以计算某种收入分成,不是吗?
[原文] SA: If you're a musician and you spend your whole life, your whole childhood, listening to music, and then you get an idea and you go compose a song that is inspired by what you've heard before, but a new direction, it'd be very hard for you to say like, this much was from this song I heard when I was 11.
[译文] SA: 如果你是一名音乐家,你花了一辈子,整个童年都在听音乐,然后你有了一个想法,去创作一首受你以前听过的东西启发但又有新方向的歌,你会很难说清楚,比如,“这一部分是来自我 11 岁时听过的那首歌”。
[原文] CA: That's right. But we're talking here about the situation where someone specifically in a prompt names someone.
[译文] CA: 没错。但我们这里谈论的是有人在提示词中明确点名某人的情况。
[原文] SA: Well, again, right now, if you try to like, go generate an image in a named style, we just say that artist is living, we don't do it.
[译文] SA: 嗯,再说一次,现在如果你试图生成某种被点名的风格的图像,我们会直接说那位艺术家还在世,我们不这么做。
[原文] But I think it would be cool to figure out a new model where if you say, I want to do it in the name of this artist and they opt in, there's a revenue model there.
[译文] 但我认为如果能设计出一个新模式会很酷:如果你说我想以这位艺术家的名义来做,而且他们选择加入(opt in),那么就会有一个收入模型。
[原文] I think that's a good thing to explore.
[译文] 我认为这是一件值得探索的好事。
[原文] CA: So, I think the world should help you figure out that model quickly.
[译文] CA: 所以,我认为世界应该帮助你们尽快弄清楚这个模式。
[原文] And I think it will make a huge difference actually.
[译文] 而且我认为这实际上会产生巨大的影响。
### 章节 3:开源竞争、用户增长与“记忆”功能

📝 本节摘要:
主持人 Chris 提及竞争对手 DeepSeek 的崛起,引发了关于开源与闭源模型的激烈讨论。Sam 确认 OpenAI 将推出强力的开源模型,但强调在算力紧缺的当下,未来的竞争核心将从单纯的“模型智能”转向综合性的“产品体验”。Sam 透露 ChatGPT 的周活跃用户已达 5 亿,并深入解析了新推出的“记忆(Memory)”功能——AI 正从一个问答工具演变为能够理解用户全貌、随时间共同成长的长期伴侣。
[原文] CA: I want to switch topics quickly. (Applause)
[译文] CA: 我想快速切换一下话题。(掌声)
[原文] The battle between your model and open source. How much were you shaken up by the arrival of DeepSeek?
[译文] 关于你们的模型与开源之间的战役。DeepSeek 的到来对你有多大的震动?
[原文] SA: I think open source has an important place.
[译文] SA: 我认为开源占有重要的地位。
[原文] We actually, just last night, hosted our first community session to kind of decide the parameters of our open-source model and how we want to shape it.
[译文] 实际上,就在昨晚,我们举办了第一场社区会议,来决定我们开源模型的参数以及我们希望如何塑造它。
[原文] We're going to do a very powerful open-source model. I think this is important.
[译文] 我们将要推出一个非常强大的开源模型。我认为这很重要。
[原文] We're going to do something near the frontier, I think, better than any current open-source model out there.
[译文] 我们要做一些接近前沿的东西,我想,会比目前市面上任何开源模型都要好。
[原文] This will not be all -- there will be people who will use this in ways that some people in this room, maybe you or I, don’t like.
[译文] 这还不是全部——将会有人以这屋子里的一些人,也许是你或我,不喜欢的方式去使用它。
[原文] But there is going to be an important place for open-source models as part of the constellation here.
[译文] 但开源模型作为这里星系(生态系统)的一部分,将会占有重要的位置。
[原文] And, you know, I think we were late to act on that, but we're going to do it really well now.
[译文] 而且,你知道,我认为我们在这一点上行动迟缓了,但我们现在会把它做得很好。
[原文] CA: I mean, you're spending it seems, like an order, or even orders of magnitude more than DeepSeek allegedly spent, although I know there's controversy around that.
[译文] CA: 我的意思是,你们的投入似乎比 DeepSeek 据称的投入要多出一个数量级,甚至几个数量级,虽然我知道这方面存在争议。
[原文] Are you confident that the actual better model is going to be recognized?
[译文] 你有信心真正更好的模型会被认可吗?
[原文] Or are you actually like, isn't this in some ways life-threatening to the notion that, yeah, by going to massive scale, tens of billions of dollars of investment, we can maintain an incredible lead.
[译文] 或者你其实觉得,这在某种程度上难道不是对那种观念——即“通过大规模扩展、数百亿美元的投资,我们就能保持惊人的领先优势”——构成了致命威胁吗?
[原文] SA: All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained.
[译文] SA: 我整天都在给人们打电话,乞求他们把 GPU 给我们。我们的资源极其受限。
[原文] Our growth is going like this. DeepSeek launched, and it didn’t seem to impact it.
[译文] 我们的增长曲线是这样的(手势)。DeepSeek 发布了,但这似乎并没有影响到它。
[原文] There's other stuff that's happening.
[译文] 还有其他事情正在发生。
[原文] CA: Tell us about the growth, actually. You gave me a shocking number backstage there.
[译文] CA: 实际上,给我们讲讲增长情况吧。你在后台告诉了我一个令人震惊的数字。
[原文] SA: I have never seen growth in any company, one that I've been involved with or not, like this, like the growth of ChatGPT.
[译文] SA: 我从未在任何公司见过这样的增长,无论是我参与过的还是没参与过的,就像 ChatGPT 的这种增长。
[原文] It's really fun. I feel like great, deeply honored.
[译文] 这真的很有趣。我觉得非常棒,深感荣幸。
[原文] But it is crazy to live through, and our teams are exhausted and stressed. And we’re trying to keep things up.
[译文] 但经历这一切也是疯狂的,我们的团队精疲力竭、压力山大。我们在努力维持运转。
[原文] CA: How many users do you have now?
[译文] CA: 你们现在有多少用户?
[原文] SA: I think the last time we said was 500 million weekly actives, and it is growing very rapidly.
[译文] SA: 我想我们上次公布的是 5 亿周活跃用户,而且增长非常迅速。
[原文] CA: I mean, you told me that like doubled in just a few weeks. Like in terms of compute or in terms of ...
[译文] CA: 我的意思是,你告诉我这好像在短短几周内就翻了一番。是指算力方面还是指……
[原文] SA: I said that privately, but I guess ...
[译文] SA: 那是我私下说的,但我猜……
[原文] CA: Oh. (Laughter) I misremembered, Sam, I'm sorry. We can edit that out of the thing if you really want to.
[译文] CA: 噢。(笑声)我记错了,Sam,我很抱歉。如果你真的想的话,我们可以把这一段剪掉。
[原文] And no one here would tweet it.
[译文] 而且这里没人会发推特泄露出去的。
[原文] SA: It's growing very fast. (Laughter)
[译文] SA: 它增长得非常快。(笑声)
[原文] CA: So you're confident, you're seeing it grow, take off like a rocket ship, you're releasing incredible new models all the time.
[译文] CA: 所以你很自信,你看着它增长,像火箭一样起飞,你们一直在发布令人难以置信的新模型。
[原文] What are you seeing in your best internal models right now that you haven't yet shared with the world but you would love to here on this stage?
[译文] 你现在在你们最好的内部模型中看到了什么还没与世界分享、但你很乐意在这个舞台上分享的东西?
[原文] SA: So first of all, you asked about, are we worried about this model or that model?
[译文] SA: 首先,你问到我们是否担心这个模型或那个模型?
[原文] There will be a lot of intelligent models in the world. Very smart models will be commoditized to some degree.
[译文] 世界上将会有很多智能模型。非常聪明的模型在某种程度上将会变得商品化(同质化)。
[原文] I think we’ll have the best, and for some use you'll want that.
[译文] 我认为我们会拥有最好的,而在某些用途上你会需要最好的。
[原文] But like, honestly, the models are now so smart that for most of the things most people want to do, they're good enough.
[译文] 但老实说,现在的模型已经非常聪明了,对于大多数人想要做的大多数事情来说,它们已经足够好了。
[原文] I hope that'll change over time because people will raise their expectations.
[译文] 我希望这会随着时间推移而改变,因为人们会提高他们的期望值。
[原文] But like, if you're kind of using ChatGPT as a standard user, the model capability is very smart.
[译文] 但如果你只是作为一个普通用户使用 ChatGPT,模型的能力已经非常聪明了。
[原文] But we have to build a great product, not just a great model.
[译文] 但我们要打造的是一个伟大的产品,而不仅仅是一个伟大的模型。
[原文] And so there will be a lot of people with great models, and we will try to build the best product.
[译文] 所以,将来会有很多人拥有伟大的模型,而我们要努力打造最好的产品。
[原文] And people want their image-gen, you know, you saw some Sora examples for video earlier. They want to integrate it with all their stuff.
[译文] 人们想要他们的图像生成,你知道,你之前看到了一些 Sora 的视频示例。他们想把它与他们所有的东西整合在一起。
[原文] We just launched a new feature, it's still called Memory, but it's way better than the Memory before, where this model will get to know you over the course of your lifetime.
[译文] 我们刚刚推出了一个新功能,它仍然叫“记忆(Memory)”,但比以前的“记忆”要好得多,这个模型将在你的一生中逐渐了解你。
[原文] And we have a lot more stuff to come to build like this great integrated product.
[译文] 我们还有很多东西要推出,来构建这样一个伟大的综合性产品。
[原文] And, you know, I think people will stick with that.
[译文] 而且,我认为人们会因此而留存下来。
[原文] So there will be many models, but I think we will, I hope, continue to focus on building the best defining product in this space.
[译文] 所以会有很多模型,但我认为我们将——我希望——继续专注于构建这个领域内最好的、具有定义性的产品。
[原文] CA: I mean after I saw your announcement yesterday that ChatGPT will know all of your query history, I entered, "Tell me about me, ChatGPT, from all you know."
[译文] CA: 我的意思是,昨天看到你们宣布 ChatGPT 将知晓你所有的查询历史后,我输入了:“ChatGPT,根据你所知道的一切,告诉我关于我的事。”
[原文] And my jaw dropped, Sam, it was shocking. It knew who I was and all these sort of interests that hopefully mostly were pretty much appropriate and shareable.
[译文] 我下巴都惊掉了,Sam,太令人震惊了。它知道我是谁,以及所有这类兴趣爱好——希望大部分都是比较得体且可以分享的。
[原文] But it was astonishing. And I felt the sense of real excitement, a little bit queasy, but mainly excitement, actually, at how much more that would allow it to be useful to me.
[译文] 但这太惊人了。我感到一种真正的兴奋感,有一点点反胃(不安),但实际上主要是兴奋,因为它能因此对我变得更有用。
[原文] SA: One of our researchers tweeted, you know, kind of like yesterday or this morning, that the upload happens bit by bit.
[译文] SA: 我们的一位研究员发推特说,就在昨天还是今早,说这种“上传”是一点一点发生的。
[原文] It’s not you know, that you plug your brain in one day.
[译文] 并不是说有一天你突然把大脑插上去。
[原文] But you will talk to ChatGPT over the course of your life and some day, maybe if you want, it'll be listening to you throughout the day and sort of observing what you're doing,
[译文] 而是你会在你的一生中与 ChatGPT 交谈,有一天,也许如果你愿意的话,它会整天倾听你的声音,观察你在做什么。
[原文] and it'll get to know you and it'll become this extension of yourself, this companion, this thing that just tries to, like, help you be the best, do the best you can.
[译文] 它会逐渐了解你,它会成为你自己的延伸,成为你的伴侣,成为那个只是试图帮助你成为最好的自己、做到最好的东西。
[原文] CA: In the movie "Her," the AI basically announces that she's read all of his emails and decided he's a great writer and you know, persuades a publisher to publish him. That might be coming sooner than we think.
[译文] CA: 在电影《她》(Her)中,AI 基本上就是宣称她读了他所有的邮件,认定他是个伟大的作家,然后说服出版商出版他的作品。这可能比我们想象的来得更快。
[原文] SA: I don't think it will happen exactly like that, but yeah, I think something in the direction where AI -- you don’t have to just, like, go to ChatGPT or whatever and say, I have a question, give me an answer.
[译文] SA: 我不认为事情会完全像那样发生,但是是的,我认为会朝着那个方向发展,即 AI——你不必只是去 ChatGPT 或其他地方说,我有个问题,给我个答案。
[原文] But you're getting like, proactively pushed things that help you, that make you better or whatever. That does seem like it's soon.
[译文] 而是你会获得那种主动推送的、能帮助你、让你变得更好之类的东西。这看起来确实很快就会实现。
### 章节 4:AI for Science、代理式软件工程与安全部署

📝 本节摘要:
Sam 预测,AI 下一阶段最令人兴奋的突破将发生在“科学发现”(如物理学与疾病攻克)与“代理式软件工程”(Agentic Software Engineering)领域。当被问及内部是否有“令人恐惧”的时刻时,Sam 否认了关于“拥有秘密意识模型”的传言,但坦承了对生物恐怖主义、网络安全及失控风险的担忧。针对安全团队成员离职的争议,他辩护了 OpenAI 的“迭代部署”(iterative deployment)策略——即在风险尚低时通过实际应用来学习安全边界,而非闭门造车。
[原文] CA: So what have you seen that's coming up, internally, that you think is going to blow people's minds? Give us at least a hint of what the next big jaw dropper is.
[译文] CA: 那么你在内部看到了什么即将到来的、你认为会让人大吃一惊的东西?至少给我们一点提示,下一个让人惊掉下巴的大动作是什么。
[原文] SA: The thing that I'm personally most excited about is AI for science at this point.
[译文] SA: 目前我个人最兴奋的是用于科学领域的 AI(AI for science)。
[原文] I am a big believer that the most important driver of the world and people's lives getting better and better is new scientific discovery.
[译文] 我坚信,推动世界进步和人们生活日益美好的最重要动力是新的科学发现。
[原文] We can do more things with less, we sort of push back the frontier of what's possible.
[译文] 我们可以事半功倍,我们在某种程度上推展了可能性的边界。
[原文] We're starting to hear a lot from scientists with our latest models that they're actually just more productive than they were before.
[译文] 我们开始从科学家那里听到很多反馈,使用我们最新的模型,他们的效率确实比以前更高了。
[原文] That's actually mattering to what they can discover.
[译文] 这实际上对他们能发现什么有着重要影响。
[原文] CA: What’s the plausible near-term discovery, like, room temperature --
[译文] CA: 近期有什么可能的发现,比如,室温——
[原文] SA: Superconductors?
[译文] SA: 超导体?
[原文] CA: Superconducting, yeah. Is that possible?
[译文] CA: 超导,是的。这可能吗?
[原文] SA: I don't think that's prevented by the laws of physics. So it should be possible. But we don't know for sure.
[译文] SA: 我认为物理定律并没有排除这种可能性。所以它应该是可能的。但我们还不能确定。
[原文] I think you'll start to see some ... meaningful progress against disease with AI-assisted tools.
[译文] 我想你会开始看到……利用 AI 辅助工具在对抗疾病方面取得一些有意义的进展。
[原文] You know, physics maybe takes a little bit longer, but I hope for it.
[译文] 你知道,物理学可能需要更长一点的时间,但我对此抱有希望。
[原文] So that's like, one direction. Another that I think is big is starting pretty soon, like in the coming months.
[译文] 所以那是其中一个方向。我认为另一个即将在未来几个月内开始的重大方向是——
[原文] Software development has already been pretty transformed.
[译文] 软件开发已经被彻底改变了。
[原文] Like it’s quite amazing how different the process of creating software is now than it was two years ago.
[译文] 现在的软件开发过程与两年前相比截然不同,这简直令人惊叹。
[原文] But I expect like another move that big in the coming months as agentic software engineering really starts to happen.
[译文] 但我预计在未来几个月里,随着“代理式软件工程”(agentic software engineering)真正开始发生,会有另一次如此巨大的飞跃。
[原文] CA: I've heard engineers say that they've had almost like religious-like moments with some of the new models where suddenly, they can do in an afternoon what would have taken them two years.
[译文] CA: 我听工程师们说过,他们在使用一些新模型时,有过几乎像宗教般的体验时刻,突然之间,他们可以在一个下午完成过去需要两年才能做完的事情。
[原文] SA: Yeah, it's like mind -- it really like, that’s been one of my big “feel the AGI” moments.
[译文] SA: 是的,这就像——这真的就像是我那些“感受到通用人工智能(AGI)”的重大时刻之一。
[原文] CA: But talk about what is the scariest thing that you've seen.
[译文] CA: 但谈谈你见过的最可怕的事情是什么。
[原文] Because like, outside, a lot of people picture you as, you know, you have access to this stuff.
[译文] 因为,在外界看来,很多人把你想象成——你知道,你能接触到这些东西。
[原文] And we hear all these rumors coming out of AI, and it's like, "Oh my God, they've seen consciousness," or "They've seen AGI," or "They've seen some kind of apocalypse coming."
[译文] 我们听到所有这些关于 AI 的传言,就像是,“天哪,他们看到了意识,”或者“他们看到了 AGI,”又或者“他们看到了某种即将到来的世界末日。”
[原文] Have you seen, has there been a scary moment when you've seen something internally and thought, "Uh oh, we need to pay attention to this?"
[译文] 你有没有见过——有没有过一个可怕的时刻,当你在内部看到某样东西时心想,“噢不,我们必须注意这个了?”
[原文] SA: There have been like moments of awe. And I think with that is always like, how far is this going to go? What is this going to be?
[译文] SA: 有过那种令人敬畏的时刻。而在那种时刻,伴随而来的总是想:这会发展到什么地步?这会变成什么样?
[原文] But there's no like, we don't secretly have, we're not secretly sitting on a conscious model or something that's capable of self-improvement or anything like that.
[译文] 但并没有那种——我们并没有秘密拥有、也没有秘密地雪藏一个有意识的模型,或者具备自我进化能力之类的东西。
[原文] You know, I ... people have very different views of what the big AI risks are going to be.
[译文] 你知道,我……对于 AI 的重大风险究竟是什么,人们有着非常不同的看法。
[原文] And I myself have like evolved on thinking about where we're going to see those.
[译文] 我自己关于我们将在哪里看到这些风险的思考也在不断演变。
[原文] I continue to believe there will come very powerful models that people can misuse in big ways.
[译文] 我继续相信,将会出现非常强大的模型,人们可能会以严重的方式滥用它们。
[原文] People talk a lot about the potential for new kinds of bioterror, models that can present like a real cybersecurity challenge, models that are capable of self-improvement in a way that leads to some sort of loss of control.
[译文] 人们经常谈论新型生物恐怖主义的潜力、可能带来真正网络安全挑战的模型,以及具备自我进化能力从而导致某种失控的模型。
[原文] So I think there are big risks there.
[译文] 所以我认为那里存在巨大的风险。
[原文] And then there's a lot of other stuff, which honestly is kind of what I think, many people mean, where people talk about disinformation or models saying things that they don't like or things like that.
[译文] 还有很多其他的事情,老实说,我认为这其实是很多人指的——当人们谈论虚假信息,或者模型说出他们不喜欢的话之类的事情。
[原文] CA: Sticking with the first of those, do you check for that internally before release?
[译文] CA: 针对第一类风险,你们在发布前会在内部进行检查吗?
[原文] SA: Of course, yeah. So we have this preparedness framework that outlines how we do that.
[译文] SA: 当然,是的。我们有一个“准备框架”(preparedness framework),概述了我们如何进行这项工作。
[原文] CA: I mean, you've had some departures from your safety team. How many people have departed, why have they left?
[译文] CA: 我的意思是,你们的安全团队有一些人离职了。有多少人离开了,他们为什么离开?
[原文] SA: We have, I don't know the exact number, but there are clearly different views about AI safety systems.
[译文] SA: 我们确实有人离开,我不清楚确切的数字,但关于 AI 安全系统显然存在不同的观点。
[原文] I would really point to our track record. There are people who will say all sorts of things.
[译文] 我真的想指出我们的过往记录(track record)。有些人会说各种各样的话。
[原文] You know, something like 10 percent of the world uses our systems now a lot. And we are very proud of the safety track record.
[译文] 你知道,现在全世界大约有 10% 的人频繁使用我们的系统。我们对这个安全记录感到非常自豪。
[原文] CA: But track record isn't the issue in a way --
[译文] CA: 但从某种意义上说,过往记录并不是问题的关键——
[原文] SA: No, it kind of is.
[译文] SA: 不,这某种程度上就是关键。
[原文] CA: Because we're talking about an exponentially growing power where we fear that we may wake up one day and the world is ending.
[译文] CA: 因为我们谈论的是一种呈指数级增长的力量,我们担心也许有一天醒来,世界就末日了。
[原文] So it's really not about track record, it's about plausibly saying that the pieces are in place to shut things down quickly if we see a danger.
[译文] 所以这真的不在于过往记录,而在于能否令人信服地说明,如果看到危险,我们有适当的机制可以迅速关闭系统。
[原文] SA: Yeah, no, of course, of course that's important.
[译文] SA: 是的,不,当然,那当然很重要。
[原文] You don't, like, wake up one day and say, "Hey, we didn't have any safety process in place. Now we think the model is really smart. So now we have to care about safety."
[译文] 你不能有一天醒来突然说,“嘿,我们没有任何安全流程。现在我们觉得模型真的很聪明了,所以现在我们必须关心安全了。”
[原文] You have to care about it all along this exponential curve.
[译文] 你必须在这条指数曲线的整个过程中一直关心它。
[原文] Of course the stakes increase, and there are big challenges.
[译文] 当然,赌注(风险)会增加,挑战也会变大。
[原文] But the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback, while the stakes are relatively low, learning about like, this is something we have to address.
[译文] 但我们学习如何构建安全系统的方法,是这种将它们部署到世界、获取反馈的迭代过程——趁着风险相对较低的时候,学习比如“这是我们必须解决的问题”。
### 章节 5:AGI 的界定与代理式 AI 的风险

📝 本节摘要:
Chris 追问为何 ChatGPT 还不能被称为 AGI(通用人工智能),Sam 指出目前的模型尚无法“持续自我学习”或独立完成复杂的长程工作任务。他用“10 个研究员有 14 种定义”的笑话说明了界定 AGI 的困难,并主张应将关注点从寻找某个具体的“AGI 时刻”,转移到认知这一不可阻挡的“指数级增长曲线”上来。随后,话题转向具备自主行动能力的“代理式 AI”(Agentic AI),Chris 以 OpenAI 的新工具“Operator”为例,表达了对 AI 自主操控互联网的担忧。Sam 承认这是目前面临的最具后果性的安全挑战,并提出在代理时代,“安全性”与“产品能力”实际上已合二为一——不安全的产品根本无法被使用。
[原文] CA: So let's talk about agentic systems and the relationship between that and AGI.
[译文] CA: 那么让我们来谈谈代理系统(agentic systems),以及它与 AGI(通用人工智能)之间的关系。
[原文] I think there's confusion out there, I'm confused.
[译文] 我觉得外面存在困惑,我也很困惑。
[原文] So artificial general intelligence, it feels like ChatGPT is already a general intelligence.
[译文] 所谓的通用人工智能,感觉 ChatGPT 已经是一个通用智能了。
[原文] I can ask it about anything, and it comes back with an intelligent answer. Why isn't that AGI?
[译文] 我可以问它任何事情,它都能给出一个智能的回答。为什么那不算是 AGI?
[原文] SA: It doesn't ... First of all, you can't ask it anything.
[译文] SA: 它并不……首先,你不能问它“任何”事情。
[原文] That's very nice of you to say, but there's a lot of things that it's still embarrassingly bad at.
[译文] 你这么说很客气,但在很多事情上它仍然表现得糟糕透顶。
[原文] But even if we fixed those, which hopefully we will, it doesn't continuously learn and improve.
[译文] 但即使我们修复了那些问题——希望我们能做到——它也无法持续地学习和改进。
[原文] It can't go get better at something that it's currently weak at.
[译文] 它无法在它目前薄弱的方面自我提升。
[原文] It can't go discover new science and update its understanding and do that.
[译文] 它无法去发现新科学并更新它的理解,做不到那些。
[原文] And it also kind of can't, even if we lower the bar, it can't just sort of do any knowledge work you could do in front of a computer.
[译文] 而且它也几乎无法——即便我们降低标准——它无法完成你坐在电脑前能做的任何脑力工作(knowledge work)。
[原文] I actually, even without the sort of ability to get better at something it doesn't know yet, I might accept that as a definition of AGI.
[译文] 实际上,即使没有那种“在未知领域自我提升”的能力,我可能也会接受那个(能做所有脑力工作)作为 AGI 的定义。
[原文] But the current systems, you can't say like, hey, go do this task for my job, and it goes off and clicks around the internet and calls someone and looks at your files and does it.
[译文] 但目前的系统,你不能说,“嘿,去为我的工作完成这个任务”,然后它就跑去在互联网上点击一通、给某人打电话、查看你的文件并把事情做完。
[原文] And without that, it feels definitely short of it.
[译文] 如果做不到这一点,感觉它绝对还达不到 AGI 的标准。
[原文] CA: I mean, do you guys have internally a clear definition of what AGI is, and when do you think that we may be there?
[译文] CA: 我的意思是,你们内部对什么是 AGI 有清晰的定义吗?以及你认为我们何时能达到那个阶段?
[原文] SA: It's like the joke, if you’ve got 10 OpenAI researchers in a room and ask to define AGI, you’d get 14 definitions.
[译文] SA: 这就像那个笑话:如果你把 10 个 OpenAI 研究员关在一个房间里,让他们定义 AGI,你会得到 14 个定义。
[原文] CA: That's worrying, though, isn't it? Because that has been the mission initially, “We’re going to be the first to get to AGI. We'll do so safely. But we don't have a clear definition of what it is."
[译文] CA: 但这令人担忧,不是吗?因为这最初就是你们的使命——“我们要成为第一个实现 AGI 的人。我们会安全地实现它。但我们并没有一个清晰的定义,来说明它究竟是什么。”
[原文] SA: I was going to finish the answer.
[译文] SA: 我正准备把回答说完。
[原文] CA: Sorry.
[译文] CA: 抱歉。
[原文] SA: What I think matters though, and what people want to know is not where is this one, you know, magic moment of, “We finished.”
[译文] SA: 但我认为真正重要的是,而且人们想知道的,并不是那个神奇的时刻在哪里,比如“我们完成了”。
[原文] But given that what looks like is going to happen is that the models are just going to get smarter and more capable and smarter and more capable, on this long exponential, different people will call it AGI at different points.
[译文] 鉴于目前看来将会发生的情况是,模型只会在这条漫长的指数曲线上变得更聪明、更强,再更聪明、更强,不同的人会在不同的节点称之为 AGI。
[原文] But we all agree it’s going to go way, way past that.
[译文] 但我们都同意,它将远远、远远地超越那个点。
[原文] You know, to whatever you want to call these systems that get much more capable than we are.
[译文] 你知道,无论你想怎么称呼这些变得比我们更有能力的系统。
[原文] The thing that matters is how do we talk about a system that is safe through all of these steps and beyond, as the system gets more capable than we are, as the system can do things that we don't totally understand.
[译文] 真正重要的是,我们如何讨论一个在所有这些步骤以及未来阶段都保持安全的系统,尤其是当系统变得比我们更有能力、能做我们不完全理解的事情时。
[原文] And I think more important than when is AGI coming and what's the definition of it, it's recognizing that we are in this unbelievable exponential curve.
[译文] 我认为比 AGI 何时到来以及它的定义是什么更重要的是,我们要认识到我们正处于这条不可思议的指数曲线上。
[原文] And you can, you know, say this is what I think AGI is. You can say you think this is what you think AGI is.
[译文] 你可以,你知道,说“我认为这就是 AGI”。你可以说“你认为那就是 AGI”。
[原文] Someone else can say superintelligence is out here, but we're going to have to contend and get wonderful benefits from this incredible system.
[译文] 另一个人可以说超级智能就在那里,但我们将不得不与这个不可思议的系统共存并从中获得巨大的益处。
[原文] And so I think we should shift the conversation away from what's the AGI moment to a recognition that, like, this thing is not going to stop, it's going to go way beyond what any of us would call AGI.
[译文] 所以我认为我们应该把对话的重心从“什么是 AGI 时刻”转移开,转而认识到:这东西不会停下来,它会远远超越我们任何人所谓的 AGI。
[原文] And we have to build a society to get the tremendous benefits of this and figure out how to make it safe.
[译文] 我们必须建立一个社会来获取它的巨大益处,并弄清楚如何让它变得安全。
[原文] CA: Well, one of the conversations this week has been that the real change moment is -- I mean, AGI is a fuzzy thing, but what is clear is agentic AI -- when AI is set free to pursue projects on its own and to put the pieces together -- you’ve actually, you've got a thing called Operator which starts to do this.
[译文] CA: 嗯,这周的一个话题是,真正的变革时刻是——我是说,AGI 是个模糊的概念,但清晰的是代理式 AI(agentic AI)——当 AI 被释放去独自追求项目并将各个部分整合在一起时——你们实际上有一个叫 Operator 的东西已经开始做这个了。
[原文] And I tried it out. You know, I wanted to book a restaurant, and it's kind of incredible.
[译文] 我试用了一下。你知道,我想订一家餐厅,它有点不可思议。
[原文] It kind of can go ahead and do it, but this is what it said. You know, it was an intriguing process.
[译文] 它某种程度上可以直接去把事办了,但它是这么说的。你知道,这是个有趣的过程。
[原文] And, you know, “Give me your credit card” and everything else, and I declined on this case to go forward.
[译文] 比如,“把你的信用卡给我”之类的,而在这种情况下我拒绝了继续。
[原文] But I think this is the challenge that people are going to have. It's kind of like, it's an incredible superpower. It's a little bit scary.
[译文] 但我认为这就是人们将要面临的挑战。它有点像是一种不可思议的超能力,但也有一点可怕。
[原文] And Yoshua Bengio, when he spoke here, said that agentic AI is the thing to pay attention to.
[译文] Yoshua Bengio 在这里演讲时也说,代理式 AI 是值得关注的东西。
[原文] This is when everything could go wrong as we give power to AI to go out onto the internet to do stuff.
[译文] 当我们赋予 AI 权力去互联网上做事时,这就是一切可能出错的时候。
[原文] I mean, going out onto the internet was always, in the sci-fi stories, the moment where, you know, escape happened and potential -- things could go horribly wrong.
[译文] 我的意思是,在科幻故事里,“进入互联网”总是那个发生逃逸、潜在的——事情可能变得极其糟糕的时刻。
[原文] How do you both release agentic AI and have guardrails in place that it doesn't go too far?
[译文] 你如何既发布代理式 AI,又设置好护栏以防它做得过火?
[原文] SA: First of all, obviously you can choose not to do this and say, I don't want this. I'm going to call the restaurant and read them my credit card over the phone.
[译文] SA: 首先,显然你可以选择不这样做,说“我不想要这个。我要给餐厅打电话,在电话里念我的信用卡号。”
[原文] CA: I could choose, but someone else might say, “Oh, go out, ChatGPT onto the internet at large and rewrite the internet to make it better for humans,” or whatever.
[译文] CA: 我可以选择,但别人可能会说,“噢,去吧 ChatGPT,进入广阔的互联网,重写互联网让它对人类更美好,”或者别的什么。
[原文] SA: The point I was going to make is just with any new technology, it takes a while for people to get comfortable.
[译文] SA: 我想表达的观点是,对于任何新技术,人们都需要一段时间来适应。
[原文] I remember when I wouldn't put my credit card on the internet because my parents had convinced me someone was going to read the number, and you had to fill out the form and then call them.
[译文] 我记得以前我不敢在互联网上输入信用卡号,因为我父母让我确信有人会窃取那个号码,你得填好表格然后给他们打电话。
[原文] And then we kind of all said, OK, we’ll build anti-fraud systems, and we can get comfortable with this.
[译文] 后来我们大家都说,好吧,我们会建立反欺诈系统,我们可以适应这个。
[原文] I think people are going to be slow to get comfortable with agentic AI in many ways.
[译文] 我认为人们在很多方面适应代理式 AI 的速度会比较慢。
[原文] But I also really agree with what you said, which is that even if some people are comfortable with it and some aren't, we are going to have AI systems clicking around the internet.
[译文] 但我也非常同意你所说的,那就是即便有些人适应、有些人不适应,我们终将拥有在互联网上四处点击操作的 AI 系统。
[原文] And this is, I think, the most interesting and consequential safety challenge we have yet faced.
[译文] 我认为,这是我们迄今为止面临的最有趣、也是后果最重大的安全挑战。
[原文] Because AI that you give access to your systems, your information, your ability to click around on your computer, now, those, you know, when AI makes a mistake, it's much higher stakes.
[译文] 因为如果你让 AI 访问你的系统、你的信息,赋予它在你的电脑上点击操作的能力,那么,你知道,当 AI 犯错时,代价要高得多。
[原文] It is the gate on -- so we talked earlier about safety and capability. I kind of think they're increasingly becoming one-dimensional.
[译文] 这是一个门槛——所以我们之前谈到了安全性和能力。我倾向于认为它们正日益变成同一个维度(one-dimensional)。
[原文] Like a good product is a safe product.
[译文] 比如,一个好的产品就是一个安全的产品。
[原文] You will not use our agents if you do not trust that they’re not going to like empty your bank account or delete your data or who knows what else.
[译文] 如果你不相信我们的代理体(agents)不会清空你的银行账户,或者删除你的数据,或是天知道别的什么事,你就不会使用它。
[原文] And so people want to use agents that they can really trust, that are really safe.
[译文] 所以人们想要使用他们真正能信任的、真正安全的代理体。
[原文] And I think we are gated on our ability to make progress on our ability to do that. But it's a fundamental part of the product.
[译文] 我认为我们能否取得进展,取决于我们是否有能力做到这一点。但这是产品的一个基础组成部分。
[原文] CA: In a world where agency is out there and say that, you know, maybe it’s open models are widely distributed and someone says, "OK, AGI, I want you to go out onto the internet and, you know, spread a meme however you can that X people are evil,” or whatever it is.
[译文] CA: 在一个代理能力已经普及的世界里,比如说,也许开源模型被广泛分发,有人说,“好的,AGI,我要你进入互联网,尽你所能去传播一个‘X 这群人是邪恶的’这样的迷因(meme),”或者不管是什么。
[原文] It doesn't have to be an individual choice. A single person could let that agent out there, and the agent could decide, "Well, in order to execute on that function, I've got to copy myself everywhere," and, you know.
[译文] 这不必是个人的选择。一个人就可以把那个代理体释放出去,而那个代理体可能会决定,“好吧,为了执行那个功能,我必须把自己复制得到处都是,”以此类推。
[原文] Are there red lines that you have clearly drawn internally, where you know what the danger moments are, and that we cannot put out something that could go beyond this?
[译文] 你们内部有没有划定清晰的红线,让你们知道哪些是危险时刻,知道我们不能发布可能越过这条线的东西?
[原文] SA: Yeah, so this is the purpose of our preparedness framework.
[译文] SA: 是的,这就是我们“准备框架”(preparedness framework)的目的。
[原文] And we'll update that over time.
[译文] 我们会随着时间的推移更新它。
[原文] But we’ve tried to outline where we think the most important danger moments are, or what the categories are, how we measure that, and how we would mitigate something before releasing it.
[译文] 但我们已经尝试概述了我们认为最重要的危险时刻在哪里,或者类别有哪些,我们如何衡量它,以及我们在发布之前如何减轻风险。
📝 本节摘要:
对话的氛围在这一章变得极具张力。Chris 首先引用了 o1-pro 模型提出的一个“最尖锐问题”:谁赋予了 Sam 重塑人类命运的道德权威?随后,Chris 抛出了外界关于 Sam 的“双重叙事”——他究竟是改变世界的远见者,还是正被财富与权力(Elon Musk 所谓的“至尊魔戒”)腐蚀的科技巨头?Sam 坦然回应了关于商业化转型的争议及个人对权力的感受。章节最后,话题转向家庭,Sam 动情地分享了初为人父的体验,这种生物学上的深刻情感让他对“不毁灭世界”有了更具体的责任感。
[原文] SA: I can tell from the conversation you're not a big AI fan.
[译文] SA: 从对话中我可以看出来,你并不是一个狂热的 AI 粉丝。
[原文] CA: Actually, on the contrary, I use it every day. I'm awed by it.
[译文] CA: 实际上恰恰相反,我每天都用它。我对它充满敬畏。
[原文] I think this is an incredible time to be alive. I wouldn't be alive any other time, and I cannot wait to see where it goes.
[译文] 我认为这是一个令人难以置信的生存时代。我不想生活在其他任何时代,我也迫不及待地想看看它会走向何方。
[原文] We've been holding ... I think it's essential to hold ... like we can’t divide people into those camps.
[译文] 我们一直持有……我认为这是至关重要的……我们不能把人简单地划分成那些阵营。
[原文] You have to hold a passionate belief in the possibility, but not be overseduced by it because things could go horribly wrong.
[译文] 你必须对这种可能性怀有充满激情的信念,但又不能被它过度诱惑,因为事情可能会变得极其糟糕。
[原文] SA: What I was going to say is I totally understand that.
[译文] SA: 我想说的是,我完全理解这一点。
[原文] I totally understand looking at this and saying this is an unbelievable change coming to the world.
[译文] 我完全理解看着这一切并说“这是世界即将面临的不可思议的巨变”。
[原文] And, you know, maybe I don't want this, or maybe I love parts of it.
[译文] 而且,你知道,也许我不想要这个,或者也许我喜欢它的某些部分。
[原文] Maybe I love talking to ChatGPT, but I worry about what's going to happen to art, and I worry about the pace of change, and I worry about these agents clicking around the internet.
[译文] 也许我喜欢和 ChatGPT 聊天,但我担心艺术会发生什么变化,我担心变革的速度,我担心这些在互联网上四处点击的代理体。
[原文] And maybe, on balance, I wish this weren't happening. Or maybe I wish it were happening a little slower.
[译文] 也许,总的来说,我希望这没有发生。或者也许我希望它发生得慢一点。
[原文] Or maybe I wish it were happening in a way where I could pick and choose what parts of progress were going to happen.
[译文] 又或者我希望它发生的方式能让我挑选哪些进步会发生。
[原文] And I think, the fear is totally rational. Sort of, the anxiety is totally rational. We all have a lot of it, too.
[译文] 我认为,这种恐惧是完全理性的。那种焦虑也是完全理性的。我们也都有很多这样的情绪。
[原文] But ... A, there will be tremendous upside. Obviously, you know, you use it every day, you like it.
[译文] 但是……第一,会有巨大的上行空间(好处)。显然,你知道,你每天都在用,你喜欢它。
[原文] B ... I really believe that society figures out, over time, with some big mistakes along the way, how to get technology right.
[译文] 第二……我真的相信社会随着时间的推移,虽然沿途会犯一些大错,但最终会弄清楚如何正确地利用技术。
[原文] And C, this is going to happen. This is like a discovery of fundamental physics that the world now knows about.
[译文] 第三,这终究会发生。这就像是一个世界已经知晓的基础物理学发现。
[原文] And it's going to be part of our world. And I think this conversation is really important.
[译文] 它将成为我们世界的一部分。所以我认为这场对话非常重要。
[原文] I think talking about these areas of danger [is] really important to talk about. New economic models are really important.
[译文] 我认为讨论这些危险领域真的很重要。新的经济模型真的很重要。
[原文] But we have to embrace this with, like, caution but not fear, or we will get run by other people that use AI to do better.
[译文] 但我们必须带着谨慎而非恐惧去拥抱它,否则我们会被其他利用 AI 做得更好的人甩在身后。
[原文] CA: You've actually been one of the most eloquent proponents of safety. You testified in the Senate.
[译文] CA: 实际上你一直是最有说服力的安全倡导者之一。你在参议院作证过。
[原文] I think you said basically that we should form a new safety agency that licenses any effort, i.e., it will refuse to license certain efforts.
[译文] 我记得你基本上说过,我们要成立一个新的安全机构来对任何项目进行许可,也就是说,它将拒绝许可某些项目。
[原文] Do you still believe in that policy proposal?
[译文] 你还相信那个政策提议吗?
[原文] SA: I have learned more about how the government works. I don't think this is quite the right policy proposal.
[译文] SA: 我对政府的运作方式有了更多了解。我不认为这是个完全正确的政策提议。
[原文] CA: What is the right policy proposal?
[译文] CA: 那正确的政策提议是什么?
[原文] SA: But, I do think the idea that as these systems get more advanced and have legitimate global impact, we need some way, you know, maybe the companies themselves put together the right framework or the right sort of model for this, but we need some way that very advanced models have external safety testing.
[译文] SA: 但是,我确实认为,随着这些系统变得更先进并产生合理的全球影响,我们需要某种方式——也许是公司自己制定正确的框架或模式——我们需要某种方式让非常先进的模型接受外部安全测试。
[原文] And we understand when we get close to some of these danger zones. I very much still believe in that.
[译文] 并且我们要明白何时会接近这些危险区域。我仍然非常坚信这一点。
[原文] CA: So Sam, I asked your o1-pro reasoning model, which is incredibly --
[译文] CA: Sam,我问了你们的 o1-pro 推理模型,它令人难以置信——
[原文] SA: Thank you for the 200 dollars.
[译文] SA: 谢谢你付的那 200 美元(订阅费)。
[原文] CA: (Laughs) 200 dollars a month. It's a bargain at the price.
[译文] CA: (笑)一个月 200 美元。以这个价格来说很划算了。
[原文] I said, what is the single most penetrating question I could ask you? It thought about it for two minutes.
[译文] 我问它,我能问你的最一针见血的一个问题是什么?它思考了两分钟。
[原文] You want to see the question? "Sam, given that you're helping create technology that could reshape the destiny of our entire species, who granted you (or anyone) the moral authority to do that?”
[译文] 你想看看这个问题吗?“Sam,鉴于你正在帮助创造可能重塑我们整个物种命运的技术,是谁赋予了你(或任何人)这样做的道德权威?”
[原文] "And how are you personally accountable if you're wrong?"
[译文] “如果你错了,你个人如何负责?”
[原文] SA: No, it was good. You've been asking me versions of this for the last half hour. What do you think?
[译文] SA: 不,这问题很好。过去半小时你一直在问我这个问题的各种版本。你怎么看?
[原文] CA: What I would say is this. Here's my version of that question.
[译文] CA: 我想说的是。这是我那个问题的版本。
[原文] There are two narratives about you out there. One is, you know, you are this incredible visionary who's done the impossible, and you shocked the world.
[译文] 外界关于你有两种叙事。一种是,你是一位不可思议的远见者,完成了不可能的事,震惊了世界。
[原文] But the other narrative is that you have shifted ground, that you've shifted from being OpenAI, this open thing, to the allure of building something super powerful.
[译文] 但另一种叙事是,你已经改变了立场,你从 OpenAI 这个开放的东西,转向了构建某种超级强大东西的诱惑。
[原文] Some people believe that you're not to be trusted in this space. I would love to know who you are. What is your narrative about yourself?
[译文] 有些人认为在这个领域你是不值得信任的。我很想知道你是谁。你自己对自己的叙事是什么?
[原文] SA: Look, I think like anyone else, I'm a nuanced character that doesn't reduce well to one dimension here.
[译文] SA: 听着,我想像其他人一样,我是一个性格微妙复杂的人,不能在这里被简化为一个维度。
[原文] In terms of OpenAI, our goal is to make AGI and distribute it, make it safe, for the broad benefit of humanity. I think by all accounts, we have done a lot in that direction.
[译文] 就 OpenAI 而言,我们的目标是制造 AGI 并分发它,确保其安全,造福广泛的人类。我认为无论从哪个角度看,我们在那个方向上都做了很多。
[原文] Clearly our tactics have shifted over time. I think we didn't really know what we were going to be when we grew up.
[译文] 显然我们的策略随着时间推移发生了转变。我觉得我们最初并不知道我们长大后会变成什么样。
[原文] But I think we've been, in terms of putting incredibly capable AI with a high degree of safety in the hands of a lot of people ... I think it'd be hard to give us a bad grade on that.
[译文] 但我认为,在将极其强大且高度安全的 AI 交到许多人手中这一点上……我认为很难在那方面给我们打低分。
[原文] I do think it's fair that we should be open sourcing more.
[译文] 我确实认为我们应该更多地开源,这是公平的。
[原文] And it is time for us to put very capable open systems out into the world.
[译文] 现在也是我们将非常有能力的开放系统推向世界的时候了。
[原文] CA: You posted this -- Well, OK, so here's the Ring of Power from "The Lord of the Rings."
[译文] CA: 你发了这个——好吧,这是《指环王》里的“至尊魔戒”(Ring of Power)。
[原文] Your rival, I will say, not your best friend at the moment, Elon Musk, claimed that, you know, he thought that you'd been corrupted by the Ring of Power.
[译文] 你的竞争对手,可以说目前不是你最好的朋友,Elon Musk,声称他认为你已经被“至尊魔戒”腐蚀了。
[原文] Does the power and the wealth make it impossible to sometimes do the right thing and you just have to cling tightly to that ring?
[译文] 权力和财富是否会让有时做正确的事变得不可能,而你只能紧紧抓住那枚戒指?
[原文] SA: How do you think I'm doing, relative to other CEOs, that have gotten a lot of power and changed how they act?
[译文] SA: 相对于其他获得了巨大权力并改变了行为方式的 CEO 们,你觉得我做得怎么样?
[原文] CA: You have a beautiful ... you are not a rude, angry person who comes out and says aggressive things to other people.
[译文] CA: 你有一种美好的……你不是那种粗鲁、愤怒、出来对别人说咄咄逼人话语的人。
[原文] SA: Sometimes I do that. That's my single vice, you know? (Laughter)
[译文] SA: 有时候我也那么做。那是我唯一的恶习,你知道吗?(笑声)
[原文] CA: I think the fear is that just the transition of OpenAI to a for-profit model, is, you know, some people say, well, there you go. You got corrupted by the desire for wealth.
[译文] CA: 我认为恐惧在于,OpenAI 向营利模式的转变,有些人会说,好吧,你看。你被对财富的渴望腐蚀了。
[原文] What does it feel like?
[译文] 那感觉究竟如何?
[原文] SA: Shockingly, the same as before. I think you can get used to anything step by step.
[译文] SA: 令人震惊的是,和以前一样。我认为你可以一步步适应任何事情。
[原文] And it's strange to be sitting here talking about this, but like, you know, the monotony of day-to-day life, which I mean in the best possible way, feels exactly the same.
[译文] 坐在这里谈论这个感觉很奇怪,但是,你知道,日常生活的单调感——我是以最好的方式来表达这个意思——感觉完全一样。
[原文] CA: This was a beautiful thing you posted, your son.
[译文] CA: 你发的这个关于你儿子的内容很美。
[原文] I mean, that last thing you said there, "I've never felt love like this," I think any parent in the room so knows that feeling.
[译文] 我是说,你在那最后说的那句,“我从未感受过这样的爱”,我想在座的任何父母都非常了解那种感觉。
[原文] And I'm wondering whether that's changed how you think about things like if, you know, say, here's a red box, here's a black box with the red button on it, you can press that button and you give your son likely the most unbelievable life, but also you inject a 10 percent chance that he gets destroyed. Do you press that button?
[译文] 我在想这是否改变了你的思考方式,比如,这里有个红盒子,或者有个带红按钮的黑盒子,按下它你能给你儿子一个可能最不可思议的生活,但也注入了 10% 他被毁灭的几率。你会按那个按钮吗?
[原文] SA: In the literal case, no.
[译文] SA: 如果是字面上的这种情况,不按。
[原文] Having a kid changed a lot of things. And by far the most amazing thing that has ever happened to me.
[译文] 有了孩子改变了很多事情。这是目前为止发生在我身上最神奇的事。
[原文] A thing my cofounder Ilya said once is... "I don't know what the meaning of life is, but for sure it has something to do with babies." And it's like, unbelievably accurate.
[译文] 我的联合创始人 Ilya 曾经说过……“我不知道人生的意义是什么,但肯定和婴儿有关。”这简直准确得令人难以置信。
[原文] But, you know, I really cared about like not destroying the world before. I really care about it now. I didn't need a kid for that part.
[译文] 不过,你知道,我以前就真的很在乎不要毁灭世界。我现在也真的很在乎。这一部分我不需要有了孩子才懂。
[原文] I mean, I definitely think more about like what the future will be like for him in particular, but I feel a responsibility to do the best thing I can for the future of everybody.
[译文] 我的意思是,我确实更多地思考未来对他个人来说会是什么样,但我感到有责任为了所有人的未来尽我所能做到最好。
📝 本节摘要:
Chris 引用 Tristan Harris 的观点,质疑“不可避免的军备竞赛”是否会将人类推向危险,并询问能否通过集体协议让技术发展“减速”。Sam 反驳了外界对“疯狂冲刺”的刻板印象,透露各大实验室之间(除一家外)其实保持着密切沟通,且行业内部常因安全考量推迟发布。
在安全哲学上,Sam 宣布了一项重大转变:OpenAI 正从少数精英主导的严苛审查,转向由广大用户和集体智慧界定的“许可”模式,特别是在言论和图像生成的边界上。访谈最后,Sam 描绘了未来的图景——他的孩子将生活在一个物质极度丰富、AI 无处不在的世界,并会像看“坏掉的 iPad”一样,带着怜悯与怀旧回顾我们这个充满局限的时代。
[原文] CA: Tristan Harris gave a very powerful talk here this week in which he said that the key problem, in his view, was that you and your peers in these other models all feel basically, that the development of advanced AI is inevitable, that the race is on, and that there is no choice but to try and win that race
[译文] CA: Tristan Harris 这周在这里发表了一场非常有力的演讲,他认为关键问题在于,你和你在这个领域的同行们基本上都觉得,先进 AI 的发展是不可避免的,竞赛已经开始,除了试图赢得这场竞赛别无选择。
[原文] and to do so as responsibly as you can. And maybe there’s a scenario where your superintelligent AI can act as a brake on everyone else's or something like that.
[译文] 并且只能尽可能负责任地去赢。也许存在一种设想,你们的超级智能 AI 可以作为其他人的刹车,或者诸如此类。
[原文] But that the very fact that everyone believes it is inevitable, that is a pathway to serious risk and instability.
[译文] 但正是因为每个人都相信这是不可避免的,这一事实本身就是通向严重风险和不稳定的路径。
[原文] Do you think that you and your peers do feel that it's inevitable, and can you see any pathway out of that where we could collectively agree to just slow things down a bit, have society as a whole weigh in a bit and say, no, you know, we don't want this to happen quite as fast.
[译文] 你认为你和你的同行们真的觉得这是不可避免的吗?你能看到任何摆脱这种局面的路径吗?比如我们可以集体达成一致,稍微慢下来一点,让整个社会更多地参与进来说,“不,你知道,我们不希望这发生得这么快。”
[原文] It's too disruptive.
[译文] “这太具有破坏性了。”
[原文] SA: First of all, I think people slow things down all the time because the technology is not ready, because something's not safe enough, because something doesn't work.
[译文] SA: 首先,我认为人们一直在放慢脚步,因为技术还没准备好,因为某些东西不够安全,或者因为某些东西无法运作。
[原文] There are, I think, all of the efforts hold on things, pause on things, delay on things, don't release certain capabilities. So I think this happens.
[译文] 我认为所有的团队都在对某些事情进行搁置、暂停、延迟,或者不发布某些能力。所以我认为这(减速)是正在发生的。
[原文] And again, this is where I think the track record does matter. If we were rushing things out and there were all sorts of problems, either the product didn’t work as people wanted it to or there were real safety issues or other things there.
[译文] 再一次,我认为这就是过往记录(track record)重要的地方。如果我们匆忙推出产品,出现各种问题,要么产品不如人们所愿,要么存在真正的安全问题或其他问题。
[原文] And I will come back to a change we made, I think you could do that. There is communication between most of the efforts, with one exception.
[译文] 我稍后会谈到一个我们做出的改变。大多数团队之间是有沟通的,只有一个例外。
[原文] I think all of the efforts care a lot about AI safety. And I think that --
[译文] 我认为所有的团队都非常关心 AI 安全。而且我认为——
[原文] CA: Who's the exception?
[译文] CA: 那个例外是谁?
[原文] SA: I'm not going to say. And I think that there's really deep care to get this right.
[译文] SA: 我不会说。我认为大家都真的很用心地想把这件事做对。
[原文] I think the caricature of this as just like this crazy race or sprint or whatever, misses the nuance of people are trying to put out models quickly and make great products for people.
[译文] 我认为把这描绘成只是某种疯狂的竞赛或短跑冲刺之类的讽刺画,忽略了其中的细微差别:人们确实在努力快速推出模型并为用户制造伟大的产品。
[原文] But people feel the impact of this so incredibly that ... you know, I think if you could go sit in a meeting in OpenAI or other companies, you'd be like, oh, these people are really kind of caring about this.
[译文] 但人们对这种影响的感受如此强烈,以至于……你知道,如果你能去参加 OpenAI 或其他公司的会议,你会觉得,“噢,这些人真的很关心这个问题。”
[原文] Now we did make a change recently to how we think about one part of what's traditionally been understood as safety.
[译文] 我们最近确实改变了对传统上被理解为安全的一部分内容的看法。
[原文] Which is, with our new image model, we've given users much more freedom on what we would traditionally think about as speech harms.
[译文] 那就是,在我们的新图像模型中,我们在传统上被认为是“言论伤害(speech harms)”的方面,给予了用户更多的自由度。
[原文] You know, if you try to get offended by the model, will the model let you be offended?
[译文] 你知道,如果你试图被模型冒犯,模型会让你被冒犯吗?
[原文] And in the past, we've had much tighter guardrails on this. But I think part of model alignment is following what the user of a model wants it to do within the very broad bounds of what society decides.
[译文] 在过去,我们对此有更严格的护栏。但我认为模型对齐(alignment)的一部分,是在社会决定的非常宽泛的界限内,遵循模型用户想要它做的事情。
[原文] So if you ask the model to depict a bunch of violence or something like that or to sort of reinforce some stereotype, there's a question of whether or not it should do that.
[译文] 所以如果你要求模型描绘一堆暴力场面之类的,或者去强化某种刻板印象,这就存在一个它是否应该那样做的问题。
[原文] And we're taking a much more permissive stance. There's a place where that starts to interact with real-world harms that we have to figure out how to draw the line for, but, you know, I think there will be cases where a company says, OK, we've heard the feedback from society.
[译文] 我们正在采取一种更加宽容的立场。当然,在这一立场与现实世界的伤害开始发生相互作用的地方,我们必须弄清楚如何划清界限。但是,你知道,我认为会有这样的情况,公司会说,“好吧,我们听到了来自社会的反馈。”
[原文] People really don't want models to censor them in ways that they don't think make sense. That's a fair safety negotiation.
[译文] “人们真的不希望模型以他们认为毫无意义的方式来审查他们。”这是一种公平的安全协商。
[原文] CA: But to the extent that this is a collective, a problem of collective belief, the solution to those kinds of problems is to bring people together and meet at one point and make a different agreement.
[译文] CA: 但如果这就某种程度而言是一个集体问题,一个集体信念的问题,那么解决这类问题的方法就是把人们聚集在一起,在一个点上会面并达成不同的协议。
[原文] If there was a group of people, say, here or out there in the world who were willing to host a summit of the best ethicists, technologists, but not too many people, small, and you and your peers to try to crack what agreed safety lines could be across the world, would you be willing to attend? Would you urge your colleagues to come?
[译文] 如果有一群人,比如在这里或世界其他地方,愿意举办一次由最好的伦理学家、技术专家组成的峰会——人数不要太多,规模小一点——让你和你的同行们尝试解决全世界公认的安全底线可能是什么,你愿意参加吗?你会敦促你的同事来吗?
[原文] SA: Of course, but I'm much more interested in what our hundreds of millions of users want as a whole.
[译文] SA: 当然,但我对我们数亿用户作为一个整体想要什么更感兴趣。
[原文] I think a lot of the room has historically been decided in small elite summits. One of the cool new things about AI is our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are, like, blessed by society to sit in a room and make these decisions. I think that's very cool.
[译文] 我认为历史上很多决策都是在小型精英峰会上决定的。关于 AI 的一件很酷的新鲜事是,我们的 AI 可以和地球上的每个人交谈,我们可以了解每个人想要的集体价值偏好,而不是让一群被社会“神圣化(blessed)”的人坐在房间里做这些决定,我认为这非常酷。
[原文] (Applause)
[译文] (掌声)
[原文] And I think you will see us do more in that direction. And when we have gotten things wrong, because the elites in the room had a different opinion about what people wanted for the guardrails on image-gen than what people actually wanted, and we couldn't point to real-world harm, so we made that change.
[译文] 我想你会看到我们在那个方向上做更多。当我们把事情搞错的时候,往往是因为房间里的精英们对“人们想要什么样的图像生成护栏”有着与人们实际想法不同的意见,而且我们又指不出(旧规则下防止的)现实世界伤害,所以我们做出了那个改变。
[原文] I'm proud of that.
[译文] 我为此感到自豪。
[原文] CA: There is a long track record of unintended consequences coming out of the actions of hundreds of millions of people.
[译文] CA: 可是数亿人的行动导致意想不到的后果(unintended consequences),这方面可是有着长长的过往记录。
[原文] SA: Also 100 people in a room making a decision.
[译文] SA: 100 个人在房间里做决定也是一样。
[原文] CA: And the hundreds of millions of people don't have control over, they don't necessarily see what the next step could lead to.
[译文] CA: 而且这数亿人并没有控制权,他们不一定能看到下一步会导致什么。
[原文] SA: I am hopeful that -- that is totally accurate and totally right -- I am hopeful that AI can help us be wiser, make better decisions, can talk to us, and if we say, hey, I want thing X, you know, rather than like, the crowds spin that up, AI can say, hey, totally understand that's what you want.
[译文] SA: 我希望能——这完全准确,完全正确——我希望 AI 能帮助我们变得更明智,做出更好的决定,能和我们交谈。如果我们说,“嘿,我想要 X 东西”,与其让群体盲目起哄,AI 可以说,“嘿,完全理解那是你想要的。”
[原文] If that's what you want at the end of this conversation, you're in control, you pick. But have you considered it from this person's perspective or the impact it will have on this?
[译文] “如果在这场对话结束时你还想要那个,控制权在你,你来选。但你有没有从这个人的角度,或者从它会对这件事产生的影响来考虑过?”
[原文] I think AI can help us be wiser and make better collective governance decisions than we could before.
[译文] 我认为 AI 可以帮助我们比以前更明智、做出更好的集体治理决策。
[原文] CA: We're out of time. Sam, I'll give you the last word. What kind of world do you believe, all things considered, your son will grow up into?
[译文] CA: 我们时间到了。Sam,最后一句话留给你。综合考虑所有因素,你认为你的儿子将在一个什么样的世界里长大?
[原文] SA: I remember -- it's so long ago now, I don't know when the first iPad came out. Is it like 15 years, something like that?
[译文] SA: 我记得——那是很久以前的事了,我不记得第一代 iPad 什么时候出的。是 15 年前吗,差不多?
[原文] I remember watching a YouTube video at the time, of like a little toddler sitting in a doctor's office waiting room or something, and there was a magazine, like one of those old, you know, glossy-cover magazines, and the toddler had his hand on it and was going like this and kind of angry.
[译文] 我记得当时看过一个 YouTube 视频,像是一个蹒跚学步的小孩坐在医生诊所的候诊室之类的地方,那里有一本杂志,就是那种老式的、你知道的,光面封面的杂志。那个小孩把手放在上面,像这样(滑动),然后有点生气。
[原文] And to that toddler, it was like a broken iPad. And he never, she never thought of a world that didn't have, you know, touch screens in them.
[译文] 对那个小孩来说,那就像是一个坏掉的 iPad。他从来,她从来没有想过一个没有触摸屏的世界。
[原文] And to all the adults watching this, it was this amazing thing because it was like it's so new, it's so amazing, it's a miracle. Of course, you know, magazines are the way the world works.
[译文] 而对于所有观看这个视频的成年人来说,这是一件神奇的事情,因为它太新奇了、太神奇了、是个奇迹。当然,(成年人觉得)杂志才是世界原本运作的方式。
[原文] My kid, my kids hopefully, will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable.
[译文] 我的孩子,我的孩子们——希望如此——将永远不会比 AI 更聪明。他们成长的世界里,产品和服务无一不是极其聪明、极其能干的。
[原文] They will never grow up in a world where computers don't just kind of understand you. And do, you know, for some definition of whatever you can imagine, whatever you can imagine.
[译文] 他们成长的世界里,电脑不仅仅是“某种程度上”理解你,而是能做到你所能想象的任何事情,任何你能想象的定义。
[原文] It'll be a world of incredible material abundance. It'll be a world where the rate of change is incredibly fast and amazing new things are happening.
[译文] 那将是一个物质极其丰富的世界。那将是一个变化速度极快、惊人的新事物不断发生的世界。
[原文] And it’ll be a world where, like individual ... ability, impact, whatever, is just so far beyond what a person can do today.
[译文] 而且在那样的世界里,个人的……能力、影响力,无论怎么说,都将远远超出今天一个人所能做到的。
[原文] I hope that my kids and all of your kids will look back at us with some like pity and nostalgia and be like, "They lived such horrible lives. They were so limited. The world sucked so much."
[译文] 我希望我的孩子和你们所有的孩子回顾我们时,会带着一些怜悯和怀旧,觉得:“他们过着如此糟糕的生活。他们如此受限。那时的世界太差劲了。”
[原文] I think that's great.
[译文] 我认为那样很好。
[原文] (Applause)
[译文] (掌声)
[原文] CA: It's incredible what you've built. It really is, it's unbelievable. I think over the next few years, you're going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history, pretty much.
[译文] CA: 你所建立的一切令人难以置信。真的是,难以置信。我认为在接下来的几年里,你将面临或许是历史上任何人类所面临过的最大的机遇、最大的道德挑战和最大的决策。
[原文] You should know that everyone here will be cheering you on to do the right thing.
[译文] 你应该知道,这里的每个人都会为你加油,希望你做正确的事。
[原文] SA: We will do our best, thank you very much.
[译文] SA: 我们会尽最大努力,非常感谢。
[原文] CA: Thank you for coming to TED. (Applause) Thank you.
[译文] CA: 感谢你来到 TED。(掌声)谢谢。
[原文] SA: Thank you very much.
[译文] SA: 非常感谢。