Shipping at Inference-Speed
### 章节 1:引言——“Vibe Coding”与推理速度的质变

📝 本节摘要:
本章回顾了自五月以来“Vibe Coding”(氛围编码)的巨大飞跃,作者指出代码生成的直接可用性已从惊喜变为常态。作者强烈反驳了“使用Agent会导致开发者与架构脱节”的观点,认为经验丰富的开发者能精准预判Agent的表现。同时,作者提出软件开发的瓶颈已从编码速度转移至推理时间和深度思考,并主张大多数应用应从CLI(命令行界面)起步,以便Agent能闭环验证。
[原文] [Peter Steinberger]: It’s incredible how far “vibe coding” has come this year.
[译文] [Peter Steinberger]: 今年“Vibe Coding”(氛围编码)的进展真是令人难以置信。
[原文] [Peter Steinberger]: Whereas in ~May I was amazed that *some* prompts produced code that worked out of the box, this is now my expectation.
[译文] [Peter Steinberger]: 虽然在5月左右,我还对 *某些* 提示词能直接生成开箱即用的代码感到惊讶,但 现在这已是我的基本预期。
[原文] [Peter Steinberger]: I can ship code now at a speed that seems unreal.
[译文] [Peter Steinberger]: 我现在的代码发布速度快得简直不真实。
[原文] [Peter Steinberger]: I burned a lot of tokens since then. Time for an update.
[译文] [Peter Steinberger]: 从那以后我消耗了大量的 Tokens(令牌)。是时候更新一下近况了。
[原文] [Peter Steinberger]: It’s funny how these agents work.
[译文] [Peter Steinberger]: 这些 Agents(智能体)的工作方式很有趣。
[原文] [Peter Steinberger]: There’s been this argument a few weeks ago that one needs to write code in order to feel bad architecture and that using agents creates a disconnection - and I couldn’t disagree more.
[译文] [Peter Steinberger]: 几周前有一种观点认为,人必须亲手写代码才能感知到糟糕的架构,而使用 Agents 会造成一种脱节——对此我 完全不敢苟同。
[原文] [Peter Steinberger]: When you spend enough time with agents, you know exactly how long sth should take, and when codex comes back and hasn’t solved it in one shot, I already get suspicious.
[译文] [Peter Steinberger]: 当你花足够的时间与 Agents 共处,你会确切知道某件事应该花多长时间,而当 Codex 返回结果却没能一次性解决问题时,我就已经开始怀疑了。
[原文] [Peter Steinberger]: The amount of software I can create is now mostly limited by inference time and hard thinking.
[译文] [Peter Steinberger]: 我现在能创造的软件数量主要 受限于推理时间和深度思考。
[原文] [Peter Steinberger]: And let’s be honest - most software does not require hard thinking.
[译文] [Peter Steinberger]: 而且老实说——大多数软件并不需要深度思考。
[原文] [Peter Steinberger]: Most apps shove data from one form to another, maybe store it somewhere, and then show it to the user in some way or another.
[译文] [Peter Steinberger]: 大多数应用程序只是将数据从一种形式推送到另一种形式,也许将其存储在某处,然后以某种方式展示给用户。
[原文] [Peter Steinberger]: The simplest form is text, so by default, whatever I wanna build, it starts as CLI.
[译文] [Peter Steinberger]: 最简单的形式是文本,所以默认情况下,无论我想构建什么,都从 CLI(命令行界面)开始。
[原文] [Peter Steinberger]: Agents can call it directly and verify output - closing the loop.
[译文] [Peter Steinberger]: Agents 可以直接调用它并验证输出——从而形成闭环。
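作者所说的“从 CLI 起步、让 Agent 闭环验证”可以用一个极简草图来说明(以下为假设性示例,并非原文代码,语言选用 Python 仅作演示):工具从 stdin 读文本、向 stdout 写结构化结果,Agent 直接调用并比对输出即可形成闭环。

```python
#!/usr/bin/env python3
"""极简 CLI 示意(假设性示例):把一种文本形式转换成另一种。
Agent 可以直接运行它并比对 stdout,从而闭环验证。"""
import json
import sys


def to_json_lines(text: str) -> str:
    # 把纯文本的每个非空行包装成 {"line": ...} 的 JSON 记录
    records = [{"line": line} for line in text.splitlines() if line.strip()]
    return json.dumps(records, ensure_ascii=False)


if __name__ == "__main__" and not sys.stdin.isatty():
    # 作为 CLI 使用(tool.py 为假设的文件名):cat notes.txt | python tool.py
    sys.stdout.write(to_json_lines(sys.stdin.read()))
```

文本进、JSON 出:输出是机器可读的,Agent 跑一遍就能校验结果,不需要任何 UI。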
📝 本节摘要:
本章详细阐述了 GPT 5 如何将软件开发转变为类似“工厂模式”的高效流程。作者坦言现在很少阅读生成的具体代码,转而关注组件位置和系统设计。重点转移到了对语言和生态系统的选择上:Web 端首选 TypeScript,CLI 工具选择 Go(因 Agent 擅长且 lint 速度快),macOS/UI 开发则选用 Swift。作者还特别指出,得益于 Swift 完善的构建设施和 Codex 的能力,现在的 iOS/Mac 开发已基本脱离对 Xcode 的强依赖。
[原文] [Peter Steinberger]: The real unlock into building like a factory was GPT 5.
[译文] [Peter Steinberger]: 像工厂一样构建软件的真正解锁点是 GPT 5。
[原文] [Peter Steinberger]: It took me a few weeks after the release to see it - and for codex to catch up on features that claude code had, and a bit to learn and understand the differences, but then I started trusting the model more and more.
[译文] [Peter Steinberger]: 发布后我花了几周时间才意识到这一点——等待 Codex 赶上 Claude Code 的功能,并花点时间学习和理解其中的差异,但随后我开始越来越信任这个模型。
[原文] [Peter Steinberger]: These days I don’t read much code anymore.
[译文] [Peter Steinberger]: 如今我不再阅读太多代码了。
[原文] [Peter Steinberger]: I watch the stream and sometimes look at key parts, but I gotta be honest - most code I don’t read.
[译文] [Peter Steinberger]: 我会看着代码流生成,有时会看关键部分,但老实说——大多数代码我都不读。
[原文] [Peter Steinberger]: I do know where which components are and how things are structured and how the overall system is designed, and that’s usually all that’s needed.
[译文] [Peter Steinberger]: 我确实知道哪些组件在哪里,事物是如何构成的,以及整个系统是如何设计的,通常这也就足够了。
[原文] [Peter Steinberger]: The important decisions these days are language/ecosystem and dependencies.
[译文] [Peter Steinberger]: 如今重要的决定在于 语言/生态系统和依赖项。
[原文] [Peter Steinberger]: My go-to languages are TypeScript for web stuff, Go for CLIs and Swift if it needs to use macOS stuff or has UI.
[译文] [Peter Steinberger]: 我常用的语言是:Web 开发用 TypeScript,CLI(命令行工具)用 Go,如果需要使用 macOS 功能或有 UI(用户界面)则用 Swift。
[原文] [Peter Steinberger]: Go wasn’t something I gave even the slightest thought even a few months ago, but eventually I played around and found that agents are really great at writing it, and its simple type system makes linting fast.
[译文] [Peter Steinberger]: 哪怕在几个月前,我都完全没考虑过 Go,但最终我试玩了一下,发现 Agents 非常擅长编写它,而且它简单的类型系统使得 Linting(代码检查)非常快。
[原文] [Peter Steinberger]: Folks building Mac or iOS stuff: You don’t need Xcode much anymore.
[译文] [Peter Steinberger]: 构建 Mac 或 iOS 应用的朋友们:你们不再那么需要 Xcode 了。
[原文] [Peter Steinberger]: I don’t even use xcodeproj files.
[译文] [Peter Steinberger]: 我甚至都不使用 xcodeproj 文件。
[原文] [Peter Steinberger]: Swift’s build infra is good enough for most things these days.
[译文] [Peter Steinberger]: 如今 Swift 的构建基础设施对大多数事情来说已经足够好了。
[原文] [Peter Steinberger]: codex knows how to run iOS apps and how to deal with the Simulator.
[译文] [Peter Steinberger]: Codex 知道如何运行 iOS 应用程序以及如何处理模拟器。
[原文] [Peter Steinberger]: No special stuff or MCPs needed.
[译文] [Peter Steinberger]: 不需要特殊的东西或 MCPs(模型上下文协议)。
📝 本节摘要:
本章通过一次大规模重构的实况,对比了 Codex 与 Opus 的核心差异。作者认为基准测试已不可靠,亲身体验表明 Codex 倾向于在编写代码前花费大量时间(甚至10-15分钟)“静默阅读”,这种“慢思考”显著降低了返工率。相比之下,Opus 虽反应敏捷适合小修补,但在大型任务中常因上下文缺失而导致产出低效。此外,作者指出“计划模式”已成为历史,现在的最佳实践是与模型进行开放式对话,共同制定方案后再执行构建。
[原文] [Peter Steinberger]: I’m writing this post here while codex crunches through a huge, multi-hour refactor and un-slops older crimes of Opus 4.0.
[译文] [Peter Steinberger]: 我写这篇文章的时候,Codex 正忙于处理一个巨大的、耗时数小时的重构工作,清理 Opus 4.0 以前留下的烂摊子。
[原文] [Peter Steinberger]: People on Twitter often ask me what’s the big difference between Opus and codex and why it even matters because the benchmarks are so close.
[译文] [Peter Steinberger]: Twitter 上经常有人问我 Opus 和 Codex 之间有什么大区别,既然基准测试分数这么接近,为什么这很重要。
[原文] [Peter Steinberger]: IMO it’s getting harder and harder to trust benchmarks - you need to try both to really understand.
[译文] [Peter Steinberger]: 在我看来,越来越难相信基准测试了——你需要亲自尝试两者才能真正理解。
[原文] [Peter Steinberger]: Whatever OpenAI did in post-training, codex has been trained to read LOTS of code before starting.
[译文] [Peter Steinberger]: 不管 OpenAI 在后训练阶段做了什么,Codex 被训练成在开始之前先阅读大量代码。
[原文] [Peter Steinberger]: Sometimes it just silently reads files for 10, 15 minutes before starting to write any code.
[译文] [Peter Steinberger]: 有时它只是 静默地阅读文件 10 到 15 分钟,然后才开始编写任何代码。
[原文] [Peter Steinberger]: On the one hand that’s annoying, on the other hand that’s amazing because it greatly increases the chance that it fixes the right thing.
[译文] [Peter Steinberger]: 一方面这很烦人,但另一方面这太棒了,因为它极大地增加了修复正确问题的几率。
[原文] [Peter Steinberger]: Opus on the other hand is much more eager - great for smaller edits - not so good for larger features or refactors, it often doesn’t read the whole file or misses parts and then delivers inefficient outcomes or misses sth.
[译文] [Peter Steinberger]: 另一方面,Opus 要急切得多——对于较小的编辑来说很棒——但对于较大的功能或重构就不那么好了,它经常不阅读整个文件或遗漏部分内容,然后交付低效的结果或遗漏某些东西。
[原文] [Peter Steinberger]: I noticed that even tho codex sometimes takes 4x longer than Opus for comparable tasks, I’m often faster because I don’t have to go back and fix the fix, sth that felt quite normal when I was still using Claude Code.
[译文] [Peter Steinberger]: 我注意到,即使 Codex 有时在类似任务上花费的时间是 Opus 的 4 倍,但我通常还是更快,因为我不必回过头去“修复那个修复”,而这在我还在使用 Claude Code 时感觉是很正常的。
[原文] [Peter Steinberger]: codex also allowed me to unlearn lots of charades that were necessary with Claude Code.
[译文] [Peter Steinberger]: Codex 还让我摒弃了许多在使用 Claude Code 时必须做的表面功夫。
[原文] [Peter Steinberger]: Instead of “ plan mode ”, I simply start a conversation with the model , ask a question, let it google, explore code, create a plan together, and when I’m happy with what I see, I write “build” or “write plan to docs/*.md and build this”.
[译文] [Peter Steinberger]: 取代“计划模式”,我只需 开始与模型对话,问一个问题,让它去 Google、探索代码、共同制定计划,当我对我所看到的感到满意时,我写下 “build”(构建)或 “write plan to docs/*.md and build this”(将计划写入 docs/*.md 并构建它)。
[原文] [Peter Steinberger]: Plan mode feels like a hack that was necessary for older generations of models that were not great at adhering to prompts, so we had to take away their edit tools.
[译文] [Peter Steinberger]: 计划模式感觉像是一种为了老一代模型而必须采用的权宜之计,因为它们不太擅长遵循提示词,所以我们要拿走它们的编辑工具。
[原文] [Peter Steinberger]: There’s a highly misunderstood tweet of mine that’s still circling around that showed me that most people don’t get that plan mode is not magic.
[译文] [Peter Steinberger]: 我有一条被高度误解的推文至今仍在流传,这让我看到大多数人并不明白计划模式并不是魔法。
📝 本节摘要:
本章介绍了作者开发的 CLI 工具 “Oracle”,它曾通过调用 GPT 5 Pro 并联网浏览,解决了 Agent 无法获取外部信息的痛点,实现了任务闭环。然而,GPT 5.2 的发布带来了质变,其强大的原生能力使得大多数任务能被“一次性搞定”(One-shot),大大减少了对 Oracle 的依赖。此外,作者强调了知识截止日期的重要性:GPT 5.2 的数据更新至8月底,相比仍停留在3月的 Opus,在通过最新工具链开发时具有显著优势。
[原文] [Peter Steinberger]: The step from GPT 5/5.1 to 5.2 was massive.
[译文] [Peter Steinberger]: 从 GPT 5/5.1 到 5.2 的跨越是巨大的。
[原文] [Peter Steinberger]: I built oracle 🧿 about a month ago - it’s a CLI that allows the agent to run GPT 5 Pro and upload files + a prompt and manages sessions so answers can be retrieved later.
[译文] [Peter Steinberger]: 大约一个月前,我构建了 oracle 🧿——这是一个 CLI(命令行)工具,允许 Agent 运行 GPT 5 Pro 并上传文件+提示词,还能管理会话以便稍后检索答案。
[原文] [Peter Steinberger]: I did this because many times when agents were stuck, I asked it to write everything into a markdown file and then did the query myself, and that felt like a repetitive waste of time - and an opportunity to close the loop.
[译文] [Peter Steinberger]: 我之所以这样做,是因为很多时候当 Agents 卡住时,我会让它把所有内容写入 markdown 文件,然后我自己去查询,这感觉像是重复的时间浪费——也是一个实现闭环的机会。
[原文] [Peter Steinberger]: The instructions are in my global AGENTS.MD file and the model sometimes by itself triggered oracle when it got stuck.
[译文] [Peter Steinberger]: 指令都在我的全局 AGENTS.MD 文件中,模型有时在卡住时会自动触发 oracle。
[原文] [Peter Steinberger]: I used this multiple times per day. It was a massive unlock.
[译文] [Peter Steinberger]: 我每天使用它很多次。这是一个 巨大的解锁。
[原文] [Peter Steinberger]: Pro is insanely good at doing a speedrun across ~50 websites and then thinking really hard at it and in almost every case nailed the response.
[译文] [Peter Steinberger]: Pro 模型非常擅长快速浏览约 50 个网站,然后进行深度思考,几乎在所有情况下都能给出准确的回复。
[原文] [Peter Steinberger]: Sometimes it’s fast and takes 10 minutes, but I had runs that took more than an hour.
[译文] [Peter Steinberger]: 有时它很快,只需 10 分钟,但我也有过运行超过一小时的情况。
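oracle 的实际实现并未在本文给出;下面用 Python 勾勒其中“管理会话、以便稍后检索答案”这一点的最小形态(纯属示意:文件布局与函数名均为本文假设,真实的 oracle 还要负责上传文件、调用 GPT 5 Pro 等):

```python
"""假设性示意(非 oracle 实际代码):按会话 ID 把问答落盘,
这样 Agent 可以先发起长耗时查询,稍后再用同一 ID 取回答案。"""
import json
import uuid
from pathlib import Path


def save_session(store: Path, prompt: str, answer: str) -> str:
    # 生成一个短会话 ID,把 prompt 与 answer 写入 JSON 文件
    store.mkdir(parents=True, exist_ok=True)
    session_id = uuid.uuid4().hex[:8]
    (store / f"{session_id}.json").write_text(
        json.dumps({"prompt": prompt, "answer": answer}, ensure_ascii=False),
        encoding="utf-8",
    )
    return session_id


def load_session(store: Path, session_id: str) -> dict:
    # 稍后按 ID 取回之前的问答
    return json.loads((store / f"{session_id}.json").read_text(encoding="utf-8"))
```

关键设计点与原文描述一致:查询可能跑 10 分钟到 1 小时以上,所以答案必须能离线检索,而不是阻塞 Agent 的当前会话。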
[原文] [Peter Steinberger]: Now that GPT 5.2 is out, I have far fewer situations where I need it.
[译文] [Peter Steinberger]: 既然 GPT 5.2 已经发布,我需要用到它的情况就少多了。
[原文] [Peter Steinberger]: I do use Pro myself sometimes for research, but the cases where I asked the model to “ask the oracle” went from multiple times per day to a few times per week.
[译文] [Peter Steinberger]: 我自己有时确实会用 Pro 做研究,但我要求模型“询问 oracle”的情况已从每天多次变成了每周几次。
[原文] [Peter Steinberger]: I’m not mad about this - building oracle was super fun and I learned lots about browser automation, Windows and finally took my time to look into skills, after dismissing that idea for quite some time.
[译文] [Peter Steinberger]: 我对此并不生气——构建 oracle 非常有趣,我学到了很多关于浏览器自动化、Windows 的知识,并在长时间忽视之后终于花时间研究了 Skills(技能)。
[原文] [Peter Steinberger]: What it does show is how much better 5.2 got for many real-life coding tasks.
[译文] [Peter Steinberger]: 这确实表明 5.2 在许多现实生活中的编码任务上变得有多好了。
[原文] [Peter Steinberger]: It one-shots almost anything I throw at it.
[译文] [Peter Steinberger]: 它几乎能 一次性搞定(One-shot) 我扔给它的任何任务。
[原文] [Peter Steinberger]: Another massive win is the knowledge cutoff date.
[译文] [Peter Steinberger]: 另一个巨大的胜利是 知识截止日期。
[原文] [Peter Steinberger]: GPT 5.2 goes till end of August whereas Opus is stuck in mid-March - that’s about 5 months.
[译文] [Peter Steinberger]: GPT 5.2 截止到 8 月底,而 Opus 仍停留在 3 月中旬——这大约是 5 个月的差距。
[原文] [Peter Steinberger]: Which is significant when you wanna use the latest available tools.
[译文] [Peter Steinberger]: 当你想要使用最新的可用工具时,这一点非常重要。
📝 本节摘要:
本章通过两个具体项目展示了 AI 模型能力的巨大飞跃。作者首先回顾了早期的“VibeTunnel”项目(终端多路复用器),曾因模型能力不足而难以将核心代码从 TypeScript 重构为 Zig,但如今 Codex 仅凭两句提示词便在 5 小时内一次性完成了这项复杂的转换任务。随后,作者介绍了当前的核心项目“Clawdis”——一个拥有全能权限的 AI 助手,它不仅能控制家居设备和数字账户,还能通过高效的“字符流”而非图像识别来监控其他 Agent 的工作状态。
[原文] [Peter Steinberger]: To give you another example on how far models have come.
[译文] [Peter Steinberger]: 给你们举另一个例子,说明模型已经发展到了什么程度。
[原文] [Peter Steinberger]: One of my early intense projects was VibeTunnel. A terminal-multiplexer so you can code on-the-go.
[译文] [Peter Steinberger]: 我早期投入大量精力的项目之一是 VibeTunnel。这是一个终端多路复用器,让你可以随时随地写代码。
[原文] [Peter Steinberger]: I poured pretty much all my time into this earlier this year, and after 2 months it was so good that I caught myself coding from my phone while out with friends… and decided that this is something I should stop, more for mental health than anything.
[译文] [Peter Steinberger]: 今年早些时候,我几乎把所有时间都倾注在它上面,两个月后它变得太好用了,以至于我发现自己甚至在和朋友出去玩时都在用手机写代码……于是我决定我应该停止这样做,更多是为了心理健康。
[原文] [Peter Steinberger]: Back then I tried to rewrite a core part of the multiplexer away from TypeScript, and the older models consistently failed me.
[译文] [Peter Steinberger]: 当时,我试图将多路复用器的一个核心部分从 TypeScript 重写(迁移),但旧模型总是让我失望。
[原文] [Peter Steinberger]: I tried Rust, Go… god forbid, even zig.
[译文] [Peter Steinberger]: 我尝试了 Rust,Go……老天保佑,甚至还试了 Zig。
[原文] [Peter Steinberger]: Of course I could have finished this refactor, but it would have required lots of manual work, so I never got around completing this before I put it to rest.
[译文] [Peter Steinberger]: 当然,我本可以完成这次重构,但这需要大量的人工操作,所以在我把这个项目搁置之前,一直没能完成它。
[原文] [Peter Steinberger]: Last week I un-dusted this and gave codex a two sentence prompt to convert the whole forwarding-system to zig, and it ran over 5h and multiple compactions and delivered a working conversion in one shot.
[译文] [Peter Steinberger]: 上周我把它重新翻了出来,给 Codex 了一个 两句话的提示词,让它把整个转发系统转换为 Zig,它运行了超过 5 个小时,经历多次(上下文)压缩,一次性交付了可工作的转换结果。
[原文] [Peter Steinberger]: Why did I even un-dust it, you ask?
[译文] [Peter Steinberger]: 你可能会问,我为什么要把它翻出来?
[原文] [Peter Steinberger]: My current focus is Clawdis, an AI assistant that has full access to everything on all my computers, messages, emails, home automation, cameras, lights, music, heck it can even control the temperature of my bed.
[译文] [Peter Steinberger]: 我目前的重心是 Clawdis,这是一个 AI 助手,它拥有对我所有计算机、消息、电子邮件、家庭自动化、摄像头、灯光、音乐的 全部访问权限,见鬼,它甚至能控制我床的温度。
[原文] [Peter Steinberger]: Ofc it also has its own voice, a CLI to tweet and its own clawd.bot.
[译文] [Peter Steinberger]: 当然,它也有自己的声音,一个用来发推特的 CLI,以及它自己的 clawd.bot。
[原文] [Peter Steinberger]: Clawd can see and control my screen and sometimes makes snarky remarks, but I also wanted to give him the ability to check on my agents, and getting a character stream is just far more efficient than looking at images… if this will work out, we’ll see!
[译文] [Peter Steinberger]: Clawd 可以看到并控制我的屏幕,有时还会发表一些尖刻的评论,但我也想让它有能力检查我的 Agents,而获取 字符流 远比看图像高效得多……这是否行得通,我们拭目以待!
📝 本节摘要:
本章深入探讨了作者的高效工作流。他通常同时处理 3-8 个项目,利用 Codex 的队列功能管理灵感。作者反对复杂的自动任务管理系统,坚持“迭代式开发”——即边做边感受,而非预先规划全貌。最独特的理念是“永不回滚”:遇到问题直接让模型修正,并坚持直接推送到主分支(Main),以减少分支管理带来的认知负担。
[原文] [Peter Steinberger]: I usually work on multiple projects at the same time.
[译文] [Peter Steinberger]: 我通常同时处理 多个项目。
[原文] [Peter Steinberger]: Depending on complexity that can be between 3-8.
[译文] [Peter Steinberger]: 根据复杂程度,这可能在 3 到 8 个之间。
[原文] [Peter Steinberger]: The context switching can be tiresome, I really only can do that when I’m working at home, in silence and concentrated.
[译文] [Peter Steinberger]: 上下文切换可能会很累人,我真的只有在家、安静且精神集中的时候才能做到这一点。
[原文] [Peter Steinberger]: It’s a lot of mental models to shuffle.
[译文] [Peter Steinberger]: 这需要在很多思维模型之间来回切换。
[原文] [Peter Steinberger]: Luckily most software is boring.
[译文] [Peter Steinberger]: 幸运的是,大多数软件都很无聊。
[原文] [Peter Steinberger]: Creating a CLI to check up on your food delivery doesn’t need a lot of thinking.
[译文] [Peter Steinberger]: 创建一个 CLI(命令行工具)来检查你的外卖配送并不需要太多思考。
[原文] [Peter Steinberger]: Usually my focus is on one big project and satellite projects that chug along.
[译文] [Peter Steinberger]: 通常我的重心放在一个大项目上,而其他卫星项目则在旁边稳步推进。
[原文] [Peter Steinberger]: When you do enough agentic engineering, you develop a feeling for what’s gonna be easy and where the model likely will struggle, so often I just put in a prompt, codex will chug along for 30 minutes and I have what I need.
[译文] [Peter Steinberger]: 当你做了足够多的代理工程(Agentic Engineering),你就会对什么是容易的、模型可能在哪里挣扎产生一种感觉,所以通常我只是输入一个提示词,Codex 就会忙活 30 分钟,然后我就得到了我需要的东西。
[原文] [Peter Steinberger]: Sometimes it takes a little fiddling or creativity, but often things are straightforward.
[译文] [Peter Steinberger]: 有时这需要一点摆弄或创造力,但通常事情都很直截了当。
[原文] [Peter Steinberger]: I extensively use the queueing feature of codex - as I get a new idea, I add it to the pipeline.
[译文] [Peter Steinberger]: 我广泛使用 Codex 的 队列功能——一旦我有了一个新想法,我就把它添加到流水线中。
[原文] [Peter Steinberger]: I see many folks experimenting with various systems of multi-agent orchestration, emails or automatic task management - so far I don’t see much need for this - usually I’m the bottleneck.
[译文] [Peter Steinberger]: 我看到很多人在试验各种多智能体编排、电子邮件或自动任务管理系统——到目前为止,我还没看到对此有太大的需求——通常我才是那个瓶颈。
[原文] [Peter Steinberger]: My approach to building software is very iterative.
[译文] [Peter Steinberger]: 我构建软件的方法是非常迭代式的。
[原文] [Peter Steinberger]: I build sth, play with it, see how it “feels”, and then get new ideas to refine it.
[译文] [Peter Steinberger]: 我构建某个东西,试玩它,看看它“感觉”如何,然后获得新想法来改进它。
[原文] [Peter Steinberger]: Rarely do I have a complete picture of what I want in my head.
[译文] [Peter Steinberger]: 我脑海中很少有关于我想要什么的完整画面。
[原文] [Peter Steinberger]: Sure, I have a rough idea, but often that drastically changes as I explore the problem domain.
[译文] [Peter Steinberger]: 当然,我有一个粗略的想法,但随着我对问题领域的探索,这个想法通常会发生巨大的变化。
[原文] [Peter Steinberger]: So systems that take the complete idea as input and then deliver output wouldn’t work well for me.
[译文] [Peter Steinberger]: 所以那些将 完整的想法 作为输入然后交付输出的系统对我来说行不通。
[原文] [Peter Steinberger]: I need to play with it, touch it, feel it, see it, that’s how I evolve it.
[译文] [Peter Steinberger]: 我需要玩弄它、触摸它、感受它、看到它,这正是我进化它的方式。
[原文] [Peter Steinberger]: I basically never revert or use checkpointing.
[译文] [Peter Steinberger]: 我基本上 从不回滚(Revert) 或使用检查点。
[原文] [Peter Steinberger]: If something isn’t how I like it, I ask the model to change it.
[译文] [Peter Steinberger]: 如果有些东西不是我喜欢的样子,我就让模型去修改它。
[原文] [Peter Steinberger]: codex sometimes then resets a file, but often it simply reverts or modifies the edits, very rare that I have to back completely, and instead we just travel into a different direction.
[译文] [Peter Steinberger]: Codex 有时会重置文件,但通常它只是回滚或修改编辑内容,我很少需要完全回退,相反,我们只是转向一个不同的方向。
[原文] [Peter Steinberger]: Building software is like walking up a mountain.
[译文] [Peter Steinberger]: 构建软件就像登山。
[原文] [Peter Steinberger]: You don’t go straight up, you circle around it and take turns, sometimes you get off path and have to walk a bit back, and it’s imperfect, but eventually you get to where you need to be.
[译文] [Peter Steinberger]: 你不会直直地往上走,你会绕着山转弯,有时你会偏离路径不得不往回走一点,这并不完美,但最终你会到达你需要去的地方。
[原文] [Peter Steinberger]: I simply commit to main.
[译文] [Peter Steinberger]: 我干脆 直接提交到 Main 分支。
[原文] [Peter Steinberger]: Sometimes codex decides that it’s too messy and automatically creates a worktree and then merges changes back, but it’s rare and I only prompt that in exceptional cases.
[译文] [Peter Steinberger]: 有时 Codex 觉得太乱了,会自动创建一个 Worktree(工作树)然后把更改合并回来,但这很少见,我也只有在特殊情况下才会提示它这么做。
[原文] [Peter Steinberger]: I find the added cognitive load of having to think of different states in my projects unnecessary and prefer to evolve it linearly.
[译文] [Peter Steinberger]: 我发现不得不考虑项目中不同状态所增加的认知负担是不必要的,我更喜欢线性地进化它。
[原文] [Peter Steinberger]: Bigger tasks I keep for moments where I’m distracted - for example while writing this, I run refactors on 4 projects here that will take around 1-2h each to complete.
[译文] [Peter Steinberger]: 我把较大的任务留到我分心的时候——例如在写这篇文章时,我在 4 个项目上运行重构,每个项目大约需要 1-2 小时完成。
[原文] [Peter Steinberger]: Ofc I could do that in a worktree, but that would just cause lots of merge conflicts and suboptimal refactors.
[译文] [Peter Steinberger]: 当然我可以在 Worktree 中做这些,但这只会导致大量的合并冲突和次优的重构。
[原文] [Peter Steinberger]: Caveat: I usually work alone, if you work in a bigger team that workflow obv won’t fly.
[译文] [Peter Steinberger]: 警告:我通常独自工作,如果你在一个更大的团队中工作,这种工作流显然行不通。
📝 本节摘要:
本章分享了作者在上下文管理上的独到技巧。他不再依赖复杂的提示词,而是通过“跨项目引用”(例如让模型参考旧项目的目录)来高效复用已有的解决方案。作者摒弃了对过往会话的繁琐检索,转而在每个项目中维护 docs 文件夹,强制模型阅读特定文档以保持上下文更新。此外,作者指出 GPT 5.2 极佳的长上下文性能使得频繁重启会话变得多余,且 Codex 内部思维的“高度压缩”特性使其在上下文管理上比 Claude 更具优势。
[原文] [Peter Steinberger]: I’ve already mentioned my way of planning a feature.
[译文] [Peter Steinberger]: 我已经提到过我规划功能的方法。
[原文] [Peter Steinberger]: I cross-reference projects all the time, esp if I know that I already solved sth somewhere else, I ask codex to look in ../project-folder and that’s usually enough for it to infer from context where to look.
[译文] [Peter Steinberger]: 我一直都在 交叉引用项目,特别是如果我知道我已经在其他地方解决了某件事,我会让 Codex 查看 ../project-folder(上级目录下的项目文件夹),这通常足以让它根据上下文推断出该看哪里。
[原文] [Peter Steinberger]: This is extremely useful to save on prompts.
[译文] [Peter Steinberger]: 这对节省提示词非常有用。
[原文] [Peter Steinberger]: I can just write “look at ../vibetunnel and do the same for Sparkle changelogs”, because it’s already solved there and with a 99% guarantee it’ll correctly copy things over and adapt to the new project.
[译文] [Peter Steinberger]: 我只需写“看看 ../vibetunnel,然后为 Sparkle changelogs 做同样的事”,因为它已经在那里解决了,并且有 99% 的把握它会正确地复制内容并适应新项目。
[原文] [Peter Steinberger]: That’s how I scaffold new projects as well.
[译文] [Peter Steinberger]: 我也是这样搭建新项目的。
[原文] [Peter Steinberger]: I’ve seen plenty of systems for folks wanting to refer to past sessions. Another thing I never need or use.
[译文] [Peter Steinberger]: 我见过很多系统是为那些想要引用过去会话的人准备的。这是另一件我从不需要或使用的东西。
[原文] [Peter Steinberger]: I maintain docs for subsystems and features in a docs folder in each project, and use a script + some instructions in my global AGENTS file to force the model to read docs on certain topics.
[译文] [Peter Steinberger]: 我在每个项目的 docs 文件夹 中维护子系统和功能的文档,并使用脚本 + 我的全局 AGENTS 文件中的一些指令,强制模型阅读特定主题的文档。
[原文] [Peter Steinberger]: This pays off more the larger the project is, so I don’t use it everywhere, but it is of great help to keep docs up-to-date and engineer a better context for my tasks.
[译文] [Peter Steinberger]: 项目越大,这种做法的回报越高,所以我并不在所有地方都用它,但它对于保持文档更新和为我的任务构建更好的上下文非常有帮助。
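作者提到的 docs:list 脚本并未公开细节;下面是一个同类思路的最小化草图(假设性示例:文件名与返回结构均为本文虚构)——扫描 docs/ 下的 markdown,取每个文件的首个标题作为主题索引,供模型决定该读哪份文档。

```python
"""假设性示意(非作者原脚本):列出 docs/ 中每个 markdown 文件及其首个标题,
让模型能按主题决定先读哪份文档,从而为任务构建更好的上下文。"""
from pathlib import Path


def list_docs(docs_dir: Path) -> list[tuple[str, str]]:
    # 返回 (文件名, 首个标题) 的列表,作为文档的主题索引
    entries = []
    for md in sorted(docs_dir.glob("*.md")):
        title = ""
        for line in md.read_text(encoding="utf-8").splitlines():
            if line.startswith("#"):
                title = line.lstrip("#").strip()  # 取首个 markdown 标题作主题
                break
        entries.append((md.name, title))
    return entries
```

把这样一份索引连同“先读相关文档再动手”的指令放进全局 AGENTS 文件,就能以很低的成本引导模型自行补齐上下文。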
[原文] [Peter Steinberger]: Apropos context. I used to be really diligent to restart a session for new tasks.
[译文] [Peter Steinberger]: 说到上下文。我过去非常勤奋地为新任务重启会话。
[原文] [Peter Steinberger]: With GPT 5.2 this is no longer needed.
[译文] [Peter Steinberger]: 有了 GPT 5.2,这就不再需要了。
[原文] [Peter Steinberger]: Performance is extremely good even when the context is fuller, and often it helps with speed since the model works faster when it already has loaded plenty files.
[译文] [Peter Steinberger]: 即使上下文比较满,性能也非常好,而且通常这有助于提高速度,因为当模型已经加载了大量文件时,它的工作速度会更快。
[原文] [Peter Steinberger]: Obviously that only works well when you serialize your tasks or keep the changes so far apart that two sessions don’t touch each other much.
[译文] [Peter Steinberger]: 显然,只有当你串行处理任务,或者让变更彼此相距甚远以至于两个会话互不干扰时,这种方法才有效。
[原文] [Peter Steinberger]: codex has no system events for “this file changed”, unlike claude code, so you need to be more careful - on the flip side, codex is just FAR better at context management, I feel I get 5x more done on one codex session than with claude.
[译文] [Peter Steinberger]: 与 Claude Code 不同,Codex 没有“此文件已更改”的系统事件,所以你需要更小心——但另一方面,Codex 在上下文管理方面要好得多,我觉得我在一个 Codex 会话中完成的工作量是 Claude 的 5 倍。
[原文] [Peter Steinberger]: This is more than just the objectively larger context size, there’s other things at work.
[译文] [Peter Steinberger]: 这不仅仅是因为客观上更大的上下文容量,还有其他因素在起作用。
[原文] [Peter Steinberger]: My guess is that codex internally thinks really condensed to save tokens, whereas Opus is very wordy.
[译文] [Peter Steinberger]: 我猜 Codex 为了节省 Tokens,内部思维非常压缩,而 Opus 则非常啰嗦。
[原文] [Peter Steinberger]: Sometimes the model messes up and its internal thinking stream leaks to the user, so I’ve seen this quite a few times.
[译文] [Peter Steinberger]: 有时模型会出错,其内部思维流会泄漏给用户,所以我见过好几次这种情况。
[原文] [Peter Steinberger]: Really, codex has a way with words I find strangely entertaining.
[译文] [Peter Steinberger]: 真的,Codex 的遣词造句方式让我觉得莫名地有趣。
📝 本节摘要:
本章反映了作者在提示词策略上的极简主义转变:不再依赖冗长的语音听写,而是通过短语配合截图(“看图说话”)来高效修正 UI 或文案问题。在架构设计上,作者强调“为 Agent 而非人类设计代码库”,让模型自主管理文档结构。尽管编码变得容易,但作者指出核心难点转移到了“基础设施选型”——如判断依赖项的维护度与流行度(这决定了模型是否有足够的训练数据支持),以及复杂的系统设计决策(如通信协议与数据流向),这些仍需人类的深度思考。
[原文] [Peter Steinberger]: Prompts. I used to write long, elaborate prompts with voice dictation.
[译文] [Peter Steinberger]: 提示词。我过去常使用语音听写编写长而详尽的提示词。
[原文] [Peter Steinberger]: With codex, my prompts gotten much shorter , I often type again, and many times I add images, especially when iterating on UI (or text copies with CLIs).
[译文] [Peter Steinberger]: 有了 Codex,我的 提示词变得短多了,我又经常改回打字了,而且很多时候我会添加图片,特别是在迭代 UI(或 CLI 的文案)时。
[原文] [Peter Steinberger]: If you show the model what’s wrong, just a few words are enough to make it do what you want.
[译文] [Peter Steinberger]: 如果你向模型展示哪里出了问题,只需几个词就足以让它按你的意愿行事。
[原文] [Peter Steinberger]: Yes, I’m that person that drags in a clipped image of some UI component with “fix padding” or “redesign”, many times that either solves my issue or gets me reasonably far.
[译文] [Peter Steinberger]: 是的,我就是那种会拖入某个 UI 组件的截图并附上“修复内边距”或“重新设计”的人,很多时候这要么解决了我的问题,要么让我取得了相当大的进展。
[原文] [Peter Steinberger]: I used to refer to markdown files, but with my docs:list script that’s no longer necessary.
[译文] [Peter Steinberger]: 我过去常引用 markdown 文件,但有了我的 docs:list 脚本,这不再是必要的了。
[原文] [Peter Steinberger]: Markdowns. Many times I write “ write docs to docs/*.md ” and simply let the model pick a filename.
[译文] [Peter Steinberger]: Markdowns。很多时候我写“把文档写入 docs/*.md”,然后干脆让模型自己选一个文件名。
[原文] [Peter Steinberger]: The more obvious you design the structure for what the model is trained on, the easier your work will be.
[译文] [Peter Steinberger]: 你把结构设计得越贴近模型训练数据中熟悉的模式,你的工作就越轻松。
[原文] [Peter Steinberger]: After all, I don’t design codebases to be easy to navigate for me, I engineer them so agents can work in it efficiently.
[译文] [Peter Steinberger]: 毕竟,我设计代码库不是为了让自己容易导航,我是为了让 Agents 能在其中高效工作而进行工程设计。
[原文] [Peter Steinberger]: Fighting the model is often a waste of time and tokens.
[译文] [Peter Steinberger]: 与模型对抗通常是浪费时间和 Tokens。
[原文] [Peter Steinberger]: What’s still hard? Picking the right dependency and framework to set on is something I invest quite some time on.
[译文] [Peter Steinberger]: 什么依然很难? 选择正确的依赖项和框架是我投入大量时间的事情。
[原文] [Peter Steinberger]: Is this well-maintained? How about peer dependencies? Is it popular = will have enough world knowledge so agents have an easy time?
[译文] [Peter Steinberger]: 它维护得好吗?对等依赖(Peer dependencies)呢?它流行吗 = 是否会有足够的世界知识让 Agents 能轻松应对?
[原文] [Peter Steinberger]: Equally, system design. Will we communicate via web sockets? HTML? What do I put into the server and what into the client?
[译文] [Peter Steinberger]: 同样地,系统设计。我们要通过 Web Sockets 通信吗?HTML?我该把什么放在服务器端,什么放在客户端?
[原文] [Peter Steinberger]: How and which data flows where to where? Often these are things that are a bit harder to explain to a model and where research and thinking pays off.
[译文] [Peter Steinberger]: 数据如何流动,从哪里流向哪里?通常这些事情比较难向模型解释,这时候研究和思考就会带来回报。
[原文] [Peter Steinberger]: Since I manage lots of projects, often I let an agent simply run in my project folder and when I figure out a new pattern, I ask it to “ find all my recent go projects and implement this change there too + update changelog”.
[译文] [Peter Steinberger]: 因为我管理很多项目,通常我让一个 Agent 在我的项目文件夹中运行,当我找出一个新模式时,我要求它“找到我所有最近的 Go 项目 并在那里也实施这个更改 + 更新变更日志”。
[原文] [Peter Steinberger]: Each of my project has a raised patch version in that file and when I revisit it, some improvements are already waiting for me to test.
[译文] [Peter Steinberger]: 我的每个项目在该文件中都有一个提升的补丁版本号,当我重新访问它时,一些改进已经在等着我测试了。
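“找到所有最近的 Go 项目并实施更改 + 更新变更日志”这类批量维护,可以用如下草图来理解(假设性示意:VERSION 文件与 CHANGELOG.md 的具体约定是本文虚构的,原文并未说明文件细节):

```python
"""假设性示意:在项目文件夹下批量提升 Go 项目的补丁版本。
假定每个项目根目录有 go.mod、VERSION(形如 1.2.3)和 CHANGELOG.md——
这些文件约定是本示例的假设,并非原文所述。"""
from pathlib import Path


def bump_patch(version: str) -> str:
    # "1.2.3" -> "1.2.4"
    major, minor, patch = version.strip().split(".")
    return f"{major}.{minor}.{int(patch) + 1}"


def bump_go_projects(root: Path, note: str) -> list[str]:
    bumped = []
    for go_mod in root.glob("*/go.mod"):  # 只处理含 go.mod 的目录
        project = go_mod.parent
        version_file = project / "VERSION"
        if not version_file.exists():
            continue
        new_version = bump_patch(version_file.read_text())
        version_file.write_text(new_version + "\n")
        changelog = project / "CHANGELOG.md"
        old = changelog.read_text() if changelog.exists() else ""
        # 新条目插到变更日志最前面
        changelog.write_text(f"## {new_version}\n- {note}\n\n{old}")
        bumped.append(f"{project.name} {new_version}")
    return bumped
```

这样下次回到某个项目时,版本号和变更日志里已经躺着待测试的改进,与原文描述的体验一致。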
📝 本节摘要:
本章介绍了作者如何通过自动化脚本和多设备协同来提升效率。他利用 AGENTS 文件中的指令结合 Tailscale 网络,实现了跨设备控制(如远程更新 Mac Studio)。作者采用“双 Mac”工作流:MacBook Pro 用于交互,Mac Studio 处理后台重任务和 UI 自动化,两者通过 Git 同步而非 Worktree,以避免干扰并保持环境整洁。此外,作者表达了对终端控制权的偏好,认为简单的自然语言指令(如 "commit/push")比复杂的 Slash 命令更高效且稳定。
[原文] [Peter Steinberger]: Ofc I automate everything. There’s a skill to register domains and change DNS. One to write good frontends.
[译文] [Peter Steinberger]: 当然,我 自动化了一切。有一个技能(Skill)用来注册域名和更改 DNS。还有一个用来编写优秀的前端。
[原文] [Peter Steinberger]: There’s a note in my AGENTS file about my tailscale network so I can just say “go to my mac studio and update xxx”.
[译文] [Peter Steinberger]: 在我的 AGENTS 文件中有一条关于我的 Tailscale 网络的备注,所以我可以直接说“去我的 Mac Studio 更新 xxx”。
[原文] [Peter Steinberger]: Apropos multiple Macs . I usually work on two Macs. My MacBook Pro on the big screen, and a Jump Desktop session to my Mac Studio on another screen.
[译文] [Peter Steinberger]: 顺便说一下 多台 Mac。我通常在两台 Mac 上工作。我的 MacBook Pro 连接大屏幕,而在另一个屏幕上通过 Jump Desktop 会话连接我的 Mac Studio。
[原文] [Peter Steinberger]: Some projects are cooking there, some here. Sometimes I edit different parts of the same project on each machine and sync via git.
[译文] [Peter Steinberger]: 有些项目在那里运行,有些在这里。有时我在每台机器上编辑同一个项目的不同部分,并通过 git 同步。
[原文] [Peter Steinberger]: Simpler than worktrees because drifts on main are easy to reconcile.
[译文] [Peter Steinberger]: 这比 Worktrees(工作树)更简单,因为 Main 分支上的偏差很容易调和。
[原文] [Peter Steinberger]: Has the added benefit that anything that needs UI or browser automation I can move to my Studio and it won’t annoy me with popups. (Yes, Playwright has headless mode but there’s enough situations where that won’t work)
[译文] [Peter Steinberger]: 还有一个额外的好处是,任何需要 UI 或浏览器自动化的东西我都可以移到我的 Studio 上,这样就不会有弹出窗口打扰我。(是的,Playwright 有无头模式,但在很多情况下它是行不通的)
[原文] [Peter Steinberger]: Another benefit is that tasks keep running there, so whenever I travel, remote becomes my main workstation and tasks simply keep running even if I close my Mac.
[译文] [Peter Steinberger]: 另一个好处是任务在那里 保持运行,所以每当我旅行时,远程端就成了我的主要工作站,即使我合上 Mac,任务也会继续运行。
[原文] [Peter Steinberger]: I did experiment with real async agents like codex or Cursor web in the past, but I miss the steerability, and ultimately the work ends up as pull request, which again adds complexity to my setup.
[译文] [Peter Steinberger]: 我过去确实尝试过像 Codex 或 Cursor Web 这样真正的异步 Agents,但我怀念那种可操控性,而且最终工作会变成 Pull Request,这又给我的设置增加了复杂性。
[原文] [Peter Steinberger]: I much prefer the simplicity of the terminal.
[译文] [Peter Steinberger]: 我更喜欢终端的简单性。
[原文] [Peter Steinberger]: I used to play with slash commands, but just never found them too useful.
[译文] [Peter Steinberger]: 我以前经常玩 Slash Commands(斜杠命令),但从未觉得它们太有用。
[原文] [Peter Steinberger]: Skills replaced some of it, and for the rest I keep writing “ commit/push ” because it takes the same time as /commit and always works.
[译文] [Peter Steinberger]: Skills 取代了其中的一部分,对于剩下的部分,我坚持写 “commit/push”,因为它和输入 /commit 花费的时间一样,而且总是有效。
📝 本节摘要:
本章阐述了作者“即兴重构”的开发习惯,他不再专门安排时间清理代码,而是一旦发现代码流中有“丑陋”的部分或提示词变慢,就立即着手解决。作者摒弃了传统的问题追踪器(Issue Tracker),认为对于个人开发者而言,直接修复 Bug 比记录后再切换上下文回来处理更高效。最后,作者重申了核心开发理念:无论构建什么(例如他的 YouTube 总结工具),都应先从 CLI(命令行)起步验证核心逻辑,然后再扩展至完整应用。
[原文] [Peter Steinberger]: In the past I often took dedicated days to refactor and clean up projects, I do this much more ad-hoc now.
[译文] [Peter Steinberger]: 过去我经常专门花几天时间来 重构和清理 项目,现在我更多地是即兴(Ad-hoc)做这件事。
[原文] [Peter Steinberger]: Whenever prompts start taking too long or I see sth ugly flying by in the code stream, I’ll deal with it right away.
[译文] [Peter Steinberger]: 每当提示词开始耗时过长,或者我看到代码流中飞过一些丑陋的东西时,我会立即处理它。
[原文] [Peter Steinberger]: I tried linear or other issue trackers , but nothing did stick.
[译文] [Peter Steinberger]: 我尝试过 Linear 或其他 问题追踪器,但都没能坚持下来。
[原文] [Peter Steinberger]: Important ideas I try right away, and everything else I’ll either remember or it wasn’t important.
[译文] [Peter Steinberger]: 重要的想法我会立即尝试,至于其他的,要么我会记住,要么它并不重要。
[原文] [Peter Steinberger]: Of course I have public bug trackers for bugs for folks that use my open source code, but when I find a bug, I’ll immediately prompt it - much faster than writing it down and then later having to switch context back to it.
[译文] [Peter Steinberger]: 当然,我有公开的 Bug 追踪器供使用我开源代码的人提交 Bug,但当我发现一个 Bug 时,我会立即用提示词修复它——这比把它写下来稍后再切换上下文回来处理要快得多。
[原文] [Peter Steinberger]: Whatever you build, start with the model and a CLI first.
[译文] [Peter Steinberger]: 无论你构建什么,首先从模型和 CLI(命令行界面)开始。
[原文] [Peter Steinberger]: I had this idea of a Chrome extension to summarize YouTube vids in my head for a long time.
[译文] [Peter Steinberger]: 我脑海中构思一个总结 YouTube 视频的 Chrome 扩展程序已经很久了。
[原文] [Peter Steinberger]: Last week I started working on summarize, a CLI that converts anything to markdown and then feeds that to a model for summarization.
[译文] [Peter Steinberger]: 上周我开始开发 summarize,这是一个 CLI 工具,它将任何内容转换为 markdown,然后将其提供给模型进行总结。
[原文] [Peter Steinberger]: First I got the core right, and once that worked great I built the whole extension in a day.
[译文] [Peter Steinberger]: 首先我把核心功能做对了,一旦那部分运行良好,我仅用一天时间就构建了整个扩展程序。
[原文] [Peter Steinberger]: I’m quite in love with it. Runs on local, free or paid models. Transcribes video or audio locally. Talks to a local daemon so it’s super fast. Give it a go!
[译文] [Peter Steinberger]: 我很喜欢它。它可以在本地、免费或付费模型上运行。在本地转录视频或音频。与本地守护进程对话,所以速度超快。试一试吧!
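The "model and CLI first" workflow described above can be illustrated with a minimal Python sketch. This is not the actual summarize tool: the markdown conversion is a crude HTML-to-text pass and the model call is a stub, both placeholders so the pipeline can be exercised end to end before wiring in a real converter and a real model.

```python
# Hypothetical sketch of a "convert anything to markdown, then summarize" CLI.
# The real tool would swap `to_markdown` for a proper converter and
# `summarize` for an actual model call; both are stand-ins here.
import argparse
import html.parser


class _TextExtractor(html.parser.HTMLParser):
    """Crude HTML -> text pass, standing in for a real markdown converter."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())


def to_markdown(raw: str) -> str:
    """Strip markup if the input looks like HTML; pass plain text through."""
    if "<" in raw and ">" in raw:
        parser = _TextExtractor()
        parser.feed(raw)
        return "\n\n".join(parser.chunks)
    return raw


def summarize(markdown: str, max_sentences: int = 2) -> str:
    """Placeholder for the model call: keep the first few sentences."""
    sentences = [s.strip() for s in markdown.replace("\n", " ").split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."


def main() -> None:
    parser = argparse.ArgumentParser(description="Summarize a file via markdown.")
    parser.add_argument("path", help="file to summarize")
    args = parser.parse_args()
    with open(args.path, encoding="utf-8") as f:
        print(summarize(to_markdown(f.read())))


if __name__ == "__main__":
    main()
```

Getting the core (`to_markdown` plus `summarize`) right in a CLI means an agent can run it, read the output, and iterate in a closed loop; the Chrome extension then becomes a thin shell around logic that is already proven.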
📝 本节摘要:
最后一章中,作者分享了其核心配置文件config.toml。他坚持“保持简单(KISS)”原则,首选gpt-5.2-codex high模型,认为更高的推理级别只会拖慢速度而无实益。作者详细解释了配置参数,特别是调整了令牌限制以避免模型“静默失败”,并启用unified_exec取代了旧的 tmux 脚本。有趣的是,作者认为上下文的“压缩”(Compaction)过程并非坏事,反而像是一次代码审查,能帮助模型发现 Bug。文末,作者以“构建东西太有趣了”作为结语,表达了对 AI 编程时代的无限热情。
[原文] [Peter Steinberger]: My go-to model is gpt-5.2-codex high.
[译文] [Peter Steinberger]: 我首选的模型是 gpt-5.2-codex high。
[原文] [Peter Steinberger]: Again, KISS.
[译文] [Peter Steinberger]: 再说一次,KISS(保持简单)。
[原文] [Peter Steinberger]: There’s very little benefit to xhigh other than it being far slower, and I don’t wanna spend time thinking about different modes or “ultrathink”.
[译文] [Peter Steinberger]: 除了速度慢得多之外,xhigh(超高推理模式)几乎没有什么好处,而且我不想花时间去考虑不同的模式或“超级思考”。
[原文] [Peter Steinberger]: So pretty much everything runs on high.
[译文] [Peter Steinberger]: 所以几乎所有东西都在 high(高推理模式)下运行。
[原文] [Peter Steinberger]: GPT 5.2 and codex are close enough that changing models makes no sense, so I just use that.
[译文] [Peter Steinberger]: GPT 5.2 和 Codex 非常接近,切换模型毫无意义,所以我只用那个。
[原文] [Peter Steinberger]: This is my ~/.codex/config.toml :
[译文] [Peter Steinberger]: 这是我的 ~/.codex/config.toml 配置文件:
[原文] [Peter Steinberger]:
model = "gpt-5.2-codex"
model_reasoning_effort = "high"
tool_output_token_limit = 25000
# Leave room for native compaction near the 272–273k context window.
# Formula: 273000 - (tool_output_token_limit + 15000)
# With tool_output_token_limit=25000 ⇒ 273000 - (25000 + 15000) = 233000
model_auto_compact_token_limit = 233000
[features]
ghost_commit = false
unified_exec = true
apply_patch_freeform = true
web_search_request = true
skills = true
shell_snapshot = true
[projects."/Users/steipete/Projects"]
trust_level = "trusted"
[译文] [Peter Steinberger]:
model = "gpt-5.2-codex"
model_reasoning_effort = "high"
tool_output_token_limit = 25000
# 在 272–273k 上下文窗口附近为原生压缩留出空间。
# 公式: 273000 - (tool_output_token_limit + 15000)
# 当 tool_output_token_limit=25000 时 ⇒ 273000 - (25000 + 15000) = 233000
model_auto_compact_token_limit = 233000
[features]
ghost_commit = false
unified_exec = true
apply_patch_freeform = true
web_search_request = true
skills = true
shell_snapshot = true
[projects."/Users/steipete/Projects"]
trust_level = "trusted"
[原文] [Peter Steinberger]: This allows the model to read more in one go; the defaults are a bit small and can limit what it sees.
[译文] [Peter Steinberger]: 这允许模型一次性读取更多内容,默认值有点小,会限制它所看到的内容。
[原文] [Peter Steinberger]: It fails silently, which is a pain and something they’ll eventually fix.
[译文] [Peter Steinberger]: 它会静默失败,这很痛苦,也是他们最终会修复的问题。
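The budget arithmetic in the config comments is easy to sanity-check: the auto-compact limit is the roughly 273k context window minus the raised tool-output limit and a 15k safety margin (the window size and margin are the post's numbers, not official constants).

```python
# Sanity check of the token-budget formula from the config comments:
# model_auto_compact_token_limit = context_window - (tool_output_token_limit + margin)
CONTEXT_WINDOW = 273_000          # approximate context window cited in the post
TOOL_OUTPUT_TOKEN_LIMIT = 25_000  # raised from the smaller default
SAFETY_MARGIN = 15_000            # headroom for the compaction pass itself

auto_compact_limit = CONTEXT_WINDOW - (TOOL_OUTPUT_TOKEN_LIMIT + SAFETY_MARGIN)
print(auto_compact_limit)  # → 233000, matching model_auto_compact_token_limit
```

If you raise tool_output_token_limit further, the same formula tells you how much to lower model_auto_compact_token_limit so compaction still has room to run.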
[原文] [Peter Steinberger]: Also, web search is still not on by default?
[译文] [Peter Steinberger]: 另外,网络搜索居然还不是默认开启的?
[原文] [Peter Steinberger]: unified_exec replaced tmux and my old runner script; the rest is neat too.
[译文] [Peter Steinberger]: unified_exec 取代了 tmux 和我旧的 runner 脚本,其余的也很整洁。
[原文] [Peter Steinberger]: And don’t be scared about compaction, ever since OpenAI switched to their new /compact endpoint, this works well enough that tasks can run across many compacts and will be finished.
[译文] [Peter Steinberger]: 不要害怕压缩(Compaction),自从 OpenAI 切换到他们新的 /compact 端点后,这就运作得足够好了,任务可以在多次压缩中运行并最终完成。
[原文] [Peter Steinberger]: It’ll make things slower, but often acts like a review, and the model will find bugs when it looks at code again.
[译文] [Peter Steinberger]: 这会让事情变慢,但通常起到了审查的作用,模型在再次查看代码时会发现 Bug。
[原文] [Peter Steinberger]: That’s it, for now.
[译文] [Peter Steinberger]: 暂时就这些了。
[原文] [Peter Steinberger]: I plan on writing more again and have quite a backlog of ideas in my head, just having too much fun building things.
[译文] [Peter Steinberger]: 我计划再多写点东西,脑子里积压了很多想法,只是 构建东西太有趣了(没顾上写)。
[原文] [Peter Steinberger]: If you wanna hear more ramblings and ideas how to build in this new world, follow me on Twitter.
[译文] [Peter Steinberger]: 如果你想听更多关于如何在这个新世界中进行构建的碎碎念和想法,请在 Twitter 上关注我。