lubu labs

Codex CLI Is Growing Up Fast. These 7 Recent Changes Matter Most.

Codex CLI 0.128.0–0.130.0 added long-running goals, remote control, Vim mode, stronger hooks, better plugin workflows, and safer sandboxing.

Simon Budziak
CTO

Recently I wrote a similar post about Claude Code updates you cannot miss. That piece did well, and it also made one thing obvious: if you care about AI coding assistants, you now need to watch both Claude Code and Codex closely.

They are the two leading products in this category right now. They ship fast, and the changelogs are dense enough that it is easy to miss what actually changes day-to-day work.

So I applied the same lens to Codex CLI.

Not to summarize every release note, but to pull out the updates that change how the product feels in real use.

Between April 30 and May 8, 2026, OpenAI shipped three CLI releases: 0.128.0, 0.129.0, and 0.130.0. Read straight through, the changelog looks like a lot of terminal polish and internal cleanup. Read more carefully, and a clearer pattern emerges: Codex CLI is becoming more agentic, more terminal-native, and more team-operable.

TL;DR

  • Codex CLI now has more serious support for long-running agent workflows, not just one-shot terminal prompting.
  • The TUI is improving in the ways that matter to heavy users: Vim mode, better resume/fork flows, raw scrollback, and workspace-aware diffs.
  • Plugin, hook, and permission changes point to a product that is being shaped for repeatable team workflows, not just solo experimentation.
  • codex remote-control is one of the strongest signals in this batch: OpenAI is clearly investing in headless and remotely controlled Codex workflows.

What changed in just over a week

This was not one oversized release. It was a fast sequence:

  • 0.128.0 (April 30, 2026) — goals, codex update, permission profiles, marketplace/plugin workflow upgrades
  • 0.129.0 (May 7, 2026) — Vim mode, better resume/fork UX, /hooks, workspace sharing, sandbox reliability
  • 0.130.0 (May 8, 2026) — codex remote-control, richer plugin metadata, better thread/config handling

The interesting part is not any one bullet. It is the shape of the product these releases point to.

1. Codex is becoming more agentic

Persisted /goal workflows are the real headline of 0.128.0

The biggest conceptual change in this release set is the new persisted /goal workflows.

OpenAI did not ship this as a tiny convenience feature. The release notes tie goals to app-server APIs, model tools, runtime continuation, and TUI controls for create, pause, resume, and clear. That is a much bigger step than "you can save a prompt now."

It means Codex is moving further away from the old mental model of "run a command, get an answer, start over."

Instead, Codex is being shaped around longer-lived work:

  • start a goal
  • let it run
  • pause it
  • resume it later
  • continue it across sessions
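As a sketch, that lifecycle might look like this inside the TUI. The release notes confirm controls for create, pause, resume, and clear, but the subcommand spellings below are illustrative, not documented syntax:

```shell
# inside the Codex TUI — subcommand spellings are illustrative
/goal create "Migrate the integration tests to the new fixture API"
# ...Codex works toward the goal; you can step away mid-run...
/goal pause       # suspend without losing runtime state
# later, possibly in a fresh session:
/goal resume      # continue from where the run left off
/goal clear       # drop the goal once the work lands
```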

If you believe AI coding assistants are evolving into semi-autonomous systems rather than smart autocomplete with a shell, this is the update to pay attention to.

codex remote-control makes headless Codex much more interesting

0.130.0 added codex remote-control as a simpler entrypoint for starting a headless, remotely controllable app-server.

That may not sound flashy if you only use Codex interactively in a local terminal. But it is exactly the kind of release note that matters if you care about:

  • remote execution
  • scripted orchestration
  • internal tooling
  • automation layers on top of Codex

Why this matters: a coding assistant becomes much more valuable once it can be controlled as infrastructure, not just as an interactive terminal session.

Taken together with persisted goals, this points in the same direction: Codex is becoming easier to run, resume, and control as a system.
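A rough sketch of what that combination unlocks. The port flag and HTTP shape here are assumptions rather than documented API; check `codex remote-control --help` for the real interface:

```shell
# start a headless, remotely controllable app-server (flags illustrative)
codex remote-control --port 8787 &

# an orchestration layer can then drive Codex as infrastructure,
# e.g. via a hypothetical app-server endpoint:
curl -X POST http://localhost:8787/threads \
  -d '{"prompt": "run the failing tests and summarize the output"}'
```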

2. Codex is becoming a better full-time CLI

Vim mode is not cosmetic

0.129.0 added modal Vim editing in the composer, including /vim, default-mode config, and Vim-specific keymap contexts.

This matters for a simple reason: terminal-native power users do not want a half-terminal experience. If a CLI product expects to stay open for hours, input ergonomics matter.

Adding Vim mode signals that OpenAI is taking the Codex TUI seriously as a place where people will do real work, not just issue short commands.
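If the default-mode config works the way most Codex settings do, enabling it might look like this in `~/.codex/config.toml`. The key name is a guess inferred from the release notes, not documented syntax:

```toml
# ~/.codex/config.toml — key name assumed from the 0.129.0 notes
[tui]
vim_mode = true   # open the composer in modal Vim editing by default
```

You can also toggle it per session with /vim.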

Resume, fork, scrollback, and /diff all got more practical

The same 0.129.0 release also improved several workflow details that matter once you use Codex heavily:

  • redesigned resume/fork picker
  • raw scrollback mode
  • /ide context injection
  • workspace-aware /diff

None of these features makes for a flashy product announcement on its own. Together, they reduce friction in exactly the places that experienced users feel it first:

  • finding the right prior session
  • copying useful output
  • understanding what changed
  • moving between related workstreams

This is the kind of product work that usually shows up after a tool already has serious daily users.

codex update is small, but it removes unnecessary friction

0.128.0 also introduced codex update.

This is not the deepest change in the batch, but it is the kind of quality-of-life improvement that makes a CLI feel mature. When a tool is shipping this quickly, an easier update path matters more than usual.

It lowers the friction between "there is a fix I want" and "I am actually running it."

3. Codex is becoming safer and more team-ready

Permission profiles and sandbox controls are becoming first-class

One of the stronger signals in 0.128.0 was the expansion of permission profiles:

  • built-in defaults
  • sandbox CLI profile selection
  • cwd controls
  • active-profile metadata for clients

Then 0.129.0 reinforced that direction with sandbox reliability fixes across Linux and Windows, including older bwrap environments, shared /tmp setups, named pipes, ConPTY teardown, Git safety handling, and related execution edge cases.
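In practice, that direction suggests invocations like the following. The profile names and flag spellings are assumptions; run `codex --help` to confirm what your build accepts:

```shell
# pick a built-in permission profile for a run (names illustrative)
codex --profile read-only "explain why the build is failing"

# or constrain the sandbox to a single workspace directory
codex --sandbox workspace-write --cwd ./services/billing
```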

That combination matters.

There is a big difference between "this assistant can run commands" and "this assistant has clearer operational boundaries." The second one is what teams need before they trust a tool in more serious workflows.

Hooks are becoming operationally useful

0.129.0 also made hooks much more visible and manageable:

  • browse and toggle hooks from /hooks
  • run hooks before or after compaction
  • add PreToolUse context

This is the kind of capability that becomes valuable once you start treating the assistant as part of a workflow, not just a chat interface. Hooks are where teams begin to inject policy, metadata, sanitization, or routing behavior around the model.

The important change here is not just raw capability. It is discoverability and control. A feature becomes far more useful once users can actually inspect and manage it from the interface.
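A hook entry might look something like this in config. The schema below is a hypothetical illustration; `/hooks` in the TUI shows what your build actually supports:

```toml
# hypothetical hook definition — the real schema may differ
[[hooks]]
event   = "PreToolUse"                # fire before Codex invokes a tool
command = "./scripts/audit-log.sh"    # e.g. record the call for later review
```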

Plugin workflows are starting to look team-friendly

Across all three releases, plugin workflow improvements kept showing up:

  • marketplace installation
  • remote bundle caching and sync
  • plugin-bundled hooks
  • workspace sharing
  • share access controls
  • source filtering
  • discoverability controls
  • richer plugin details, including bundled hooks and link metadata

That is more than marketplace polish.

It suggests OpenAI is working toward a plugin model that supports distribution, governance, and collaboration, not just personal add-ons installed ad hoc on one machine.

If you are thinking about standardizing Codex workflows across a team, this is one of the most important patterns in the changelog.
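For a team, the workflow these changes point toward could look like the following. The changelog confirms the capabilities, not this exact syntax, and the plugin name is invented for illustration:

```shell
# install a shared plugin from the marketplace (syntax illustrative)
codex plugin install acme-org/review-helpers

# inspect what it ships: bundled hooks, links, and other metadata
codex plugin show acme-org/review-helpers
```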

Why this release streak matters

If you step back, the story here is not "Codex added seven nice features."

The story is that three fast releases pushed the product forward on three important fronts at the same time:

  1. Agent workflows: persisted goals, runtime continuation, remote control, better thread handling.
  2. Terminal ergonomics: Vim mode, stronger resume/fork UX, better scrollback, better diffing.
  3. Operational maturity: permission profiles, more reliable sandboxes, better hooks, more governable plugin workflows.

That combination is what makes a coding assistant feel less like a promising demo and more like a tool you can build habits around.

I still think Claude Code and Codex are the two most important products to watch in this category. But the Codex changelog from 0.128.0 to 0.130.0 makes one thing especially clear: OpenAI is investing heavily in Codex CLI as a serious operating surface, not just a companion interface.

If you want a complementary read, start with the recent Claude Code updates post, then compare where the two products seem to be placing their bets.


If you're evaluating Codex, Claude Code, or both for real engineering workflows, book a discovery call.

