Loved this post from Patrick Muindi on Substack about how you only get to know the beauty of your thinking by writing. It's such a powerful and relatable piece of text.
If you don't write, you'll never know how beautiful your thinking is. And you deserve to know, even if only you will ever see your writing.
I have also copied the same text above in case I lose the images for whatever reason.
The new GPT-Codex-5.3 model was released only a few days ago, and I'm already seeing people build cool applications with it. For example, Max built a native app for Google Messages on macOS, and it looks stunning. Just look at the screenshot Max shared:
It's an open-source app and he's still pushing changes to it. Currently, the app can:
pair with your Android phone
sync Google Messages conversations in real time
send and receive messages
deliver push notifications
run on macOS, iOS, and even visionOS
I mean, this is fascinating, and I can and should actually pursue some incomplete ideas that I have.
PutOut is an open-source, self-hosted solution that turns your e-books into beautiful, responsive websites. I created it, and I've just pushed v2.0.0 live on GitHub.
I picked this project up again after a long time, and there are tons of new features this time:
8 accent color palettes — emerald, indigo, rose, amber, blue, violet, teal, orange. Set one value in site.js and it brands your entire site via CSS custom properties (see the sketch after this list)
Reader-controlled dark mode — Light/dark/auto toggle in the footer with localStorage persistence and anti-FOUC script
Enhanced navigation — Keyboard shortcuts (arrow keys), swipe gestures on mobile, mobile bottom nav bar, and sidebar with focus trapping
Reading experience — Progress bar scoped to chapter content, reading time estimates, scroll-to-top button, and next-chapter prefetch at 50% scroll
SEO & structured data — JSON-LD schemas (Book + Article), Open Graph tags, Twitter Cards, XML sitemap, robots.txt, and canonical URLs
Accessibility — Skip-to-content link, focus-visible styles with accent color, keyboard navigation, noscript fallback, and print stylesheet
Custom 404 page — Styled error page with chapter directory
Chapter template — _chapter-template.md starter file for quick chapter creation
Comprehensive wiki — Documentation covering configuration, chapters, theming, SEO, accessibility, PDF/EPUB, and deployment
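To illustrate the accent palette item above: the whole theme is driven by a single value in site.js, which the templates map to CSS custom properties. Here's a simplified sketch of the idea; the exact key name below is hypothetical, so check the wiki for the real option:

```js
// site.js — simplified, hypothetical sketch, not PutOut's exact config shape
module.exports = {
  title: 'My E-book',
  accent: 'emerald', // emerald | indigo | rose | amber | blue | violet | teal | orange
};
```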
You can also take a look at this e-book that's published using the new v2 of PutOut.
I am still working on it, and will keep improving it as much as possible. I know that the design, fonts, icons, color palettes, etc. still have room for improvement, so I am trying a bunch of things and will keep taking this forward.
I had tried the CodexBar app previously, but it kept showing me annoying popups, so I didn't continue using it. But today I found a tool called OpenUsage that does the same thing: it tracks your token usage across multiple AI tools like Codex, Claude, Cursor, Copilot, and more.
Not to mention, it's an open-source project with a lot of people involved in its development. Overall, I liked it better than other such tools.
I just updated to the Codex macOS app version 260206.1448 (565), and the keyboard shortcut for entering plan mode has changed. Earlier it was shift + tab, but now it's cmd + shift + p, as you can see in the screenshot here.
I don't think this change was needed, as the earlier shortcut was easier for entering plan mode. Now, pressing shift + tab highlights clickable elements in the app, just like it does on a webpage.
These days I am mainly using Claude Code or Codex CLI from inside VS Code for coding, and GitHub is sometimes too pushy about using Copilot. I wanted to disable all AI features inside the IDE, and found a quick solution:
As you can see in the screenshot here, you can just go to Settings, search for @id:chat.disableAIFeatures, and tick the checkbox to turn AI features off. Now, VS Code doesn't automatically open or show the sidebar AI chat when I launch it.
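If you prefer editing settings.json directly, this is the same setting the checkbox toggles:

```jsonc
// settings.json
{
  // Turns off Copilot chat and the other built-in AI features in VS Code
  "chat.disableAIFeatures": true
}
```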
If you want additional AI-related settings, just search for ai and you will find other settings that you can control.
I first used Remotion almost a year ago to create some animated text videos, and it has been my favorite way to create cool videos programmatically ever since. Recently, I learned that Remotion is collecting some cool prompts that you can use to one-shot different styles of quick videos.
You can visit the Remotion prompt library and use these with Claude Code or any other AI model. They also have Agent Skills that are super helpful when creating these videos.
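If you haven't seen Remotion code before, videos are just React components rendered frame by frame. A minimal sketch of an animated-text piece (the component and timings here are made up, not taken from their prompt library):

```tsx
import React from 'react';
import {AbsoluteFill, interpolate, useCurrentFrame} from 'remotion';

// Fades a title in over the first 30 frames; Remotion renders the component once per frame.
export const FadeInTitle = ({title}: {title: string}) => {
  const frame = useCurrentFrame();
  const opacity = interpolate(frame, [0, 30], [0, 1], {extrapolateRight: 'clamp'});
  return (
    <AbsoluteFill
      style={{justifyContent: 'center', alignItems: 'center', backgroundColor: 'white'}}
    >
      <h1 style={{fontSize: 120, opacity}}>{title}</h1>
    </AbsoluteFill>
  );
};
```

The prompts in their library basically get the AI to generate a bunch of components like this, plus the timing logic.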
One cool thing I saw on X: a person created a really professional and cool product launch/announcement video using Claude Code. It would take me at least two hours to manually edit such a video. Just take a look at this stunning product launch video; I can't believe it was created using AI.
I am definitely exploring Remotion a lot more soon.
I started adding this instruction to my CLAUDE.md file so it gets updated after every major change, and now the model is more consistent and hallucinates less as well.
Self-Maintenance Rule
After every major change (new model, new page, new controller, route changes, migration changes, new test files, architectural shifts), update this CLAUDE.md file to reflect the current state. Specifically:
Add new models/controllers/pages/routes to the relevant tables below
Update test count if new tests are added
Add any new gotchas or patterns to the "Gotchas & Pitfalls" section
Update the "Current State" section if the status changes
Keep this file as the single source of truth for AI sessions working on this project
I am working in Laravel, and this rule has significantly improved the quality of the code; the model is now a lot more consistent. And this works great with the newly launched Claude Opus 4.6 as well.
OpenAI launched their new model just 30 minutes after Anthropic launched theirs. It seems they had some insider knowledge about the launch. There's a crazy amount of cutthroat competition going on in the AI industry, especially between these two giants.
Today, Anthropic released the new Claude Opus 4.6 model, and it's already available in the Claude Code CLI. I have also posted about it on X, with a screenshot. Yesterday, I posted about the possibility of Opus 4.6 launching soon by quoting a Reddit post.
As claimed in their launch post, Opus 4.6 scores only slightly higher than GPT-Codex-5.2 but considerably higher than Opus 4.5. And here is what their API pricing looks like:
I am still looking into it, and will keep this page updated with new learnings.
Also, OpenAI's GPT-Codex-5.3 was launched today.
Update:
I saw a person talking about this on X, and then found this in their launch post:
Looking more into it.
Apart from this, I just came across the banner above, where Claude is offering $50 worth of extra usage for testing and exploring the newly launched model. I saw this on the Claude usage page.
OpenAI claims the new GPT-Codex-5.3 model is a lot better than the previous 5.2 model. You can see this in the screenshot above, and you can also read what they have to say here:
GPT‑5.3-Codex also better understands your intent when you ask it to make day-to-day websites, compared to GPT‑5.2-Codex.
At the time of writing this, there's no mention of the new model on their models page.
I'm still trying this out, and will keep this page updated with new info.
Runs can go 8+ hours... and I come back to working code + live deployments.
Very interesting.
Unable to see the new model in the Codex CLI?
I see some people complaining about not seeing the GPT-Codex-5.3 model in the Codex CLI. If that's you, first update the CLI tool by running the following command:
npm i -g @openai/codex
Then the new model will automatically show up when you run /model in the Codex CLI.
Saw multiple posts talking about the possibility of Anthropic launching the Claude Opus 4.6 model soon. One person on Reddit posted the following:
The leaks are real and we are getting Opus 4.6 and Sonnet 5.0 soon, could be today, tomorrow or even the next week?
Just to verify the findings of the Macintoch from X, I created a new project, enabled Vertex AI, and verified the results myself. Without using the project credentials I was getting 403, but after using my project credentials the results varied: real models would return 200 (expected), but opus-4-6 and sonnet-5 returned 403, which means I am not authorized, and, unsurprisingly, completely made-up models returned 404.
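For anyone curious, the probing idea looks roughly like this against the Vertex AI publisher endpoint; the project, region, and model IDs below are placeholders, and the exact URL and request body may differ from what the poster actually used:

```sh
PROJECT=my-gcp-project
REGION=us-east5
for MODEL in claude-opus-4-5 claude-opus-4-6 claude-sonnet-5 definitely-fake-model; do
  # 200 = model exists and you're authorized, 403 = exists but not authorized, 404 = no such model
  CODE=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"anthropic_version":"vertex-2023-10-16","max_tokens":1,"messages":[{"role":"user","content":"hi"}]}' \
    "https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${REGION}/publishers/anthropic/models/${MODEL}:rawPredict")
  echo "${MODEL} -> ${CODE}"
done
```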
This is a great time for them to launch, because I have subscribed to Claude Code Max just in time. Haha.
Came across this interesting use case of Tailscale installed on an Apple TV, with the exit node set to a router in a different country, while the Apple TV itself stays in another country.
Andrej Karpathy shared this post on X saying nowadays he is spending a lot more time reading long-form articles via RSS/Atom feeds. He says, "... a lot less slop intended to provoke". He also shared a list of 92 top blogs that are popular on Hacker News. It's in .opml format and can be directly imported into most RSS feed readers like NetNewsWire (I already use this app).
I absolutely love RSS feeds and have been using them for at least a few years now. It feels great to see people going back to the basics and RSS becoming cool again.
VS Code plays a sound when you run a command, send a chat request, undo edits, etc., and sometimes it's just annoying. These sounds are called signals, and while you can go to VS Code settings and mark each signal as never, that's tedious as there are tens of different items you'd need to change.
So... what's the easiest way then?
Yes, set the volume to 0. For this, go to Settings, search for signals volume, and set it to zero, or reduce it to a smaller number so it's not annoying anymore. It was set to 70 in my case, I think by default.
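For reference, the underlying setting looks like this in settings.json on my version (the exact key might differ between VS Code versions, so search for "signals volume" if it doesn't match):

```jsonc
// settings.json
{
  // Volume (0–100) for all accessibility signals; 0 mutes them entirely
  "accessibility.signalOptions.volume": 0
}
```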
OpenAI launched a new macOS app for Codex that is more feature-rich than using Codex in the terminal. You can watch their quick demo on X, and here are some highlights from the demo:
run and manage multiple agents for multiple projects in one place
in-built git worktrees to simultaneously work on different features without conflicts
easier and better way to use and manage skills; usage can go beyond coding, e.g. writing
generate and edit images inside the app to use in websites (this is a cool skill)
pre-built skills for creating documents like PDFs, spreadsheets, and docs
/personality command to choose an interaction style you like
I just installed the app, and here are my first impressions:
The app looks good, love the light mode with system translucency.
I didn't need to log in, as I was already logged in to Codex CLI. It's already showing my MCPs installed and other settings. When I opened a project, it already showed my previous chats with Codex CLI in the folder. Cool!
I changed a few settings:
Turned on: Require ⌘ + enter to send long prompts
Turned on: Use pointer cursors
Selected the option to show commands and expand output
The Skills library is good; you can just click on any skill to install it in seconds.
It has an in-built terminal as well. I can also run some commands from inside the app.
The dictation feature is useful and very accurate.
I gave it a few prompts in a project and can already say that the experience is much better than using the Codex CLI. In the CLI, it felt like a lot of work to install and use a skill, but it's quicker here.
The slash / command in the chat shows some useful options. Running /status or /mcp doesn't disturb the running thread or wait for the execution to complete; it immediately shows the results in a popup above the chat input box.
Update
Just learned that OpenAI has launched a promo offer for Codex allowing 2x usage limits on all paid plans for 2 months. And now even ChatGPT Free and Go users have 1-month access to Codex. This announcement wasn't in the main OpenAI post, but was posted by Alexander Embiricos from their team:
To celebrate the Codex app, we're launching a promo:
Doubled Codex rate limits for all paid plans (2 months)
Came across this post from Wes McKinney talking about the new tool he's building: a local-first email archive with a terminal UI and an MCP server. It's called msgvault; it's powered by DuckDB and comes as a single Go binary. Wes has written more about the new tool in this announcement post on his blog, explaining how it works and what the future goals for the tool are. He writes:
Fundamentally, this is my data, and I should be able to search it in milliseconds, pull up old emails and attachments in a few keystrokes, and query my history with natural language privately and securely. Finally, in 2026, there is no reason that I should not be able to solve this problem. So I did.
And it does make sense.
Basically, you will be able to chat with all your emails via the inbuilt MCP server and search through your entire archive within milliseconds. The tool currently works with Gmail using the OAuth API, archiving everything in a local SQLite database. There's also a command, msgvault delete-staged, that deletes the synced emails from Gmail.
In the future, the tool will also have an .mbox import feature and support for other email services, and it could even support importing WhatsApp, iMessage, and SMS archives.
Umami Analytics released v3.0.3 more than a month ago with the patch for the recent Next.js security issue, but I upgraded to that version only recently. To upgrade, I followed the article I wrote a while ago for upgrading from Umami v2 to v3.
It was a straightforward process, but I did take a backup of the Postgres database before the upgrade, in case something went wrong. I ran the following command for the backup:
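This is the standard pg_dump approach; the database name, user, and host are placeholders for whatever your own install uses:

```sh
# Dump the Umami Postgres database to a timestamped file before upgrading
pg_dump -h localhost -U umami -d umami -F c -f umami-backup-$(date +%F).dump
```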
Codex CLI released the new v0.93.0, and it comes with the most-awaited feature, Plan Mode, as explained in the release notes.
Plan mode now streams proposed plans into a dedicated TUI view, plus a feature-gated /plan shortcut for quick mode switching.
But the issue is, the /plan command didn't initially appear for me even after upgrading to the 0.93.0 version. It turned out I had to turn on the experimental collaboration mode by adding the following in the Codex config.toml file:
[features]
collaboration_modes = true
After this, I had to run the installation command npm i -g @openai/codex again, which output "1 file changed", and then plan mode was available to use. I could either type /plan or press shift + tab, and plan mode gets activated immediately.
I also replied to some folks who were having issues with seeing /plan even after updating to the latest version.
Claude Code creator Boris shared some tips and best practices for using Claude Code, and they seem actually helpful. I am yet to get a Claude Code subscription, but I'm noting them down here for the future:
always start in plan mode, unless you're doing something very simple; you can even ask another Claude instance to review and modify the plan, and only then implement
keep your CLAUDE.md file updated, end your prompts with "update your CLAUDE.md file so you do not make that mistake again"
for each repetitive task, create a skill that you can re-use across projects; you can set up /slash commands to quickly refer to skills
get Claude to fix the bug by itself: enable Slack MCP, paste a Slack bug thread into Claude and just say "fix"
use prompts like:
grill me on these changes and don't make a PR until I pass your test
While looking for a lightweight GUI code editor or text editor, I found a few good editors for macOS (most of these are open-source) that I am going to list here:
I haven't used them all, of course, but have listed them here in case I need them someday. And, someday, I will definitely get bored of my current setup and start searching for another editor.
I had been using Antigravity as my main AI-assisted IDE for more than a month, but now it's unusable as they have enforced really strict limits. The Opus 4.5 limit gets exhausted with just 1-2 prompts, and then I have to wait 2-3 days for it to reset.
What now?
I am looking for an alternative, and as I am working on a few serious projects now, I want an IDE with higher limits. I was trying to decide between the Claude Code Max ($100/mo) and Cursor Pro+ ($60/mo) plans, so I asked for suggestions, and most people recommended Claude Code Max as it offers better value for money.
One person on Reddit also recommended GitHub Copilot, as it gives you almost 500 Opus 4.5 requests each month on their Pro+ plan. But it turns out that's not quite true: sometimes, if the response gets longer, it asks you to approve 3 more credits.
Update: I learned that the limit can be increased by configuring the max requests, as a person pointed out on Reddit.
And in that case it will become even costlier than Cursor or Claude Code itself, so this is ruled out.
I am leaning more towards Claude Code Max; it's costly but offers better value for money. And I am coding a lot these days, so it makes more sense.
However, I am still waiting 1-2 days before getting the subscription so I can evaluate each option properly.
I earlier wrote about Clawdbot being renamed to Moltbot and later to OpenClaw, and there has been significant development in the space ever since. The OpenClaw GitHub repo has over 123k stars at the time of writing this note, and people are using it for crazy things.
Here I will list some crazy things people are using OpenClaw for:
Bhanu is using the bot to do marketing for his SaaS SiteGPT, as I have talked about here. He recently posted on X saying, "10 AI agents working 24x7 for a single mission of getting SiteGPT to $1M ARR", and there's another recent tweet about the same.
Ollama now has an official integration to connect OpenClaw with local models. Obviously, it won't be as good as using Claude models, but it's good for simple tasks.
I will keep adding more stuff to this page, as I discover them.
Earlier I started showing some random stats about my blog on the about page, but now I have also started showing the total words and the total posts and notes in the footer section as well, as you see in the screenshot here.
It's completely irrelevant, but it's super cool to have this shown dynamically in the footer.
Earlier, I was calculating these stats separately for both the /about page and the footer section, and that increased the build time by ~15 seconds. But I now have a better setup, as I am doing the calculations only once and then showing the data in both places.
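A rough sketch of the compute-once idea (the file, loader, and entry shape here are hypothetical, not my actual code):

```ts
// blog-stats.ts — hypothetical sketch: count everything once and let both
// the /about page and the footer import the same cached result.
import {getAllEntries} from './content'; // hypothetical loader for posts + notes

type BlogStats = {totalPosts: number; totalNotes: number; totalWords: number};

let cached: BlogStats | null = null;

export function getBlogStats(): BlogStats {
  if (cached) return cached; // later callers reuse the first computation
  const entries = getAllEntries();
  cached = {
    totalPosts: entries.filter((e) => e.type === 'post').length,
    totalNotes: entries.filter((e) => e.type === 'note').length,
    totalWords: entries.reduce((sum, e) => sum + e.body.split(/\s+/).length, 0),
  };
  return cached;
}
```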
In the future, I will spend more time on the stats thing and probably even have a separate /stats page showing much more relevant data about my blog.
And they now have a new domain name, openclaw.ai. I remember saying that I liked Moltbot better than Clawdbot in my earlier post, but I definitely like OpenClaw better than both earlier names.