DeepakNess

Raw notes

(398 notes)

Short notes, links, and thoughts – shared as I go through my day.


arXiv now has an MCP server

arXiv has a new MCP server that you can connect to your AI applications, agents, and workflows to search papers, analyze PDFs, explore codebases, and synthesize research insights. From their docs, below are the available tools in the MCP server:

  1. Embedding similarity search
  2. Full-text papers search
  3. Agentic paper retrieval
  4. Get paper content
  5. Answer PDF queries
  6. Read files from paper's GitHub repo

I think this can be important for researchers and engineers, as their AI agents will now have access to all relevant research papers. Earlier, agents could search for papers via search engines, but that was error-prone and sometimes even returned broken URLs. That shouldn't be the case with the MCP server.

I installed this to Claude Code by running the following command:

claude mcp add --transport http alphaxiv https://api.alphaxiv.org/mcp/v1

And I had to authenticate by running the /mcp command inside Claude Code, which looked like the screenshot below:

Installing arXiv new MCP server to Claude Code

And then it was working as expected.

I'm still exploring this, and will share more if I learn something noteworthy.


OpenAI launches GPT-5.4 mini and nano

Two weeks after the GPT-5.4 launch, OpenAI has released the mini and nano versions of GPT-5.4, and they're already available in ChatGPT, Codex, and via the API. As self-reported by OpenAI, the GPT-5.4 mini and nano models come very close to the larger GPT-5.4 model.

Evals - GPT-5.4 vs mini vs nano

I was reading Simon's post about the new models; he ran some experiments describing images with the GPT-5.4 nano model and gives this estimate:

[...] describing every single photo in my 76,000 photo collection would cost around $52.44.

And from my tests as well, the models are fairly capable at simple repetitive tasks. But compared to the previous GPT-5 mini and nano models, they are costlier. You can check out the table below for a comparison:

Model          Input price (per 1M tokens)   Output price (per 1M tokens)
GPT-5 mini     $0.25                         $2.00
GPT-5 nano     $0.05                         $0.40
GPT-5.4 mini   $0.75 (3x costlier)           $4.50 (2.25x costlier)
GPT-5.4 nano   $0.20 (4x costlier)           $2.25 (5.6x costlier)
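Simon's estimate above can be sanity-checked with quick shell arithmetic. The per-photo token counts below are my assumptions, not measured values, so this only shows the ballpark, not his exact number:

```shell
# Back-of-the-envelope cost for captioning 76,000 photos with GPT-5.4 nano
# at $0.20 per 1M input tokens and $2.25 per 1M output tokens.
# The ~1,500 input and ~200 output tokens per photo are assumed figures.
awk 'BEGIN {
  photos = 76000
  cost = photos * (1500 * 0.20 + 200 * 2.25) / 1e6
  printf "$%.2f\n", cost   # prints $57.00
}'
```

With those assumed token counts, the result lands in the same ballpark as Simon's $52.44; the real cost depends on image resolution and caption length.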

That said, the new mini and nano models are far more capable than the previous mini and nano models.


Just ask and Codex can now spin up subagents

I tried OpenAI's newly launched subagents feature for Codex, and it's awesome. I tried it via the Codex app on macOS, and the UI looks good as well.

A screenshot of multiple subagents working in Codex app

Although spawning subagents consumes many more tokens than a single agent would for the same task, it works faster, since multiple agents work on the different requested tasks in parallel. By the way, the agent only spawns subagent(s) when you specifically ask it to. The Codex docs have much more info about this and other related things.

I also learned from Simon's post that Codex lets you define and use custom agents:

Codex also lets you define custom agents as TOML files in ~/.codex/agents/. These can have custom instructions and be assigned to use specific models - including gpt-5.3-codex-spark if you want some raw speed.
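As a purely hypothetical sketch of such an agent file (I haven't verified the schema, so every field name here is an assumption; check the Codex docs for the real format), it might look something like:

```toml
# ~/.codex/agents/docs-writer.toml
# Hypothetical sketch only: field names are assumptions, not the verified schema.
name = "docs-writer"
model = "gpt-5.3-codex-spark"
instructions = "Only write and edit documentation. Never modify source code."
```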

For your info, the subagents feature is only new to Codex; it's already available in Claude Code, Gemini, Cursor, OpenCode, etc.


Pi powered via the Wi-Fi Router

Wifi Router powers Raspberry Pi

I was using a power adapter to power my Raspberry Pi 4B, but then I noticed a USB port on my Wi-Fi router and connected the Pi directly to it. And it's working as expected.

I will also soon connect the Pi to the Wi-Fi router via a LAN cable for faster internet.


WordPress 7 is coming with new features

WordPress 7 is about to be released on April 9, 2026, and it's coming with some interesting features. From this post on X, I learned that the new v7 will have optional Google Docs-style editing, as you can see below:

WordPress 7 real-time collaboration

Discussions are still ongoing about whether to enable real-time collaboration by default or keep it optional. But most likely, it will be turned off by default.

WordPress 7 AI connectors page

Apart from this, WordPress 7 will have a new page for AI connectors, as you can see in the screenshot. As specified, all your API keys and credentials are stored there and shared across plugins. I think this is a good option.

Sometimes I had second thoughts about keeping my blog on 11ty or going back to WordPress, as the Netlify build time was growing a lot. But once I started optimizing images locally and hosting them externally on Cloudflare R2, the issue was resolved. Still, I will keep an eye on WordPress as an option for my personal site.


Recordly: open-source screen recorder for macOS

I have been using screen.studio to record videos for my channel for over 2 years now, and I just learned that there is an open-source alternative to the app called Recordly. It helps you create similarly styled videos for free, and the app is available for macOS, Windows, and Linux as well.

Recordly.dev website

You can see Recordly's source on GitHub, and it's a fork of another open-source project called OpenScreen, but with some additional features. The creator mentions the following on the forked repo:

FAQ: What are the changes between this and Openscreen? A: Recordly adds a full cursor animation/rendering pipeline, native macOS screen capture, zoom animations faithful to Screen Studio, smoother panning behaviour, and more major tweaks.

This fork exists because the original maintainer does not wish implementing the architectural changes that make some of these features possible i.e. different recording pipeline.

By the way, I learned about Recordly from this post on X.


Claude Code wiped entire production database

Came across this post on X about an AI horror story, which read:

Claude Code wiped our production database with a Terraform command.

It took down the DataTalksClub course platform and 2.5 years of submissions: homework, projects, and leaderboards.

Automated snapshots were gone too.

In the newsletter, I wrote the full timeline + what I changed so this doesn't happen again.

If you use Terraform (or let agents touch infra), this is a good story for you to read.

They have also written about the incident in more detail in this blog post.
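Stories like this are a good argument for putting a human gate in front of destructive infra commands. Below is a minimal sketch of the idea (my own illustration, not what DataTalksClub actually changed): a guard that refuses destructive Terraform invocations unless a human explicitly opts in via an environment variable.

```shell
# Sketch: refuse destructive Terraform commands unless TF_ALLOW_DESTROY=1.
# tf_guard is a demo function; in practice you'd install the same check as a
# wrapper script earlier on your PATH than the real terraform binary.
tf_guard() {
  if [[ "$*" == *"destroy"* && "${TF_ALLOW_DESTROY:-0}" != "1" ]]; then
    echo "Blocked: 'terraform destroy' requires human approval"
    return 1
  fi
  echo "OK: terraform $*"   # stand-in for exec'ing the real terraform
}

tf_guard destroy -auto-approve || true   # prints the blocked message
tf_guard plan                            # prints "OK: terraform plan"
```

The same pattern works for any CLI an agent can reach; the key is that the override requires a deliberate human action.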


Check if Codex limit is reset today

Whenever there is a bug or Codex isn't properly usable, they reset the weekly limit for everyone, and this has been happening for almost 2 weeks now. Today, I came across this fun app, hascodexratelimitreset.today, which tracks and shows whether they have reset Codex limits today.

Check Codex limits reset

The app is simple, but I love the look and feel of it; it's very well-designed and matches OpenAI's design aesthetics.

You might already know that I love simple single-function tools like these, Dhuni being my recent find.


Cloudflare /crawl endpoint is just 'business'

Cloudflare has multiple tools, like Scrape Shield, bot protection, and even CAPTCHAs, that have made it extremely difficult for bots to scrape the sites using them. And now Cloudflare has launched a new /crawl endpoint that lets you crawl an entire website with just a single API call. The endpoint is offered as a new tool in their existing Browser Rendering service.

This is business, meme

And it's a business tactic: somewhat questionable, but still commonly used. First, they created the demand by letting people use their service to protect their websites against scrapers, and now they themselves offer another service that makes scraping easy.


Codex is still unusable for me

Last month I used Claude, but now I am using Codex, as I found it to be a bit better and it introduces fewer errors in the code. But for more than a day, it has been completely unusable, as it keeps showing "reconnecting 1/5..." and so on. I checked their status page, and this is an acknowledged issue:

Codex is unstable

I have been getting these issues for a long time now, and it has become very frustrating to use the app. But as per a recent post by Tibo from OpenAI, it's expected to be resolved soon. Let's see how it works after today.

Apart from this, the one thing I do love is that if Codex doesn't work properly, they reset the limits for everyone. I think it's going to happen again.


Listening to the Dhuni radio while working

I love simple, opinionated, single-function tools like Dhuni – a 24x7 radio that plays Indian classical and instrumental music. It was recently created by Amrith, and I love it; I have been listening to it the whole day today while working.

Dhuni radio screenshot

There are multiple stations, and each station has its own season, raga, and mood. I loved the Grishma Dopahar station that you see in the above screenshot.


Exercise and sleep 7+ hours

Came across this post about how exercising regularly slows down your biological aging, but only if you sleep more than 7 hours a day. If you're exercising but sleeping less than 7 hours a day, it's actually speeding up your biological aging.

Exercise vs. Sleep Chart

It's taken from the research paper titled "Inverted U-shaped relationship between sleep duration and phenotypic age in US adults: a population-based study", and the study was done in the US. The paper clearly mentions that "[...] sleep duration may vary across different countries and regions [...]", but the importance of sleep can't be denied.


NAS is not for everyone

I keep seeing lots and lots of "get a NAS" videos on YouTube, and most influencers are misleading viewers by not telling them everything and not talking about the nuances. I posted about this on Threads and had discussions with a lot of folks, but then also decided to write about it, so here we are. By the way, if you don't know:

A Network Attached Storage (NAS) is a dedicated storage device connected to your local network that allows multiple users and devices to store, share, and access files from a central location.

Just to be clear, my problem isn't with NAS itself but with influencers misleading non-tech-savvy people and presenting it as if it's the solution to all their problems. Most of the time, they only tell half the story and withhold important information, because that's what helps them persuade people to buy those costly NAS devices. That's their goal, as most of these videos are sponsored by companies like UGreen and Synology.

Some of the things that these influencers say, or withhold, to mislead you are:

1. NAS is the ultimate replacement for entertainment platforms like Netflix and Prime Video.

No, it's NOT a replacement. I mean, how do you get the upcoming The Boys Season 5 onto your local NAS as soon as it arrives? You can't, unless you download it illegally from somewhere. And once you have watched a show or a movie, why would you want to keep it? Would you watch the same show again and again?

I agree, though, that if you have some owned media from the past that isn't available anywhere else, you can store and watch it.

But most people don't do this.

2. It's a complete Google Drive/Dropbox/iCloud replacement.

Yes, but with nuances.

Having everything on a NAS isn't recommended, because there needs to be at least one other copy of the same data somewhere. You need a backup, and a NAS is not a backup solution but a storage solution; so either you get another NAS in a different location, or back up to a cloud solution like Backblaze, or both (see the 3-2-1 backup rule).

But that adds to the cost, and most non-tech people don't realize this before getting one.

3. Hard Drives are prone to fail after 5-7 years.

Yes, you might have a hard drive running for 10-12 years, but that's just luck. As per multiple discussions online, most people have to replace their NAS hard drives every 5-7 years to avoid data loss.

And this is not talked about by YouTubers making sponsored videos about NAS.

4. SSDs are not optimized for storing data for a long time.

I have also seen some videos showcasing NAS devices with NVMe SSDs.

Yes, SSDs are fast and silent, but SSDs in general are not recommended for long-term data storage, because they store data as electric charges that leak over time, potentially leading to data loss within 1-3 years if left disconnected.

Again, you will never see them talk about this.

I love NAS, but I don't like how these YouTube influencers mislead viewers by withholding crucial information. And I hope this changes, eventually.

Also, I would like to give a huge shout-out and say thanks to the influencers who do not exploit their viewers.


Received appreciation for my blog

Saw a tweet from Ralf yesterday when I opened X, and this is what the post read:

Appreciation post for my blog

I was the happiest, because it made me feel good about what I do.

The post was a reply to another post by Suganthan where he set up his new personal website using Astro, ditching WordPress – I love the site. I was then reading Suganthan's blog post about how he set up the new site and was happily surprised to see myself mentioned.

A few days ago he messaged me that he likes the design of my website and wants to take inspiration from it, and I obviously said yes. And now Suganthan's site is live and it already looks great.


Worldwide AI users in Feb 2026

Came across this illustration from this post on X, which shows how many users AI actually has worldwide and puts everything into context, so you understand what a bubble you live in.

AI users in Feb 2026

I mean, just look at the screenshot:

  • ~6.8 billion people have never used AI (84%)
  • ~1.3 billion people are free chatbot users (16%)
  • ~15-25 million people pay $20/mo for AI (~0.3%)
  • ~2-5 million people use AI coding scaffolds (~0.04%)

But when you're on socials, it feels overwhelming. You don't clearly see a path forward for yourself, and you stay anxious and panicked all the time. I think things are designed this way to keep us panicked. Looking at the above illustration gives me some slight relief.

I believe this was the original post, but not sure.


OpenAI's GPT-5.4 is here

OpenAI has a new model, GPT-5.4, and it's live in ChatGPT, Codex, and via the API. They describe the model as:

It incorporates the industry-leading coding capabilities of GPT‑5.3‑Codex⁠ while improving how the model works across tools, software environments, and professional tasks involving spreadsheets, presentations, and documents.

I used the model inside the Codex CLI and the Codex app, and GPT-5.4 seemed slightly better than GPT-5.3-Codex. I asked both models to update the pricing for the GPT-5.4 API in this pricing calculator. GPT-5.4 added both the gpt-5.4 & gpt-5.4-pro models and also updated all the required descriptions on the page, but GPT-5.3-Codex just added gpt-5.4 pricing and nothing else.

Also, the model seemed better at tool calling than all previous models.

I loved this post from Ethan Mollick comparing the capabilities of GPT-4 and GPT-5.4; it shows how far the models have come in the last year. Then there's this post from Simon as well, featuring the pelicans drawn by the models.

I am still exploring the model, and will update this page if new findings come up.


Know when smart glasses are spying

Yves Jeanrenaud created an app that detects smart glasses nearby and then sends you a notification on your phone. Obviously, you can't stop a person from recording in a public place, but at least you know that you might be being recorded.

The app is not very accurate, as it relies on company identifiers in the Bluetooth data broadcast by these smart glasses. But I like the concept either way.


Supabase is blocked in India

For unknown reasons, Supabase has suddenly been blocked by all major ISPs in India, following a government order. For most users, the main website is accessible, but the underlying developer infrastructure is still inaccessible. For people who were already using Supabase for their apps in production, the apps are broken, and users aren't able to log in or sign up.

But so far, we have not heard anything from any government officials or even from major internet providers like Jio and Airtel. For your information, the site was blocked under Section 69A of the Information Technology Act, 2000, for completely unknown reasons.

This is sad. Very sad.

And I think this is one more reason to donate to the Internet Freedom Foundation in India, which defends online freedom, privacy, and innovation in India. In their own words:

Born out of the SaveTheInternet.in movement for net neutrality, IFF works on a range of issues including net neutrality, free expression, privacy and innovation.

IFF donation email

I just donated, again. And you should too. I remember reading that Kailash Nadh also donates and recommends donating to the foundation.

Supabase about the block

Supabase has published a new update saying they are still talking to the authorities in India and the issue is still unresolved.

Meanwhile, awesome people have built temporary solutions to tackle the situation. One such solution is JioBase, which helps you unblock Supabase if you're on Jio internet (I haven't used it, though).

Another person, Karan Saini, has published research on 43k+ domains being blocked in India by 6 different ISPs. It was done to examine the scale of DNS censorship in India.

I hope the issue gets resolved soon by the government of India.


Getting a Raspberry Pi 4B with 1GB RAM

I ordered a Raspberry Pi 4B with 1GB RAM to play with nanobot and a bunch of other cool things. It will still take a few days to arrive, and I am still listing out things to do with it.

Raspberry Pi 4B with 1GB RAM

I also found this cool experiment with a Pi Zero 2W running PicoClaw, and I am excited to do experiments with mine as well.

Once I do anything meaningful with the new device, I will write a longer blog post about it. By the way, the reason I am getting the 1GB model is that I want to test lighter bots (unlike OpenClaw).


Prevent AI Agents from auto-merging PRs

Came across this post from Elvis containing a helpful tip to prevent your AI agent from accidentally auto-merging PRs when using the --YOLO or --dangerously-skip-permissions mode. It blocks AI agents like Claude, Codex, or OpenCode from accidentally merging pull requests.

Prevent AI agents from auto-merging PRs

In simple language, this setup checks whether a command your AI agent runs contains pr merge and:

  • if it matches: blocks the action and prints an error message
  • if it doesn't match: allows it to run without errors

This reduces the risks when vibe-coding in YOLO mode, and here's how I have set this up:

# make the hidden folder
mkdir -p ~/.local/bin

# create and open new file
nano ~/.local/bin/gh

And then paste the following in the file:

#!/bin/bash
# Wrapper to block AI agents from merging PRs

if [[ "$*" == *"pr merge"* ]]; then
  echo "🚫 Blocked: merging requires human approval"
  exit 1
fi

# Forward everything else to the real gh binary (Homebrew path on Apple
# Silicon; run `which -a gh` to find yours if it's installed elsewhere)
exec /opt/homebrew/bin/gh "$@"

Now, make the script runnable:

chmod +x ~/.local/bin/gh

Then I appended the following to my .zshrc file so the wrapper shadows the real gh:

echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc

Reloaded settings using the following command:

source ~/.zshrc

And it was ready.

It will now block any command containing pr merge, in case my AI agent starts to go rogue.
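If you want to sanity-check the wrapper's matching logic without a real repo or gh installed, the same [[ ... ]] substring test can be exercised on its own. The check function below is just an illustration, not part of the setup:

```shell
# Standalone check of the wrapper's substring match (no gh needed).
check() {
  if [[ "$*" == *"pr merge"* ]]; then
    echo "blocked"
  else
    echo "allowed"
  fi
}

check pr merge 123 --squash   # prints "blocked"
check pr view 123             # prints "allowed"
```

Note that "$*" joins all arguments with spaces, so the match works regardless of how the agent splits the command's arguments.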


Claude has an import memory feature

Claude has an interesting way to import memory from other AI providers like ChatGPT or Gemini. The idea is that if you're moving to Claude, you can bring all your earlier saved info from other providers to Claude, and it instantly knows everything about you. They describe this as:

Bring your preferences and context from other AI providers to Claude. With one copy-paste, Claude updates its memory and picks up right where you left off. Memory is available on all paid plans.

The process is that they give you the following prompt, which you can copy-paste into your existing AI provider to extract everything it knows about you.

I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it.

Format each entry as: [date saved, if available] - memory content.

Make sure to cover all of the following —  preserve my words verbatim where possible:
- Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). 
- Personal details: name, location, job, family, interests. 
- Projects, goals, and recurring topics. 
- Tools, languages, and frameworks I use. 
- Preferences and corrections I've made to your behavior. 
- Any other stored context not covered above. Do not summarize, group, or omit any entries. 

After the code block, confirm whether that is the complete set or if any remain.

And then you have to paste the output into Claude's memory settings, as you see in the screenshot below.

Import memory feature in Claude

I tried running this prompt in Gemini, and it gave me a lot of memories that I didn't even expect. For example, take a look at some of the interesting ones:

[2026-03-01] - You use a Mac for your work.
[2026-03-01] - You own a Raspberry Pi 4 1GB model and accessories for it.
[2026-02-15] - You use n8n for automation (asked for help with expressions).

I mean, I didn't think Gemini saved this kind of info about users from chats. It's actually good.

Update: March 3, 2026

Claude has now enabled the memory feature on free plans as well, as announced in this post on X.