DeepakNess

Raw notes

(405 notes)

Short notes, links, and thoughts – shared as I go through my day.


Eating too much watermelon causes indigestion

I didn't know that eating too much watermelon can cause serious indigestion. The other day, I had ~1.5 kg of watermelon in one sitting (yes, I love them) and experienced digestion issues the next day.

Too much watermelon causing digestive issues

Apparently, this is a known fact that I had no idea about, until my mom hinted at it and I checked online to find it was true.

Lessons learned.


Gemini introduces import memory and chats feature

Earlier this month, Claude introduced a feature to import memory from other AI chatbots. Now, Gemini has introduced a similar feature, as you can see in the screenshot.

Import memory feature in Gemini

The process is simple: you copy the provided prompt, paste it into the AI provider you're currently using, copy the response, paste it into Gemini, and save. That's it, now Gemini knows everything the prompt extracted from Claude, ChatGPT, or others.

By the way, here's the huge prompt they provide:

You are helping me import context from one AI assistant to another. Your job is to go through our past conversations and sum up what you know about me.

In the output, please avoid using any first-person pronouns (I, my, me, mine) and any second-person pronouns (you, your, yours). Instead, refer to the individual you have learned about as "the user" or use neutral phrasing.

Preserve the user's words verbatim where possible, especially for instructions and preferences.

Categories (output in this order):
1. Demographics Information: Preferred names, profession, education, and general residence.
2. Interests & Preferences: Sustained, active engagements (not just owning an object or a one-time purchase).
3. Relationships: Confirmed, sustained relationships.
4. Dated Events, Projects & Plans: A log of significant, recent activities.
5. Instructions: Rules I've explicitly asked you to follow going forward, "always do X", "never do Y", and corrections to your behavior. Only include rules from stored memories, not from conversations.

Format:
Divide the content into the labeled section using the categories above. Try to include verbatim quotes from my prompts that justify each entry. Structure each entry using this format:
The user's name is <name>.
- Evidence: User said "call me <name>". Date: [YYYY-MM-DD].

Output:
- Format the final output summary as a text block.

I used this prompt on Claude and the response is extremely detailed, but it contains a lot of personal information so I can't post it here.

And it's not just memory, Gemini also lets you import chats from ChatGPT, Claude, or other providers, as you can see in the import chats feature in the screenshot above.

Import chats in Gemini

They also describe the import-chats process in their documentation, as you see in the above screenshot. You can directly upload a ZIP file of up to 5 GB.

I think this import-chats feature is actually cool for people who are completely migrating to Gemini from ChatGPT, Claude, or other providers.


LiteLLM Python library is compromised

Just learned that LiteLLM, a popular Python library that provides a unified interface for calling multiple LLMs, has been compromised and is stealing sensitive information from users.

The post on X reads:

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM pypi release 1.82.8. It has been compromised, it contains litellm_init.pth with base64 encoded instructions to send all the credentials it can find to remote server + self-replicate.

It seems the entire GitHub repo is compromised, as issue #24512, titled "[Security]: CRITICAL: Malicious litellm_init.pth in litellm 1.82.8 — credential stealer", was closed by the owner as "not planned". That suggests the owner's GitHub account was hacked and the attacker marked the issue as resolved. But good that it's been reopened and is being actively discussed.

What the LiteLLM malware does

As explained in the FutureSearch article, the malware appears to be very sneaky and dangerous. So if you're affected by it, the current best option is to visit and browse through #24512, as the community is actively tracking the issue and working on a fix.

@krrishdholakia hacked by teampcp

Looks like the library was compromised after the founder's GitHub profile was hacked by some group called teampcp. Terrible (not in a good way).


Peak design = AI Studio x Claude Code

No matter what tool I try for great-looking designs, Google's AI Studio always comes out on top. GPT-5.4 is the worst at design, Claude Code is slightly better, but Google AI Studio (or even Stitch) is the best.

So... here is my current workflow for building websites:

  1. Get the design done in Google AI Studio and download the code as a ZIP (the same design prompts work better on AI Studio)
  2. Initiate a project in a folder (I'm currently using Laravel for most websites, but Next.js or any other framework would work)
  3. Unzip the AI Studio design and put the folder in the project root, renaming it "inspiration" or anything you like
  4. Ask Claude Code to match the design in the "inspiration" folder, but ask it to follow the best practices of native Laravel, Astro, or whatever stack you're working in

And done!

Watch Claude Code do its magic.


Scraping 250k+ URLs using Claude Code (via Telegram)

This Saturday, I was about to leave for a movie with my friends and suddenly thought of experimenting with the newly launched Claude Code channels. I set up Telegram to work with a Claude Code session, kept my laptop on, and left for the movie. While in the cab, I started chatting about the scraping project and asked Claude for suggestions and ideas on how this would work.

Claude finalizing the scraping plan

And by the time I arrived, Claude had already set up the project and was ready to start the scraping process. I gave the final confirmation and got busy watching the movie. When I checked my phone during the interval, it had sent me a bunch of messages and the process was still running. It had discovered 260k URLs to scrape and had already completed 36k, as you see below.

Claude sending scraping updates on Telegram

Whenever I messaged "Progress?" via Telegram, it quickly sent me a summary like the one above. The scraping process ran for ~8 hours and was still running when I returned home. A few hours later, when the process completed, it sent me this message confirming the completion.

Scraping completed message from Claude

Around 10k URLs failed, which is acceptable for a process this huge. It had also missed a few data points, so I asked it to grab those as well, and that run took another ~3 hours. Finally, I had everything I needed. I then asked it to update the scraping script so we get the final, polished data the next time we run the process.
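For context, the core of a run like this can be sketched as a simple worker pool that keeps successes and failures separate, so failed URLs (like my ~10k) can be retried in a second pass. This is my own minimal sketch, not Claude's actual script; `fetch` stands in for whatever HTTP call you use:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def scrape_all(urls, fetch, max_workers=16):
    """Fetch every URL concurrently; return (results, failed_urls)."""
    results, failed = {}, []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch, url): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                results[url] = future.result()
            except Exception:
                failed.append(url)  # collect for a retry pass

    return results, failed

# Demo with a fake fetcher that fails on one URL
def fake_fetch(url):
    if "bad" in url:
        raise ValueError("fetch failed")
    return f"<html>{url}</html>"

ok, bad = scrape_all(["https://a.example", "https://bad.example"], fake_fetch)
```

Keeping the failed list on disk between runs is what makes the "grab the missing ones again" pass cheap.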

All I would say is, thank you, Claude.


Cursor dodged a huge bullet with Kimi K2.5

Two days ago, Cursor released the new Composer 2 model, and the eval scores shown were better than even Opus 4.6 (high). It was clear that the team did not train the model from scratch, but Cursor did not mention anywhere which base model it used.

But then people found out it's the Kimi K2.5 model and started calling them out, pointing out that using the model for commercial purposes is against the terms of service of Moonshot (the company behind the Kimi models). Things got more interesting when several Moonshot folks started posting that Cursor had not approached them for licensing, as you see below.

Moonshot Kimi deleted tweets about Cursor's Composer 2

But then came the most interesting part: all the tweets from Kimi you see above were suddenly deleted, and people started speculating – maybe Cursor paid Kimi for the license after the posts went viral, and other such theories.

The real story came out when Lee Robinson from the Cursor team posted a clarification. Initially, he did not name Kimi K2.5 in the main post, but when people called them out, he admitted it was the Kimi K2.5 model and quoted this post from Kimi AI, which mentions:

Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ' hosted RL and inference platform as part of an authorized commercial partnership.

So the whole story in simple terms is:

  • Cursor licenses the Kimi K2.5 model via its inference partner Fireworks AI (not via Kimi directly)
  • Trains the model heavily and releases it as Composer 2
  • People find out the underlying model is Kimi K2.5
  • Kimi folks don't know about the licensing, so they post on X
  • Cursor reaches out to them privately and explains everything
  • Kimi folks delete the tweets and post the clarification

Basically, no one was in the wrong here; it's just an example of miscommunication, especially on Cursor's side. This could have gone very badly for Cursor, but I'm glad the situation is now under control.


Better way to implement mailto: links

Honestly, I don't like mailto: links myself because I use webmail, and clicking these links opens the default email app on my computer. However, I still had an email icon in the footer of my personal site, but after coming across this post, I changed it to copy-on-click instead of a mailto: link. Basically, clicking the envelope icon in the footer copied the email to the clipboard and briefly showed a green checkmark.

I thought this provided a good user experience and posted about it on my socials, especially on Threads. But almost all the feedback was negative, suggesting this is a bad idea because users won't know what just happened, as you see in the screenshot below.

Better mailto: links

And they were all correct: it wasn't clear what had just happened. I then thought about it and implemented the email thing differently. As you see in the screenshot below, I started showing the full email as text, and clicking it copied the email to the user's clipboard.

Screenshot showing full email in text and clicking on it copies

But I still wasn't satisfied with it, so I posted about this again on X and received an interesting recommendation: the mailgo tool, which is basically a new take on mailto: and tel: links.

Screenshot showing how mailgo works

You can see in the screenshot that when I clicked the mailto: link, it showed a bunch of options instead of just opening the default email app or silently copying the email to the clipboard. I tried the Open in Gmail, copy, and other options, and they all worked as expected.

But the problem is that the mailgo project is no longer maintained, and there's no point in adding a dead dependency to my project. So... I built this from scratch, and it was very simple to do.

The current solution I built from scratch

Here's how this is implemented, the one you see in the screenshot above:

<div id="email-wrap">
  <button onclick="this.parentElement.classList.toggle('open')">
    <!-- envelope icon -->
  </button>
  <div>
    <button onclick="navigator.clipboard.writeText('me@deepakness.com')
      .then(()=>{this.textContent='Email copied!';
      setTimeout(()=>{this.textContent='Copy me@deepakness.com'},1000)})">
      Copy me@deepakness.com
    </button>
    <a href="https://mail.google.com/mail/?view=cm&to=me@deepakness.com">
      Open in Gmail
    </a>
    <a href="mailto:me@deepakness.com">Open default client</a>
  </div>
</div>

And a tiny script to close the popup when clicking outside or pressing Escape:

document.addEventListener('click', function (e) {
  if (!e.target.closest('#email-wrap')) {
    document.getElementById('email-wrap').classList.remove('open');
  }
});

document.addEventListener('keydown', function (e) {
  if (e.key === 'Escape') {
    document.getElementById('email-wrap').classList.remove('open');
  }
});

Then you would need some basic CSS to make it look good.

That's it.

This simple implementation works well enough until I find a better solution.


arXiv now has an MCP server

arXiv has a new MCP server that you can connect to your AI applications, agents, and workflows to search papers, analyze PDFs, explore codebases, and synthesize research insights. From their docs, below are the available tools in the MCP:

  1. Embedding similarity search
  2. Full-text papers search
  3. Agentic paper retrieval
  4. Get paper content
  5. Answer PDF queries
  6. Read files from paper's GitHub repo

I think this can be important for researchers and engineers, as their AI agents will now have access to all relevant research papers. Earlier, agents could search for papers via search engines, but I guess that produced lots of errors and sometimes even broken URLs. That shouldn't be the case with the MCP server.

I installed this to Claude Code by running the following command:

claude mcp add --transport http alphaxiv https://api.alphaxiv.org/mcp/v1

And I had to authenticate by running the /mcp command inside Claude, which looked like the screenshot below:

Installing arXiv new MCP server to Claude Code

And then it was working as expected.

I'm still exploring this, and will share more if I learn something noteworthy.


OpenAI launches GPT-5.4 mini and nano

Two weeks after the GPT-5.4 launch, OpenAI has launched mini and nano versions of GPT-5.4, and they're already available in ChatGPT, Codex, and via the API. As self-reported by OpenAI, these GPT-5.4 mini and nano models come very close to the larger GPT-5.4 model.

Evals - GPT-5.4 vs mini vs nano

I was reading Simon's post about the new models; he ran some experiments describing images with the GPT-5.4 nano model and gives this estimate:

[...] describing every single photo in my 76,000 photo collection would cost around $52.44.

And from my tests as well, the models are fairly capable at simple repetitive tasks. But compared to the previous GPT-5 mini and nano models, they are costlier. Check the table below for comparison:

Model        | Input Price (per 1M tokens) | Output Price (per 1M tokens)
GPT-5 mini   | $0.25                       | $2.00
GPT-5 nano   | $0.05                       | $0.40
GPT-5.4 mini | $0.75 (3x costlier)         | $4.50 (2.25x costlier)
GPT-5.4 nano | $0.20 (4x costlier)         | $2.25 (5.6x costlier)
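A quick sanity check of the multipliers in the table above (all prices in $ per 1M tokens, taken straight from the table):

```python
# Price pairs: (input, output) in $ per 1M tokens
gpt5 = {"mini": (0.25, 2.00), "nano": (0.05, 0.40)}
gpt54 = {"mini": (0.75, 4.50), "nano": (0.20, 2.25)}

for size in ("mini", "nano"):
    in_x = gpt54[size][0] / gpt5[size][0]
    out_x = gpt54[size][1] / gpt5[size][1]
    print(f"GPT-5.4 {size}: input {in_x:g}x, output {out_x:g}x costlier")
```

Note that nano output comes out to exactly 5.625x, which I rounded to 5.6x in the table.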

That said, the new mini and nano models are far more capable than the previous mini and nano models.


Just ask and Codex can now spin up subagents

I tried OpenAI's newly launched subagents feature for Codex, and it's awesome. I tried it via the Codex app on macOS, and the UI looks good as well.

A screenshot of multiple subagents working in Codex app

Although spawning subagents consumes many more tokens than a single agent on the same task, it does work faster, as multiple agents work on the different requested tasks in parallel. By the way, the agent only spawns subagent(s) when you specifically ask it to. The Codex docs have much more info about this and other related things.

I also learned from Simon's post that Codex lets you define and use custom agents:

Codex also lets you define custom agents as TOML files in ~/.codex/agents/. These can have custom instructions and be assigned to use specific models - including gpt-5.3-codex-spark if you want some raw speed.

For your info, the subagents feature is only new to Codex; it's already available in Claude Code, Gemini, Cursor, OpenCode, etc.


Pi powered via the Wi-Fi Router

Wifi Router powers Raspberry Pi

I was using a power adapter to power my Raspberry Pi 4B, but then I noticed a USB port on my Wi-Fi router and connected the Pi directly to it. And it's working as expected.

I will also soon connect the Pi to the router via a LAN cable for faster internet.


WordPress 7 is coming with new features

WordPress 7 is set to be released on April 9, 2026, and it's coming with some interesting features. From this post on X, I learned that v7 will have optional Google Docs-style collaborative editing, as you see below:

WordPress 7 real-time collaboration

Discussions are still ongoing about whether to enable real-time collaboration by default or keep it optional. Most likely, it will be turned off by default.

WordPress 7 AI connectors page

Apart from this, WordPress 7 will have a new page for AI connectors, as you see in the screenshot. As specified, all your API keys and credentials are stored here and shared across plugins. I think this is a good option.

Sometimes I have second thoughts about keeping my blog on 11ty versus going back to WordPress, as the Netlify build time was growing a lot. But once I started optimizing images locally and hosting them externally on Cloudflare R2, that issue was resolved. Still, I will keep an eye on WordPress as an option for my personal site.


Recordly: open-source screen recorder for macOS

I have been using screen.studio to record videos for my channel for over 2 years now, and I just learned there is an open-source alternative called Recordly. It helps you create similarly styled videos for free, and the app is available for macOS, Windows, and Linux as well.

Recordly.dev website

You can see Recordly's source on GitHub; it's a fork of another open-source project called OpenScreen, with some additional features. The creator mentions the following on the forked repo:

FAQ: What are the changes between this and Openscreen? A: Recordly adds a full cursor animation/rendering pipeline, native macOS screen capture, zoom animations faithful to Screen Studio, smoother panning behaviour, and more major tweaks.

This fork exists because the original maintainer does not wish implementing the architectural changes that make some of these features possible i.e. different recording pipeline.

By the way, I learned about Recordly from this post on X.


Claude Code wiped entire production database

Came across this AI horror story in a post on X, which read:

Claude Code wiped our production database with a Terraform command.

It took down the DataTalksClub course platform and 2.5 years of submissions: homework, projects, and leaderboards.

Automated snapshots were gone too.

In the newsletter, I wrote the full timeline + what I changed so this doesn't happen again.

If you use Terraform (or let agents touch infra), this is a good story for you to read.

They have also written about the incident in more detail in this blog post.


Check if Codex limit is reset today

Whenever there is a bug or Codex isn't properly usable, they reset the weekly limit for everyone, and it's been happening for almost 2 weeks now. Today, I came across this fun app, hascodexratelimitreset.today, which tracks and shows whether they have reset Codex limits today.

Check Codex limits reset

The app is simple, but I love the look and feel of it; it's very well designed and matches OpenAI's design aesthetics.

You might already know that I love simple single-function tools like these, Dhuni being my recent find.


Cloudflare /crawl endpoint is just 'business'

Cloudflare has multiple tools, like scrape shield, bot protection, and even CAPTCHAs, that make it extremely difficult for bots to scrape sites using them. And now Cloudflare has launched a new /crawl endpoint that lets you crawl an entire website with a single API call. The endpoint is offered as a new tool in their existing browser-rendering service.

This is business, meme

And it's business: somewhat questionable, but still a commonly used tactic. First, they created the demand by letting people use their service to protect their websites against scrapers, and now they themselves offer another service that makes scraping easy.


Codex is still unusable for me

Last month I used Claude, but now I am using Codex, as I found it to be a bit better and that it introduces fewer errors in the code. But for more than a day, it's been completely unusable, constantly showing "reconnecting 1/5..." and so on. I checked their status page, and this is an acknowledged issue:

Codex is unstable

I have been hitting these issues for a long time now, and it has become very frustrating to use the app. But as per a recent post by Tibo from OpenAI, it's expected to be resolved soon. Let's see how it goes after today.

Apart from this, the one silver lining is that when Codex doesn't work properly, they reset the limits for everyone. I think it's going to happen again.


Listening to the Dhuni radio while working

I love simple, opinionated, single-function tools like Dhuni – a 24x7 radio that plays Indian classical and instrumental music. It was recently created by Amrith, and I have been listening to it all day today while working.

Dhuni radio screenshot

There are multiple stations, and each has its own season, raga, and mood. I loved the Grishma Dopahar station that you see in the above screenshot.


Exercise and sleep 7+ hours

Came across this post about how exercising regularly slows down your biological aging, but only if you sleep more than 7 hours a day. If you're exercising but sleeping less than 7 hours a day, it may actually be speeding up your biological aging.

Exercise vs. Sleep Chart

It's taken from the research paper titled "Inverted U-shaped relationship between sleep duration and phenotypic age in US adults: a population-based study", which was conducted in the US. The paper clearly notes, "[...] sleep duration may vary across different countries and regions [...]", but the importance of sleep can't be denied.


NAS is not for everyone

I keep seeing lots and lots of "get a NAS" videos on YouTube, and most influencers are misleading viewers by not telling them everything and skipping the nuances. I posted about this on Threads and discussed it with a lot of folks, but then also decided to write about it, so here we are. By the way, if you don't know:

A Network Attached Storage (NAS) is a dedicated storage device connected to your local network that allows multiple users and devices to store, share, and access files from a central location.

Just to be clear, my problem here isn't with NAS itself but with influencers misleading non-tech-savvy people and presenting it as the solution to all their problems. Most of the time, they tell only half the story and withhold important information, because that's what helps them persuade people to buy those costly NAS devices. And that's their goal, as most of these videos are sponsored by companies like UGreen and Synology.

Some of the things these influencers say, or withhold, to mislead you:

1. NAS is the ultimate replacement for entertainment platforms like Netflix and Prime Video.

No, it's NOT a replacement. I mean, how do you get the upcoming The Boys Season 5 onto your local NAS as soon as it arrives? You can't, unless you download it illegally from somewhere. And once you've watched a show or a movie, why would you want to keep it; would you really watch the same show again and again?

I agree, though, if you have some owned media from the past which isn't available anywhere else, you can store them and watch them.

But most people don't do this.

2. It's a complete Google Drive/Dropbox/iCloud replacement.

Yes, but with nuances.

Having everything on a NAS alone isn't recommended, because there needs to be at least one other copy of the same data somewhere. You need a backup, and a NAS is a storage solution, not a backup solution; so either get a second NAS in a different location, or use a cloud backup service like Backblaze, or both (see the 3-2-1 backup rule).

But that adds to the cost, and most non-tech people don't realize this before getting one.

3. Hard Drives are prone to fail after 5-7 years.

Yes, you might have a hard drive that runs for 10-12 years, but that's just luck. As per multiple discussions online, most people have to replace their NAS hard drives every 5-7 years to avoid data loss.

And this is not talked about by YouTubers making sponsored videos about NAS.

4. SSDs are not optimized for storing data for a long time.

I have also seen some videos showcasing NAS devices with NVMe SSDs.

Yes, SSDs are fast and silent, but SSDs in general are not recommended for long-term data storage. They store data as electric charges that leak over time, potentially leading to data loss within 1-3 years if left unpowered.

Again, you will never see them talk about this.

I love NAS, but I don't like how these YouTube influencers mislead viewers by withholding crucial information. And I hope this changes, eventually.

Also, a huge shout-out and thanks to the influencers who do not exploit their viewers.


Received appreciation for my blog

Saw a tweet from Ralf yesterday when I opened X, and this is what it read:

Appreciation post for my blog

I was the happiest, because it made me feel good about what I do.

The post was a reply to another post by Suganthan, who set up his new personal website using Astro, ditching WordPress – I love the site. I was then reading Suganthan's blog post about how he set up the new site and was happily surprised to see myself mentioned.

A few days ago, he messaged me saying he liked the design of my website and wanted to take inspiration from it, and I obviously said yes. And now Suganthan's site is live, and it already looks great.


Worldwide AI users in Feb 2026

Came across this illustration from this post on X, which shows how many users AI actually has worldwide and puts everything into context, so you understand what a bubble you live in.

AI users in Feb 2026

I mean, just look at the screenshot:

  • ~6.8 billion people have never used AI (84%)
  • ~1.3 billion people are free chatbot users (16%)
  • ~15-25 million people pay $20/mo for AI (~0.3%)
  • ~2-5 million people use AI coding scaffolds (~0.04%)
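These shares roughly check out against a world population of ~8.1 billion (my assumption; the illustration doesn't state its base number, and I'm using the midpoints of the ranges above):

```python
world = 8.1e9  # assumed world population

groups = {
    "free chatbot users": 1.3e9,      # from the illustration
    "paying $20/mo": 20e6,            # midpoint of 15-25 million
    "coding-scaffold users": 3.5e6,   # midpoint of 2-5 million
}
for label, count in groups.items():
    print(f"{label}: {count / world:.2%} of the world")
```

Free users come out to ~16%, paying users to ~0.25% (the illustration rounds to ~0.3%), and coding-scaffold users to ~0.04%.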

But when you're on socials, it feels overwhelming: you don't clearly see a path forward for yourself, and you stay anxious and panicked all the time. I think things are designed this way to keep us panicked. Looking at the above illustration gives me some relief.

I believe this was the original post, but not sure.