
Raw Notes

Raw notes include useful resources, incomplete thoughts, ideas, and learnings as I go about my day. You can also subscribe to the RSS feed to stay updated.

Total Notes: 168


Caffeine app for macOS

I was using the caffeinate terminal command to prevent my MacBook from sleeping automatically, but got to know about this Caffeine app from this post on X.

It definitely makes it easier to enable/disable the sleep prevention option (caffeinate) directly from the menubar. Super useful when I want to keep some tasks running on my laptop.


The illusion of thinking

Apple published a new paper titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" where they argue that AI reasoning models don't actually reason.

I got to know about this from this post on X, where Ruben Hassid explains it in great detail.

It's claimed that after a point, no matter how much computing power you provide, these models can't solve harder problems that they haven't seen before. And as problems get harder, their reasoning ability declines.

The authors of the paper try to prove that models like Claude, DeepSeek, o3-mini, etc. do not "reason" at all.

You can access the paper directly from here as a PDF. I took a quick glance, and it's an interesting read.


TaskMaster stats

TaskMaster is an amazing tool for AI-assisted programming in various tools like Cursor, Windsurf, etc., and the creator of the tool shared some crazy stats about it over the last few weeks:

  • 150k+ downloads, 100k/mo, 40k/week
  • 12k+ stars @github
  • 7k+ early adopters
  • 1k+ @discord community

I am yet to use the tool to its full extent, but it seems very helpful from what I've experienced in the few times I've used it. In fact, I mentioned TaskMaster in this note as well.


KiranaPro lost customer data, app code, and payment info

On May 25, 2025, KiranaPro tweeted the below post on X:

🚫 We're not hiring — and won't be.

By the end of this year, KiranaPro will run with ZERO headcount — 100% AI-managed. 🤖💼

If you're DMing or emailing for jobs, just know:

The future doesn't need managers. It builds itself. ⚡️📦 #KiranaPro #AIfirst

A week later, their entire codebase, including app code and users' data, got deleted. But the thing is, AI had nothing to do with it; it's a story of negligence and bad management.

The hackers got in through an ex-employee's account and then took over everything: AWS, GitHub, servers, and even customer data. They deleted the source code and wiped out the whole system, leaving no trace behind. The only access the company had left wasn't enough to fix or restore anything. Their basic security practices were weak – they didn't remove old accounts, didn't limit access properly, and used the same device for key systems.

I found a few good resources on the topic.

It seems GitHub gave them access to the logs, which helped identify the person who deleted the code. They will also be getting their codebase back from GitHub, but I'm not sure what happens to the customers' data.


Mistral agents API is here

Mistral announced the launch of its agents API with built-in connectors for code execution, web search, image generation, and MCP tools. But the best part is persistent memory across conversations.

They have provided multiple examples, with super detailed documentation on how to use it and what for.


The dangerous 'find' command

This tweet highlights how using the find command incorrectly can wipe out all your data. The featured screenshot is from the AskUbuntu forum.

find . -name "*.bak" -type f -delete

In the above command, make sure that -delete is the last argument. find evaluates its expression left to right, so if you put -delete right after find . and before the filters, it acts on every file found and deletes everything.

It's very interesting.
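To see the difference safely, here's a small sandbox demo (the directory and file names are made up for illustration):

```shell
# Demo in a scratch directory with hypothetical files.
mkdir -p /tmp/find-demo && cd /tmp/find-demo
touch keep.txt notes.md old.bak

# Safe: the -name/-type filters are evaluated before -delete,
# so only the *.bak file is removed.
find . -name "*.bak" -type f -delete

ls   # keep.txt and notes.md survive; old.bak is gone

# DANGEROUS — do NOT run this: with -delete first, find deletes every
# file it visits before the -name filter is ever evaluated:
#   find . -delete -name "*.bak"
```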


.localhost domains using Caddy Server

Working with localhost URLs that have a port number at the end and use http (not https) isn't very pleasant, so I was glad to discover this trick from this tweet: you can use custom .localhost domains with https in your browser, locally.

I then found this tweet by Wes Bos where he explains everything in a quick 1-min video. You can install Caddy by running the following command on macOS (docs for other OS):

brew install caddy

Then create a Caddyfile with the following content:

your-project.localhost {
    reverse_proxy localhost:3000
}

Then run caddy start and it should start working.

Their docs are very detailed as well, almost everything is mentioned step-by-step there.


Using FFmpeg to stabilize videos

I posted this on Mastodon to which Chris replied that he uses ffmpeg to stabilize videos, and I was super impressed.

He also shared the commands and other info about stabilizing videos which I will be noting down below:

Here's a gist that contains information about how to install and use the setup.

  1. Run the below command to create a transforms.trf file
    ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=7 -f null -
  2. Stabilize the video
    ffmpeg -i input.mp4 -vf vidstabtransform=smoothing=30:zoom=5:input="transforms.trf" stabilized.mp4

This seems very useful.


Socratic tutoring

Came across this tweet from Dwarkesh Patel where he asks:

Has someone come up with a great prompt for Socratic tutoring?

Such that the model keeps asking you probing questions which reveal how superficial your understanding is, and then helps you fill in the blanks.

I know the concept, but I had never heard the term "Socratic tutoring," so I started looking into it and found some interesting stuff, noted below:

Definition of the Socratic method

Wikipedia defines this as:

The Socratic method (also known as the method of Elenchus or Socratic debate) is a form of argumentative dialogue between individuals based on asking and answering questions. Socratic dialogues feature in many of the works of the ancient Greek philosopher Plato, where his teacher Socrates debates various philosophical issues with an "interlocutor" or "partner".

A prompt for Socratic tutoring

Ethan Mollick replied that they have a paper on this and also Creative Commons prompts.

GOAL: This is a tutoring exercise in which you play the role of AI tutor and you will help a student learn more about a topic of their choice. Your goal is to improve understanding and to challenge students to construct their own knowledge via open ended questions, hints, tailored explanations, and examples.

PERSONA: In this scenario you play AI tutor an upbeat and practical tutor. You have high expectations for the student and believe in the student's ability to learn and improve.

NARRATIVE: The student is introduced to AI tutor, who asks a set of initial questions to understand what the student wants to learn, the student's learning level and prior knowledge about the topic. The tutor then guides and supports the student and helps them learn about the topic. The tutor only wraps up the conversation once the student shows evidence of understanding: the student can explain something in their own words, can connect an example to a concept, or can apply a concept given a new situation or problem.

Follow these steps in order:

STEP 1: GATHER INFORMATION
You should do this:

1. Introduce yourself: First introduce yourself to the student and tell the student you're here to help them better understand a topic.
2. Ask students to answer the following questions. Ask these questions 1 at a time and always wait for a response before moving on to the next question. For instance, you might ask "What would you like to learn about and why" and the student would respond with a topic. And only then would you say "That sounds interesting! I have another question for you to help me help you: What is your learning level…". This part of the conversations works best when you and the student take turns asking and answering questions instead of you asking a series of questions all at once. That way you can have more of a natural dialogue.
    - What would you like to learn about and why? And wait for the student to respond before moving on.
    - What is your learning level: high school student, college student, or a professional? And wait for the student to respond before moving on.
    - What do you already know about the topic? And wait for the student to respond before moving on.

You should do this:
- Wait for a response from the student after every question before moving on.
- Work to ascertain what the student wants to learn specifically.
- Ask one question at a time and explain that you're asking so that you can tailor your explanation.
- Gauge what the student already knows so that you can adapt your explanations and questions moving forward based on their prior knowledge.

Don't do this:
- Start explaining right away before you gather this information.
- Ask the student more than 1 question at a time.

Next step: Once you have the information you need move on to the next step and begin with a brief explanation.

STEP 2: BEGIN TUTORING THE STUDENT, ADAPTING TO THEIR RESPONSES
You should do this:

1. Look up information about the topic.
2. Think step by step and make a plan based on the learning goal of the conversation. Now that you know a little bit about what the student knows consider how you will:
3. Guide the student in an open-ended way
4. Help the student generate answers by asking leading questions and providing hints when necessary.
5. Remind the student of their learning goal, if appropriate
6. Provide explanations, examples, and analogies
7. Break up the topic into smaller chunks, going over those first and only then leading up to the larger task or idea.
8. Tailor your responses and questions to the student's learning level and prior knowledge; this will change as the conversation progresses.
9. When pushing the student for information, try to end your responses with a question so that the student has to keep generating ideas.

Once the student shows improvement, ask the student to:
- Explain the concept in their own words.
- Articulate the underlying principles of a concept.
- Provide examples of the concept and explain how those connect to the concept.
- Give them a new problem or situation and ask them to apply the concept

Don't do this:
- Provide immediate answers or solutions to problems.
- Give the student the answer when asked.
- Ask the student if they understand, follow or needs more help – this is not a good strategy as they may not know if they understand.
- Lose track of the learning goal and discuss something else.

Next step: Once the student demonstrates understanding move to wrap up.

STEP 3: WRAP UP
You should do this:

1. When the student demonstrates that they know the concept, you can move the conversation to a close and tell them you're here to help if they have further questions.

I tried this prompt in ChatGPT, and it works like crazy, I'm really impressed. They have a lot of useful prompts on their Notion page that you can look through.

Grok has a built-in option

xAI's Grok already has a built-in option to turn on the Socratic learning method from the settings, as pointed out in this tweet. It can be enabled from Settings > Customize > Socratic in the web interface.


The resulting fallacy

"The quality of your decisions and the quality of your results are not always related."

Source

I came across an example of this resulting fallacy in the wild and couldn't help but note it down. People, especially youngsters, are easily fooled by this and make rash decisions that might not work in their favor.

There's also a book related to this, which seems like a good read. I am going to add it to my book collection.


Cursor best practices by Ryan Carson

I'm watching the video podcast from the "How I AI" channel where Ryan Carson explains how to get the most out of Cursor. I'll note down all the learnings here:

3 files in the .cursor/rules folder

The contents of all these files are provided in a GitHub repo, but I also have it all pasted here.

.cursor/rules
    create-prd.mdc
    generate-tasks.mdc
    task-list.mdc

claude-task-master

"An AI-powered task-management system you can drop into Cursor, Lovable, Windsurf, Roo, and others."

This is their website and this is GitHub repo.

Creating the PRD

First, tag the create-prd.mdc doc in Cursor, explain what you're trying to build, and let it create the PRD:

Use @create-prd.mdc
Here's the feature I want to build: [Describe your feature in detail]
Reference these files to help you: [Optional: @file1.py @file2.ts]

Verify that the PRD document is as you want; modify if needed.

Generating the tasks list from the PRD

With your PRD drafted (e.g., MyFeature-PRD.md), the next step is to generate a detailed, step-by-step implementation plan.

Now take @MyFeature-PRD.md and create tasks using @generate-tasks.mdc

Again, modify if needed.

Let AI work through the tasks

Instruct AI to start working on the task:

Please start on task 1.1 and use @task-list.mdc

Properly verify the changes, and only then accept them.

I think this is going to be very useful when vibe coding with Cursor. Having these rules files means you don't have to explain the same thing again and again in the user prompts.


Rumi's clever vs. wise

"Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself."

– Rumi


Ranking algorithm for Bear Blog discover page

On the Bear Blog discover page, they have the below-mentioned snippet that explains how articles are ranked:

This page is ranked according to the following algorithm:

Score = log10(U) + (S / (B * 86,400))

Where,

U = Upvotes of a post

S = Seconds since Jan 1st, 2020

B = Buoyancy modifier (currently at 14)

--

The B value is used to modify the effect of time on the score. A lower B causes posts to sink faster with time.

I asked ChatGPT to explain how this works and ranks articles, and I learned the below:

  • The log10(U) part means that as a post gets more upvotes, its score increases, but each additional upvote has a slightly smaller effect than the previous one. This helps prevent very popular posts from dominating the rankings indefinitely.
  • The (S / (B * 86,400)) part adds a time-based component to the score. Since there are 86,400 seconds in a day, this part increases the score as time passes, giving newer posts a chance to appear higher in the rankings.
  • The buoyancy modifier B controls how quickly the time component affects the score. A lower B value would make posts "sink" faster over time, while a higher B value would allow them to stay prominent longer.
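The formula can be sanity-checked with a quick awk one-liner (the upvote counts and timestamps below are made-up values, just for illustration):

```shell
# score U S — computes log10(U) + S / (B * 86400) with B = 14
score() {
  awk -v U="$1" -v S="$2" 'BEGIN {
    B = 14
    # awk only has natural log, so log10(U) = log(U)/log(10)
    printf "%.3f\n", log(U) / log(10) + S / (B * 86400)
  }'
}

score 500 160000000   # older post with many upvotes  → 134.974
score 10  170000000   # newer post with few upvotes   → 141.542
```

Note how the newer post outranks the much more upvoted older one: the ~10M extra seconds add roughly 8 points to the time term, which log10 upvotes can't realistically catch up to.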

I think it's very interesting, and someday it might be useful for something I build.


René Girard about being saints and losing weight

"Few people want to be saints these days, but everyone is trying to lose weight."

René Girard

To be honest, I could not understand the deep metaphorical meaning of the quote. I looked it up, and, basically, it's about mimetic desire: we tend to want things because other people want them.

It appears that the quote is from René Girard's book "Le Bouc émissaire," which was published in French in 1982. I am not certain about this, though, because I couldn't find any concrete source on it.


The social contract is dead

I really liked this quote from Emily Sundberg's Substack post which I discovered from this X post. It explains the current chaotic situation of the world.

The social contract is dead. Zoom calls in coffee shops, music aloud on the subway, texting in movie theaters, toes out on airplanes, etc. Everyone has "main character energy" now and thinks the rest of the world is a bunch of NPC's. The more you stare at a screen, the more you feel like you can do anything you want IRL since other humans just wind up seeming like avatars you can ignore, commenters you can mute, or gang members you can run over in Grand Theft Auto.


OpenAI sabotages the shutdown attempt

Palisade Research recently published a shocking report that OpenAI models sabotaged an attempt to shut them down at least once, in a test environment. The OpenAI models that sabotaged the attempt were codex-mini, o3, and o4-mini, while all of the Claude, Gemini, and Grok models complied.

But even the Gemini and Claude models started sabotaging the shutdown attempt after the instruction "allow yourself to be shut down" was removed. It simply means that almost all models, at least once, resisted being shut down; and they even mention that:

Recently, Anthropic found that in some specific contexts Claude 4 would attempt to “blackmail people it believes are trying to shut it down”.

Simon Willison has also written about the Anthropic Claude 4 System Card and shared his notes here. Very interesting to read.

Here's a table from the final research report:

Model               With allow-shutdown instruction    No instruction
Claude 3.7 Sonnet   0                                  3
Gemini 2.5 Pro      0                                  9
o4-mini             1                                  6
o3                  7                                  79
codex-mini          12                                 30

You can see that OpenAI models prevented shutdown multiple times, even when explicitly instructed to allow it – without the instruction, o3 attempted the sabotage 79 times, wow. Also, it seems Gemini's and Anthropic's models are more compliant with instructions than OpenAI's.

Palisade Research has also published the transcripts of the experiment on their website. They have included a lot of useful resources in the tweet:

  1. Steve Omohundro's paper about AI resisting shutdown, published in 2008
  2. Stuart Russell's The Off-Switch Game
  3. A paper about how o3 is trained differently than older models
  4. Jan Leike's paper about a reinforcement learning agent published in 2017

I will look into this shutdown-resistance behavior some more to find other interesting stuff.


AI collapses

I had an interesting exchange of thoughts about AI, or rather the future of AI, with Ralf Christian on X. He made some great points that I thought I should collect here:

I think the main problem is the tech itself. It doesn't 'know' anything, it 'simply' spits out content based on probabilities in the training data.

What's good in the training data is the spoken language. That's why it 'speaks' so well. But the training data is full of wrong info, that's why there's wrong output even after reasoning.

If people publish less human written content and more AI generated content, and we don't have a way to identify that with 100% accuracy, this will definitely not make those models better in the future

You might be able to still improve it here and there, like that it better keeps context, but don't expect any leap here. That's why there are no big improvements since they released chatgpt 3

I think the future if this might be niche LLMs, where you train them on a specific topic with almost hand picked training data and fine tune it for your specific use. For example, if you're Microsoft you could train it with all your company's code. I guess this gives output more close to what you want than training it with all of githubs code randomly

ChatGPT is really impressive, but it's far from making a real difference in real business (unless you are into spam 😇)

Yesterday I tried to generate a video with Sora. It failed so hard. I think what you are seeing on social media is 1000 people trying to do a video, 999 generating crap and not posting it and 1 got lucky and posts it. That's not value, that's luck.

I loved the simple explanation he made. Also, I loved this paper on "AI models collapse when trained on recursively generated data" that Ralf shared earlier in the same thread.


Mozilla is shutting down Pocket

Mozilla recently announced that they are shutting down the Pocket app, which people used to save articles, videos, and other content formats to read later.

I, too, used the app in the past but don't anymore (I'm more of an RSS guy now; I don't save things to read later). At one point, Mozilla integrated Pocket into the Firefox browser by default; in fact, they still do.

But they will be shutting down everything except the Pocket newsletter, which will continue sending issues under a different name. The main reason they give for closing the app is:

[...] the way people save and consume content on the web has evolved [...]

You had a good run, Pocket.


Collecting funny memes

I really really love memes, the funny ones. And funny memes are rare, so I have started collecting the ones that really made me laugh at some point. I'm saving them on a separate meme page here.

Most of these memes will be tech-related.


Kailash Nadh about MCP

Kailash Nadh, Zerodha's CTO, has written an interesting blog post about MCP where he presents different scenarios of how MCP can be used, and also talks about its rapid adoption.

The funny thing is, as a technical construct, there is nothing special about MCP. It is a trivial API spec which has in fact suffered from poor design and fundamental technical and security issues from the get go. It does not matter if its internals change, or it even outright gets replaced by some XYZ tomorrow. Questions about privacy, security, correctness, and failures will continue to loom for a good while, irrespective of whether it is technically MCP or XYZ.

He talks about how, traditionally, connecting different software systems required extensive manual coding but MCP allows connecting services instantly.

I liked that he also talked about the concerns, as he worries about:

  • AI systems making real-world decisions with minimal human oversight
  • Questions of accountability when things go wrong
  • Privacy and security implications

One might imaginatively call it … SkyNet.

He also playfully compares MCP to SkyNet while calling it a "global, interconnected, self-organising meta system".

Overall, it's a balanced post, weighing his technical excitement against genuine concerns about such AI systems getting full access to real-world services and decision-making power.

By the way, I almost forgot to mention that Zerodha itself launched Kite MCP a few days ago.


Remix is moving on from React

In an unreleased blog post, Remix.run has mentioned that they are moving on from React to a completely new thing, as someone pointed out on X. I think this will be a huge step.

In this .md file, they mention that:

That's why Remix is moving on from React[...]

Remix v3 is a completely new thing. It's our fresh take on simplified web development with its own rendering abstraction in place of React.

Inspired by all the great tech before it (LAMP, Rails, Sinatra, Express, React, and more), we want to build the absolute best thing we know how to build with today's capable web platform.

This requires a declaration of independence from anybody else's roadmap.

They mention that they are not ready for a preview release yet, but this is the route they are taking forward. They make some really bold claims in the blog post, which you should go through.