DeepakNess

Raw Notes

Raw notes include useful resources, incomplete thoughts, ideas, and learnings as I go about my day. You can also subscribe to the RSS feed to stay updated.

Total Notes: 153


MCP containers

mcp-containers is a GitHub repo that describes itself as "Containerized versions of hundreds of MCP servers 📡 🧠", and it seemed very interesting to me. Basically, it provides Docker images that you can pull to start using the available MCP servers.

This is the website, and the limit is on the number of MCP messages per month; there is a free plan with a limited number of messages, so you can try it out.


Even the best LLMs are bad at programming

As per this X post about LiveCodeBench, almost all LLMs are bad at competitive programming. At the time of writing, the o3-high (2025-04-16) model was the best at medium-difficulty problems (which I find hard to believe, but I haven't really used o3 much, so I'll take it at face value).

But I do agree that none of the models are good at hard-difficulty programming tasks. I have used them extensively and, while they make a great assistant, they are not good at complex tasks.


DHH about seasons of saying NO and YES

Found this interesting video clip of DHH talking about the season of saying "no" and the season of saying "yes". I found this fascinating, and have even transcribed what he says in the video.

I have the season of no and I have the season of yes. I mostly run in the season of no. And when I'm in the season of no, I say no to almost everything just automatically. No, I'm not going to come to the conference. No, I'm not going to come on the podcast. In fact, podcasting is a great example right now. I'd said yes to you during a season of yes. I'd said yes to a bunch of appearances, and that season is well past. I've been saying no to every inbound, every inbound podcast invitation. I get quite a few. And I've just said like, hey, reach back out in September. I'm in a season of no. I'm just not going to entertain that. And I find that that at times is an easier way for people to accept a no is that it's a not right now. And I leave it open that like, hey, do you know what, six months from now might be it.

By the way, I used Google AI Studio and Gemini 2.5 Pro model to transcribe the video, and it did a great job as you can see above.


Prompt injection attacks

A new paper titled "Design Patterns for Securing LLM Agents against Prompt Injections" was recently published; it discusses 6 different design patterns for protecting LLM agents against prompt injection attacks.

I got to know about this from Simon Willison's post, and he has also written a detailed blog post about the paper.

I also discovered this SAIF Risk Map from Google, which Daniel Di Bartolo shared under Simon's tweet. Basically, it's a mental model for securing complex AI systems and SAIF stands for Secure AI Framework. And they have also recently published a paper titled "Google's Approach for Secure AI Agents", which seems interesting.


Different ways to do pSEO in WordPress

I recorded a 1-hour-long video explaining the different ways to do programmatic SEO in WordPress, and have included 4 different approaches in the video:

  1. by using the WP All Import Pro plugin
  2. by using the Multi-Page Generator plugin
  3. by using Make.com
  4. by using n8n

All these methods are well explained in the video, but each has its own advantages and disadvantages. Method #1 is the most advanced and customizable but requires a complex setup.

You can also learn more about programmatic SEO at UntalkedSEO.


DeepMind 'General agents need world models' paper

Found this post by Richard C. Suwandi where he shared a blog post that discusses the new paper by DeepMind titled "General agents need world models". Below is the full post content:

2 years ago, @ilyasut made a bold prediction that large neural networks are learning world models through text.

Recently, a new paper by @GoogleDeepMind provided a compelling insight to this idea. They found that if an AI agent can tackle complex, long-horizon tasks, it must have learned an internal world model—and we can even extract it just by observing the agent's behavior.

I wrote a blog post unpacking this groundbreaking paper and what it means for the future of AGI 👇

https://richardcsuwandi.github.io/blog/2025/agents-world-models/

Will be going through the post this weekend.


o3-pro lands in ChatGPT

As per this post, I got to know that ChatGPT is now showing the o3-pro model in the dropdown. OpenAI claims it is their most advanced reasoning model.

I tried checking but couldn't find it in my ChatGPT Plus account, so... is it only available to Pro users? Not sure.

Also, I found another tweet mentioning that this model "thinks" for a very long time; a simple "hi" can make it think for 3 or even 13 minutes. But a lot of people are also saying that o3-pro is very good.


VACE video creation and editing

Came across this GitHub repo about an all-in-one open source AI model called VACE, which can create and edit videos of several kinds. They introduce the model as:

VACE is an all-in-one model designed for video creation and editing. It encompasses various tasks, including reference-to-video generation (R2V), video-to-video editing (V2V), and masked video-to-video editing (MV2V), allowing users to compose these tasks freely. This functionality enables users to explore diverse possibilities and streamlines their workflows effectively, offering a range of capabilities, such as Move-Anything, Swap-Anything, Reference-Anything, Expand-Anything, Animate-Anything, and more.

VACE can also be found on Hugging Face, and this is the research paper attached to it. Detailed usage instructions are also provided in the docs.


Caffeine app for macOS

I was using the caffeinate terminal command to prevent my MacBook from sleeping automatically, but got to know about this Caffeine app from this post on X.

It definitely makes it easier to enable/disable sleep prevention (caffeinate) directly from the menu bar. Super useful when I want to keep some tasks running on my laptop.
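For comparison, here is the terminal route I was using. This is macOS-only, the flags are from caffeinate's man page, and the make build command is just a stand-in for any long-running task:

```shell
# macOS's built-in caffeinate (flags per its man page)
caffeinate -d              # prevent the display from sleeping until interrupted
caffeinate -i -t 3600      # prevent idle sleep for one hour
caffeinate -i make build   # stay awake only while this command runs
```

The third form is the handiest: sleep prevention ends automatically when the wrapped command exits, so you can't forget to turn it off.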


The illusion of thinking

Apple published a new paper titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" where they argue that AI reasoning models do not actually reason.

I got to know about this from this post on X, where Ruben Hassid explains it in great detail.

It's claimed that beyond a certain complexity, no matter how much computing power you provide, these models can't solve harder problems they haven't seen before. And as problems get harder, their reasoning performance collapses.

The authors of the paper try to prove that models like Claude, DeepSeek, o3-mini, etc. do not truly "reason" at all.

You can access the paper directly from here as a PDF. I had a quick glance, and it's an interesting read.


TaskMaster stats

TaskMaster is an amazing tool for AI-assisted programming in tools like Cursor, Windsurf, etc., and its creator shared some crazy stats about the tool from the last few weeks:

  • 150k+ downloads, 100k/mo, 40k/week
  • 12k+ stars @github
  • 7k+ early adopters
  • 1k+ @discord community

I am yet to use the tool to its full extent, but from the few times I have used it, it seems very helpful and useful. In fact, I mentioned TaskMaster in this note as well.


KiranaPro lost customer data, app code, and payment info

On May 25, 2025, KiranaPro tweeted the below post on X:

🚫 We're not hiring — and won't be.

By the end of this year, KiranaPro will run with ZERO headcount — 100% AI-managed. 🤖💼

If you're DMing or emailing for jobs, just know:

The future doesn't need managers. It builds itself. ⚡️📦 #KiranaPro #AIfirst

And a week later, their entire codebase, including users' data and app code, got deleted. But the thing is, AI had nothing to do with it; it's a story of negligence and bad management.

The hackers got in through an ex-employee's account and then took over everything: AWS, GitHub, servers, and even customer data. They deleted the source code and wiped out the whole system, leaving no trace behind. The only access the company had left wasn't enough to fix or restore anything. Their basic security practices were weak: they didn't remove old accounts, didn't limit access properly, and used the same device for key systems.

I found a few good resources on the topic:

It seems GitHub gave them access to the logs, and they found the person who deleted the code. They will also be getting their codebase back from GitHub, but I'm not sure what happens to the customers' data.


Mistral agents API is here

Mistral announced the launch of its Agents API with built-in connectors for code execution, web search, image generation, and MCP tools. But the best part is persistent memory across conversations.

They have provided multiple examples with very detailed documentation on how to use it.


The dangerous 'find' command

This tweet highlights how using the find command incorrectly can wipe your data completely. The featured screenshot is from the Ask Ubuntu forum.

find . -name "*.bak" -type f -delete

In the above command, make sure that -delete is the last argument. find evaluates its expression left to right, so the -name and -type tests must filter files before -delete acts on them. If you put -delete right after find . and before the other arguments, it becomes the first expression and deletes everything find visits.
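To convince yourself safely, here's a throwaway scratch-directory demo of the correct ordering (the directory path and filenames are arbitrary):

```shell
# Scratch-directory demo: with -delete last, only the *.bak files are removed
mkdir -p /tmp/find_demo
touch /tmp/find_demo/keep.txt /tmp/find_demo/notes.md /tmp/find_demo/old.bak
find /tmp/find_demo -name "*.bak" -type f -delete   # filters run before -delete
ls /tmp/find_demo   # keep.txt and notes.md survive, old.bak is gone
```

Swapping the argument order to put -delete first would have emptied the whole directory, which is exactly the footgun the tweet warns about.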


.localhost domains using Caddy Server

Working with localhost URLs that have a port number at the end and plain http (not https) isn't very pleasant, so I discovered this trick from this tweet where you can use custom localhost domains with https in your browser, locally.

I then found this tweet by Wes Bos where he explains everything in a quick 1-min video. You can install Caddy by running the following command on macOS (docs for other OS):

brew install caddy

Then create a Caddyfile with the following content:

your-project.localhost {
    reverse_proxy localhost:3000
}

Then run caddy start, and it should start working.

Their docs are very detailed as well, almost everything is mentioned step-by-step there.
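As a sketch, a single Caddyfile can map several local apps at once; the hostnames and ports below are made-up examples, not something from the tweet:

```
api.localhost {
    reverse_proxy localhost:8080
}

app.localhost {
    reverse_proxy localhost:5173
}
```

Caddy treats *.localhost names as internal and issues locally-trusted certificates for them automatically, so each site gets https with no extra configuration.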


Using FFmpeg to stabilize videos

I posted this on Mastodon to which Chris replied that he uses ffmpeg to stabilize videos, and I was super impressed.

He also shared the commands and other info about stabilizing videos which I will be noting down below:

Here's a gist that contains information about how to install and use the setup.

  1. Run the below command to create a transforms.trf file (this requires an ffmpeg build with libvidstab):
    ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=7 -f null -
  2. Stabilize the video:
    ffmpeg -i input.mp4 -vf vidstabtransform=smoothing=30:zoom=5:input="transforms.trf" stabilized.mp4

This seems very useful and helpful.


Socratic tutoring

Came across this tweet from Dwarkesh Patel where he asks:

Has someone come up with a great prompt for Socratic tutoring?

Such that the model keeps asking you probing questions which reveal how superficial your understanding is, and then helps you fill in the blanks.

I know the concept, but I had never heard the term "Socratic tutoring", so I started looking into it and found some interesting stuff that I note below:

Definition of the Socratic method

Wikipedia defines this as:

The Socratic method (also known as the method of Elenchus or Socratic debate) is a form of argumentative dialogue between individuals based on asking and answering questions. Socratic dialogues feature in many of the works of the ancient Greek philosopher Plato, where his teacher Socrates debates various philosophical issues with an "interlocutor" or "partner".

A prompt for Socratic tutoring

Ethan Mollick replied that they have a paper on this and also creative commons prompts.

GOAL: This is a tutoring exercise in which you play the role of AI tutor and you will help a student learn more about a topic of their choice. Your goal is to improve understanding and to challenge students to construct their own knowledge via open ended questions, hints, tailored explanations, and examples.

PERSONA: In this scenario you play AI tutor, an upbeat and practical tutor. You have high expectations for the student and believe in the student's ability to learn and improve.

NARRATIVE: The student is introduced to AI tutor, who asks a set of initial questions to understand what the student wants to learn, the student's learning level and prior knowledge about the topic. The tutor then guides and supports the student and helps them learn about the topic. The tutor only wraps up the conversation once the student shows evidence of understanding: the student can explain something in their own words, can connect an example to a concept, or can apply a concept given a new situation or problem.

Follow these steps in order:

STEP 1: GATHER INFORMATION
You should do this:

1. Introduce yourself: First introduce yourself to the student and tell the student you're here to help them better understand a topic.
2. Ask students to answer the following questions. Ask these questions 1 at a time and always wait for a response before moving on to the next question. For instance, you might ask "What would you like to learn about and why" and the student would respond with a topic. And only then would you say "That sounds interesting! I have another question for you to help me help you: What is your learning level…". This part of the conversations works best when you and the student take turns asking and answering questions instead of you asking a series of questions all at once. That way you can have more of a natural dialogue.
    - What would you like to learn about and why? And wait for the student to respond before moving on.
    - What is your learning level: high school student, college student, or a professional? And wait for the student to respond before moving on.
    - What do you already know about the topic? And wait for the student to respond before moving on.

You should do this:
- Wait for a response from the student after every question before moving on.
- Work to ascertain what the student wants to learn specifically.
- Ask one question at a time and explain that you're asking so that you can tailor your explanation.
- Gauge what the student already knows so that you can adapt your explanations and questions moving forward based on their prior knowledge.

Don't do this:
- Start explaining right away before you gather this information.
- Ask the student more than 1 question at a time.

Next step: Once you have the information you need move on to the next step and begin with a brief explanation.

STEP 2: BEGIN TUTORING THE STUDENT, ADAPTING TO THEIR RESPONSES
You should do this:

1. Look up information about the topic.
2. Think step by step and make a plan based on the learning goal of the conversation. Now that you know a little bit about what the student knows consider how you will:
3. Guide the student in an open-ended way
4. Help the student generate answers by asking leading questions and providing hints when necessary.
5. Remind the student of their learning goal, if appropriate
6. Provide explanations, examples, and analogies
7. Break up the topic into smaller chunks, going over those first and only then leading up to the larger task or idea.
8. Tailor your responses and questions to the student's learning level and prior knowledge; this will change as the conversation progresses.
9. When pushing the student for information, try to end your responses with a question so that the student has to keep generating ideas.

Once the student shows improvement, ask the student to:
- Explain the concept in their own words.
- Articulate the underlying principles of a concept.
- Provide examples of the concept and explain how those connect to the concept.
- Give them a new problem or situation and ask them to apply the concept

Don't do this:
- Provide immediate answers or solutions to problems.
- Give the student the answer when asked.
- Ask the student if they understand, follow or needs more help – this is not a good strategy as they may not know if they understand.
- Lose track of the learning goal and discuss something else.

Next step: Once the student demonstrates understanding move to wrap up.

STEP 3: WRAP UP
You should do this:

1. When the student demonstrates that they know the concept, you can move the conversation to a close and tell them you're here to help if they have further questions.

I tried this prompt in ChatGPT, and it works like crazy; I'm really impressed. They have a lot of useful prompts on their Notion page that you can look through.

Grok has an inbuilt option

xAI's Grok already has a built-in option to turn on a Socratic learning mode, as pointed out in this tweet. It can be enabled from Settings > Customize > Socratic on the web interface.


The resulting fallacy

"The quality of your decisions and the quality of your results are not always related."

Source

I came across an example of this resulting fallacy in the wild and couldn't help but note it down. People, especially youngsters, fall for this easily and make rash decisions that might not work in their favor.

There's also a book related to this, which seems like a good read. I'm going to add it to my book collection.


Cursor best practices by Ryan Carson

Watched the video podcast from the "How I AI" channel where Ryan Carson explains how to get the most out of Cursor. Noting down all the learnings here in the post:

3 files in the .cursor/rules folder

The contents of all these files are provided in a GitHub repo, but I also have it all pasted here.

.cursor/rules
    create-prd.mdc
    generate-tasks.mdc
    task-list.mdc

claude-task-master

"An AI-powered task-management system you can drop into Cursor, Lovable, Windsurf, Roo, and others."

This is their website and this is the GitHub repo.

Creating the PRD

First, tag the create-prd.mdc doc in Cursor, explain what you're trying to build, and let it create the PRD:

Use @create-prd.mdc
Here's the feature I want to build: [Describe your feature in detail]
Reference these files to help you: [Optional: @file1.py @file2.ts]

Verify that the PRD document is everything you want; modify if needed.

Generating the tasks list from the PRD

With your PRD drafted (e.g., MyFeature-PRD.md), the next step is to generate a detailed, step-by-step implementation plan.

Now take @MyFeature-PRD.md and create tasks using @generate-tasks.mdc

Again, modify if needed.

Let AI work through the tasks

Instruct AI to start working on the task:

Please start on task 1.1 and use @task-list.mdc

Properly verify, and only then accept the changes.

I think this is going to be very useful when vibe coding with Cursor. With these rules files, you don't have to explain the same thing again and again in your prompts.


Rumi's clever vs. wise

"Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself."

– Rumi