
Raw Notes

Raw notes include useful resources, incomplete thoughts, ideas, and learnings as I go about my day. You can also subscribe to the RSS feed to stay updated.

Total Notes: 153


Important technologies with boring websites

I’ve compiled a list of websites for important web technologies that have old but perfectly functional designs. These are fundamental tools of the internet, often open source, and their websites prioritize functionality over aesthetics, reflecting their long-standing nature.

FFmpeg

A multimedia framework for transcoding, streaming, and playing various media formats.

SQLite

A self-contained, serverless SQL database engine widely used in applications.

Apache HTTP Server

An open-source HTTP server that powers a significant portion of the web.

Nginx

A high-performance HTTP server and reverse proxy used for web serving and load balancing.

PostgreSQL

A powerful, open-source object-relational database system used for data storage and management.

MySQL

An open-source relational database management system widely used in web applications.

Python

An interpreted, high-level, general-purpose programming language used extensively in web development.

Ruby

A dynamic, open-source programming language known for its simplicity and productivity.

Git

A distributed version control system essential for managing source code in software development.

Linux Kernel

The core of the Linux operating system, providing essential services for computing systems.

GNU Project

An umbrella project for free software, including the GNU operating system, a Unix-like OS built entirely from free software.

TeX

A typesetting system that is the standard for creating books and articles with complex mathematics.

Vim

A highly configurable text editor built for efficient text editing, especially for developers.

Emacs

An extensible, customizable text editor that also serves as a development environment.

Perl

A high-level, general-purpose, interpreted programming language used for text processing.

Tcl

A scripting language with a simple API for embedding into C/C++ applications.

OpenSSH

A suite of secure networking utilities based on the SSH protocol for secure remote access.

OpenSSL

A software library for applications that secure communications over computer networks.

BIND

The most widely used Domain Name System (DNS) software on the internet.

I will keep updating this list as I discover more such websites.


WhatsApp AI chatbot in Python

I came across a GitHub repo containing the complete Python code to host and run a WhatsApp AI chatbot. I have forked the repo, as I am thinking of making such a chatbot for myself. The requirements are listed as:

  • WaSenderAPI: Only $6/month for WhatsApp integration
  • Gemini AI: Free tier with 1500 requests/month
  • Hosting: Run locally or on low-cost cloud options
  • No WhatsApp Business API fees: Uses WaSenderAPI as an affordable alternative

I will learn more about the WhatsApp Business API and how it can be used to create a WhatsApp chatbot for specific topics that people can interact with, and then how it can all be monetized.
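
I haven't run the repo yet, but conceptually the flow is: receive a webhook when a message arrives, generate a reply with Gemini, and send it back through WaSenderAPI. Here's a hypothetical sketch of that loop – the WaSenderAPI URL and payload field names below are placeholders I made up, not the real API contract:

```python
# Hypothetical sketch of a WhatsApp AI bot loop: webhook in, Gemini reply out.
# WASENDER_SEND_URL and the "from"/"message" field names are placeholders --
# check the repo and WaSenderAPI docs for the actual contract.
import os

import requests
from flask import Flask, request
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")
app = Flask(__name__)

WASENDER_SEND_URL = "https://api.example-wasender.com/send"  # placeholder URL

@app.route("/webhook", methods=["POST"])
def webhook():
    data = request.get_json()
    sender = data["from"]       # placeholder field name
    text = data["message"]      # placeholder field name
    reply = model.generate_content(text).text  # let Gemini write the answer
    requests.post(
        WASENDER_SEND_URL,
        json={"to": sender, "message": reply},
        headers={"Authorization": f"Bearer {os.environ['WASENDER_KEY']}"},
    )
    return "", 200
```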


Stripe's new transformer-based model

Stripe has developed a new approach to analyzing transactions using a transformer-based foundation model. Earlier, they relied on traditional machine learning models, which had limitations; the new model is expected to increase conversion even further and significantly decrease fraudulent transactions.

Gautam Kedia, an AI/ML engineer at Stripe, explained this in a detailed X post. He mentions:

So we built a payments foundation model—a self-supervised network that learns dense, general-purpose vectors for every transaction, much like a language model embeds words. Trained on tens of billions of transactions, it distills each charge’s key signals into a single, versatile embedding.

This approach improved our detection rate for card-testing attacks on large users from 59% to 97% overnight.

While I did have a loose understanding of what a transformer is, I looked up its definition again to understand it better in the context of payments:

A Transformer is a type of neural network architecture that has revolutionized natural language processing (NLP) and is now being applied to other domains, as seen in the Stripe example. Its key innovation is the attention mechanism.

The attention mechanism allows the model to weigh the importance of different parts of the input sequence when processing any single part.

Further, I asked Gemini to explain this entire thing to me in simpler words, and here's how it explained it:

Think of it like reading a book. An older model might read word by word and only remember the last few words. A Transformer, with its attention mechanism, can look back at earlier parts of the book to understand the meaning of the current sentence in the broader context. In the payment world, this means understanding the significance of a transaction not just in isolation, but in the context of previous transactions.
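
To make the "weighing the importance of different parts" idea concrete, here's a tiny NumPy sketch of scaled dot-product attention, the core operation the definition above describes. Treat each row as one event in a sequence (a word, or by analogy a transaction):

```python
# Toy scaled dot-product attention: each position builds its output as a
# softmax-weighted mix of every position in the sequence.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each item to all others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax -> attention weights
    return weights @ V                               # weighted mix over the whole sequence

X = np.random.randn(5, 8)   # 5 events in a sequence, 8-dim embeddings
out = attention(X, X, X)    # self-attention: every event attends to the full history
print(out.shape)            # (5, 8)
```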

Very cool.


MCP has a new problem

Someone added more than 81 MCP tools to their Cursor IDE, and it started showing a warning saying "too many tools can degrade performance" and suggesting using fewer than 40 tools.

Cursor's CEO replied the following:

you'll be able to disable individual tools in 0.50 :)

But the problem still remains: if MCPs are the future, there has to be a way for them to be managed automatically, so that I don't need to manually enable or disable tools.


Firefox moves to GitHub

I came across this post on Hacker News discussing that Firefox has moved its repo to GitHub for the first time, and it's a huge deal, as a person mentioned on X.

I don't know how this changes things for Firefox, but there must be some reason behind it. A person who works at Mozilla commented:

The Firefox code has indeed recently moved from having its canonical home on mercurial at hg.mozilla.org to GitHub. This only affects the code; bugzilla is still being used for issue tracking, phabricator for code review and landing, and our taskcluster system for CI.

On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.

But this comment made the most sense for me:

I think it's actually an understandable strategical move from Mozilla. They might loose some income from Google and probably have to cut the staff. But to keep the development of Firefox running they want to involve more people from the community and GitHub is the tool that brings most visibility on the market right now and is known by many developers. So the hurdle getting involved is much lower.

I think you can dislike the general move to a service like GitHub instead of GitLab (or something else). But I think we all benefit from the fact that Firefox's development continues and that we have a competing engine on the market.

Some folks seemed excited about the migration, whereas others were upset about the move to a closed-source platform like GitHub. But if this really makes the browser better, I am excited for the move.


Real-time webcam video analysis using AI

Xuan-Son Nguyen shared a video on X where he analyzes his webcam video feed in real time, running Hugging Face's SmolVLM locally via ggml's llama.cpp.

Real-time webcam demo with @huggingface SmolVLM and @ggml_org llama.cpp server.

All running locally on a Macbook M3

He also shared the GitHub repo containing the instructions on how to do it. The steps are:

  1. Install llama.cpp
  2. Run llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF
    Note: you may need to add -ngl 99 to enable GPU (if you are using NVidia/AMD/Intel GPU)
    Note (2): You can also try other models here
  3. Open index.html
  4. Optionally change the instruction (for example, make it return JSON)
  5. Click on "Start" and enjoy
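
If you'd rather script the query than use index.html, here's a rough Python sketch of what the page does: it sends a frame as a base64 data URI to llama-server's OpenAI-compatible endpoint. Exact field support can vary across llama.cpp versions, so treat this as an approximation of the demo, not its actual code:

```python
# Send one webcam frame to a locally running llama-server (default port 8080)
# via the OpenAI-compatible chat completions endpoint.
import base64

import requests

with open("frame.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "max_tokens": 100,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```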

Definitely worth trying.


Cursor codebase indexing

I came across an article that deep-dives into the technology behind Cursor AI's fast codebase indexing:

  1. Code chunking and processing
  2. Merkle tree construction and synchronization
  3. Embedding generation
  4. Storage and indexing
  5. Periodic updates using Merkle trees
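
To make the Merkle-tree part (steps 2 and 5) concrete, here's a minimal, illustrative Python sketch of the principle: hash each file, then hash the hashes up to a single root. If the root matches the previously synced root, nothing changed; otherwise you walk down the tree to find exactly which files need re-embedding. Cursor's actual scheme is more sophisticated; this only shows the idea:

```python
# Minimal Merkle-tree sketch for change detection over a set of files.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes: list[str]) -> str:
    level = leaf_hashes
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h((a + b).encode()) for a, b in zip(level[::2], level[1::2])]
    return level[0]

files = {"main.py": b"print('hi')", "util.py": b"def f(): pass"}
leaves = [h(content) for _, content in sorted(files.items())]
print(merkle_root(leaves))  # compare against the last synced root to detect changes
```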

I also came across this post from Simon that talks about the same thing. Very interesting to read.


MCP to control LEDs

A person on Reddit created an MCP server to control a single LED bulb via natural language – it does look like overkill, but that's not the point. The person asks it to blink the LED twice, and it does exactly that. Beautiful.

The tech stack used is:

  • Board/SoC: Raspberry Pi CM5 (a beast)
  • Model: Qwen-2.5-3B (Qwen-3: I'm working on it)
  • Perf: ~5 tokens/s, ~4-5 GB RAM

And the control pipeline is explained as:

MCP-server + LLM + Whisper (All on CM5) → RP2040 over UART → WS2812 LED

And not to mention that everything runs locally on the Raspberry Pi CM5 device; the entire code is on GitHub for anyone to use.
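
For a feel of what such a tool looks like, here's a rough sketch using the official MCP Python SDK's FastMCP helper plus pyserial. The UART port and the "BLINK" wire command below are my own placeholders, not the repo's actual protocol:

```python
# Sketch of an MCP server exposing one tool that blinks an LED over UART.
# The serial port path and "BLINK n" command format are placeholders.
import serial
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("led-controller")
uart = serial.Serial("/dev/ttyAMA0", 115200, timeout=1)  # placeholder port

@mcp.tool()
def blink_led(times: int = 1) -> str:
    """Blink the WS2812 LED a given number of times."""
    uart.write(f"BLINK {times}\n".encode())  # placeholder command format
    return f"Blinked LED {times} time(s)"

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio so an LLM client can call blink_led
```

Once a client like Claude Desktop or Cursor is pointed at this server, "blink the LED twice" resolves to a `blink_led(times=2)` tool call.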


Removing supervisor password from Thinkpad P53

Came across a Reddit post where the person bought a second-hand Lenovo Thinkpad P53 for €150 and successfully removed the supervisor password from it.

I found it very cool how the person unlocked the BIOS, so I'm saving this post for future reference, in case I decide to get something like this for myself. There are some additional resources shared for the same – like this forum post and this YouTube video.


A timeline of the history of pizza

The history of flatbread goes back to 550 BC, when Persian soldiers baked it, and the first recorded mention of the word "pizza" dates to AD 997 in Italy.

Here's a cool timeline of the history of pizza that you can refer to. It lists multiple major events from 550 BC to 2020 and is very interesting to go through.


Google Drive asks to upgrade its desktop client

My Google Drive desktop client doesn't automatically open when my computer starts; I only launch it when I have something to sync. I am not overly dependent on it and already have other means of backup set up.

So when I started the Google Drive client today on my macOS device, it showed a notification to "upgrade" the desktop client, saying it would be deprecated in ~19 days, as you can see in the screenshot here. I tried updating it from the Settings, but there were no updates available.

So I downloaded the new "upgraded" desktop client and re-installed it, and the notice was gone. Thankfully, I didn't have to log in again to the two Google accounts that were connected to Google Drive. After installing, I looked through the settings and other options in the client but couldn't find anything new or modified; everything looks exactly the same as in the old client.

Tried looking it up on Google, but couldn't find anything about it either.


Replit partners with Notion

Replit has partnered with Notion, so Notion can now be used to host content when building apps with Replit – think of Notion as the backend for your apps. They also have a quick YouTube video explaining how it works.

I think this will be a great tech stack for people who love Notion and want to start a blog with it: they can write posts in Notion and have them go live on the website.

I tried searching on Google to see if someone has created something interesting with this setup, but couldn't find anything yet. I'm sure we'll see some cool use cases in the next few weeks as more people learn about it.


A poll about Windsurf, Cursor, and VS Code

Theo Browne ran polls asking "Which IDE do you use primarily?" on X (Twitter), LinkedIn, and YouTube and the results are really interesting.

Below are the results from all these platforms at the time of writing. There are still a few hours left before the polls close, but I'm sure the shares are not going to change drastically.

Platform   Total Votes   Windsurf   Cursor   VS Code   Others
X          43,523        4.7%       30.5%    30.6%     34.1%
LinkedIn   4,172         4%         28%      47%       21%
YouTube    18,000        2%         18%      50%       30%

The most interesting thing is that VS Code is winning in all three polls, with Cursor in second place and Windsurf last. It's also interesting that thousands of people are still not using any of these three IDEs.

Also, from the huge VS Code share, I can infer that many of those users rely on GitHub Copilot, Cline, or other AI assistants, while some use no AI at all.


Cloud computing – own nothing and be happy

I really love this post from DHH talking about how cloud computing makes us renters for life: you own nothing and are still supposed to be happy about it. To quote DHH exactly:

Cloud computing is the most successful "you will own nothing and be happy" psyop in history. We gave up on DARPA's beautifully decentralized design for the internet to become renters for life. Tragic.

While this totally makes sense, I think we don't have easily digestible information about "self-hosting" on the internet. I mean, I haven't looked into it much, but I still think it should be more normalized among dev folks.

In fact, a person did point this out:

Someone should disrupt the setup/ops by making it actually EASY to learn.


Unsloth AI makes fine-tuning easier

Was reading about Unsloth AI and how it can be used to fine-tune open-source models like Qwen3, Llama 4, Gemma 3, Phi-4, etc., faster and with 70% less memory. It's open source and supports LoRA and QLoRA fine-tuning methods.

I got to know about this from this Andrea Volpini post about a fine-tuned reasoning model that thinks like an SEO. They created SEOcrate_4B_grpo_new_01 by fine-tuning the Gemma 3 4B model with Unsloth.

Unsloth AI has a free plan that supports fine-tuning Mistral, Gemma, and Llama models that you can run on your computer. They also have multiple beginner-friendly Google Colab notebooks available for the different models you might want to train.

I have yet to try this, but I will be going through their documentation and trying to fine-tune a model on some data.
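
For reference, here's a condensed sketch of the setup from Unsloth's Colab notebooks – roughly how a QLoRA fine-tune of Gemma 3 4B is started. The exact model name and arguments may differ in the current docs, so treat this as an outline rather than a recipe:

```python
# Load a 4-bit base model and attach LoRA adapters with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it",   # check Unsloth's docs for current names
    max_seq_length=2048,
    load_in_4bit=True,                    # QLoRA: 4-bit quantized base weights
)

# Train small LoRA adapters instead of updating all of the model's weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, train with e.g. trl's SFTTrainer on your own dataset.
```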


From RSS feed to Bluesky and Mastodon via n8n

I consistently take notes in the raw section of my blog and wanted new notes to be posted to Mastodon and Bluesky as links. I used n8n to set up this automation successfully – even though n8n doesn't have an official Mastodon or Bluesky node.

In this post, I will explain how I set this up:

First, I used the RSS Feed Trigger node in n8n so that the workflow auto-triggers every time a new raw note is published. It gives me the most recent post in the format below, and I can use this data to publish on both platforms.

[
  {
    "title": "",
    "link": "",
    "pubDate": "",
    "content": "",
    "contentSnippet": "",
    "id": "",
    "isoDate": ""
  }
]

From RSS feed to Mastodon

I needed the following 3 things for this automation via n8n:

  1. RSS feed URL: it's deepakness.com/feed/raw.xml in my case
  2. Mastodon instance URL: my account is at mastodon.social
  3. Mastodon access token: I visited [INSTANCE_URL]/settings/applications, created a new application with full write scope, and got the access token

After the previous RSS Feed Trigger node in n8n, I created another HTTP Request node, and entered the following information:

  1. Authentication: None
  2. Request Method: POST
  3. URL:
    https://[INSTANCE_URL]/api/v1/statuses?access_token=[ACCESS_TOKEN]
  4. Ignore SSL Issues (Insecure): OFF
  5. Response Format: JSON
  6. JSON/RAW Parameters: OFF
  7. Options: Nothing
  8. Body Parameters: Nothing
  9. Headers: Nothing
  10. Query Parameters:
    1. Name: status
    2. Value: [POST_CONTENT] from previous nodes

And this simply worked; I didn't have to do anything else at all. If you're interested, you can learn more by going through their documentation.
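
For anyone who'd rather do this outside n8n, the node boils down to a single POST request. Here's a minimal Python sketch (using the more common Bearer-header style; the query-parameter token from the node above works too). The status text is a placeholder:

```python
# Publish a status to Mastodon with one POST to /api/v1/statuses.
import requests

INSTANCE = "https://mastodon.social"
ACCESS_TOKEN = "..."  # from [INSTANCE_URL]/settings/applications

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"status": "New raw note: https://example.com/raw/my-note/"},  # placeholder
)
print(resp.json()["url"])  # link to the published toot
```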

From RSS feed to Bluesky

First, I needed to create a Bluesky session; only then was I able to publish. For this, I needed the following things:

  1. App password: created a new app and got the app password from the bsky.app/settings/app-passwords page
  2. Profile identifier: [username].bsky.social

Node 1: HTTP Request

First, you need to create an HTTP Request node to get the session ID. Fill in the following info:

  1. Authentication: None
  2. Request Method: POST
  3. URL:
    https://bsky.social/xrpc/com.atproto.server.createSession
  4. Ignore SSL Issues (Insecure): OFF
  5. Response Format: JSON
  6. JSON/RAW Parameters: OFF
  7. Options: Nothing
  8. Body Parameters:
    1. Name: identifier
    2. Value: deepakness.bsky.social
    3. Name: password
    4. Value: [APP_PASSWORD]
  9. Headers: Nothing
  10. Query Parameters: Nothing

From here, you get the accessJwt token, which the next nodes need to be able to publish the post.

Node 2: Date & Time

Yes, the date parameter is required by the Bluesky API, so you need to add the Date & Time node and get the current time:

  • Operation: Get Current Date
  • Include Current Time: ON
  • Output Field Name: currentDate
  • Options: Nothing

Node 3: HTTP Request

Now, I needed another HTTP Request node to be able to actually publish. Below are the options:

  1. Method: POST
  2. URL: https://bsky.social/xrpc/com.atproto.repo.createRecord
  3. Authentication: None
  4. Send Query Parameters: OFF
  5. Send Headers: ON
    1. Specify Headers: Using Fields Below
    2. Header Parameters:
      1. Name: Authorization
      2. Value: Bearer [accessJwt_from_previous_node]
  6. Send Body: ON
    1. Body Content Type: JSON
    2. Specify Body: Using JSON
    3. JSON: see here
    4. Options: Nothing
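
Outside n8n, this whole two-node flow is just two POST requests: createSession to get the accessJwt, then createRecord to publish. Here's a minimal Python sketch of the same calls (the post text is a placeholder):

```python
# Bluesky via the AT Protocol: create a session, then create a feed post record.
from datetime import datetime, timezone

import requests

HANDLE = "deepakness.bsky.social"
APP_PASSWORD = "..."  # from bsky.app/settings/app-passwords

session = requests.post(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    json={"identifier": HANDLE, "password": APP_PASSWORD},
).json()

resp = requests.post(
    "https://bsky.social/xrpc/com.atproto.repo.createRecord",
    headers={"Authorization": f"Bearer {session['accessJwt']}"},
    json={
        "repo": session["did"],
        "collection": "app.bsky.feed.post",
        "record": {
            "$type": "app.bsky.feed.post",
            "text": "New raw note: https://example.com/raw/my-note/",  # placeholder
            "createdAt": datetime.now(timezone.utc)
                         .isoformat().replace("+00:00", "Z"),  # required date field
        },
    },
)
print(resp.json())  # contains the record's uri and cid on success
```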

So far, it's working for me but I might further improve it in the future.


You're the average of your five closest companions

Came across this thought-provoking post from my friend Rohit that forced me to think about the subconscious impact of being surrounded by AI.

I am the average of my five closest people. What if three of them are AI?

Recently, I have been ‘conversing’ a lot with AI models. In fact, the time I spend with AI is about to exceed the time I spend with a lot of good friends.

The problem is, I don't know how it is shaping my personality. I don’t know what quirks I am imbibing from these models.

I know what I learn from AI consciously – the answers these AI models give. I don’t know what I am learning from them subconsciously.

Definitely some food for thought.


About alcohol

I saw Pieter Levels posting about alcohol and how badly it affects the drinker and the people around them, and I couldn't help but write this note. I completely agree with the points he makes, as I have seen families get destroyed by excessive drinking in my village.

I am ~29 years old and have never had even a single sip of alcohol in my life; in fact, no one in my close family drinks. Many of my friends do drink, and I don't have a problem with that – I still hang out with them a lot. But I am glad I never picked up the habit.

Not sure what the future holds, but I am very sure that I will have absolutely no reason to drink.


OpenAI acquires Windsurf

OpenAI finally acquired Windsurf for $3 billion. The deal had been in the news for some time, and I am excited about the acquisition – I am more interested in what OpenAI actually does to improve Windsurf.

Currently, I mainly use Cursor AI for coding, but if Windsurf actually improves after the acquisition, I would be very happy to switch. I do have Windsurf installed, and I sometimes use it for smaller tasks.

I want OpenAI to make Windsurf better, and it shouldn't just keep pushing OpenAI models but include whichever models are best for coding. At the moment, Claude 3.7 Sonnet and Gemini 2.5 Pro are the best coding models; I have tried OpenAI's GPT-4.1 and o3, but they're not up to the mark as of now.

Also, I learned that OpenAI tried buying Cursor twice before acquiring Windsurf, but, I think, the Cursor team didn't want to sell.


Setting up MastoFeed

I publish a lot in my /raw notes section, so I thought about connecting my raw feed to my Mastodon account so that whenever I publish a new note, it automatically gets published on the platform.

I was thinking about writing a Python or Node.js script to automate this, but then I looked it up online and found a tool called MastoFeed that does precisely what I wanted. It's completely free to use, and one can connect multiple feeds that get published to the same account. So I dropped the idea of creating a custom script and set up MastoFeed instead; this is the very first post published through it.

Another option was Typefully. I use Typefully to post and schedule posts on X, Mastodon, Threads, and other platforms, so I could have used the Typefully API with an n8n automation. But I didn't want to publish a lot of posts on X, so the MastoFeed setup works best for me.

Update:

I waited for more than an hour but it didn't get published, and I noticed other people discussing the same issue. So I have removed the setup for now and have already found a better solution using n8n.


iOS outside-app payments with Stripe

The US court ruled in favor of Epic Games, and Apple is now forced to allow iOS payments outside of the app. Earlier, Apple charged a 30% commission on revenue from in-app purchases and subscriptions.

Now that this is no longer the case, Stripe has enabled iOS developers to accept payments outside the app via Stripe, without heavy commissions. It was a super-fast implementation from Stripe, and they have also put together detailed documentation about it.

If you want some background about the Apple issue, here's a detailed article that explains how Apple is forced to accept external payments now.


Gemini 2.5 Pro plays Pokemon

They made the Gemini 2.5 Pro model play Pokémon, which it successfully completed, though it took the model 37 long days to finish the game. The run was livestreamed continuously on Twitch, in case you're interested.

Sundar Pichai tweeted about it:

What a finish! Gemini 2.5 Pro just completed Pokémon Blue!

Special thanks to @TheCodeOfJoel for creating and running the livestream, and to everyone who cheered Gem on along the way.

Very impressive.


How to be a good technical writer

A person asked the question "i want to become insanely good at technical writing, any tips?" on X and received some good replies. Below are the ones I liked:

Arpit Bhayani replied:

write a lot, a lot.

I have been writing since 2012, and it is a form of meditation for me.

I write everyday, it makes me think harder, it helps me articulate my thoughts well, more importantly it makes me more empathetic.

Ayush Chaurasia replied:

Imho, The popular advise 'write more' improves only grammar and vocabulary. neither are important for great technical writing.

What works best for me is writing with either of these in mind:

  1. What if I were explaining it to my friends
  2. What if I were to learn this over again.

Prateek Joshi replied:

writing is a lagging indicator of observing + doing + thinking. if you keep up the activity, you’ll have a bunch of interesting stuff to say (which can be translated into the written form).

Prabhdeep replied:

I write technical articles on medium, here are a few tips...

  1. Explain the "why" and how things tie in together. Explain things intuitively instead of dropping plain facts.
  2. Storyboard. Write the subtitles first, and then fill in the information. That way, your writing won't be all over the place.
  3. Spend time explaining the foundations briefly at the start. That way, nobody will be discouraged to read.
  4. Emphasize the applications. Why is this technical thing important?

And most importantly, write a lot. Over time, you'll begin to develop an intuition on how to write better. That's how I improved.

Vincy replied:

Remember the phrase "MMM: Meat, Material, Medium" as a framework for good content production. By Gary Vaynerchuk (GaryVee).

Breakdown of MMM in Content Creation:

Meat: The core substance or valuable message of the content.

Material: The quality of the content (research, storytelling, production value).

Medium: The platform or format where the content is shared (video, podcast, blog, etc.).

Gary Vee emphasizes that great content must have strong substance (Meat), be well-produced (Material), and be optimized for the right platform (Medium).

boisselle replied:

read more technical papers, copy them, rearrange them, write your own.

Arunachalam replied:

I recommend you to follow technical style guide used by freCodeCamp or Digital ocean.

I'm personally a technical writer at freeCodeCamp. It's a well recognised platform

Here's the link

Apart from this, most people suggested that the more you write, the better you become.

I think the best way here would be to imitate others who are good at it, and then innovate when writing for yourself.


The good AI

Came across this blog post that talks about what good AI would look like, and I really like the simple points the author makes:

  1. trained on data which has been accessed with consent
  2. doesn't hallucinate like current models
  3. balanced sustainability and energy consumption
  4. actually open source
  5. should be led by the community
  6. available and accessible to all

Regularly updated personal blogs list

Found this interesting project, blogroll.org, which is basically a human-curated list of cool personal blogs that are regularly updated. You can filter blogs by topic and can also submit your own blog to the list.

ooh.directory is another such directory, containing over 2,000 blogs on different topics.