
Raw Notes

Raw notes include useful resources, incomplete thoughts, ideas, and learnings as I go about my day. You can also subscribe to the RSS feed to stay updated.

Total Notes: 135


How to Connect ChatGPT to Airtable

Connecting ChatGPT to Airtable gives you the superpower of getting answers to hundreds of questions in no time. Here's how to do that:

You need the following things to be able to connect ChatGPT to Airtable:

  1. A paid Airtable account (the lowest plan is $24/month)
  2. An OpenAI API key (you'll have to set up a payment method on OpenAI)
  3. The Scripting extension from Airtable (no additional cost), and
  4. A script to call the OpenAI API inside Airtable

And below is the function that you can use to call the OpenAI API from inside Airtable and get the output.

async function getGPTResponse() {
    // Replace with your own OpenAI API key
    const openaiApiKey = "YOUR_OPENAI_API_KEY";

    const userInput = "why is the sky blue?";
    const maxTokens = 500;
    const temperature = 0.7;
    const model = "gpt-4.1";
    const systemPrompt = "be precise";

    const messages = [
        { role: "system", content: systemPrompt },
        { role: "user", content: userInput },
    ];

    const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${openaiApiKey}`,
        },
        body: JSON.stringify({
            model,
            messages,
            max_tokens: maxTokens,
            temperature,
        }),
    });

    if (!res.ok) {
        throw new Error(`OpenAI API error: ${res.status}`);
    }

    const data = await res.json();
    return data.choices?.[0]?.message?.content || null;
}

Here, userInput is the prompt that you give the AI, maxTokens caps the length of the response, temperature controls randomness, and systemPrompt is the system prompt. The prompt here is hardcoded, but you can modify the script to dynamically fetch prompts from each row and then write the outputs back accordingly.
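To give an idea of the row-by-row version, here's a sketch for the Scripting extension that reads a prompt from each record and writes the answer back. The table name ("Prompts") and field names ("Prompt", "Output") are my placeholders; adjust them to match your own base.

```javascript
const openaiApiKey = "YOUR_OPENAI_API_KEY"; // replace with your key

// Same call as the script above, but parameterized by the prompt
async function getAnswer(userInput) {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${openaiApiKey}`,
        },
        body: JSON.stringify({
            model: "gpt-4.1",
            messages: [
                { role: "system", content: "be precise" },
                { role: "user", content: userInput },
            ],
            max_tokens: 500,
        }),
    });
    const data = await res.json();
    return data.choices?.[0]?.message?.content || null;
}

// Loop over every record, skipping rows that are empty or already filled
async function fillOutputs() {
    const table = base.getTable("Prompts");
    const query = await table.selectRecordsAsync({ fields: ["Prompt", "Output"] });
    for (const record of query.records) {
        const prompt = record.getCellValueAsString("Prompt");
        if (!prompt || record.getCellValueAsString("Output")) continue;
        const answer = await getAnswer(prompt);
        await table.updateRecordAsync(record.id, { "Output": answer });
    }
}

// `base` only exists inside Airtable's Scripting extension
if (typeof base !== "undefined") {
    fillOutputs();
}
```

This processes rows one at a time, which is slow for large tables but stays well clear of OpenAI rate limits.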

ChatGPT itself is very good at adapting this script to your base: give it the script above along with details about your tables and fields in the prompt, and it will return the final code that you can paste into the Scripting extension.

Also, there's a generic version of this script at InvertedStone that you can get and use. It can generate almost any kind of content, not just from ChatGPT but also from other AI models like Claude, Gemini, Perplexity, and more.


Donald Knuth about understanding

The ultimate test of whether I understand something is if I can explain it to a computer. I can say something to you and you’ll nod your head, but I’m not sure that I explained it well. But the computer doesn’t nod its head. It repeats back exactly what I tell it. In most of life, you can bluff, but not with computers.

– Donald Knuth


Google Docs new Markdown copy-paste options

Came to know that Google Docs now has "Copy as Markdown" and "Paste from Markdown" options under the Edit menu at the top. Selecting some text enables the copy option, and any pasted Markdown lands in the document with proper formatting.

Very cool!

By the way, Google Docs already had the option to download the entire document as a .md file, but these copy and paste options are even more user-friendly.


About action and information

“When action grows unprofitable, gather information; when information grows unprofitable, sleep.”

― Ursula K. Le Guin, The Left Hand of Darkness


React Router inside Next.js

I saw a person using React Router inside Next.js, and I have so many questions. The navigation is visibly very fast, but:

  1. Is it good for public pages? I think it will have the same SEO issues as SPAs.
  2. Does it make the codebase more complicated?

Upon looking around, I found a detailed blog post on building an SPA using Next.js and React Router. It mentions the reason for not using the Next.js router:

Next.js is not as flexible as React Router! React Router lets you nest routers hierarchically in a flexible way. It's easy for any "parent" router to share data with all of its "child" routes. This is true for both top-level routes (e.g. /about and /team) and nested routes (e.g. /settings/team and /settings/user).

I do understand why someone would want to use Next.js, but I have yet to learn more about this React Router approach.

BRB.

Update:

Josh has written a new short blog post about how he did it, definitely worth reading and understanding the process.


Best way to create CSS cards

Just noting this for myself for future reference: whenever I have to create cards, I should use this simpler method. If the HTML is like this:

<div class="card-container">
    <div class="card">
        <p>Card 1 content</p>
    </div>
    <div class="card">
        <p>Card 2 content</p>
    </div>
</div>

The CSS should be like this:

.card-container {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
  gap: 20px;
  margin: 0 auto;
}

/* and then whatever CSS for .card here */

It's clean and quick.


Important technologies with boring websites

I’ve compiled a list of websites for important web technologies that are likely to have old but functional designs. These are fundamental tools for the internet, often open-source, and their websites prioritize functionality over aesthetics, reflecting their long-standing nature.

FFmpeg

A multimedia framework for transcoding, streaming, and playing various media formats.

SQLite

A self-contained, serverless, SQL database engine widely used in applications.

Apache HTTP Server

An open-source HTTP server that powers a significant portion of the web.

Nginx

A high-performance HTTP server and reverse proxy used for web serving and load balancing.

PostgreSQL

A powerful, open-source object-relational database system used for data storage and management.

MySQL

An open-source relational database management system widely used in web applications.

Python

An interpreted, high-level, general-purpose programming language used extensively in web development.

Ruby

A dynamic, open-source programming language known for its simplicity and productivity.

Git

A distributed version control system essential for managing source code in software development.

Linux Kernel

The core of the Linux operating system, providing essential services for computing systems.

GNU Project

A collection of free software, including the GNU operating system, which is Unix-like but free.

TeX

A typesetting system that is the standard for creating books and articles with complex mathematics.

Vim

A highly configurable text editor built for efficient text editing, especially for developers.

Emacs

An extensible, customizable text editor that also serves as a development environment.

Perl

A high-level, general-purpose, interpreted programming language used for text processing.

Tcl

A scripting language with a simple API for embedding into C/C++ applications.

OpenSSH

A suite of secure networking utilities based on the SSH protocol for secure remote access.

OpenSSL

A software library for applications that secure communications over computer networks.

BIND

The most widely used Domain Name System (DNS) software on the internet.

I will keep updating this list as I discover more such websites.


WhatsApp AI chatbot in Python

I came across a GitHub repo containing the complete Python code to host and run a WhatsApp AI chatbot. I have forked the repo, as I am thinking of making such a chatbot for myself. The requirements are mentioned as:

  • WaSenderAPI: Only $6/month for WhatsApp integration
  • Gemini AI: Free tier with 1500 requests/month
  • Hosting: Run locally or on low-cost cloud options
  • No WhatsApp Business API fees: Uses WaSenderAPI as an affordable alternative

I will learn more about the WhatsApp business API and how it can be used to create a WhatsApp chatbot for specific topics that people can interact with. And then how it can all be monetized.


Stripe's new transformer-based model

Stripe has developed a new approach to analyzing transactions using a transformer-based foundation model. Earlier, they relied on traditional machine learning models, which had limitations; the new model is supposed to increase conversion even further and significantly decrease fraudulent transactions.

Gautam Kedia, an AI/ML engineer at Stripe, explained this in a detailed X post. He mentions:

So we built a payments foundation model—a self-supervised network that learns dense, general-purpose vectors for every transaction, much like a language model embeds words. Trained on tens of billions of transactions, it distills each charge’s key signals into a single, versatile embedding.

This approach improved our detection rate for card-testing attacks on large users from 59% to 97% overnight.

While I did have a loose knowledge of what a transformer is, I looked up its definition again to understand it better in the context of payments:

A Transformer is a type of neural network architecture that has revolutionized natural language processing (NLP) and is now being applied to other domains, as seen in the Stripe example. Its key innovation is the attention mechanism.

The attention mechanism allows the model to weigh the importance of different parts of the input sequence when processing any single part.

Further, I asked Gemini to explain the entire thing to me in simpler words, and here's how it explained it:

Think of it like reading a book. An older model might read word by word and only remember the last few words. A Transformer, with its attention mechanism, can look back at earlier parts of the book to understand the meaning of the current sentence in the broader context. In the payment world, this means understanding the significance of a transaction not just in isolation, but in the context of previous transactions.
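To make the attention idea concrete, here's a toy scaled dot-product attention in JavaScript. This is an illustration of the general mechanism only, not anything Stripe-specific: each query scores every key, the scores become weights via softmax, and the output is a weighted mix of the values.

```javascript
// Numerically stable softmax: subtract the max before exponentiating
function softmax(xs) {
    const m = Math.max(...xs);
    const exps = xs.map((x) => Math.exp(x - m));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map((e) => e / sum);
}

function dot(a, b) {
    return a.reduce((s, x, i) => s + x * b[i], 0);
}

// queries, keys, values: arrays of equal-dimension vectors
function attention(queries, keys, values) {
    const d = keys[0].length;
    return queries.map((q) => {
        // similarity of this query to every key, scaled by sqrt(d)
        const scores = keys.map((k) => dot(q, k) / Math.sqrt(d));
        const weights = softmax(scores);
        // weighted average of the value vectors
        return values[0].map((_, j) =>
            weights.reduce((s, w, i) => s + w * values[i][j], 0)
        );
    });
}

// A query that matches the second key attends almost entirely to the second value:
const out = attention(
    [[0, 10]],           // one query
    [[10, 0], [0, 10]],  // two keys
    [[1, 0], [0, 1]]     // two values
);
console.log(out); // ≈ [[~0, ~1]]: nearly all weight on the second value
```

In Stripe's setting, the "keys" and "values" would be embeddings of earlier transactions, so a charge is scored in the context of the charges around it rather than in isolation.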

Very cool.


MCP has a new problem

Someone added more than 81 MCP tools to their Cursor IDE, and it started showing a warning saying "too many tools can degrade performance", suggesting they use fewer than 40 tools.

Cursor CEO replied the following:

you'll be able to disable individual tools in 0.50 :)

But the problem still remains: if MCPs are the future, there has to be a way for them to be managed automatically, so that I don't need to manually enable or disable tools.


Firefox moves to GitHub

I came across this post on Hacker News discussing that Firefox has moved its repo to GitHub for the first time, and it's a huge deal, as a person mentioned on X.

I don't know how this changes things for Firefox, but there must be some reason for it. A person who works at Mozilla commented:

The Firefox code has indeed recently moved from having its canonical home on mercurial at hg.mozilla.org to GitHub. This only affects the code; bugzilla is still being used for issue tracking, phabricator for code review and landing, and our taskcluster system for CI.

On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.

But this comment made the most sense for me:

I think it's actually an understandable strategical move from Mozilla. They might loose some income from Google and probably have to cut the staff. But to keep the development of Firefox running they want to involve more people from the community and GitHub is the tool that brings most visibility on the market right now and is known by many developers. So the hurdle getting involved is much lower.

I think you can dislike the general move to a service like GitHub instead of GitLab (or something else). But I think we all benefit from the fact that Firefox's development continues and that we have a competing engine on the market.

Some folks seemed excited about the migration, whereas others are upset about the move to a closed-source platform. But if this really makes the browser better, I am excited for it.


Real-time webcam video analysis using AI

Xuan-Son Nguyen shared a video on X where he analyzes his webcam video feed in real time using Hugging Face's SmolVLM model running locally via llama.cpp (ggml).

Real-time webcam demo with @huggingface SmolVLM and @ggml_org llama.cpp server.

All running locally on a Macbook M3

He also shared the GitHub repo containing the instructions on how to do it. The steps are:

  1. Install llama.cpp
  2. Run llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF
    Note: you may need to add -ngl 99 to enable GPU (if you are using NVidia/AMD/Intel GPU)
    Note (2): You can also try other models here
  3. Open index.html
  4. Optionally change the instruction (for example, make it return JSON)
  5. Click on "Start" and enjoy
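Behind the demo page, each captured frame is posted to llama-server's OpenAI-compatible chat endpoint. A sketch in JavaScript, assuming the server's default port 8080 (the helper names are mine; the payload follows the OpenAI chat format with an inline base64 image):

```javascript
// Build a multimodal chat request for one webcam frame
function buildFrameRequest(instruction, base64Jpeg) {
    return {
        max_tokens: 100,
        messages: [
            {
                role: "user",
                content: [
                    { type: "text", text: instruction },
                    {
                        type: "image_url",
                        image_url: { url: `data:image/jpeg;base64,${base64Jpeg}` },
                    },
                ],
            },
        ],
    };
}

// Send the frame to the locally running llama-server and return its answer
async function describeFrame(instruction, base64Jpeg) {
    const res = await fetch("http://localhost:8080/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(buildFrameRequest(instruction, base64Jpeg)),
    });
    const data = await res.json();
    return data.choices?.[0]?.message?.content;
}
```

The demo simply calls something like this in a loop, grabbing a frame from the webcam canvas, base64-encoding it, and rendering the model's text reply.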

Definitely worth trying.


Cursor codebase indexing

I came across an article that deep-dives into the technology behind fast codebase indexing in Cursor AI:

  1. Code chunking and processing
  2. Merkle tree construction and synchronization
  3. Embedding generation
  4. Storage and indexing
  5. Periodic updates using Merkle trees

I also came across this post from Simon that talks about the same thing. Very interesting to read.


MCP to control LEDs

A person on Reddit created an MCP server to control a single LED bulb via natural language – it does look like overkill, but that's not the point. The person asks it to blink the LED twice, and it does exactly that. Beautiful.

The tech used are:

  • Board/SoC: Raspberry Pi CM5 (a beast)
  • Model: Qwen-2.5-3B (Qwen-3: I'm working on it)
  • Perf: ~5 tokens/s, ~4-5 GB RAM

And the control pipeline is explained as:

MCP-server + LLM + Whisper (All on CM5) → RP2040 over UART → WS2812 LED

And not to mention that everything runs locally on the Raspberry Pi CM5 device; the entire code is on GitHub for anyone to use.


Removing supervisor password from Thinkpad P53

Came across a Reddit post where the person bought a second-hand Lenovo Thinkpad P53 for €150 and successfully removed the supervisor password from it.

I found it very cool how the person unlocked the BIOS, so I'm saving this post for future reference, in case I decide to get something like this for myself. Some additional resources were also shared for the same, like this forum post and this YouTube video.


A timeline of the history of pizza

The history of flatbread goes back to 550 BC, when Persian soldiers baked it, and the first recorded mention of the word "pizza" is from AD 997 in Italy.

Here's a cool timeline of the history of pizza that you can refer to. It lists multiple major events from 550 BC to 2020 and is very interesting to go through.


Google Drive asks to upgrade its desktop client

My Google Drive desktop client doesn't automatically open when my computer starts; I only launch it when I have something to sync. I am not overly dependent on it and already have other means of backup set up.

So when I started the Google Drive client today on my macOS device, it showed a notification to "upgrade" the desktop client, as it will be deprecated in ~19 days, as you can see in the screenshot here. I tried updating it from the Settings, but there were no updates available.

So I downloaded the new "upgraded" desktop client and re-installed it, and then the notice was gone. Good thing I didn't have to re-login to the two Google accounts that were connected to Google Drive. After installing, I looked through the settings and other options in the client but couldn't find anything new or modified – everything looks exactly the same as it did on the old client.

Tried looking it up on Google, but couldn't find anything about it either.


Replit partners with Notion

Replit has partnered with Notion, so Notion can now be used to host content when building apps with Replit – think of Notion as the backend for your apps. They also have a quick YouTube video explaining how it works.

I think this will be a great tech stack for people who love Notion and want to start a blog with it. They can write their posts in Notion, and the posts go live on the website.

I tried searching on Google to see if someone has created something interesting using this setup, but couldn't find anything yet. I'm sure we'll see some cool use cases in the next few weeks as more people learn about it.


A poll about Windsurf, Cursor, and VS Code

Theo Browne ran polls asking "Which IDE do you use primarily?" on X (Twitter), LinkedIn, and YouTube and the results are really interesting.

Below are the results from all these platforms, at the time of writing this post. There are still a few hours left for the polls to complete, but I'm sure that shares are not going to change drastically.

Platform   Total Votes   Windsurf   Cursor   VS Code   Others
X          43,523        4.7%       30.5%    30.6%     34.1%
LinkedIn   4,172         4%         28%      47%       21%
YouTube    18,000        2%         18%      50%       30%

The most interesting thing is that VS Code is winning in all 3 polls, with Cursor in second place and Windsurf last. Also interesting: thousands of people are still not using any of these three IDEs.

Also, from the huge VS Code share, I can infer that some people must be using GitHub Copilot, Cline, or other AI assistants, and some people are not using AI at all.


Cloud computing – own nothing and be happy

I really love this post from DHH talking about how cloud computing makes you a renter for life. You actually own nothing and are still supposed to be happy. To quote DHH exactly:

Cloud computing is the most successful "you will own nothing and be happy" psyop in history. We gave up on DARPA's beautifully decentralized design for the internet to become renters for life. Tragic.

While this totally makes sense, I think we don't have easily digestible information about self-hosting on the internet. I mean, I haven't looked into it much, but I still think it should be more normalized among dev folks.

In fact, a person did point this out:

Someone should disrupt the setup/ops by making it actually EASY to learn.


Unsloth AI makes fine-tuning easier

Was reading about Unsloth AI and how it can be used to fine-tune open-source models like Qwen3, Llama 4, Gemma 3, Phi-4, etc. faster and with 70% less memory. It's open-source and supports the LoRA and QLoRA fine-tuning methods.

I got to know about this from this Andrea Volpini post about a fine-tuned reasoning model that thinks like an SEO. They created SEOcrate_4B_grpo_new_01 by fine-tuning the Gemma 3 4B model via Unsloth.

Unsloth AI has a free plan that supports fine-tuning Mistral, Gemma, and Llama models that you can run on your computer. They also have multiple beginner-friendly Google Colab notebooks available for the different models you may want to train.

I have yet to try this, but I will be going through their documentation and trying to fine-tune a model on some data.


From RSS feed to Bluesky and Mastodon via n8n

I consistently take notes in the raw section of my blog and wanted to keep posting new notes to Mastodon and Bluesky as URLs. I used n8n to set up this automation successfully – even though n8n doesn't have an official Mastodon or Bluesky node.

In this post, I will explain how I set this up:

First, I used the RSS Feed Trigger node in n8n so that the workflow auto-triggers every time a new raw note is published. It gives me the most recent post in the following format, and I can use this data to publish on both platforms.

[
  {
    "title": "",
    "link": "",
    "pubDate": "",
    "content": "",
    "contentSnippet": "",
    "id": "",
    "isoDate": ""
  }
]
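A small helper like this (the function and the example URL are mine, not n8n's) shows how one of these items can be turned into the status text that the later nodes post, truncating long titles to fit a platform's character limit:

```javascript
// Turn one RSS item into "title\n\nlink", truncated to maxLen characters
function formatStatus(item, maxLen = 300) {
    const title = (item.title || "").trim();
    const link = (item.link || "").trim();
    // Leave room for the link plus the two newlines separating it from the title
    const room = maxLen - link.length - 2;
    const text = title.length > room ? title.slice(0, room - 1) + "…" : title;
    return `${text}\n\n${link}`;
}

// Hypothetical item, shaped like the trigger output above
console.log(formatStatus({
    title: "My new raw note",
    link: "https://example.com/raw/my-new-raw-note/",
}));
```

In n8n itself the same thing is done with an expression referencing the trigger node's `title` and `link` fields.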

From RSS feed to Mastodon

I needed the following 3 things for this automation via n8n:

  1. RSS feed URL: it's deepakness.com/feed/raw.xml in my case
  2. Mastodon URL instance: my account is at mastodon.social
  3. Mastodon access token: visit [INSTANCE_URL]/settings/applications, create a new application with full write scope, and copy the access token

After the previous RSS Feed Trigger node in n8n, I created another HTTP Request node, and entered the following information:

  1. Authentication: None
  2. Request Method: POST
  3. URL:
    https://[INSTANCE_URL]/api/v1/statuses?access_token=[ACCESS_TOKEN]
  4. Ignore SSL Issues (Insecure): OFF
  5. Response Format: JSON
  6. JSON/RAW Parameters: OFF
  7. Options: Nothing
  8. Body Parameters: Nothing
  9. Headers: Nothing
  10. Query Parameters:
    1. Name: status
    2. Value: [POST_CONTENT] from previous nodes

And this simply worked; I didn't have to do anything else at all. If you're interested, you can learn more by going through their documentation.
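Under the hood, that node is a single POST to Mastodon's statuses endpoint. A plain-JavaScript equivalent (instance and token are placeholders) would be:

```javascript
const INSTANCE_URL = "mastodon.social";       // your Mastodon instance
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";     // token from /settings/applications

// Publish one status and return the created status object
async function postStatus(status) {
    const res = await fetch(`https://${INSTANCE_URL}/api/v1/statuses`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${ACCESS_TOKEN}`,
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ status }),
    });
    if (!res.ok) {
        throw new Error(`Mastodon API error: ${res.status}`);
    }
    return res.json();
}
```

Passing the token in an Authorization header works the same as the query-string form and keeps the token out of URLs and server logs.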

From RSS feed to Bluesky

First, I needed to create a Bluesky session, and only then was I able to publish. For this, I needed the following things:

  1. App password: created a new app password from the bsky.app/settings/app-passwords page
  2. Profile identifier: [username].bsky.social

Node 1: HTTP Request

First, you need to create an HTTP Request node to create the session. Fill in the following info:

  1. Authentication: None
  2. Request Method: POST
  3. URL:
    https://bsky.social/xrpc/com.atproto.server.createSession
  4. Ignore SSL Issues (Insecure): OFF
  5. Response Format: JSON
  6. JSON/RAW Parameters: OFF
  7. Options: Nothing
  8. Body Parameters:
    1. Name: identifier
    2. Value: deepakness.bsky.social
    3. Name: password
    4. Value: [APP_PASSWORD]
  9. Headers: Nothing
  10. Query Parameters: Nothing

From here, you need to get the accessJwt token in the next node to be able to publish the post.

Node 2: Date & Time

Yes, a date parameter is required by the Bluesky API, so you need to add the Date & Time node and get the current time:

  • Operation: Get Current Date
  • Include Current Time: ON
  • Output Field Name: currentDate
  • Options: Nothing

Node 3: HTTP Request

Now I needed another HTTP Request node to actually publish the post. Below are the options:

  1. Method: POST
  2. URL: https://bsky.social/xrpc/com.atproto.repo.createRecord
  3. Authentication: None
  4. Send Query Parameters: OFF
  5. Send Headers: ON
    1. Specify Headers: Using Fields Below
    2. Header Parameters:
      1. Name: Authorization
      2. Value: Bearer [accessJwt_from_previous_node]
  6. Send Body: ON
    1. Body Content Type: JSON
    2. Specify Body: Using JSON
    3. JSON: see here
    4. Options: Nothing
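For reference, the three nodes above boil down to two calls against the AT Protocol API: create a session to get the accessJwt, then create an app.bsky.feed.post record. A plain-JavaScript sketch (handle and app password are placeholders):

```javascript
const HANDLE = "yourname.bsky.social";      // your profile identifier
const APP_PASSWORD = "YOUR_APP_PASSWORD";   // from bsky.app/settings/app-passwords

async function postToBluesky(text) {
    // Node 1: create the session to obtain accessJwt (and your DID)
    const sessionRes = await fetch(
        "https://bsky.social/xrpc/com.atproto.server.createSession",
        {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ identifier: HANDLE, password: APP_PASSWORD }),
        }
    );
    const session = await sessionRes.json();

    // Nodes 2 + 3: attach the required timestamp and publish the record
    const record = {
        $type: "app.bsky.feed.post",
        text,
        createdAt: new Date().toISOString(), // the required date parameter
    };
    const res = await fetch(
        "https://bsky.social/xrpc/com.atproto.repo.createRecord",
        {
            method: "POST",
            headers: {
                "Authorization": `Bearer ${session.accessJwt}`,
                "Content-Type": "application/json",
            },
            body: JSON.stringify({
                repo: session.did,
                collection: "app.bsky.feed.post",
                record,
            }),
        }
    );
    return res.json();
}
```

The accessJwt from the first call authorizes the second, which is exactly what passing it between the two HTTP Request nodes does in n8n.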

So far, it's working for me but I might further improve it in the future.


You're the average of your five closest companions

Came across this thought-provoking post from my friend Rohit that forced me to think about the subconscious impact of staying surrounded by AI.

I am the average of my five closest people. What if three of them are AI?

Recently, I have been ‘conversing’ a lot with AI models. In fact, the time I spend with AI is about to exceed the time I spend with a lot of good friends.

The problem is, I don't know how it is shaping my personality. I don’t know what quirks I am imbibing from these models.

I know what I learn from AI consciously – the answers these AI models give. I don’t know what I am learning from them subconsciously.

Definitely some food for thought.


About alcohol

I saw Pieter Levels posting about alcohol and how badly it affects the person and the people around them, and I couldn't help but write this note. I completely agree with the points he makes, as I have seen families get destroyed by excessive drinking in my village.

I am ~29 years old and have never had even a single sip of alcohol in my life; in fact, no one in my close family drinks, though many of my friends do. I don't have a problem with that, as I often hang out with my friends. But I am glad that I still haven't picked up the habit.

Not sure what the future holds, but I am very sure that I will have absolutely no reason to drink.