
Raw notes

Raw notes include useful resources, incomplete thoughts, ideas, micro thoughts, and learnings as I go about my day. Below, you can also subscribe to the RSS feed to stay updated:

https://deepakness.com/feed/raw.xml

Total Notes: 309


Localflare: Local Cloudflare development dashboard

Discovered this cool tool called Localflare, created by Rohan Prasad, that works as a dashboard for local development with Cloudflare Workers, as mentioned on its website. Basically, it gives you a dashboard with live info about:

  • D1 Databases
  • KV Namespaces
  • R2 Buckets
  • Durable Objects
  • Queues
  • Tail Logs, etc.

Not to mention, it's an open-source project, and here's the GitHub repo for it. I have yet to use it, but it seems like a cool idea.


How to self-host videos with HLS

My YouTube channel got deleted over a month ago, and these days I am seeing more and more channels getting deleted. Recently, I learned that Ilias Ism's channel also got deleted and the appeal was rejected as well, so he decided to self-host all his videos. A similar thing happened with Pat from StarterStory, and now he's also self-hosting videos.

Ilias also wrote a detailed post about his setup for self-hosting videos, and I am going to copy all the text below in case the X post is no longer live someday. It seems like too valuable a piece of information to lose.

Yeah, very similar approach! Here's what I did:
- Setup: Hetzner dedicated server (EX63, Intel Core Ultra 7, 64GB RAM, 1TB NVMe)
- Coolify, on Docker -> Nginx -> Filesystem mount to SSD
- Nginx is serving HLS streams directly from disk
- FFmpeg is transcoding to 3 quality levels (360p, 720p, 1080p)
- Also making thumbnails
- On hover there is a preview sprite looping (example: https://video.seoroast.com/i-roasted-this-slack-app-s-seo-guinea-pig/preview-sprite.jpg)
- Cloudflare in front for caching/CDN

💰 Cost: ~€80/month for the dedicated server (which also runs the main app, database, etc)

🧠 The process:
- Upload raw MP4s to server
- Batch convert to HLS with FFmpeg (adaptive bitrate)
- Nginx serves the .m3u8 playlists and .ts segments
- hls.js on the frontend for playback

It's all on the same server - no separate storage service. The 1TB NVMe handles ~100 videos fine.

For thumbnails/sprites I generate those during conversion too.

Main win vs YouTube: No ads, full control over the player, videos load faster, and Google can't delete my channel again 😅

Ilias is encoding videos on the Hetzner server itself, so he doesn't need to keep encoding GBs of video files locally, which seems like a good idea. He converted all 110 GB of his videos to HLS, transcoded via FFmpeg, and then transcribed all 2,000 minutes of footage with Whisper, as he mentions in this post.
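To make the FFmpeg step concrete, here is a minimal Python sketch that turns one MP4 into a single-quality HLS rendition. It's my illustration of the general technique, not Ilias's actual pipeline (his produces three renditions plus thumbnails), and the paths are placeholders:

#!/usr/bin/env python3
"""Sketch: convert an MP4 to one HLS rendition with FFmpeg."""

import subprocess
from pathlib import Path


def to_hls(source: Path, out_dir: Path, height: int = 720) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(source),
        "-vf", f"scale=-2:{height}",       # resize, preserving aspect ratio
        "-c:v", "libx264", "-c:a", "aac",  # widely supported codecs
        "-hls_time", "6",                  # ~6-second .ts segments
        "-hls_playlist_type", "vod",
        "-f", "hls",
        str(out_dir / "index.m3u8"),       # the playlist Nginx will serve
    ], check=True)


if __name__ == "__main__":
    to_hls(Path("raw/video.mp4"), Path("hls/video"))

Nginx then serves the resulting .m3u8 playlist and .ts segments as plain static files, and hls.js on the frontend handles playback.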

But if you have a fast enough machine and don't have to transcode a lot of videos, you can also transcode your videos locally and then host them on Cloudflare R2. This method is explained in detail in this blog post.
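If you go the R2 route, the upload side needs no special tooling because R2 is S3-compatible. Here's a rough boto3 sketch with placeholder credentials, bucket, and paths:

import boto3
from pathlib import Path

# R2 speaks the S3 API; the endpoint, keys, and bucket below are placeholders
s3 = boto3.client(
    "s3",
    endpoint_url="https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com",
    aws_access_key_id="YOUR_R2_ACCESS_KEY",
    aws_secret_access_key="YOUR_R2_SECRET_KEY",
)

# Correct content types matter for HLS playback in browsers
CONTENT_TYPES = {".m3u8": "application/vnd.apple.mpegurl", ".ts": "video/mp2t"}

# Upload every playlist and segment produced by the FFmpeg step
for path in Path("hls/video").rglob("*"):
    if path.is_file():
        s3.upload_file(
            str(path),
            "videos",              # bucket name (placeholder)
            f"video/{path.name}",  # object key
            ExtraArgs={"ContentType": CONTENT_TYPES.get(path.suffix, "application/octet-stream")},
        )
        print(f"Uploaded {path.name}")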

I also found some other cool blog posts on self-hosting videos that are worth reading.

Also, I think the easiest option for self-hosting videos would be Cloudflare Stream, a paid service where you can upload videos directly or via an API, and it automatically transcodes them in the background. From there, you can embed the video anywhere or share it as a URL.

Apart from this, VideoPress by WordPress.com can be another good option, but I guess it works only with WordPress. It has a generous pricing tier, though.

I also did a post about the same topic that might be a bit helpful.


Dual-boot Linux on Windows from an .exe file

Came across this weird project that lets you dual-boot Linux on a Windows machine, but via a .exe file. The project is called LinuxGate, and here's the GitHub repo with more info on how it works. But before you try it, just know that:

WARNING: This project is functional but NOT RECOMMENDED for production use.

It seems like an interesting idea, but I wouldn't try it because I'm perfectly comfortable creating a bootable USB drive and then installing whatever distro I want. But I guess it could be very helpful for people who are not as tech-savvy.


Archive X (Twitter) bookmarks to Markdown

Came across this post on X and then learned about this tool called smaug that helps you archive your X (Twitter) bookmarks into Markdown files. The tool is described as:

Archive your Twitter/X bookmarks to markdown. Automatically. Like a dragon hoarding treasure, Smaug collects the valuable things you bookmark.

You either need to give the tool access to your X account automatically, or manually copy-paste the auth_token and ct0 values that you get from Developer Tools → Application → Cookies in the web browser, as explained in the README.md file.
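To give you an idea of what those two values do, here's a rough Python sketch of cookie-based auth against X's web API. The endpoint is a made-up placeholder (smaug handles the real routes internally); the point is that auth_token and ct0 travel as cookies, with ct0 mirrored in a CSRF header:

import requests

# Values copied from Developer Tools → Application → Cookies on x.com
AUTH_TOKEN = "PASTE_AUTH_TOKEN"
CT0 = "PASTE_CT0"

session = requests.Session()
session.cookies.set("auth_token", AUTH_TOKEN, domain=".x.com")
session.cookies.set("ct0", CT0, domain=".x.com")
# X's web API expects the ct0 value repeated as a CSRF header
session.headers["x-csrf-token"] = CT0

# Placeholder URL for illustration only, not smaug's actual endpoint
response = session.get("https://x.com/i/api/your-bookmarks-endpoint")
print(response.status_code)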

Only a few days ago, I posted about creating a tool for archiving tweets so that even if the original tweets get deleted, I still have the permanent links to use anywhere. I haven't started working on it yet, but this smaug tool will definitely give me ideas about how to approach this.


Use Antigravity via OpenCode

Found this plugin called opencode-antigravity-auth that you can use to authenticate OpenCode against Antigravity and then use models like gemini-3-pro-high and claude-opus-4-5-thinking inside OpenCode. The plugin is described as:

Enable Opencode to authenticate against Antigravity (Google's IDE) via OAuth so you can use Antigravity rate limits and access models like gemini-3-pro-high and claude-opus-4-5-thinking with your Google credentials.

I have yet to try this, but it would be great if it works: you could use the current top coding models inside OpenCode.

Got to know about this from this discussion on GitHub.


MonoURL – stores text in the URL

I was going through Simon Willison's blog and came across a post that featured this GitHub repo, and I was intrigued by the simple yet brilliant idea. I used textarea.my for a bit and then decided to build a better version for personal use.

And I did create one and hosted it on the subdomain text.deepakness.com where you can try it; you can also see the GitHub repo since it's open-source. Basically, it's a single index.html file containing HTML, CSS, and JavaScript, and that makes everything possible.
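The core trick is that the note itself is encoded into the URL, so there's no server or database involved. Here's a minimal Python sketch of the round trip, assuming deflate compression plus URL-safe base64 (my actual implementation is client-side JavaScript and may encode things differently):

import base64
import zlib


def text_to_fragment(text: str) -> str:
    """Compress the note and make it URL-safe for the #fragment."""
    compressed = zlib.compress(text.encode("utf-8"), 9)
    return base64.urlsafe_b64encode(compressed).decode("ascii")


def fragment_to_text(fragment: str) -> str:
    """Reverse the process when the page loads."""
    compressed = base64.urlsafe_b64decode(fragment.encode("ascii"))
    return zlib.decompress(compressed).decode("utf-8")


note = "Hello, this whole note lives in the URL itself."
url = "https://text.example.com/#" + text_to_fragment(note)
assert fragment_to_text(url.split("#", 1)[1]) == note
print(url)

This is also why the share URLs get so long: the entire note rides along in the fragment.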

Some of the features I created in my version are:

  1. Light and dark theme options that can be switched using the keyboard shortcut ctrl/cmd + shift + L
  2. Spellcheck that can be toggled on or off by pressing ctrl/cmd + shift + K
  3. The note can be downloaded by pressing ctrl/cmd + S or by clicking the download icon at the top
  4. The share URL can be copied by clicking the copy icon at the top
  5. Saves the note locally, so just visiting the main URL loads the text you last wrote
  6. Shows the number of words and characters at the bottom-left

To give you a better example, here is a note containing the entire 750+ lines of the index.html file saved in the URL itself. Try opening it; the URL is extremely long, but it works.


DIY bird feeder camera using ESP32

Came across this cool post on X where Clayton created a DIY bird feeder using an ESP32 camera, for only $30. It does motion detection and automatically classifies the birds that come in front of the camera. He must have used an ESP32-CAM like this one (it contains both the camera and the programming module), and then used Claude Code for writing the code, including the firmware as well as other deployment-related code.

For the case, he mentions using a scrap AirPods paper box, and I think that's how these simpler DIY projects should be done. Honestly, I wouldn't be as interested in this if he had 3D-printed the case.

Clayton hasn't shared any more info about this specific project, but I am keeping an eye out and will update this post if I find more.

Also, I found this cool video on building a DIY security camera using the ESP32-CAM.


Generating missing meta descriptions using AI

On deepakness.com, I had 100+ blog posts without meta descriptions because, for some reason, the descriptions didn't carry over when I migrated my site from WordPress to 11ty. It wasn't feasible to manually add the "description" property to the frontmatter of all those .md files, so I created this Python script that does it automatically.

The script checks each Markdown file in the /blog folder one by one: if the "description" property exists, it skips the file; if not, it reads the entire blog post, sends a prompt to OpenAI's GPT-5, and generates the "description".

Here's the entire script:

#!/usr/bin/env python3
"""
Script to add descriptions to blog posts that don't have them.
Uses OpenAI GPT-5 API to generate descriptions under 140 characters.
"""

import os
import re
import requests
from pathlib import Path

# Hardcode your OpenAI API key here
OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"

BLOG_DIR = Path("content/blog")
API_URL = "https://api.openai.com/v1/responses"


def get_frontmatter_and_content(file_path: Path) -> tuple[str, str, str]:
    """
    Parse a markdown file and return (frontmatter, content, full_text).
    """
    with open(file_path, "r", encoding="utf-8") as f:
        text = f.read()

    # Match frontmatter between --- delimiters
    match = re.match(r'^---\n(.*?)\n---\n(.*)$', text, re.DOTALL)
    if match:
        return match.group(1), match.group(2), text
    return "", text, text


def has_description(frontmatter: str) -> bool:
    """
    Check if frontmatter contains a description field.
    """
    # Look for description: at the start of a line
    return bool(re.search(r'^description:', frontmatter, re.MULTILINE))


def generate_description(content: str, title: str) -> str:
    """
    Use OpenAI GPT-5 to generate a description under 140 characters.
    """
    # Truncate content to avoid token limits (first 2000 chars should be enough)
    truncated_content = content[:2000]

    prompt = f"""Read the following blog post and create a brief description for it.
The description MUST be under 140 characters.
Do not use quotes around the description.
Just return the description text, nothing else.

Title: {title}

Content:
{truncated_content}"""

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {OPENAI_API_KEY}"
    }

    payload = {
        "model": "gpt-5",
        "input": prompt
    }

    try:
        response = requests.post(API_URL, headers=headers, json=payload)
        response.raise_for_status()
        data = response.json()

        # Extract the output text from response
        # The output is a list of output objects
        output = data.get("output", [])
        description = ""

        for item in output:
            if item.get("type") == "message":
                content = item.get("content", [])
                for content_item in content:
                    if content_item.get("type") == "output_text":
                        description = content_item.get("text", "").strip()
                        break
                if description:
                    break

        # Ensure it's under 140 characters
        if len(description) > 140:
            description = description[:137] + "..."

        return description
    except requests.exceptions.RequestException as e:
        print(f"  API Error: {e}")
        return ""
    except (KeyError, IndexError, TypeError) as e:
        print(f"  Parse Error: {e}")
        return ""


def get_title_from_frontmatter(frontmatter: str) -> str:
    """
    Extract title from frontmatter.
    """
    match = re.search(r'^title:\s*["\']?(.+?)["\']?\s*$', frontmatter, re.MULTILINE)
    if match:
        return match.group(1).strip('"\'')
    return "Untitled"


def add_description_to_frontmatter(frontmatter: str, description: str) -> str:
    """
    Add description field to frontmatter after the title field.
    """
    lines = frontmatter.split('\n')
    new_lines = []
    description_added = False

    for line in lines:
        new_lines.append(line)
        # Add description after title line
        if line.startswith('title:') and not description_added:
            # Escape any quotes in description
            escaped_desc = description.replace('"', '\\"')
            new_lines.append(f'description: "{escaped_desc}"')
            description_added = True

    return '\n'.join(new_lines)


def update_file(file_path: Path, content: str, new_frontmatter: str):
    """
    Write updated content back to file.
    """
    new_text = f"---\n{new_frontmatter}\n---\n{content}"
    with open(file_path, "w", encoding="utf-8") as f:
        f.write(new_text)


def main():
    if OPENAI_API_KEY == "YOUR_OPENAI_API_KEY":
        print("⚠️  Please set your OpenAI API key in the script!")
        print("   Edit add_descriptions.py and replace 'YOUR_OPENAI_API_KEY'")
        return

    # Find all blog post index.md files
    blog_posts = list(BLOG_DIR.glob("*/index.md"))

    print(f"Found {len(blog_posts)} blog posts")
    print("-" * 50)

    posts_updated = 0
    posts_skipped = 0
    posts_failed = 0

    for post_path in sorted(blog_posts):
        slug = post_path.parent.name
        print(f"\nProcessing: {slug}")

        frontmatter, content, _ = get_frontmatter_and_content(post_path)

        if has_description(frontmatter):
            print(f"  βœ“ Already has description, skipping")
            posts_skipped += 1
            continue

        title = get_title_from_frontmatter(frontmatter)
        print(f"  β†’ No description found, generating...")

        description = generate_description(content, title)

        if not description:
            print(f"  βœ— Failed to generate description")
            posts_failed += 1
            continue

        print(f"  β†’ Generated: {description}")

        new_frontmatter = add_description_to_frontmatter(frontmatter, description)
        update_file(post_path, content, new_frontmatter)

        print(f"  βœ“ Updated successfully")
        posts_updated += 1

    print("\n" + "=" * 50)
    print(f"Summary:")
    print(f"  Updated: {posts_updated}")
    print(f"  Skipped (already had description): {posts_skipped}")
    print(f"  Failed: {posts_failed}")


if __name__ == "__main__":
    main()

You might need to tweak the script a bit to make it work for your setup, and also replace YOUR_OPENAI_API_KEY with your actual OpenAI API key.


Cloudflare Workers free vs paid

Came across this post from Naoki Otsu that shares his story of moving just the serverless functions from Vercel to Cloudflare and how economical it has been for him.

  • Cloudflare free plan: 100,000 requests per day
  • Cloudflare paid plan ($5/mo): 10,000,000 requests per month

And this is crazy.

I haven't ever used Cloudflare Workers, but will definitely give it a try for my next project.


macOS apps I use as an internet generalist

As an internet generalist, I use a fairly limited set of apps on my MacBook Air M2, and here's the entire list of apps I have installed. I will list everything but add links only to the uncommon ones.

Not to mention, most of these apps are completely free to use.

  1. VS Code: Mostly for taking plaintext notes, and writing posts for deepakness.com
  2. Cursor: IDE for coding, use less these days
  3. Antigravity: Google's IDE for coding, use a lot these days
  4. Google Chrome: The browser
  5. Helium browser: Mostly use this browser, the best alternative to the Brave browser
  6. Brave browser: Used this as the main browser for years, feels very cluttered now, but still have this installed
  7. Handy.computer: The best offline speech-to-text app, uses local LLMs
  8. Caffeine: Menubar app to prevent your computer from sleeping
  9. AlDente: Prevents your device from charging over a certain percentage
  10. Shottr: The best screenshotting tool, have also bought the license for this (unlocks some extra features)
  11. Cryptomator: For encrypting some folders with a password
  12. DB Browser for SQLite: Sometimes use SQLite for simple local projects, so the app helps a lot
  13. Docker: Rarely use it, but using it for locally running a Screaming Frog alternative tool called SEOnaut
  14. Ice: For hiding too many menu bar items under a small dot
  15. Localsend: For sending files from my MacBook to my Google Pixel 7, and vice versa
  16. LM Studio: For running some local LLMs, sometimes
  17. Ollama: Again for running local LLMs, but occasionally
  18. Maccy: The ultimate clipboard manager, haven't used any other, but love it
  19. NetNewsWire: The best RSS feed reader for my device
  20. OnlyOffice: The best office suite, free, and compatible with MS Office
  21. Obsidian: Used to use it a lot, now not so much (mostly use VS Code itself for taking notes)
  22. Screen Studio: For recording myself and my computer screen when explaining something, all my YouTube videos are recorded using this app
  23. Signal: For communicating with just one friend who uses Signal
  24. Telegram: For communicating with my colleagues, especially with Bikash
  25. Transmission: The simplest torrent client, which I only need sometimes
  26. VLC: The best video player, or rather the best media player
  27. Dropbox: Sometimes use it for saving things that I don't ever want deleted
  28. WhatsApp: Rarely use it, for communicating with a few people

That's it.

And hopefully I will keep updating this list in the future.


An app to clean your macOS

Came across this post on X and then discovered this cool terminal-style macOS app called Mole that deep-cleans your computer. It combines the features of CleanMyMac, AppCleaner, DaisyDisk, and iStat in a single binary.

JUST WOW!

It can be installed either via a curl command or via Homebrew by running the following command:

brew install tw93/tap/mole

I absolutely love it. I mean, just look at the terminal output when uninstalling apps with it:

$ mo uninstall

Select Apps to Remove
═══════════════════════════
▶ ☑ Adobe Creative Cloud      (9.4G) | Old
  ☐ WeChat                    (2.1G) | Recent
  ☐ Final Cut Pro             (3.8G) | Recent

Uninstalling: Adobe Creative Cloud

  ✓ Removed application
  ✓ Cleaned 52 related files across 12 locations
    - Application Support, Caches, Preferences
    - Logs, WebKit storage, Cookies
    - Extensions, Plugins, Launch daemons

====================================================================
Space freed: 12.8GB
====================================================================

By the way, the app is built with roughly 77% shell script and 23% Go. It's created by @HiTw93, who is from Hangzhou, China.

I will definitely be using this.


Should you create a website for your mobile app?

I came across a post on X that showed Google Search Console graphs for a website that the person created for their mobile app, and now it's getting a significant number of views.

As far as I understood from reading the post and the replies, the person has a mobile app, and he created a website with a bunch of pages targeting important keywords using programmatic SEO techniques. And now he's reaping the benefits.


RSS feed to Typefully API via n8n

I use Typefully as my only social media tool for scheduling content to X (Twitter), Threads, Mastodon, Bluesky, and LinkedIn at once. And I used their newly launched v2 API to automate publishing my blog posts to Threads, Mastodon, and Bluesky via a simple n8n workflow.

You can see a screenshot of the n8n workflow in this post on X; here, I will explain the Typefully POST node from it. First, I added an HTTP Request node in n8n, and these are the values for its fields:

  • Method: POST
  • URL: https://api.typefully.com/v2/social-sets/{social_set_id}/drafts
  • Authentication: Generic Credential Type
  • Generic Auth Type: Header Auth
  • Header Auth: Header Auth Account (Name: Authorization, Value: Typefully API key); nothing else changes
  • Send Query Parameters: OFF
  • Send Headers: OFF
  • Send Body: ON
  • Body Content Type: JSON
  • Specify Body Using: JSON
  • JSON: the JSON provided below
  • Options: No properties

I needed to replace {social_set_id} with my actual social set ID, and then it was done. Here's the body JSON for the HTTP node:

{
  "platforms": {
    "mastodon": {
      "enabled": true,
      "posts": [
        {
          "text": "πŸŽ‰ New post: ARTICLE_TITLE ARTICLE_LINK"
        }
      ]
    },
    "threads": {
      "enabled": true,
      "posts": [
        {
          "text": "πŸŽ‰ New post: ARTICLE_TITLE ARTICLE_LINK"
        }
      ]
    },
    "bluesky": {
      "enabled": true,
      "posts": [
        {
          "text": "πŸŽ‰ New post: ARTICLE_TITLE ARTICLE_LINK"
        }
      ]
    }
  },
  "draft_title": "SOMETHING_HERE",
  "share": false,
  "publish_at": "now"
}
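For reference, here's what the same call looks like as a plain Python script outside n8n, built from the field values above. The Authorization header value is whatever your Typefully API key setup expects:

import requests

SOCIAL_SET_ID = "YOUR_SOCIAL_SET_ID"
API_KEY = "YOUR_TYPEFULLY_API_KEY"

url = f"https://api.typefully.com/v2/social-sets/{SOCIAL_SET_ID}/drafts"

# Same body as the JSON above, with the three platforms enabled
payload = {
    "platforms": {
        platform: {
            "enabled": True,
            "posts": [{"text": "🎉 New post: ARTICLE_TITLE ARTICLE_LINK"}],
        }
        for platform in ("mastodon", "threads", "bluesky")
    },
    "draft_title": "SOMETHING_HERE",
    "share": False,
    "publish_at": "now",
}

response = requests.post(url, json=payload, headers={"Authorization": API_KEY})
response.raise_for_status()
print(response.json())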

It's very simple and straightforward, and it just works as expected. As soon as a blog post goes live on my site, the n8n workflow executes and the article's link gets published on Threads, Mastodon, and Bluesky.

And the reason I don't publish these on X and LinkedIn is that I use those two platforms a bit differently.


Chrome extension No Thanks, ChatGPT v2.0.0 is here

No Thanks, ChatGPT is an open-source Chrome extension that automatically dismisses all annoying popups on ChatGPT when you're not logged in. The extension instantly focuses the prompt box, so you can start typing your questions right away without any extra clicks.

The v1 only dismissed the login popup, but v2.0.0 now automatically handles the following:

  1. Dismisses the "Try Go, Free" upsell popup by clicking "Maybe later"
  2. Rejects the cookie consent banner automatically
  3. Removes promotional cards above the input box that appear after 2-3 messages
  4. Continuously monitors for promotional cards that may reappear

The extension can be installed directly via the Chrome Web Store (it might take a few days before the Chrome team approves the new version) or via the GitHub repo (latest v2.0.0), as it's open-source.


Entire Chromium source code as context

I came across this post on X that announced that the Nozomio AI team has created a tool by indexing the entire 1-billion-token Chromium browser codebase, which anyone can talk to and ask questions. Here's the post content:

Introducing Chromium Agent.

You can now semantically and directly search across Chromium's 1 billion token source code and technical documentation using @nozomioai API.

It's also free and open source.

Not to mention, they have made the ChromAgent tool open-source and the source code is available here.


The recent Mintlify vulnerability

Found this interesting article about the recent critical Mintlify security vulnerabilities. For your info, Mintlify is used by companies like Discord and Vercel for hosting their docs.

I will add more articles as I discover them.


Uploading to Cloudinary from Google Sheets

If you want to do an unsigned image upload from Google Sheets to Cloudinary, here's the correct Apps Script snippet that does that:

function uploadImageToCloudinary() {
  var formData = {
    file: "<file_path>", // Replace with the file URL or base64 string
    upload_preset: "<upload_preset>" // Replace with your upload preset
  };

  var options = {
    method: "post",
    payload: formData
  };

  var response = UrlFetchApp.fetch(
    "https://api.cloudinary.com/v1_1/<cloud_name>/image/upload/",
    options
  );

  Logger.log(response.getContentText()); // Logs the Cloudinary response
}

I have used this in the Multi-AI Script product that I have. Earlier, I was using the default unsigned upload script and it wasn't working as it should, but now it's seamless.


Can a tool detect AI generated text?

Google has created an amazing tool called SynthID inside Gemini to detect whether an image is AI-generated or not, and it works great most of the time. It works for images generated by Google's various image models, like nano-banana-pro.

But is there any tool that can correctly detect AI generated text?

I have come across a bunch of tools, and most don't work as expected. For example, you might have seen people saying that some tools flag the constitution as AI-generated, and that says a lot about these tools. But sometimes, some tools do work as expected. For example, I used a tool called Undetectable that correctly detected my own writing as not AI-generated and ChatGPT-generated text as AI-generated. But it wasn't accurate all the time; it made mistakes sometimes.

Some AI detection tools that I like are:

  1. Undetectable
  2. Grammarly AI Detector
  3. ZeroGPT
  4. Originality AI

And consider the output from these tools in this order – from top to bottom with the top being the best AI text detection tool.


How Cursor AI migrated away from Sanity CMS

I came across this post on X from Lee Rob where he talked about migrating the Cursor docs and blog away from a CMS to raw code and Markdown. He didn't directly mention Sanity CMS, but the head of developer relations at Sanity himself posted a comment on this, and also has a blog post about the same.

Lee Rob also wrote a detailed post on his personal website explaining the entire process and the thinking behind it. I liked the post as it touches on all the different questions and doubts that you might have before making such a move. The blog post covers topics like:

  • the issues with the old Sanity CMS setup
  • the thinking behind migrating to raw code and Markdown
  • issues and accidents they encountered while migrating, and
  • how exactly it was done using Cursor AI

As mentioned in the post, it took them only 3 days, $260.32, and 297.4M tokens to migrate the entire website. Also, they removed 322k lines of code and added 43k lines.

It's an interesting read.


Google Antigravity stage, commit, and sync

When you're not working on a very serious project and are just vibe coding, imagine pressing a single keyboard shortcut to stage all changes, commit with an AI-generated commit message, and sync with your git provider like GitHub. I set up a cmd + enter keybinding in my Google Antigravity IDE that does exactly that: it stages all changes, commits with an AI-generated message, and then syncs with my GitHub repo.

For this, open your Antigravity IDE, press cmd + shift + p, search for "Preferences: Open Keyboard Shortcuts (JSON)", and then paste in the keybinding JSON below.

[
    {
        "key": "cmd+enter",
        "command": "runCommands",
        "args": {
            "commands": [
                { "command": "git.stageAll" },
                { "command": "antigravity.generateCommitMessage" },
                { "command": "git.commitAll" },
                { "command": "git.sync" }
            ]
        }
    }
]

I have another post about doing the same thing in Cursor and VS Code, where you can set up the same keybindings. Most of it is done the same way; only the command for generating the commit message is different.


Best laptops for running Linux in India

Most of the cool laptop brands like Framework, System76, Beelink, Razer, etc. are not available in India. That led me to find some other cool laptops that are available in India and can run different Linux distributions without any issues. Here are a few laptops I would recommend for running Linux:

  1. Lenovo ThinkPad E14 Intel Core i7 13th Gen 14" (16GB RAM/1TB SSD): This comes with Windows 11 pre-installed, but you can easily install any Linux distro later, like Omarchy or others (by the way, I have a resource website about Omarchy).
  2. Lenovo ThinkPad E14 AMD Ryzen 5 7530U 14" (16GB RAM/512GB SSD): A cheaper option with slightly better value for money; it also comes with Windows 11 preinstalled.
  3. HP 15 Laptop, AMD Ryzen 7 7730U 15.6" (16GB DDR4/512GB SSD): Another good mid-range option that also comes with Windows 11 preinstalled, but Linux can be installed later.

Apart from this, you can also custom-build a machine directly on Lenovo's website, where you can select the processor, RAM, and other configurations as per your requirements.