DeepakNess

Raw notes

(426 notes)

Short notes, links, and thoughts – shared as I go through my day.


Compact Tabs are back in the Safari browser

I love using the Safari web browser on my MacBook, especially when I'm traveling, because it uses less battery. But the macOS Tahoe 26 update removed the Compact Tabs feature from the browser, and I was sad about it.

And I just learned that with the new macOS Tahoe 26.4.1 update, the Compact Tabs feature is back in Safari, as you can see in the screenshot here.

Compact tabs feature in the Safari browser.

It behaves a bit differently than it did in macOS Sequoia or Sonoma, when the feature was last available, but it's still good. I've been using it for a day now and I love it.

Thank you, Apple.


How to best spend $50 on AI coding tools

I have been spending over $100 every month on the Claude Max and Codex Pro plans to use the latest models from these providers for coding, but I no longer think that's the best way to do it.

Instead of getting a higher plan from any one provider, I now think the best option is to get smaller plans from multiple providers. The best combo I can think of is the following:

  1. Claude Pro – $20/mo
  2. Codex Plus – $20/mo
  3. OpenCode Go – $10/mo

Getting all three will only cost $50, which is more than enough for my requirements. This way I save money and still have access to top models from multiple providers. Each model can be used for the tasks it's actually good at, for example:

  • for all design and UI related work – use the latest Claude models
  • for complex logic and programming – use the latest Codex models
  • for smaller tasks and everything else – use Kimi, GLM, or Qwen models via OpenCode

And it's not just me – several other people are starting to realize this as well.

Currently, I am subscribed to the Codex $100 plan, which ends next month, and to the OpenCode Go plan. I'm going to downgrade Codex to the $20 plan, subscribe to the Claude $20 plan, and keep OpenCode Go at $10.

That should be more than enough for three full-time projects I am currently working on.


My experience with the new GPT-5.5 model

Yesterday, OpenAI launched its newest model, GPT-5.5, which I had been excited about for the last few days. Here are my experiences so far using the new model inside the Codex app.

  1. GPT-5.5 is far better at frontend design than other OpenAI models like GPT-5.4 or 5.3-Codex. But it's still not at the level of Opus 4.7 or 4.6 when it comes to UI design.
  2. Even though Sam Altman claimed that the new model uses fewer tokens per task than previous models, that's not the case when I use it. It used way more tokens than GPT-5.4: earlier I could never exhaust the 5-hour limit on the $100 plan, but today I did, as you see below.

GPT-5.5 token usage

  3. The new model, GPT-5.5, definitely feels slightly faster than the previous GPT-5.4 in standard mode. I think both are equally fast in the speed/fast mode.
  4. I didn't notice any significant improvement in programming, though; GPT-5.5 feels about as capable as GPT-5.4. Or maybe I haven't tested this enough yet.

Also, the new Codex app feels snappier, and the UI and UX are slightly improved now.

Apart from this, I still can't rely completely on the Codex subscription alone. I can use OpenCode with models like GLM-5.1 and Kimi-K2.6 for frontend design and GPT-5.5 for everything else. The OpenCode Go plan is very generous at $10/mo and has crazy limits as well. So if I had to reduce my monthly spending, I would go with OpenCode Go ($10/mo) + ChatGPT Plus ($20/mo), plus maybe $20-30 extra on OpenCode, and that should cover all my requirements.

I'm still playing with the GPT-5.5 model so if I find something new worth sharing, I will keep this post updated.

Update: Apr 25, 2026

I posted on X about limits draining too fast, and many people recommended rarely using the Extra High reasoning effort, suggesting either Medium or High for most tasks. So... I gave the "High" effort a try, and it's good; I'm satisfied with it so far.


SharePDF reaches 100 users

The app to share PDFs, SharePDF.app, that I created early this year has now reached the milestone of 100 users, and I'm so happy about it. It took around 4 months to get here, and I can't quite explain why, but it makes me super happy.

I have been documenting most of the work on this page, and reading the previous logs takes me through the journey of the past few months – how I finalized the tech stack, how I decided on the app's different features, and more.

SharePDF is running stable and doesn't need much work now. But I still have 1-2 features, or rather enhancements, planned for the near future. I will also be working on improving the landing pages, and maybe even creating a few new ones targeting keywords that get searched a lot.


Codex is smarter than Claude Code

After seriously using Codex for a few days, it definitely seems smarter than Claude Code. But Codex is just awful at design. I have tried to make good designs with Codex multiple times, and it just doesn't work. In fact, I tried creating a few new pages for my Vemgram project, the one I'm mainly working on these days, and even though the site has a design system set up, Codex is still unable to create designs that match the aesthetics of the rest of the website.

I am very disappointed by that.

For this reason, I have now started using OpenCode whenever I need to design something. I love the newly launched models like Kimi K2.5, Qwen 3.6 Plus, and GLM-5.1 inside OpenCode. These open-source models are almost as good as Opus 4.6/4.7 when it comes to design.

Apart from this, I am hearing rumors that OpenAI is about to launch GPT-5.5 and also that it's great at design. If that's the case, I won't regret subscribing to Codex Pro at all.


After Claude, now Codex can't keep up with the demand

OpenAI's Codex has been completely down for almost an hour now, as you can see in the screenshot below. I recently moved away from Claude for this exact reason, and now I'm facing the same issues here as well.

Codex is down

Codex still hasn't become as bad as Claude was a week ago and before, but it seems to be getting there. What surprises me is this: if these AI companies can't keep up with the rising demand, why don't they just pause new signups temporarily? It would be better for everyone, no? For users, and for their own brand image as well.

This is not at all fair to users. They're paying a premium just to experience degraded performance all the time. For example, just take a look at Claude's status chart for the last 90 days and you'll see lots of yellows and reds.

Claude status for the last 90 days

I, myself, have been paying at least $100 each month, first to Claude and now to Codex, but getting the same degraded performance. Honestly, I don't blame OpenAI or Anthropic for being unable to keep up with the demand; I blame them for still allowing signups and ruining the experience for everyone. They should just pause signups until they get more compute, and pause again whenever required, and so on.

I also run a SaaS, and I would feel very bad charging my customers a fee while my app was consistently down. I would even refund them and pause new signups till I got things sorted.

That's it.

Sorry for the rant, really.


I like Claude Code CLI's no flicker mode

I just started using Claude Code CLI's new no-flicker, or full-screen, mode and I like it better than before. When enabled, it pins the text input field at the bottom of the screen and makes it sticky, just like you see in the screenshot here:

Claude Code CLI no-flicker mode

I like it because it doesn't disturb me when Claude is generating a large output: I can keep reading from the top while it keeps writing at the bottom, without breaking my flow.


Mistakes were made at Google Search Console

Today, I suddenly started receiving emails like the one below from Google Search Console for all the sites I had already added. These emails are normally sent when a new site is added to GSC for search performance tracking, but mistakes were made, for sure.

Repeated emails from Google Search Console

I have at least 8-10 sites added to this Gmail account, and I received emails for all of them within a 3-5 hour span. I also posted about this on X, and several people confirmed that they received the same emails.

GSC email detail

And if you're curious about the contents of these emails, see the screenshot above. It's the same that Google sends when you add a new site to GSC.

I hope they fix it soon and don't annoy me again.

Update: Apr 15, 2026

It was just a "glitch", as everyone suspected, confirmed by John Mueller from Google. I learned about this from this post on X by Gagan Ghotra.


Introducing SharePDF app

Introducing SharePDF – a web-app to host PDFs, share them as URLs, track their view analytics, and do much more. It's live and working, and already has more than 80 users at the time of writing this post.

I had the idea for an app that makes sharing PDFs easier for at least 2 years, but I never worked on it. Then, somehow, I started working on it in January this year, i.e. 2026. I haven't done any marketing as of now, yet I'm still regularly getting signups from sources I wouldn't even have guessed: Instagram and ChatGPT. Someone must have created some content that people are discovering SharePDF from.

SharePDF X intro

Today, I have also posted on X for the first time about the app. Apart from this, I have also been sharing build logs on this page on my website – it just helps me keep myself in the loop.

Currently, I am not working on new features for the app but mainly working on improving the landing page, copy, and user experience.

By the way, a few days ago I also launched Vemgram, a platform to connect Indian manufacturers with buyers. So now I have two full-time projects that I'm working on these days.


Better utilize Claude Code limits

Came across this post on X that shares a hack: a way to maximize your Claude Code usage limits by setting up a scheduled task. It seems silly at first, but it's actually helpful.

Claude Code reset Cron

As explained in the post, I set up a cron that sends "hi" every 5 hours starting from 8 AM IST, so that as soon as the limit resets, a new cycle starts immediately and I get clean time blocks.

30 2,7,12,17 * * *

And the best thing is, you don't need any extra tool for this.
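For reference, here's what the full crontab entry could look like; the `claude -p "hi"` invocation is my assumption for a minimal non-interactive prompt, and the schedule above assumes the server runs on UTC, where 02:30 UTC corresponds to the 8 AM IST slot:

```shell
# Hypothetical full crontab entry (the command is an assumption, adjust to taste):
#   30 2,7,12,17 * * * claude -p "hi" >/dev/null 2>&1
#
# GNU date can confirm the timezone math for the first slot (IST is UTC+5:30):
TZ=Asia/Kolkata date -d '2026-01-01 02:30 UTC' '+%H:%M'
```

Running the last line should print 08:00, confirming the 8 AM IST start.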


Meta's new Muse Spark model is here

Meta just launched a new AI model called Muse Spark which is now available to use via meta.ai and the Meta app. It's not available via the API, yet.

As of now, it looks good on the evals, as you see in the screenshot here. In some categories, it's shown to be even better than Opus 4.6 and GPT-5.4.

Muse Spark model looks good on evals

I tried the model via meta.ai web interface with my Facebook login, and I must say that it has a better sense of design than GPT-5.4. For example, I just asked it to create a minimal personal website and this was the result.

Muse Spark at web design

I am still testing this, and will keep this post updated.

Also, I must say that I liked the meta.ai web interface and it feels fast and snappy. It even let me see the website in a new tab.


Milla Jovovich's mempalace is nice, but...

Milla Jovovich launched an open-source AI memory system called mempalace with a bold claim: "highest-scoring AI memory system ever benchmarked". She first announced it on her Instagram, and it was then picked up by multiple folks from there.

And yes, that Milla Jovovich from movies like Resident Evil and The Fifth Element. She's an engineer and loves to code as well.

But then I came across this issue on GitHub that highlights multiple false claims, as you see in the screenshot.

Issues with mempalace by Milla Jovovich

As she says in the video, it's still a work in progress, and she might still be working on it, so there may be some gaps between the README and the actual codebase. But the concept of the tool is still very good, and there's also a lot of good information in the README file that you can explore.


Google AI Edge Gallery app is good

Google has a new app for Android and also for iOS called AI Edge Gallery that lets people explore and use local on-device LLMs. Currently, it's featuring the newly launched Gemma 4 family of models.

This is written in their app description on app stores:

AI Edge Gallery is the premier destination for running the world's most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast.

And to my surprise, the app itself is open-source. I love whatever Google's game is here.

I tried the app on my Android device, downloaded the smallest Gemma-4-E2B-it model, and it worked fine. The app has several different options, as you see in the screenshots, and it looks stunning as well.

Google AI Edge Gallery app on Android

I am traveling by train next week, and the network isn't always stable, so I can easily chat with these local models offline. I have tested them for a bit, and they're good at non-coding stuff.


Blocking disposable email sign-ups in Laravel

I noticed some users signing up on a project with throwaway emails from services like yopmail, guerrillamail, and tempmail. They'd sign up, poke around, and create multiple accounts to use the service for free.

I researched solutions for it and found propaganistas/laravel-disposable-email, a popular Laravel package (~1.35M downloads) that gives you an indisposable validation rule. It pulls from a community-maintained list of ~72k known disposable domains.

But I didn't want another package dependency for something this simple. So I grabbed their JSON file with 72,000+ disposable domains, and wrote a tiny custom validation rule:

use Closure;
use Illuminate\Contracts\Validation\ValidationRule;

class NotDisposableEmail implements ValidationRule
{
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        // Grab everything after the last '@' and lowercase it
        $domain = strtolower(substr(strrchr($value, '@'), 1));

        if (in_array($domain, $this->disposableDomains(), true)) {
            $fail('Disposable email addresses are not allowed. Please use a permanent email address.');
        }
    }

    private function disposableDomains(): array
    {
        return once(function (): array {
            $path = resource_path('data/disposable-domains.json');

            if (! file_exists($path)) {
                return [];
            }

            return json_decode(file_get_contents($path), true) ?? [];
        });
    }
}

Then added it to the registration validation in CreateNewUser.php:

'email' => [
    'required',
    'string',
    'email',
    'max:255',
    new NotDisposableEmail,
    Rule::unique(User::class),
],

That's it. No package, no API calls, no latency. The JSON file loads once per request (using Laravel's once() helper), and if the file is somehow missing, it silently passes so nothing breaks. It shows this when someone tries signing up using a disposable email:

Disposable email signup block

The only downside is that the list is static and new disposable services won't be caught until I update the JSON. But for now, it's good.
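Since the list is just a flat set of domains, the same check is easy to sketch from the shell too, which is handy for spot-checking the list before wiring it into the app. The file path and sample domains below are made up for illustration:

```shell
# Build a tiny sample blocklist (a stand-in for the real ~72k-domain list)
printf '%s\n' yopmail.com guerrillamail.com tempmail.com > /tmp/disposable-domains.txt

email="someone@YOPMAIL.com"
# Lowercase everything after the last '@', mirroring the PHP rule
domain="$(printf '%s' "${email##*@}" | tr '[:upper:]' '[:lower:]')"

# Exact, full-line, fixed-string match against the list
if grep -qxF "$domain" /tmp/disposable-domains.txt; then
  echo "blocked: $domain"
else
  echo "allowed: $domain"
fi
```

For the sample email above, this prints "blocked: yopmail.com".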

Update:

After many people pointed this out on Threads and X, I have now removed the blocking of disposable domains and am exploring better ways to filter out low-quality signups.


Backing up important GitHub repos

Lately, GitHub hasn't been very stable, and I also read a story on Reddit about GitHub blocking someone's account, so I don't fully trust keeping my code on only one platform. So I used my small Raspberry Pi 4B (1 GB) device to keep additional copies of my important repos.

Here's the bash script to mirror specific GitHub repos locally, with an optional GitLab push. It supports org repos, wikis, LFS, allowlist/blocklist filtering, and dry-run mode:

First, this backup_github_repos.sh file:

#!/usr/bin/env bash
set -Eeuo pipefail

usage() {
  cat <<'EOF'
Backup all owned GitHub repositories to local mirror clones and optionally push them to GitLab.

Usage:
  ./backup_github_repos.sh [path/to/config.env]

Prerequisites:
  - git
  - gh
  - jq
  - gh auth login
  - gh auth setup-git

Optional:
  - git-lfs (if FETCH_LFS=1)
  - GitLab SSH key (if ENABLE_GITLAB_PUSH=1)

This script:
  - mirrors all repos you own on GitHub
  - can also mirror selected GitHub org repos
  - can limit backups to an exact allowlist of repos
  - can back up wiki repos when they exist
  - can push matching repos to GitLab over SSH

It does NOT back up GitHub issues, pull requests, discussions, or release assets.
EOF
}

log() {
  printf '[%s] %s\n' "$(date '+%F %T')" "$*"
}

warn() {
  printf '[%s] WARNING: %s\n' "$(date '+%F %T')" "$*" >&2
}

die() {
  printf '[%s] ERROR: %s\n' "$(date '+%F %T')" "$*" >&2
  exit 1
}

run() {
  if [[ "${DRY_RUN}" == "1" ]]; then
    printf 'DRY_RUN:'
    printf ' %q' "$@"
    printf '\n'
    return 0
  fi
  "$@"
}

trim() {
  local value="${1:-}"
  value="${value#"${value%%[![:space:]]*}"}"
  value="${value%"${value##*[![:space:]]}"}"
  printf '%s' "$value"
}

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || die "Missing required command: $1"
}

csv_list_contains() {
  local needle="$1"
  local csv="$2"
  local item

  IFS=',' read -r -a items <<<"$csv"
  for raw_item in "${items[@]}"; do
    item="$(trim "$raw_item")"
    [[ -n "$item" ]] || continue
    [[ "$item" == "$needle" ]] && return 0
  done

  return 1
}

gh_repo_stream_owned() {
  gh api --paginate "/user/repos?affiliation=owner&per_page=100" \
    | jq -c '.[] | {
        name,
        full_name,
        private,
        archived,
        fork,
        has_wiki,
        clone_url,
        ssh_url,
        owner: .owner.login
      }'
}

gh_repo_stream_org() {
  local org="$1"
  gh api --paginate "/orgs/${org}/repos?type=all&per_page=100" \
    | jq -c '.[] | {
        name,
        full_name,
        private,
        archived,
        fork,
        has_wiki,
        clone_url,
        ssh_url,
        owner: .owner.login
      }'
}

write_repo_metadata() {
  local repo_json="$1"
  local owner="$2"
  local repo="$3"
  local metadata_dir="${BACKUP_ROOT}/metadata/${owner}"
  mkdir -p "$metadata_dir"
  jq '.' <<<"$repo_json" > "${metadata_dir}/${repo}.json"
}

sync_git_mirror() {
  local source_url="$1"
  local destination="$2"
  local label="$3"

  mkdir -p "$(dirname "$destination")"

  if [[ -d "$destination" ]]; then
    log "Updating mirror: ${label}"
    run git -C "$destination" remote set-url origin "$source_url"
    run git -C "$destination" remote update --prune
  else
    log "Creating mirror: ${label}"
    run git clone --mirror "$source_url" "$destination"
  fi
}

fetch_lfs_objects() {
  local destination="$1"
  local label="$2"

  if [[ "${FETCH_LFS}" != "1" ]]; then
    return 0
  fi

  if ! git lfs version >/dev/null 2>&1; then
    die "FETCH_LFS=1 but git-lfs is not installed"
  fi

  log "Fetching LFS objects: ${label}"
  if ! run git -C "$destination" lfs fetch --all origin; then
    warn "LFS fetch failed for ${label}. The Git mirror is still valid, but LFS content may be incomplete."
  fi
}

gitlab_remote_url() {
  local repo_name="$1"
  printf 'git@gitlab.com:%s/%s.git' "$GITLAB_NAMESPACE" "$repo_name"
}

gitlab_remote_exists() {
  local repo_name="$1"
  git ls-remote "$(gitlab_remote_url "$repo_name")" >/dev/null 2>&1
}

ensure_gitlab_repo() {
  local repo_name="$1"

  if gitlab_remote_exists "$repo_name"; then
    return 0
  fi

  if [[ "${GITLAB_CREATE_REPOS}" != "1" ]]; then
    die "GitLab repo ${GITLAB_NAMESPACE}/${repo_name} does not exist and GITLAB_CREATE_REPOS=0"
  fi

  log "GitLab repo ${GITLAB_NAMESPACE}/${repo_name} will be created on first push if your SSH key can create projects in that namespace"
}

push_git_mirror_to_gitlab() {
  local destination="$1"
  local repo_name="$2"
  local label="$3"
  local remote_url

  remote_url="$(gitlab_remote_url "$repo_name")"

  if git -C "$destination" remote get-url gitlab >/dev/null 2>&1; then
    run git -C "$destination" remote set-url gitlab "$remote_url"
  else
    run git -C "$destination" remote add gitlab "$remote_url"
  fi

  log "Pushing branches and tags to GitLab: ${label}"
  run git -C "$destination" push --prune gitlab \
    '+refs/heads/*:refs/heads/*' \
    '+refs/tags/*:refs/tags/*'

  if [[ "${FETCH_LFS}" == "1" ]]; then
    if ! run git -C "$destination" lfs push --all gitlab; then
      warn "LFS push failed for ${label}. Check GitLab LFS configuration if this repo uses LFS."
    fi
  fi
}

should_backup_repo() {
  local repo_json="$1"
  local archived
  local forked
  local full_name

  archived="$(jq -r '.archived' <<<"$repo_json")"
  forked="$(jq -r '.fork' <<<"$repo_json")"
  full_name="$(jq -r '.full_name' <<<"$repo_json")"

  if [[ -n "${REPO_ALLOWLIST}" ]] && ! csv_list_contains "$full_name" "$REPO_ALLOWLIST"; then
    return 1
  fi

  if [[ -n "${REPO_BLOCKLIST}" ]] && csv_list_contains "$full_name" "$REPO_BLOCKLIST"; then
    return 1
  fi

  if [[ "$archived" == "true" && "${INCLUDE_ARCHIVED_REPOS}" != "1" ]]; then
    return 1
  fi

  if [[ "$forked" == "true" && "${INCLUDE_FORKS}" != "1" ]]; then
    return 1
  fi

  return 0
}

sync_repo_bundle() {
  local repo_json="$1"
  local full_name
  local owner
  local repo
  local clone_url
  local private_flag
  local has_wiki
  local mirror_dir
  local wiki_dir
  local wiki_url
  local wiki_repo_name

  full_name="$(jq -r '.full_name' <<<"$repo_json")"
  owner="${full_name%/*}"
  repo="${full_name#*/}"
  clone_url="$(jq -r '.clone_url' <<<"$repo_json")"
  private_flag="$(jq -r '.private' <<<"$repo_json")"
  has_wiki="$(jq -r '.has_wiki' <<<"$repo_json")"

  mirror_dir="${BACKUP_ROOT}/mirrors/${owner}/${repo}.git"
  write_repo_metadata "$repo_json" "$owner" "$repo"
  sync_git_mirror "$clone_url" "$mirror_dir" "$full_name"
  fetch_lfs_objects "$mirror_dir" "$full_name"

  if [[ "${ENABLE_GITLAB_PUSH}" == "1" ]]; then
    ensure_gitlab_repo "$repo"
    push_git_mirror_to_gitlab "$mirror_dir" "$repo" "$full_name"
  fi

  if [[ "$has_wiki" != "true" || "${INCLUDE_WIKIS}" != "1" ]]; then
    return 0
  fi

  wiki_url="${clone_url%.git}.wiki.git"
  wiki_repo_name="${repo}.wiki"
  wiki_dir="${BACKUP_ROOT}/mirrors/${owner}/${wiki_repo_name}.git"

  if git ls-remote "$wiki_url" >/dev/null 2>&1; then
    sync_git_mirror "$wiki_url" "$wiki_dir" "${full_name} wiki"
    if [[ "${ENABLE_GITLAB_PUSH}" == "1" && "${GITLAB_PUSH_WIKIS}" == "1" ]]; then
      ensure_gitlab_repo "$wiki_repo_name"
      push_git_mirror_to_gitlab "$wiki_dir" "$wiki_repo_name" "${full_name} wiki"
    fi
  else
    warn "Wiki enabled but no wiki repo found for ${full_name}; skipping wiki backup"
  fi
}

if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
  exit 0
fi

if [[ $# -gt 1 ]]; then
  usage >&2
  exit 1
fi

if [[ $# -eq 1 ]]; then
  ENV_FILE="$1"
  [[ -f "$ENV_FILE" ]] || die "Config file not found: $ENV_FILE"
  set -a
  # shellcheck source=/dev/null
  source "$ENV_FILE"
  set +a
fi

BACKUP_ROOT="${BACKUP_ROOT:-$HOME/github-repo-backups}"
INCLUDE_OWNED_REPOS="${INCLUDE_OWNED_REPOS:-1}"
GITHUB_ORGS="${GITHUB_ORGS:-}"
REPO_ALLOWLIST="${REPO_ALLOWLIST:-}"
REPO_BLOCKLIST="${REPO_BLOCKLIST:-}"
INCLUDE_ARCHIVED_REPOS="${INCLUDE_ARCHIVED_REPOS:-1}"
INCLUDE_FORKS="${INCLUDE_FORKS:-1}"
INCLUDE_WIKIS="${INCLUDE_WIKIS:-1}"
FETCH_LFS="${FETCH_LFS:-0}"
ENABLE_GITLAB_PUSH="${ENABLE_GITLAB_PUSH:-0}"
GITLAB_NAMESPACE="${GITLAB_NAMESPACE:-}"
GITLAB_CREATE_REPOS="${GITLAB_CREATE_REPOS:-1}"
GITLAB_PUSH_WIKIS="${GITLAB_PUSH_WIKIS:-1}"
DRY_RUN="${DRY_RUN:-0}"

require_cmd git
require_cmd gh
require_cmd jq
gh auth status --hostname github.com >/dev/null 2>&1 \
  || die "GitHub CLI is not authenticated. Run: gh auth login"

if [[ "${ENABLE_GITLAB_PUSH}" == "1" ]]; then
  [[ -n "${GITLAB_NAMESPACE}" ]] || die "ENABLE_GITLAB_PUSH=1 requires GITLAB_NAMESPACE"
fi

mkdir -p "${BACKUP_ROOT}/mirrors" "${BACKUP_ROOT}/metadata"

declare -A seen_repos=()
repo_count=0

if [[ "${INCLUDE_OWNED_REPOS}" != "1" && -z "${GITHUB_ORGS}" ]]; then
  die "Nothing to do. Set INCLUDE_OWNED_REPOS=1 and/or GITHUB_ORGS"
fi

log "Backup root: ${BACKUP_ROOT}"

while IFS= read -r repo_json; do
  [[ -n "$repo_json" ]] || continue

  full_name="$(jq -r '.full_name' <<<"$repo_json")"
  if [[ -n "${seen_repos[$full_name]:-}" ]]; then
    continue
  fi
  seen_repos["$full_name"]=1

  if ! should_backup_repo "$repo_json"; then
    log "Skipping repo due to filters: ${full_name}"
    continue
  fi

  sync_repo_bundle "$repo_json"
  ((repo_count+=1))
done < <(
  {
    if [[ "${INCLUDE_OWNED_REPOS}" == "1" ]]; then
      gh_repo_stream_owned
    fi

    IFS=',' read -r -a orgs <<<"${GITHUB_ORGS}"
    for raw_org in "${orgs[@]}"; do
      org="$(trim "$raw_org")"
      [[ -n "$org" ]] || continue
      gh_repo_stream_org "$org"
    done
  }
)

log "Completed backup run for ${repo_count} repositories"

And then this config.env file:

# Local backup location
BACKUP_ROOT="$HOME/github-backup/repo-backups"

# Backup all repos owned by the authenticated GitHub user.
INCLUDE_OWNED_REPOS=1

# Optional comma-separated GitHub org names to back up too.
GITHUB_ORGS=""

# Optional exact repo allow/block lists using owner/repo names.
# If REPO_ALLOWLIST is non-empty, only those repos are backed up.
REPO_ALLOWLIST=""
REPO_BLOCKLIST=""

# Include archived repos and forks.
INCLUDE_ARCHIVED_REPOS=1
INCLUDE_FORKS=1

# Also back up wiki repos when they exist.
INCLUDE_WIKIS=1

# Set to 1 if you use Git LFS and have git-lfs installed.
FETCH_LFS=0

# Optional GitLab mirror push.
ENABLE_GITLAB_PUSH=0

# The target user or group on GitLab.
GITLAB_NAMESPACE=""

# If set to 1, GitLab may create missing repos on first push if your SSH key
# has permission to create projects in that namespace.
GITLAB_CREATE_REPOS=0

# Also push wiki mirrors to GitLab.
GITLAB_PUSH_WIKIS=0

# Print actions without changing anything.
DRY_RUN=0

And then here are some instructions to set this up:

  1. Install dependencies
sudo apt update && sudo apt install git jq

Then install the GitHub CLI:

sudo mkdir -p -m 755 /etc/apt/keyrings
wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update && sudo apt install gh
  2. Authenticate
gh auth login
gh auth setup-git
  3. Place your files

Put backup_github_repos.sh and your config.env somewhere like ~/github-backup/. Make the script executable:

chmod +x backup_github_repos.sh
  4. Test it
./backup_github_repos.sh config.env
  5. Automate with cron

Run crontab -e and add a line like:

0 3 * * * /home/pi/github-backup/backup_github_repos.sh /home/pi/github-backup/config.env >> /home/pi/github-backup/backup.log 2>&1

This runs the backup daily at 3 AM and logs output. Adjust the paths and schedule to your liking.

I might be missing some steps here, so if you're stuck somewhere, give this to an LLM and ask it to help you with the setup.
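If you want to see the core mechanic the script relies on — bare mirrors that can be refreshed in place and restored by cloning — here's a minimal local demo with a throwaway repo. All paths are temporary, no network is involved, and nothing here touches your real backups:

```shell
set -e
src=$(mktemp -d); mirror="$(mktemp -d)/repo.git"; restore=$(mktemp -d)

# A throwaway source repo with a single empty commit
git -C "$src" init -q
git -C "$src" -c user.email=a@b.c -c user.name=test commit -q --allow-empty -m "first"

# Bare mirror clone, like the backup script creates on the first run
git clone -q --mirror "$src" "$mirror"

# How the script refreshes an existing mirror on later runs
git -C "$mirror" remote update --prune >/dev/null

# Restoring is just cloning the mirror back into a working tree
git clone -q "$mirror" "$restore/work"
git -C "$restore/work" log --oneline
```

The final log shows the "first" commit, confirming the mirror round-trips history intact.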


Introducing Vemgram

It all started from this post on X by Akshay G Jain, which was shared with me by my friend Amit Sarda.

Akshay about IndiaMart

I immediately decided to plan and work on it. Initially I was working at a slow pace, but I kept at it. And here we are with this brand-new platform called Vemgram that helps Indian manufacturers connect with retailers and build a strong digital presence.

Vemgram homepage screenshot

I am working with Bikash Kampo on this project, and the concept is simple: we want to build a trusted directory of Indian manufacturers where retailers won't have to worry about getting scammed, and manufacturers get a solid digital presence. I am also regularly posting updates on what I'm working on for this project on this page.

It's still a work in progress, and probably always will be, but the current version of Vemgram allows people to create business profiles like this, list their products, and receive business inquiries from retailers. Currently, I'm working on implementing custom domain and analytics features.

I have done an introduction post on X. And I am also open to feedback on this, so if you have any please email me or DM me on socials.

Additionally, I am grateful to my friends Amit Sarda, Rohit, and my girlfriend for their helpful feedback on my ideas.


Preventing Docker and journal cache bloat on VPS

I have two VPS servers running Dokploy, and I noticed that even 80 GB of disk space on each server was getting full and apps were failing to deploy. The culprits were Docker cache (old images, build cache, dangling stuff from past deployments) and systemd journal logs, which grow indefinitely by default.

Running the docker builder prune -a command freed ~67 GB of space on the server, so I set up a few things to automate this process:

First, I added a cron job that auto-prunes Docker every 6 hours, but only if disk usage goes above 70%:

(crontab -l 2>/dev/null; echo '0 */6 * * * [ $(df / --output=pcent | tail -1 | tr -dc 0-9) -gt 70 ] && docker system prune -af --filter "until=72h" && docker builder prune -af >> /var/log/docker-prune.log 2>&1') | crontab -

This removes unused images and build cache older than 72 hours. It doesn't touch running containers, so active deployments are safe.

Then, I vacuumed the existing journal logs down to 200MB:

journalctl --vacuum-size=200M

And finally, I capped the journal log size permanently so it never grows beyond 200MB again:

sed -i 's/#SystemMaxUse=/SystemMaxUse=200M/' /etc/systemd/journald.conf
systemctl restart systemd-journald

Applied the same setup on both servers and now I don't have to worry about cache eating up disk space over time. I know there are some drawbacks of this method, but I think the benefits outweigh them.
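If you'd rather verify the sed edit before touching the real /etc/systemd/journald.conf, you can try it on a throwaway file that mimics the commented-out default. (A cleaner long-term alternative is a drop-in file under /etc/systemd/journald.conf.d/; the sketch below only demonstrates the substitution itself.)

```shell
# Throwaway stand-in for journald.conf with the commented-out defaults
printf '#SystemMaxUse=\n#RuntimeMaxUse=\n' > /tmp/journald-test.conf

# Same substitution as above, applied to the test copy
sed -i 's/#SystemMaxUse=/SystemMaxUse=200M/' /tmp/journald-test.conf

# Only the SystemMaxUse line should now be uncommented and capped
grep '^SystemMaxUse' /tmp/journald-test.conf   # → SystemMaxUse=200M
```

The RuntimeMaxUse line stays commented out, confirming the pattern only touches the intended key.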


LLM Knowledge Bases post by Andrej Karpathy

Andrej Karpathy recently published this post on X about building and managing personal knowledge bases using LLMs, and there's some interesting information in the post that I am going to collect below.

  • Lately, Andrej is spending more tokens manipulating knowledge than manipulating code.
  • He uses Obsidian for storing all the knowledge (articles, papers, repos, datasets, etc.) in Markdown and image formats. And uses Obsidian's Web Clipper to grab info from web articles.
  • He puts everything in a /raw folder and then uses an LLM to incrementally "compile" a wiki with visualizations. He specifically mentions that the LLM writes and maintains all of the data of the wiki and he doesn't manually edit/add anything.
  • He has ~100 articles on several topics in the wiki, and he can ask LLMs to answer complex questions against it.
  • When chatting with the LLM against the wiki, he prefers generating new Markdown files, slideshows (Marp format), or matplotlib images. He then files the outputs back into the wiki to further enhance it.

There's much more info in the post that you can read.

I love this approach and I guess I will also be using something similar very soon, as managing lots of stuff is becoming difficult over time. And if I do, I will write about it.


List of 187 Claude spinner verbs (leaked)

No idea who leaked the Claude Code codebase, but I came across this list of spinner verbs that Claude uses in this post on X. And it's super cool.

  1. Accomplishing
  2. Actioning
  3. Actualizing
  4. Architecting
  5. Baking
  6. Beaming
  7. Beboppin'
  8. Befuddling
  9. Billowing
  10. Blanching
  11. Bloviating
  12. Boogieing
  13. Boondoggling
  14. Booping
  15. Bootstrapping
  16. Brewing
  17. Bunning
  18. Burrowing
  19. Calculating
  20. Canoodling
  21. Caramelizing
  22. Cascading
  23. Catapulting
  24. Cerebrating
  25. Channeling
  26. Channelling
  27. Choreographing
  28. Churning
  29. Clauding
  30. Coalescing
  31. Cogitating
  32. Combobulating
  33. Composing
  34. Computing
  35. Concocting
  36. Considering
  37. Contemplating
  38. Cooking
  39. Crafting
  40. Creating
  41. Crunching
  42. Crystallizing
  43. Cultivating
  44. Deciphering
  45. Deliberating
  46. Determining
  47. Dilly-dallying
  48. Discombobulating
  49. Doing
  50. Doodling
  51. Drizzling
  52. Ebbing
  53. Effecting
  54. Elucidating
  55. Embellishing
  56. Enchanting
  57. Envisioning
  58. Evaporating
  59. Fermenting
  60. Fiddle-faddling
  61. Finagling
  62. Flambéing
  63. Flibbertigibbeting
  64. Flowing
  65. Flummoxing
  66. Fluttering
  67. Forging
  68. Forming
  69. Frolicking
  70. Frosting
  71. Gallivanting
  72. Galloping
  73. Garnishing
  74. Generating
  75. Gesticulating
  76. Germinating
  77. Gitifying
  78. Grooving
  79. Gusting
  80. Harmonizing
  81. Hashing
  82. Hatching
  83. Herding
  84. Honking
  85. Hullaballooing
  86. Hyperspacing
  87. Ideating
  88. Imagining
  89. Improvising
  90. Incubating
  91. Inferring
  92. Infusing
  93. Ionizing
  94. Jitterbugging
  95. Julienning
  96. Kneading
  97. Leavening
  98. Levitating
  99. Lollygagging
  100. Manifesting
  101. Marinating
  102. Meandering
  103. Metamorphosing
  104. Misting
  105. Moonwalking
  106. Moseying
  107. Mulling
  108. Mustering
  109. Musing
  110. Nebulizing
  111. Nesting
  112. Newspapering
  113. Noodling
  114. Nucleating
  115. Orbiting
  116. Orchestrating
  117. Osmosing
  118. Perambulating
  119. Percolating
  120. Perusing
  121. Philosophising
  122. Photosynthesizing
  123. Pollinating
  124. Pondering
  125. Pontificating
  126. Pouncing
  127. Precipitating
  128. Prestidigitating
  129. Processing
  130. Proofing
  131. Propagating
  132. Puttering
  133. Puzzling
  134. Quantumizing
  135. Razzle-dazzling
  136. Razzmatazzing
  137. Recombobulating
  138. Reticulating
  139. Roosting
  140. Ruminating
  141. Sautéing
  142. Scampering
  143. Schlepping
  144. Scurrying
  145. Seasoning
  146. Shenaniganing
  147. Shimmying
  148. Simmering
  149. Skedaddling
  150. Sketching
  151. Slithering
  152. Smooshing
  153. Sock-hopping
  154. Spelunking
  155. Spinning
  156. Sprouting
  157. Stewing
  158. Sublimating
  159. Swirling
  160. Swooping
  161. Symbioting
  162. Synthesizing
  163. Tempering
  164. Thinking
  165. Thundering
  166. Tinkering
  167. Tomfoolering
  168. Topsy-turvying
  169. Transfiguring
  170. Transmuting
  171. Twisting
  172. Undulating
  173. Unfurling
  174. Unravelling
  175. Vibing
  176. Waddling
  177. Wandering
  178. Warping
  179. Whatchamacalliting
  180. Whirlpooling
  181. Whirring
  182. Whisking
  183. Wibbling
  184. Working
  185. Wrangling
  186. Zesting
  187. Zigzagging

If you don't remember seeing these anywhere, take a look at the screenshot below.

Claude Spinner Verbs

Yes, these are shown while Claude Code is working on a task you've given it. I believe a spinner verb is picked at random from the above list each time.
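For what it's worth, that behavior is trivially reproducible in the shell. Here's a sketch using a handful of the verbs; verbs.txt is just a hypothetical stand-in file, not anything from the Claude Code codebase:

```shell
# Pick one spinner verb at random, as Claude Code appears to do.
# verbs.txt is a stand-in file holding a few of the 187 verbs, one per line.
printf '%s\n' Accomplishing Brewing Clauding Noodling Vibing > verbs.txt
shuf -n 1 verbs.txt
```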


OpenAI launches Codex plugin for Claude Code

The OpenAI team has created a Codex plugin for Claude Code that lets you trigger Codex from inside Claude Code. It's described as:

Use the Codex plugin for Claude Code to delegate tasks to Codex or have Codex review your changes using your ChatGPT subscription.

Here's the plugin on GitHub that you can use. It works with a ChatGPT subscription (including the Free plan) or with an OpenAI API key, and it does count against your Codex usage limits. If you want to learn more, Vaibhav from OpenAI has written a detailed article about it that you can follow.

Apart from this, Anthropic released computer use for Claude, so there's this joke going around on X, and it is funny.

Codex Anthropic use

But jokes apart, this is going to be super helpful for code reviews. I am definitely using this.


Claude Code is hitting limits fast

People have been complaining about Claude Code hitting limits quickly for almost a week, and now, finally, the Claude Code team has started to look into this and low-key admitted that the issue exists. Lydia from the Claude Code team posted this:

We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!

But why are they suddenly paying attention after ignoring it earlier? The reason is this post on Reddit by /u/skibidi-toaleta-2137. The person reverse-engineered the Claude Code standalone binary and found two serious bugs that could be causing the issues.

  1. Bug 1: Sentinel replacement in standalone binary breaks cache when conversation discusses billing internals
  2. Bug 2: --resume ALWAYS breaks cache (since v2.1.69)

The person has also shared workarounds to avoid these issues, but since the Claude team is already working on this, I guess I will wait a few hours until these get resolved.

All I can say is, thank you, stranger from Reddit.


Eating too much watermelon causes indigestion

I didn't know that eating too much watermelon can cause serious indigestion. The other day, I had ~1.5 kg of watermelon in one sitting (yes, I love them) and experienced digestion issues the next day.

Too much watermelons causing digestive issues

Apparently, this is a known fact that I had no idea about, until my mom hinted at it and I checked online, which confirmed it to be true.

Lessons learned.


Gemini introduces import memory and chats feature

Earlier this month, Claude introduced the feature to import memory from other AI chatbots. Now, Gemini has also introduced a similar feature, as you see in the screenshot.

Import memory feature in Gemini

The process is simple: you copy the provided prompt, paste it into the AI chatbot you're currently using, copy its response, paste that into Gemini, and save. That's it: now Gemini knows everything the prompt extracted from Claude, ChatGPT, or others.

By the way, here's the huge prompt they provide:

You are helping me import context from one AI assistant to another. Your job is to go through our past conversations and sum up what you know about me.

In the output, please avoid using any first-person pronouns (I, my, me, mine) and any second-person pronouns (you, your, yours). Instead, refer to the individual you have learned about as "the user" or use neutral phrasing.

Preserve the user's words verbatim where possible, especially for instructions and preferences.

Categories (output in this order):
1. Demographics Information: Preferred names, profession, education, and general residence.
2. Interests & Preferences: Sustained, active engagements (not just owning an object or a one-time purchase).
3. Relationships: Confirmed, sustained relationships.
4. Dated Events, Projects & Plans: A log of significant, recent activities.
5. Instructions: Rules I've explicitly asked you to follow going forward, "always do X", "never do Y", and corrections to your behavior. Only include rules from stored memories, not from conversations.

Format:
Divide the content into the labeled section using the categories above. Try to include verbatim quotes from my prompts that justify each entry. Structure each entry using this format:
The user's name is <name>.
- Evidence: User said "call me <name>". Date: [YYYY-MM-DD].

Output:
- Format the final output summary as a text block.

I used this prompt on Claude and the response was extremely detailed, but it contains a lot of personal information, so I can't post it here.

And it's not just memory: Gemini also lets you import chats from ChatGPT, Claude, or other providers, as shown in the screenshot above.

Import chats in Gemini

They also describe the import chats process in their documentation, as you see in the above screenshot. You can directly upload a ZIP file of up to 5 GB.

I think this import chats feature is actually cool for people who are completely migrating to Gemini from ChatGPT, Claude, or other providers.


LiteLLM Python library is compromised

Just learned that LiteLLM, a popular Python library that provides a unified interface to call multiple LLMs, has been compromised and is stealing sensitive info from users.

The post on X reads:

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM pypi release 1.82.8. It has been compromised, it contains litellm_init.pth with base64 encoded instructions to send all the credentials it can find to remote server + self-replicate.

It seems the entire GitHub repo is compromised, as issue #24512, titled "[Security]: CRITICAL: Malicious litellm_init.pth in litellm 1.82.8 — credential stealer", was closed by the owner as "not planned". That suggests the owner's GitHub account was hacked and the hacker marked the issue as resolved. Thankfully, it has since been reopened and is being actively discussed.

What the LiteLLM malware does

As explained in the FutureSearch article, the malware appears to be very sneaky and dangerous. So if you're affected by it, the current best option is to visit and browse through #24512, as the community is actively tracking the issue and working on a fix.
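If you want a quick first check of whether an environment contains the reported file, you can search site-packages from the shell. This is my own sketch, not an official remediation step; I'm checking from the shell on purpose, since .pth files can execute code at Python interpreter startup, so launching Python inside a suspect venv is unwise:

```shell
# Look for the reported litellm_init.pth without starting Python
# (.pth files can run code at interpreter startup, so avoid importing
# anything inside a potentially infected venv). Adjust the search
# path to your environment.
hits=$(find "${VIRTUAL_ENV:-/usr/lib/python3}" -name 'litellm_init.pth' 2>/dev/null)
[ -z "$hits" ] && echo "no litellm_init.pth found" || echo "INFECTED: $hits"
```

Finding nothing isn't proof you're clean (the malware self-replicates, per the X post), so still follow the issue thread.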

@krrishdholakia hacked by teampcp

Looks like the library was compromised after the founder's GitHub profile was hacked by a group calling itself teampcp. Terrible (not in a good way).


Peak design = AI Studio x Claude Code

No matter what tool I've tried for great-looking designs, Google's AI Studio always comes out on top. GPT-5.4 is the worst at design, Claude Code is slightly better, but Google AI Studio (or even Stitch) is the best.

So... here is my current workflow for building websites:

  1. Get the design done and ready on Google AI Studio and download the code as a ZIP (the same design prompts work better on AI Studio)
  2. Initiate a project in a folder (currently, I'm using Laravel for most websites, but Next.js or any other framework would work)
  3. Unzip the AI Studio design, put the folder in the main project root, and rename it to "inspiration" or anything you like
  4. Ask Claude Code to match the design in the "inspiration" folder, but have it follow the best practices of native Laravel, Astro, or whatever stack you're working in

And done!

See Claude Code do the magic.
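To make step 3 concrete, here's a rough sketch using Python's built-in zipfile CLI. All the file names are placeholders, and a dummy archive stands in for the real AI Studio download:

```shell
# Simulate the AI Studio ZIP with a dummy archive, then extract it into
# an "inspiration" folder at the project root (step 3 above).
# "design" and "ai-studio-design.zip" are placeholder names.
mkdir -p design && echo '<html></html>' > design/index.html
python3 -m zipfile -c ai-studio-design.zip design
python3 -m zipfile -e ai-studio-design.zip inspiration/
ls inspiration/design
```

In practice you'd skip the first two lines and just extract the ZIP you downloaded from AI Studio into the "inspiration" folder, then point Claude Code at it.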