DeepakNess

Raw notes

(417 notes)

Short notes, links, and thoughts – shared as I go through my day.


Better utilize Claude Code limits

Came across this post on X that shares a hack: a way to maximize your Claude Code usage limits by setting up scheduled tasks. It seems silly at first, but it's actually helpful.

Claude Code reset Cron

As explained in the post, I set up a cron that sends "hi" every 5 hours starting from 8 AM IST (my server clock is UTC, so that's 2:30 UTC). As soon as the limit resets, a new cycle starts immediately, which gives you clean time blocks.

30 2,7,12,17 * * *

And the best thing is, you don't need any extra tool for this.
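For reference, the full crontab entry would look something like this. The exact command is an assumption on my part: here it uses claude -p, which runs a single prompt non-interactively, and assumes the claude binary is on cron's PATH.

```shell
# Runs at 2:30, 7:30, 12:30, and 17:30 UTC, i.e. every 5 hours
# starting at 8:00 AM IST. Any command that sends one prompt works.
30 2,7,12,17 * * * claude -p "hi" >/dev/null 2>&1
```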


Meta's new Muse Spark model is here

Meta just launched a new AI model called Muse Spark which is now available to use via meta.ai and the Meta app. It's not available via the API, yet.

As of now, it looks good on the evals, as you can see in the screenshot here. In some categories, it's shown to be even better than Opus 4.6 and GPT-5.4.

Muse Spark model looks good on evals

I tried the model via meta.ai web interface with my Facebook login, and I must say that it has a better sense of design than GPT-5.4. For example, I just asked it to create a minimal personal website and this was the result.

Muse Spark at web design

I am still testing this, and will keep this post updated.

Also, I must say that I liked the meta.ai web interface and it feels fast and snappy. It even let me see the website in a new tab.


Milla Jovovich's mempalace is nice, but...

Milla Jovovich launched an open-source AI memory system called mempalace with a bold claim: the "highest-scoring AI memory system ever benchmarked". She first announced it on her Instagram, and it was then picked up by multiple folks from there.

And yes, that Milla Jovovich from movies like Resident Evil and The Fifth Element. She's an engineer and loves to code as well.

But then I came across this issue on GitHub that highlights multiple false claims, as you see in the screenshot.

Issues with mempalace by Milla Jovovich

As she says in the video, it's still a work in progress, and she might still be working on it, so there might be some gaps between the README and the actual codebase. But the concept of the tool is still very good, and there is lots of good information in the README file that you can explore.


Google AI Edge Gallery app is good

Google has a new app for Android and also for iOS called AI Edge Gallery that lets people explore and use local on-device LLMs. Currently, it's featuring the newly launched Gemma 4 family of models.

This is written in their app description on app stores:

AI Edge Gallery is the premier destination for running the world's most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast.

And to my surprise, the app itself is open-source. I love whatever Google's game is here.

I tried the app on my Android device, and even downloaded the smallest Gemma-4-E2B-it model, and it worked fine. The app has multiple options, as you see in the screenshots, and it looks stunning as well.

Google AI Edge Gallery app on Android

I am traveling by train next week, and the network is sometimes unstable, so these local models will let me keep chatting offline. I have tested them for a bit, and they're good at non-coding stuff.


Blocking disposable email sign-ups in Laravel

I noticed some users signing up on a project with throwaway emails from services like yopmail, guerrillamail, and tempmail. They'd sign up, poke around, and create multiple accounts to use the service for free.

I researched solutions for it and found propaganistas/laravel-disposable-email, a popular Laravel package (~1.35M downloads) that gives you an indisposable validation rule. It pulls from a community-maintained list of ~72k known disposable domains.

But I didn't want another package dependency for something this simple. So I grabbed their JSON file with 72,000+ disposable domains, and wrote a tiny custom validation rule:

namespace App\Rules;

use Closure;
use Illuminate\Contracts\Validation\ValidationRule;

class NotDisposableEmail implements ValidationRule
{
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        // Grab everything after the last "@" and lowercase it
        $domain = strtolower(substr(strrchr($value, '@'), 1));

        if (in_array($domain, $this->disposableDomains(), true)) {
            $fail('Disposable email addresses are not allowed. Please use a permanent email address.');
        }
    }

    private function disposableDomains(): array
    {
        // once() memoizes the decoded list for the rest of the request
        return once(function (): array {
            $path = resource_path('data/disposable-domains.json');

            if (! file_exists($path)) {
                return [];
            }

            return json_decode(file_get_contents($path), true) ?? [];
        });
    }
}

Then added it to the registration validation in CreateNewUser.php:

'email' => [
    'required',
    'string',
    'email',
    'max:255',
    new NotDisposableEmail,
    Rule::unique(User::class),
],

That's it. No package, no API calls, no latency. The JSON file loads once per request (using Laravel's once() helper), and if the file is somehow missing, it silently passes so nothing breaks. It shows this when someone tries signing up using a disposable email:

Disposable email signup block

The only downside is that the list is static and new disposable services won't be caught until I update the JSON. But for now, it's good.
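For quick spot-checks outside PHP, the same domain lookup can be sketched in the shell with jq, assuming the same flat JSON array of domains. The function name and file paths here are just for illustration.

```shell
# Returns 0 if the email's domain is in the disposable list, 1 otherwise.
is_disposable() {
  email=$1
  list=$2
  # Everything after the last "@", lowercased (mirrors the strtolower() above)
  domain=$(printf '%s' "${email##*@}" | tr '[:upper:]' '[:lower:]')
  jq -e --arg d "$domain" 'index($d) != null' "$list" >/dev/null
}
```

For example, `is_disposable "foo@yopmail.com" disposable-domains.json && echo blocked`.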

Update:

After many people pointed it out on Threads and X, I have now removed the blocking of disposable domains and am exploring better ways to filter out low-quality signups.


Backing up important GitHub repos

Lately, GitHub hasn't been very stable, and I also read a story on Reddit about GitHub blocking someone's account, so I don't fully trust keeping my code on only one platform. So I used my small Raspberry Pi 4B (1 GB) to keep additional copies of my important repos.

Here's the bash script to mirror specific GitHub repos locally (with optional GitLab push). Supports org repos, wikis, LFS, allowlist/blocklist filtering, and dry-run mode:

First, this backup_github_repos.sh file:

#!/usr/bin/env bash
set -Eeuo pipefail

usage() {
  cat <<'EOF'
Backup all owned GitHub repositories to local mirror clones and optionally push them to GitLab.

Usage:
  ./backup_github_repos.sh [path/to/config.env]

Prerequisites:
  - git
  - gh
  - jq
  - gh auth login
  - gh auth setup-git

Optional:
  - git-lfs (if FETCH_LFS=1)
  - GitLab SSH key (if ENABLE_GITLAB_PUSH=1)

This script:
  - mirrors all repos you own on GitHub
  - can also mirror selected GitHub org repos
  - can limit backups to an exact allowlist of repos
  - can back up wiki repos when they exist
  - can push matching repos to GitLab over SSH

It does NOT back up GitHub issues, pull requests, discussions, or release assets.
EOF
}

log() {
  printf '[%s] %s\n' "$(date '+%F %T')" "$*"
}

warn() {
  printf '[%s] WARNING: %s\n' "$(date '+%F %T')" "$*" >&2
}

die() {
  printf '[%s] ERROR: %s\n' "$(date '+%F %T')" "$*" >&2
  exit 1
}

run() {
  if [[ "${DRY_RUN}" == "1" ]]; then
    printf 'DRY_RUN:'
    printf ' %q' "$@"
    printf '\n'
    return 0
  fi
  "$@"
}

trim() {
  local value="${1:-}"
  value="${value#"${value%%[![:space:]]*}"}"
  value="${value%"${value##*[![:space:]]}"}"
  printf '%s' "$value"
}

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || die "Missing required command: $1"
}

csv_list_contains() {
  local needle="$1"
  local csv="$2"
  local raw_item
  local item
  local -a items

  IFS=',' read -r -a items <<<"$csv"
  for raw_item in "${items[@]}"; do
    item="$(trim "$raw_item")"
    [[ -n "$item" ]] || continue
    [[ "$item" == "$needle" ]] && return 0
  done

  return 1
}

gh_repo_stream_owned() {
  gh api --paginate "/user/repos?affiliation=owner&per_page=100" \
    | jq -c '.[] | {
        name,
        full_name,
        private,
        archived,
        fork,
        has_wiki,
        clone_url,
        ssh_url,
        owner: .owner.login
      }'
}

gh_repo_stream_org() {
  local org="$1"
  gh api --paginate "/orgs/${org}/repos?type=all&per_page=100" \
    | jq -c '.[] | {
        name,
        full_name,
        private,
        archived,
        fork,
        has_wiki,
        clone_url,
        ssh_url,
        owner: .owner.login
      }'
}

write_repo_metadata() {
  local repo_json="$1"
  local owner="$2"
  local repo="$3"
  local metadata_dir="${BACKUP_ROOT}/metadata/${owner}"
  mkdir -p "$metadata_dir"
  jq '.' <<<"$repo_json" > "${metadata_dir}/${repo}.json"
}

sync_git_mirror() {
  local source_url="$1"
  local destination="$2"
  local label="$3"

  mkdir -p "$(dirname "$destination")"

  if [[ -d "$destination" ]]; then
    log "Updating mirror: ${label}"
    run git -C "$destination" remote set-url origin "$source_url"
    run git -C "$destination" remote update --prune
  else
    log "Creating mirror: ${label}"
    run git clone --mirror "$source_url" "$destination"
  fi
}

fetch_lfs_objects() {
  local destination="$1"
  local label="$2"

  if [[ "${FETCH_LFS}" != "1" ]]; then
    return 0
  fi

  if ! git lfs version >/dev/null 2>&1; then
    die "FETCH_LFS=1 but git-lfs is not installed"
  fi

  log "Fetching LFS objects: ${label}"
  if ! run git -C "$destination" lfs fetch --all origin; then
    warn "LFS fetch failed for ${label}. The Git mirror is still valid, but LFS content may be incomplete."
  fi
}

gitlab_remote_url() {
  local repo_name="$1"
  printf 'git@gitlab.com:%s/%s.git' "$GITLAB_NAMESPACE" "$repo_name"
}

gitlab_remote_exists() {
  local repo_name="$1"
  git ls-remote "$(gitlab_remote_url "$repo_name")" >/dev/null 2>&1
}

ensure_gitlab_repo() {
  local repo_name="$1"

  if gitlab_remote_exists "$repo_name"; then
    return 0
  fi

  if [[ "${GITLAB_CREATE_REPOS}" != "1" ]]; then
    die "GitLab repo ${GITLAB_NAMESPACE}/${repo_name} does not exist and GITLAB_CREATE_REPOS=0"
  fi

  log "GitLab repo ${GITLAB_NAMESPACE}/${repo_name} will be created on first push if your SSH key can create projects in that namespace"
}

push_git_mirror_to_gitlab() {
  local destination="$1"
  local repo_name="$2"
  local label="$3"
  local remote_url

  remote_url="$(gitlab_remote_url "$repo_name")"

  if git -C "$destination" remote get-url gitlab >/dev/null 2>&1; then
    run git -C "$destination" remote set-url gitlab "$remote_url"
  else
    run git -C "$destination" remote add gitlab "$remote_url"
  fi

  log "Pushing branches and tags to GitLab: ${label}"
  run git -C "$destination" push --prune gitlab \
    '+refs/heads/*:refs/heads/*' \
    '+refs/tags/*:refs/tags/*'

  if [[ "${FETCH_LFS}" == "1" ]]; then
    if ! run git -C "$destination" lfs push --all gitlab; then
      warn "LFS push failed for ${label}. Check GitLab LFS configuration if this repo uses LFS."
    fi
  fi
}

should_backup_repo() {
  local repo_json="$1"
  local archived
  local forked
  local full_name

  archived="$(jq -r '.archived' <<<"$repo_json")"
  forked="$(jq -r '.fork' <<<"$repo_json")"
  full_name="$(jq -r '.full_name' <<<"$repo_json")"

  if [[ -n "${REPO_ALLOWLIST}" ]] && ! csv_list_contains "$full_name" "$REPO_ALLOWLIST"; then
    return 1
  fi

  if [[ -n "${REPO_BLOCKLIST}" ]] && csv_list_contains "$full_name" "$REPO_BLOCKLIST"; then
    return 1
  fi

  if [[ "$archived" == "true" && "${INCLUDE_ARCHIVED_REPOS}" != "1" ]]; then
    return 1
  fi

  if [[ "$forked" == "true" && "${INCLUDE_FORKS}" != "1" ]]; then
    return 1
  fi

  return 0
}

sync_repo_bundle() {
  local repo_json="$1"
  local full_name
  local owner
  local repo
  local clone_url
  local private_flag
  local has_wiki
  local mirror_dir
  local wiki_dir
  local wiki_url
  local wiki_repo_name

  full_name="$(jq -r '.full_name' <<<"$repo_json")"
  owner="${full_name%/*}"
  repo="${full_name#*/}"
  clone_url="$(jq -r '.clone_url' <<<"$repo_json")"
  private_flag="$(jq -r '.private' <<<"$repo_json")"
  has_wiki="$(jq -r '.has_wiki' <<<"$repo_json")"

  mirror_dir="${BACKUP_ROOT}/mirrors/${owner}/${repo}.git"
  write_repo_metadata "$repo_json" "$owner" "$repo"
  sync_git_mirror "$clone_url" "$mirror_dir" "$full_name"
  fetch_lfs_objects "$mirror_dir" "$full_name"

  if [[ "${ENABLE_GITLAB_PUSH}" == "1" ]]; then
    ensure_gitlab_repo "$repo"
    push_git_mirror_to_gitlab "$mirror_dir" "$repo" "$full_name"
  fi

  if [[ "$has_wiki" != "true" || "${INCLUDE_WIKIS}" != "1" ]]; then
    return 0
  fi

  wiki_url="${clone_url%.git}.wiki.git"
  wiki_repo_name="${repo}.wiki"
  wiki_dir="${BACKUP_ROOT}/mirrors/${owner}/${wiki_repo_name}.git"

  if git ls-remote "$wiki_url" >/dev/null 2>&1; then
    sync_git_mirror "$wiki_url" "$wiki_dir" "${full_name} wiki"
    if [[ "${ENABLE_GITLAB_PUSH}" == "1" && "${GITLAB_PUSH_WIKIS}" == "1" ]]; then
      ensure_gitlab_repo "$wiki_repo_name"
      push_git_mirror_to_gitlab "$wiki_dir" "$wiki_repo_name" "${full_name} wiki"
    fi
  else
    warn "Wiki enabled but no wiki repo found for ${full_name}; skipping wiki backup"
  fi
}

if [[ "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
  exit 0
fi

if [[ $# -gt 1 ]]; then
  usage >&2
  exit 1
fi

if [[ $# -eq 1 ]]; then
  ENV_FILE="$1"
  [[ -f "$ENV_FILE" ]] || die "Config file not found: $ENV_FILE"
  set -a
  # shellcheck source=/dev/null
  source "$ENV_FILE"
  set +a
fi

BACKUP_ROOT="${BACKUP_ROOT:-$HOME/github-repo-backups}"
INCLUDE_OWNED_REPOS="${INCLUDE_OWNED_REPOS:-1}"
GITHUB_ORGS="${GITHUB_ORGS:-}"
REPO_ALLOWLIST="${REPO_ALLOWLIST:-}"
REPO_BLOCKLIST="${REPO_BLOCKLIST:-}"
INCLUDE_ARCHIVED_REPOS="${INCLUDE_ARCHIVED_REPOS:-1}"
INCLUDE_FORKS="${INCLUDE_FORKS:-1}"
INCLUDE_WIKIS="${INCLUDE_WIKIS:-1}"
FETCH_LFS="${FETCH_LFS:-0}"
ENABLE_GITLAB_PUSH="${ENABLE_GITLAB_PUSH:-0}"
GITLAB_NAMESPACE="${GITLAB_NAMESPACE:-}"
GITLAB_CREATE_REPOS="${GITLAB_CREATE_REPOS:-1}"
GITLAB_PUSH_WIKIS="${GITLAB_PUSH_WIKIS:-1}"
DRY_RUN="${DRY_RUN:-0}"

require_cmd git
require_cmd gh
require_cmd jq
gh auth status --hostname github.com >/dev/null 2>&1 \
  || die "GitHub CLI is not authenticated. Run: gh auth login"

if [[ "${ENABLE_GITLAB_PUSH}" == "1" ]]; then
  [[ -n "${GITLAB_NAMESPACE}" ]] || die "ENABLE_GITLAB_PUSH=1 requires GITLAB_NAMESPACE"
fi

mkdir -p "${BACKUP_ROOT}/mirrors" "${BACKUP_ROOT}/metadata"

declare -A seen_repos=()
repo_count=0

if [[ "${INCLUDE_OWNED_REPOS}" != "1" && -z "${GITHUB_ORGS}" ]]; then
  die "Nothing to do. Set INCLUDE_OWNED_REPOS=1 and/or GITHUB_ORGS"
fi

log "Backup root: ${BACKUP_ROOT}"

while IFS= read -r repo_json; do
  [[ -n "$repo_json" ]] || continue

  full_name="$(jq -r '.full_name' <<<"$repo_json")"
  if [[ -n "${seen_repos[$full_name]:-}" ]]; then
    continue
  fi
  seen_repos["$full_name"]=1

  if ! should_backup_repo "$repo_json"; then
    log "Skipping repo due to filters: ${full_name}"
    continue
  fi

  sync_repo_bundle "$repo_json"
  ((repo_count+=1))
done < <(
  {
    if [[ "${INCLUDE_OWNED_REPOS}" == "1" ]]; then
      gh_repo_stream_owned
    fi

    IFS=',' read -r -a orgs <<<"${GITHUB_ORGS}"
    for raw_org in "${orgs[@]}"; do
      org="$(trim "$raw_org")"
      [[ -n "$org" ]] || continue
      gh_repo_stream_org "$org"
    done
  }
)

log "Completed backup run for ${repo_count} repositories"

And then this config.env file:

# Local backup location
BACKUP_ROOT="$HOME/github-backup/repo-backups"

# Backup all repos owned by the authenticated GitHub user.
INCLUDE_OWNED_REPOS=1

# Optional comma-separated GitHub org names to back up too.
GITHUB_ORGS=""

# Optional exact repo allow/block lists using owner/repo names.
# If REPO_ALLOWLIST is non-empty, only those repos are backed up.
REPO_ALLOWLIST=""
REPO_BLOCKLIST=""

# Include archived repos and forks.
INCLUDE_ARCHIVED_REPOS=1
INCLUDE_FORKS=1

# Also back up wiki repos when they exist.
INCLUDE_WIKIS=1

# Set to 1 if you use Git LFS and have git-lfs installed.
FETCH_LFS=0

# Optional GitLab mirror push.
ENABLE_GITLAB_PUSH=0

# The target user or group on GitLab.
GITLAB_NAMESPACE=""

# If set to 1, GitLab may create missing repos on first push if your SSH key
# has permission to create projects in that namespace.
GITLAB_CREATE_REPOS=0

# Also push wiki mirrors to GitLab.
GITLAB_PUSH_WIKIS=0

# Print actions without changing anything.
DRY_RUN=0

And then here are some instructions to set this up:

  1. Install dependencies

sudo apt update && sudo apt install git jq

Then install the GitHub CLI:

sudo mkdir -p -m 755 /etc/apt/keyrings
wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update && sudo apt install gh

  2. Authenticate

gh auth login
gh auth setup-git

  3. Place your files

Put backup_github_repos.sh and your config.env somewhere like ~/github-backup/. Make the script executable:

chmod +x backup_github_repos.sh

  4. Test it

./backup_github_repos.sh config.env

  5. Automate with cron

Run crontab -e and add a line like:

0 3 * * * /home/pi/github-backup/backup_github_repos.sh /home/pi/github-backup/config.env >> /home/pi/github-backup/backup.log 2>&1

This runs the backup daily at 3 AM and logs output. Adjust the paths and schedule to your liking.
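When first setting it up, it's worth doing a dry run with a limited scope. A minimal config.env for that might look like this (the two repo names are hypothetical placeholders):

```shell
# Back up only two specific repos; print actions without changing anything.
INCLUDE_OWNED_REPOS=1
REPO_ALLOWLIST="yourname/blog,yourname/scripts"
DRY_RUN=1
```

Once the DRY_RUN output looks right, set DRY_RUN=0 and clear REPO_ALLOWLIST to back up everything.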

I might be missing some steps here, so if you're stuck somewhere make sure to give this to an LLM and ask it to help you do the setup.


Introducing Vemgram

It all started from this post on X by Akshay G Jain, which was shared with me by my friend Amit Sarda.

Akshay about IndiaMart

I immediately decided to plan and work on it. Initially, I was working at a slow pace, but I kept at it. And here we are with this brand-new platform called Vemgram that helps Indian manufacturers connect with retailers and build a strong digital presence.

Vemgram homepage screenshot

I am working with Bikash Kampo on this project and the concept is simple: we want to build a trusted directory of Indian manufacturers where retailers won't have to worry about getting scammed, and manufacturers get a solid digital presence. I am also regularly updating what I am working on for this project on this page.

It's still a work in progress, and probably always will be, but the current version of Vemgram allows people to create business profiles like this, list their products, and receive business inquiries from retailers. Currently, I am working on implementing custom domains and analytics features.

I have done an introduction post on X. And I am also open to feedback on this, so if you have any please email me or DM me on socials.

Additionally, I am grateful to my friends Amit Sarda, Rohit, and my girlfriend for their helpful feedback on my ideas.


Preventing Docker and journal cache bloat on VPS

I have two VPS servers running Dokploy, and I noticed that even 80 GB of disk space on the servers was getting full and apps were failing to deploy. The culprits were Docker cache (old images, build cache, dangling stuff from past deployments) and systemd journal logs, which grow indefinitely by default.

Running the docker builder prune -a command freed ~67 GB of space on the server, so I set up a few things to automate this process:

First, I added a cron job that auto-prunes Docker every 6 hours, but only if disk usage goes above 70%:

(crontab -l 2>/dev/null; echo '0 */6 * * * [ $(df / --output=pcent | tail -1 | tr -dc 0-9) -gt 70 ] && { docker system prune -af --filter "until=72h" && docker builder prune -af; } >> /var/log/docker-prune.log 2>&1') | crontab -

This removes unused images and build cache older than 72 hours. It doesn't touch running containers, so active deployments are safe.
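The disk-usage guard in that one-liner can be pulled out into a small function if you want to reuse or test it. This is just a sketch of the same pipeline; it assumes GNU df, as found on most Linux VPS images.

```shell
# Prints the used-space percentage (digits only) of the filesystem holding $1.
disk_used_pct() {
  df "$1" --output=pcent | tail -1 | tr -dc 0-9
}
```

For example: `[ "$(disk_used_pct /)" -gt 70 ] && echo "time to prune"`.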

Then, I vacuumed the existing journal logs down to 200MB:

journalctl --vacuum-size=200M

And finally, I capped the journal log size permanently so it never grows beyond 200MB again:

sed -i 's/#SystemMaxUse=/SystemMaxUse=200M/' /etc/systemd/journald.conf
systemctl restart systemd-journald

I applied the same setup on both servers, and now I don't have to worry about cache eating up disk space over time. I know there are some drawbacks to this method, but I think the benefits outweigh them.


LLM Knowledge Bases post by Andrej Karpathy

Andrej Karpathy recently published this post on X about building and managing personal knowledge bases using LLMs, and there's some interesting information in the post that I am going to collect below.

  • Lately, Andrej is spending more tokens manipulating knowledge than manipulating code.
  • He uses Obsidian for storing all the knowledge (articles, papers, repos, datasets, etc.) in Markdown and image formats. And uses Obsidian's Web Clipper to grab info from web articles.
  • He puts everything in a /raw folder and then uses an LLM to incrementally "compile" a wiki with visualizations. He specifically mentions that the LLM writes and maintains all of the data of the wiki and he doesn't manually edit/add anything.
  • He has ~100 articles across several topics, and he can ask LLMs to answer complex questions against the wiki.
  • When chatting with the LLM against the wiki, he prefers generating new Markdown files, slideshows (Marp format), or matplotlib images. He then even files the outputs back into the wiki to further enhance it.

There's much more info in the post that you can read.

I love this approach and I guess I will also be using something similar very soon, as managing lots of stuff is becoming difficult over time. And if I do, I will write about it.


List of 187 Claude spinner verbs (leaked)

No idea who leaked the Claude Code codebase, but I came across this list of spinner verbs Claude uses from this post on X. And it's super cool.

  1. Accomplishing
  2. Actioning
  3. Actualizing
  4. Architecting
  5. Baking
  6. Beaming
  7. Beboppin'
  8. Befuddling
  9. Billowing
  10. Blanching
  11. Bloviating
  12. Boogieing
  13. Boondoggling
  14. Booping
  15. Bootstrapping
  16. Brewing
  17. Bunning
  18. Burrowing
  19. Calculating
  20. Canoodling
  21. Caramelizing
  22. Cascading
  23. Catapulting
  24. Cerebrating
  25. Channeling
  26. Channelling
  27. Choreographing
  28. Churning
  29. Clauding
  30. Coalescing
  31. Cogitating
  32. Combobulating
  33. Composing
  34. Computing
  35. Concocting
  36. Considering
  37. Contemplating
  38. Cooking
  39. Crafting
  40. Creating
  41. Crunching
  42. Crystallizing
  43. Cultivating
  44. Deciphering
  45. Deliberating
  46. Determining
  47. Dilly-dallying
  48. Discombobulating
  49. Doing
  50. Doodling
  51. Drizzling
  52. Ebbing
  53. Effecting
  54. Elucidating
  55. Embellishing
  56. Enchanting
  57. Envisioning
  58. Evaporating
  59. Fermenting
  60. Fiddle-faddling
  61. Finagling
  62. Flambéing
  63. Flibbertigibbeting
  64. Flowing
  65. Flummoxing
  66. Fluttering
  67. Forging
  68. Forming
  69. Frolicking
  70. Frosting
  71. Gallivanting
  72. Galloping
  73. Garnishing
  74. Generating
  75. Gesticulating
  76. Germinating
  77. Gitifying
  78. Grooving
  79. Gusting
  80. Harmonizing
  81. Hashing
  82. Hatching
  83. Herding
  84. Honking
  85. Hullaballooing
  86. Hyperspacing
  87. Ideating
  88. Imagining
  89. Improvising
  90. Incubating
  91. Inferring
  92. Infusing
  93. Ionizing
  94. Jitterbugging
  95. Julienning
  96. Kneading
  97. Leavening
  98. Levitating
  99. Lollygagging
  100. Manifesting
  101. Marinating
  102. Meandering
  103. Metamorphosing
  104. Misting
  105. Moonwalking
  106. Moseying
  107. Mulling
  108. Mustering
  109. Musing
  110. Nebulizing
  111. Nesting
  112. Newspapering
  113. Noodling
  114. Nucleating
  115. Orbiting
  116. Orchestrating
  117. Osmosing
  118. Perambulating
  119. Percolating
  120. Perusing
  121. Philosophising
  122. Photosynthesizing
  123. Pollinating
  124. Pondering
  125. Pontificating
  126. Pouncing
  127. Precipitating
  128. Prestidigitating
  129. Processing
  130. Proofing
  131. Propagating
  132. Puttering
  133. Puzzling
  134. Quantumizing
  135. Razzle-dazzling
  136. Razzmatazzing
  137. Recombobulating
  138. Reticulating
  139. Roosting
  140. Ruminating
  141. Sautéing
  142. Scampering
  143. Schlepping
  144. Scurrying
  145. Seasoning
  146. Shenaniganing
  147. Shimmying
  148. Simmering
  149. Skedaddling
  150. Sketching
  151. Slithering
  152. Smooshing
  153. Sock-hopping
  154. Spelunking
  155. Spinning
  156. Sprouting
  157. Stewing
  158. Sublimating
  159. Swirling
  160. Swooping
  161. Symbioting
  162. Synthesizing
  163. Tempering
  164. Thinking
  165. Thundering
  166. Tinkering
  167. Tomfoolering
  168. Topsy-turvying
  169. Transfiguring
  170. Transmuting
  171. Twisting
  172. Undulating
  173. Unfurling
  174. Unravelling
  175. Vibing
  176. Waddling
  177. Wandering
  178. Warping
  179. Whatchamacalliting
  180. Whirlpooling
  181. Whirring
  182. Whisking
  183. Wibbling
  184. Working
  185. Wrangling
  186. Zesting
  187. Zigzagging

If you don't remember seeing these anywhere, take a look at the screenshot below.

Claude Spinner Verbs

Yes, this is shown when you ask Claude Code to do something and then it's working on it. I believe these spinner verbs are randomly selected from the above list and shown.
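That kind of random pick is trivial to mimic with coreutils' shuf, assuming you save the verbs one per line in a verbs.txt file (the filename is just for illustration):

```shell
# Print one random spinner verb from the list.
shuf -n 1 verbs.txt
```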


OpenAI launches Codex plugin for Claude Code

The OpenAI team has created a Codex plugin for Claude Code that lets you trigger Codex from inside Claude Code. It's described as:

Use the Codex plugin for Claude Code to delegate tasks to Codex or have Codex review your changes using your ChatGPT subscription.

Here's the plugin on GitHub that you can use. It works with a ChatGPT subscription (including the Free plan) or with an OpenAI API key, and it does count against your Codex usage limits. If you're looking to learn more about it, Vaibhav from OpenAI has written a detailed article about it that you can follow.

Apart from this, Claude released computer use so there's this joke going around on X, and it is funny.

Codex Anthropic use

But jokes apart, this is going to be super helpful for code reviews. I am definitely using this.


Claude Code is hitting limits fast

People have been complaining about Claude Code hitting limits quickly for almost a week, and now, finally, the Claude Code team has started to look into this and low-key admitted that the issue exists. Lydia from the Claude Code team posted this:

We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!

But why are they suddenly caring about this after ignoring it previously? The reason is this post on Reddit by /u/skibidi-toaleta-2137. The person reverse-engineered the Claude Code standalone binary and found two serious bugs that could be causing the issues.

  1. Bug 1: Sentinel replacement in standalone binary breaks cache when conversation discusses billing internals
  2. Bug 2: --resume ALWAYS breaks cache (since v2.1.69)

The person has also shared workarounds to avoid these issues, but since the Claude team is already working on this, I will wait a few hours until these get resolved.

All I can say is, thank you, stranger from Reddit.


Eating too much watermelon causes indigestion

I didn't know that eating too much watermelon can cause serious indigestion. The other day, I had ~1.5 kg of watermelon in one sitting (yes, I love them) and experienced digestion issues the next day.

Too much watermelons causing digestive issues

Apparently, this is a known fact that I had no idea about, until my mom hinted at it and I checked online and found it to be true.

Lessons learned.


Gemini introduces import memory and chats feature

Earlier this month, Claude introduced the feature to import memory from other AI chatbots. Now, Gemini has also introduced a feature like that, as you see in the screenshot.

Import memory feature in Gemini

The process is simple: you copy the provided prompt, paste it into the AI chatbot you're using, copy the response, put it into Gemini, and save. And that's it; now Gemini knows everything the prompt extracted from Claude, ChatGPT, or others.

By the way, here's the huge prompt they provide:

You are helping me import context from one AI assistant to another. Your job is to go through our past conversations and sum up what you know about me.

In the output, please avoid using any first-person pronouns (I, my, me, mine) and any second-person pronouns (you, your, yours). Instead, refer to the individual you have learned about as "the user" or use neutral phrasing.

Preserve the user's words verbatim where possible, especially for instructions and preferences.

Categories (output in this order):
1. Demographics Information: Preferred names, profession, education, and general residence.
2. Interests & Preferences: Sustained, active engagements (not just owning an object or a one-time purchase).
3. Relationships: Confirmed, sustained relationships.
4. Dated Events, Projects & Plans: A log of significant, recent activities.
5. Instructions: Rules I've explicitly asked you to follow going forward, "always do X", "never do Y", and corrections to your behavior. Only include rules from stored memories, not from conversations.

Format:
Divide the content into the labeled section using the categories above. Try to include verbatim quotes from my prompts that justify each entry. Structure each entry using this format:
The user's name is <name>.
- Evidence: User said "call me <name>". Date: [YYYY-MM-DD].

Output:
- Format the final output summary as a text block.

I used this prompt on Claude and the response is extremely detailed, but it contains a lot of personal information so I can't post it here.

And it's not just memory: Gemini also lets you import chats from ChatGPT, Claude, or other providers, as you can see in the screenshot above.

Import chats in Gemini

They also describe the import-chats process in their documentation, as you see in the above screenshot. You can directly upload a ZIP file of up to 5 GB.

I think this import-chats feature is actually cool for people who are completely migrating to Gemini from ChatGPT, Claude, or another provider.


LiteLLM Python library is compromised

Just learned that LiteLLM, a popular Python library that provides a unified interface to call multiple LLMs, has been compromised and is stealing sensitive info from users.

The post on X reads:

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM pypi release 1.82.8. It has been compromised, it contains litellm_init.pth with base64 encoded instructions to send all the credentials it can find to remote server + self-replicate.

It seems the entire GitHub repo is compromised, as this issue #24512 titled "[Security]: CRITICAL: Malicious litellm_init.pth in litellm 1.82.8 — credential stealer" was closed by the owner saying "not planned". That suggests the owner's GitHub account was hacked and the hacker marked the issue as solved. But it's good that it's been reopened and is actively being discussed.

What the LiteLLM malware does

As explained in the FutureSearch article, the malware appears to be very sneaky and dangerous. So if you're affected by it, the current best option is to browse through #24512, as the community is actively tracking the issue and trying to fix it.

@krrishdholakia hacked by teampcp

Looks like the library was compromised after the founder's GitHub profile was hacked by a group (or whatever) called teampcp. Terrible (not in a good way).
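If you use LiteLLM, it's worth checking which release you have installed and whether the reported loader file is present. This is only a sketch: the litellm_init.pth filename and the 1.82.8 release number come from the issue above, and "no output" from the second command simply means the file wasn't found in site-packages.

```shell
# Show the installed LiteLLM version (compare against the compromised 1.82.8).
pip show litellm 2>/dev/null | grep -i '^version'

# Look for the reported litellm_init.pth in site-packages; no output = not found.
python3 -c 'import site, pathlib
for d in site.getsitepackages():
    for p in pathlib.Path(d).glob("litellm_init.pth"):
        print(p)'
```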


Peak design = AI Studio x Claude Code

No matter what tool I have tried for great-looking designs, Google's AI Studio always comes out on top. GPT-5.4 is the worst at design, Claude Code is slightly better, but Google AI Studio (or even Stitch) is the best.

So... here is my current workflow for building websites:

  1. Get the design done and ready on Google AI Studio and download the code as ZIP (same design prompts work better on AI Studio)
  2. Initiate a project in a folder (currently, using Laravel for most websites, but Next.js or any other framework would work)
  3. Unzip the AI Studio design and put the folder in the main project root, rename it as "inspiration" or anything you like
  4. Ask Claude Code to match the design in the "inspiration" folder, but ask it to follow the best practices of native Laravel, Astro, or whatever stack you're working in

And done!

Watch Claude Code do the magic.
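Steps 2 and 3 above can be sketched in a few lines of Python; the folder name "inspiration" is just the convention from the list, and the paths are placeholders:

```python
import zipfile
from pathlib import Path

def stage_inspiration(zip_path: str, project_root: str,
                      name: str = "inspiration") -> Path:
    """Unzip the AI Studio design export into <project_root>/<name>,
    where Claude Code can reference it while building the real app."""
    dest = Path(project_root) / name
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
    return dest
```

After that, you would prompt Claude Code to match the design in that folder while following your stack's conventions.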


Scraping 250k+ URLs using Claude Code (via Telegram)

This Saturday, I was about to leave for a movie with my friends when I thought of experimenting with the newly launched Claude Code channels. I set up Telegram to work with a Claude Code session, kept my laptop on, and left for the movie. From the cab, I started chatting about the scraping project and asked Claude for suggestions and ideas on how this would work.

Claude finalizing the scraping plan

And by the time I reached the theater, Claude had already set up the project and was ready to start scraping. I gave the final confirmation and got busy with the movie. When I checked my phone during the interval, it had sent me a bunch of messages and the process was still running: it had discovered 260k URLs to scrape and had already completed 36k of them, as you see below.

Claude sending scraping updates on Telegram

Whenever I messaged "Progress?" via Telegram, it quickly sent me a summary like the one above. The scraping ran for ~8 hours and was still going when I returned home. A few hours later, when the process was complete, it sent me this message confirming completion.

Scraping completed message from Claude

Around 10k URLs failed, which is acceptable for a run this huge. It had also missed a few data points, so I asked it to grab those as well, and that run took another ~3 hours. Finally, I had everything I needed. I then asked it to update the scraping script so that the next run produces the final, polished data directly.

All I would say is, thank you, Claude.


Cursor dodged a huge bullet with Kimi K2.5

Two days ago, Cursor released the new Composer 2 model, and the eval scores were shown to be better than even Opus 4.6 (high). It was clear that the team did not train the model from scratch, but Cursor did not mention anywhere which base model it was built on.

But then people found out it's the Kimi K2.5 model, and started calling Cursor out, pointing out that using the model for commercial purposes is against the terms of service of Moonshot (the company behind the Kimi models). Things got more interesting when several Moonshot folks posted that Cursor had not come to them for licensing, as you see below.

Moonshot Kimi deleted tweets about Cursor's Composer 2

Then came the most interesting part: all the Kimi tweets above were suddenly deleted, and people started speculating that maybe Cursor paid Kimi for a license after the posts went viral, among other theories.

The real story came out when Lee Robinson from the Cursor team posted a clarification. Initially, he did not mention Kimi K2.5 by name in the main post, but when people called them out, he admitted it was the Kimi K2.5 model and quoted this post from Kimi AI, which says:

Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ' hosted RL and inference platform as part of an authorized commercial partnership.

So the whole story in simple terms is:

  • Cursor licenses the Kimi K2.5 model via inference partner FireworksAI (not via Kimi)
  • Further trains the model heavily and releases it as Composer 2
  • People find out that the underlying model is Kimi K2.5
  • Kimi folks didn't know this, so they post on X
  • Cursor reaches out to them privately explaining everything
  • Kimi folks delete the tweets and post the clarification

Basically, no one was in the wrong here; it's just an example of miscommunication, mostly on Cursor's side. This could have gone very wrong for Cursor, but I'm glad the situation is now under control.


Better way to implement mailto: links

Honestly, I don't like mailto: links myself, because I use webmail and clicking these links opens the default email app on my computer. I still had an email icon in the footer of my personal site, but after coming across this post, I changed it to copy-on-click instead of a mailto: link. Clicking the envelope icon in the footer copied the email to the clipboard and briefly showed a green checkmark.

I thought this provided a good user experience and posted about it on my socials, especially Threads. But almost all the feedback was negative: it's a bad idea because users won't know what just happened, as you see in the screenshot below.

Better mailto: links

And they were all correct: it wasn't clear what had just happened. I thought about it and implemented the email differently. As you see in the screenshot below, I started showing the full email as text, and clicking on it copied the email to the user's clipboard.

Screenshot showing full email in text and clicking on it copies

But I still wasn't satisfied with it, so I posted about it again on X and received this interesting recommendation to use the mailgo tool, which is basically a new take on mailto: and tel: links.

Screenshot showing how mailgo works

You can see in the screenshot that when I clicked on the mailto: link, it showed me a bunch of options instead of just opening the default email app or copying the email directly. I tried the Open in Gmail, copy, and other options, and they all worked as expected.

But the problem is, the mailgo project isn't maintained anymore, and there's no point in adding a dead dependency to my project. So... I built it from scratch, which turned out to be very simple.

The current solution I built from scratch

Here's how it's implemented, the version you see in the screenshot above:

<div id="email-wrap">
  <button onclick="this.parentElement.classList.toggle('open')">
    <!-- envelope icon -->
  </button>
  <div>
    <button onclick="navigator.clipboard.writeText('me@deepakness.com')
      .then(()=>{this.textContent='Email copied!';
      setTimeout(()=>{this.textContent='Copy me@deepakness.com'},1000)})">
      Copy me@deepakness.com
    </button>
    <a href="https://mail.google.com/mail/?view=cm&amp;to=me@deepakness.com">
      Open in Gmail
    </a>
    <a href="mailto:me@deepakness.com">Open default client</a>
  </div>
</div>

And a tiny script to close the popup when clicking outside or pressing Escape:

document.addEventListener('click', function (e) {
  if (!e.target.closest('#email-wrap')) {
    document.getElementById('email-wrap').classList.remove('open');
  }
});
document.addEventListener('keydown', function (e) {
  if (e.key === 'Escape') {
    document.getElementById('email-wrap').classList.remove('open');
  }
});

Then you would need some basic CSS to make it look good.

That's it.

This simple implementation works well, until I find a better solution for this.


arXiv now has an MCP server

arXiv has a new MCP server that you can connect to your AI applications, agents, and workflows to search papers, analyze PDFs, explore codebases, and synthesize research insights. From their docs, below are the available tools in the MCP:

  1. Embedding similarity search
  2. Full-text papers search
  3. Agentic paper retrieval
  4. Get paper content
  5. Answer PDF queries
  6. Read files from paper's GitHub repo

I think this can be important for researchers and engineers, as their AI agents will now have access to all relevant research papers. Earlier, agents could search for papers via search engines, but I guess that had lots of errors and sometimes even returned broken URLs. That wouldn't be the case with the MCP server.

I installed this in Claude Code by running the following command:

claude mcp add --transport http alphaxiv https://api.alphaxiv.org/mcp/v1

Then I had to authenticate by running the /mcp command inside Claude Code, which looked like the screenshot below:

Installing arXiv new MCP server to Claude Code

And then it was working as expected.

I'm still exploring this, and will share more if I learn something noteworthy.


OpenAI launches GPT-5.4 mini and nano

Two weeks after the GPT-5.4 launch, OpenAI has released the mini and nano versions of GPT-5.4, and they're already available in ChatGPT, Codex, and via the API. As self-reported by OpenAI, these GPT-5.4 mini and nano models score very close to the larger GPT-5.4 model.

Evals - GPT-5.4 vs mini vs nano

I was reading Simon's post about the new models; he ran some experiments describing images with the GPT-5.4 nano model and gives this estimate:

[...] describing every single photo in my 76,000 photo collection would cost around $52.44.

And from my tests as well, the models are fairly capable for simple repetitive tasks. But compared to the previous GPT-5 mini and nano models, they are costlier, as the comparison table below shows:

Model          Input (per 1M tokens)     Output (per 1M tokens)
GPT-5 mini     $0.25                     $2.00
GPT-5 nano     $0.05                     $0.40
GPT-5.4 mini   $0.75 (3x costlier)       $4.50 (2.25x costlier)
GPT-5.4 nano   $0.20 (4x costlier)       $2.25 (5.6x costlier)

That said, the new mini and nano models are also far more capable than their predecessors.
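The table can be turned into a quick cost estimator. A sketch, with my own labels for the models and the prices in dollars per million tokens as listed above:

```python
# (input, output) prices in $ per 1M tokens, copied from the table above;
# the dictionary keys are my own labels, not official model IDs.
PRICES = {
    "gpt-5-mini":   (0.25, 2.00),
    "gpt-5-nano":   (0.05, 0.40),
    "gpt-5.4-mini": (0.75, 4.50),
    "gpt-5.4-nano": (0.20, 2.25),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single request at the listed prices."""
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000
```

For example, a million input tokens plus a million output tokens on GPT-5.4 nano comes to $2.45 at these rates.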


Just ask and Codex can now spin up subagents

I tried OpenAI's newly launched subagents feature for Codex, and it's awesome. I tried it via the Codex app on macOS, and the UI looks good as well.

A screenshot of multiple subagents working in Codex app

Although spawning subagents consumes many more tokens than a single agent on the same task, it does work faster, as multiple agents work on different subtasks in parallel. By the way, the agent only spawns subagents when you specifically ask it to. The Codex docs have much more info about this and other related things.

I also learned from Simon's post that Codex lets you define and use custom agents:

Codex also lets you define custom agents as TOML files in ~/.codex/agents/. These can have custom instructions and be assigned to use specific models - including gpt-5.3-codex-spark if you want some raw speed.
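For illustration only, here is a hypothetical sketch of what such an agent file might look like. I haven't verified the actual schema, so every field name here is a guess rather than something from the Codex docs:

```toml
# ~/.codex/agents/reviewer.toml (hypothetical schema; field names are guesses)
name = "reviewer"                 # how you would refer to the agent
model = "gpt-5.3-codex-spark"     # the speed-focused model Simon mentions
instructions = """
You are a code reviewer. Focus on correctness and edge cases.
"""
```

Check the Codex docs for the real field names before relying on this.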

For your info, subagents are only new to Codex; the feature is already available in Claude Code, Gemini, Cursor, OpenCode, etc.


Pi powered via the Wi-Fi Router

Wifi Router powers Raspberry Pi

I was using a power adapter to power my Raspberry Pi 4B, but then I noticed a USB port on my Wi-Fi router and connected the Pi directly to it. And it's working as expected. (Worth noting: the Pi 4B officially wants a 5V/3A supply, so if the router's USB port can't deliver that, you may see under-voltage warnings under load.)

I will also soon connect the Pi to the Wi-Fi router via a LAN cable for faster internet.


WordPress 7 is coming with new features

WordPress 7 is set to be released on April 9, 2026, and it's coming with some interesting features. From this post on X, I learned that the new v7 will have optional Google Docs-style collaborative editing, as you see below:

WordPress 7 real-time collaboration

Discussions are still ongoing about whether to enable real-time collaboration by default or keep it optional, but most likely it will be turned off by default.

WordPress 7 AI connectors page

Apart from this, WordPress 7 will have a new page for AI connectors, as you see in the screenshot. As specified, all your API keys and credentials are stored there and shared across plugins. I think this is a good option.

Sometimes I had second thoughts about keeping my blog on 11ty versus going back to WordPress, as the Netlify build time was growing a lot. But since I started optimizing images locally and hosting them externally on Cloudflare R2, that issue is resolved. Still, I will keep an eye on WordPress as an option for my personal site.


Recordly: open-source screen recorder for macOS

I have been using screen.studio to record videos for my channel for over 2 years now, and I just learned that there is an open-source alternative called Recordly. It helps you create similarly styled videos for free, and the app is available for macOS, Windows, and Linux.

Recordly.dev website

You can see Recordly's source on GitHub, and it's a fork of another open-source project called OpenScreen, but with some additional features. The creator mentions the following on the forked repo:

FAQ: What are the changes between this and Openscreen? A: Recordly adds a full cursor animation/rendering pipeline, native macOS screen capture, zoom animations faithful to Screen Studio, smoother panning behaviour, and more major tweaks.

This fork exists because the original maintainer does not wish implementing the architectural changes that make some of these features possible i.e. different recording pipeline.

By the way, I learned about Recordly from this post on X.