Development

Managing Local Storage in the AI Development Era

How to identify, clean, and monitor the hidden storage consumers that come with AI-assisted development tools like Claude Code and Kiro

Alexandre Agius

AWS Solutions Architect

8 min read

The Problem

You’re building with Claude Code, Kiro, and other AI-assisted development tools. Your projects are thriving. Then one day: “No space left on device.”

Where did 50GB go? AI-assisted development introduces hidden storage consumers that traditional developers never worried about:

  • Package manager caches grow faster with AI suggesting more dependencies
  • MCP servers download Python packages, models, and tools
  • IDE and tool caches multiply across multiple AI assistants
  • Logs and conversation history accumulate silently

This guide shows you exactly where your storage is going and how to manage it.

Storage Breakdown: Where Does It Go?

A typical AI-assisted development environment consumes storage across these categories:

| Category | Typical Size | Growth Rate |
|---|---|---|
| Python (uv/pip) | 5-20 GB | Fast |
| Node.js (npm/pnpm) | 2-10 GB | Medium |
| Docker | 10-50 GB | Fast |
| Homebrew | 5-15 GB | Slow |
| IDE Caches | 2-5 GB | Medium |
| AI Tool Data | 1-5 GB | Medium |
| Logs | 1-10 GB | Slow |

AI development accelerates this growth because:

  1. AI assistants suggest more packages to try
  2. MCP servers install their own dependencies
  3. Multiple Python environments for different agents
  4. Large ML/AI packages (numpy, torch, transformers)
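
Point 3 is easy to underestimate: every project and agent gets its own virtual environment, each with its own copy of heavy packages. A quick way to total them up — venv_sizes is an illustrative helper, and it assumes the conventional .venv/venv directory names:

```shell
# List every virtualenv under a root (default: $HOME) with its size, largest first
venv_sizes() {
  local root="${1:-$HOME}"
  find "$root" -maxdepth 5 -type d \( -name ".venv" -o -name "venv" \) \
    -prune -exec du -sh {} + 2>/dev/null | sort -hr
}

venv_sizes ~
```

Seeing the same multi-gigabyte torch install duplicated across five environments is usually the moment the table above starts to make sense.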

Finding the Storage Consumers

Quick Audit Commands

# Overall disk usage
df -h

# Find large directories in home
du -sh ~/* 2>/dev/null | sort -hr | head -20

# Find hidden directories (caches live here)
du -sh ~/.* 2>/dev/null | sort -hr | head -20
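
To focus the audit on the usual suspects in one pass, point du at the known cache locations. cache_sizes is an illustrative helper; the paths are common defaults, and missing ones are silently skipped:

```shell
# Report the size of each given directory, largest first; missing paths are skipped
cache_sizes() {
  du -sh "$@" 2>/dev/null | sort -hr
}

cache_sizes ~/.cache ~/.npm ~/.local/share/pnpm ~/.claude ~/.kiro ~/Library/Caches
```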

Package Manager Caches

Python (uv):

# Check uv cache size
du -sh $(uv cache dir)

# See what's inside
ls -la $(uv cache dir)/wheels/ | head -20

Python (pip):

# Check pip cache
du -sh ~/.cache/pip
pip cache info

Node.js (npm):

# Check npm cache
du -sh ~/.npm
npm cache ls 2>/dev/null | wc -l

Node.js (pnpm):

# Check pnpm store
du -sh ~/.local/share/pnpm

Homebrew:

# Check Homebrew cache
du -sh $(brew --cache)

Docker Storage

# Docker disk usage summary
docker system df

# Detailed breakdown
docker system df -v

AI Tool Storage

Claude Code:

# Claude Code stores data in ~/.claude
du -sh ~/.claude

Kiro:

# Kiro configuration and cache
du -sh ~/.kiro 2>/dev/null

MCP Servers:

# MCP servers may create their own caches
# Check common locations
du -sh ~/.cache/mcp* 2>/dev/null
du -sh ~/Library/Caches/*mcp* 2>/dev/null

Cleaning Up

Python Cleanup

# uv - clean everything
uv cache clean

# uv - prune old entries only (safer)
uv cache prune

# pip - clean cache
pip cache purge

# Remove orphaned virtual environments
# First, list them:
find ~ -type d \( -name ".venv" -o -name "venv" \) -prune 2>/dev/null

# Remove unused ones manually after review

Node.js Cleanup

# npm - clean cache
npm cache clean --force

# pnpm - clean store
pnpm store prune

# Remove orphaned node_modules (be careful!)
# First, find them:
find ~ -type d -name "node_modules" -prune 2>/dev/null | head -20

Docker Cleanup

# Remove unused containers, networks, images
docker system prune

# More aggressive - remove all unused images
docker system prune -a

# Remove build cache
docker builder prune

# Nuclear option: removes all unused data, including volumes (data loss risk)
docker system prune -a --volumes

Homebrew Cleanup

# Remove old versions
brew cleanup

# Remove all cached downloads
brew cleanup -s

# See what would be removed
brew cleanup -n

IDE and Tool Caches

VS Code:

# Clear VS Code cache (macOS)
du -sh ~/Library/Application\ Support/Code/Cache
rm -rf ~/Library/Application\ Support/Code/Cache/*

JetBrains IDEs:

# Check JetBrains caches (macOS)
du -sh ~/Library/Caches/JetBrains

Automation: The Cleanup Script

Knowing what to clean is one thing. Actually doing it every week is another. I automated this with a script that runs every Monday morning via cron, with macOS notifications so I know it’s happening.

The Script

Save this to ~/scripts/cleanup-dev-storage.sh:

#!/bin/bash
# cleanup-dev-storage.sh
# Weekly dev storage cleanup — scheduled Monday 9:30 AM

# macOS notification before starting
osascript -e 'display notification "Starting weekly dev storage cleanup..." with title "Dev Cleanup" sound name "Ping"'

echo "=== AI Dev Storage Cleanup ==="
echo "Date: $(date)"
echo ""

# Python
echo "Cleaning Python caches..."
uv cache prune 2>/dev/null || echo "uv not installed"
pip cache purge 2>/dev/null || echo "pip cache empty"

# Node.js
echo "Cleaning Node.js caches..."
npm cache clean --force 2>/dev/null || echo "npm not installed"
pnpm store prune 2>/dev/null || echo "pnpm not installed"

# Docker (conservative)
echo "Cleaning Docker..."
docker system prune -f 2>/dev/null || echo "Docker not running"

# Homebrew
echo "Cleaning Homebrew..."
brew cleanup -s 2>/dev/null || echo "Homebrew not installed"

echo ""
echo "=== Cleanup Complete ==="

# macOS notification when done
osascript -e 'display notification "Dev storage cleanup finished!" with title "Dev Cleanup" sound name "Glass"'

A few design choices worth noting:

  • Notifications before and after — the “Ping” sound tells you cleanup started, the “Glass” sound tells you it’s done. If Docker prune takes a moment, you’re not wondering why your machine is busy.
  • 2>/dev/null || echo "not installed" — every command handles missing tools gracefully. The script works whether you have all these tools or just some of them.
  • docker system prune -f — the -f flag skips the confirmation prompt. Without it, cron would hang waiting for input. This is the conservative prune (unused containers, networks, dangling images) — it won’t touch images you’re actively using.
  • Timestamp in output — makes the log file useful for tracking when cleanups actually ran.
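
If the script grows, the missing-tool pattern is worth extracting into a helper. A sketch of the same idea; run_if_installed is hypothetical, not part of any of these tools:

```shell
# Run a command only if its tool is on PATH; otherwise report and keep going
run_if_installed() {
  local tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$@"
  else
    echo "$tool not installed"
  fi
}

run_if_installed uv uv cache prune
run_if_installed pnpm pnpm store prune
```

One line per tool, and the failure mode is always the same readable message instead of whatever each tool happens to print to stderr.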

Setting It Up

# Create the directories
mkdir -p ~/scripts ~/logs

# Make it executable
chmod +x ~/scripts/cleanup-dev-storage.sh

# Test it manually first
~/scripts/cleanup-dev-storage.sh

Scheduling with Cron

Add it to your crontab for Monday mornings:

# Open crontab editor
crontab -e

# Add this line:
30 9 * * 1 $HOME/scripts/cleanup-dev-storage.sh >> $HOME/logs/cleanup.log 2>&1

The cron expression 30 9 * * 1 means: minute 30, hour 9, any day of the month, any month, day-of-week 1 (Monday).
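
The same five fields cover most schedules you might want; a few variants for reference (command column omitted):

```
0 12 * * *    every day at 12:00
0 9 * * 1-5   weekdays at 09:00
30 9 1 * *    the 1st of every month at 09:30
*/30 * * * *  every 30 minutes
```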

Why Monday 9:30 AM and not Sunday 2 AM? Because cron only runs when your machine is awake. A MacBook that’s closed at 2 AM on Sunday will silently skip the job. Monday at 9:30 AM, you’re already working — the machine is on, and you’ll see the notification confirming it ran.

First-Run Permissions

On macOS, the first time cron triggers the script, you may see permission prompts in System Settings > Privacy & Security:

  • Notifications — allow cron to send notifications
  • Full Disk Access — cron may need this to access cache directories

If the notification doesn’t show up the first Monday, check these settings.

Checking the Logs

After a few weeks, review what’s being cleaned:

# See the latest cleanup output
tail -50 ~/logs/cleanup.log

# Check how much log has accumulated
du -sh ~/logs/cleanup.log
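
The log itself eventually becomes a storage consumer. A small helper keeps it bounded; trim_log is illustrative, and 500 lines is an arbitrary default:

```shell
# Truncate a log file in place to its last N lines (default 500); no-op if missing
trim_log() {
  local file="$1" keep="${2:-500}"
  [ -f "$file" ] || return 0
  tail -n "$keep" "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

trim_log ~/logs/cleanup.log 500
```

Appending a trim_log call to the end of the cleanup script lets the log maintain itself.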

Monitoring: Stay Ahead of the Problem

Quick Status Alias

Add to your ~/.zshrc or ~/.bashrc:

# Check dev storage status
alias dev-storage='echo "=== Dev Storage ===" && \
  echo "uv: $(du -sh $(uv cache dir) 2>/dev/null | cut -f1)" && \
  echo "npm: $(du -sh ~/.npm 2>/dev/null | cut -f1)" && \
  echo "Docker: $(docker system df 2>/dev/null | grep "Images" | awk "{print \$4}")" && \
  echo "Homebrew: $(du -sh $(brew --cache) 2>/dev/null | cut -f1)"'

Set Disk Space Alerts

Create a simple monitor script:

#!/bin/bash
# check-disk-space.sh

THRESHOLD=85
USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "WARNING: Disk usage at ${USAGE}%"
  echo "Run: cleanup-dev-storage.sh"
  # Optional: send notification
  # osascript -e 'display notification "Disk space low!" with title "Storage Alert"'
fi
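
The monitor only helps if it runs regularly. An hourly cron entry alongside the weekly cleanup keeps the warnings timely; the paths assume the same ~/scripts and ~/logs layout used earlier:

```
0 * * * * $HOME/scripts/check-disk-space.sh >> $HOME/logs/disk-check.log 2>&1
```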

Key Takeaways

  1. Know your storage consumers: AI development tools add layers of caching you might not expect. Python packages for MCP servers, conversation logs, and model caches add up fast.

  2. Schedule regular cleanups: Don’t wait for “disk full” errors. Run cleanup commands weekly or set up automation.

  3. AI tools require storage discipline: The convenience of AI-assisted development comes with a hidden cost. New tools need new habits, so add storage monitoring to your workflow.

Quick Reference

| What | Check Size | Clean |
|---|---|---|
| uv cache | du -sh $(uv cache dir) | uv cache prune |
| pip cache | pip cache info | pip cache purge |
| npm cache | du -sh ~/.npm | npm cache clean --force |
| Docker | docker system df | docker system prune |
| Homebrew | du -sh $(brew --cache) | brew cleanup -s |

Have questions? Connect with me on LinkedIn or check out more posts on agiusalexandre.com.

