Pete Millspaugh
@petemillspaugh.com
about 1 month ago I think I've become way too into colons. I ctrl-f'd 15 in the ~1,500 word blog post I just published
Built with Command Code.
⭐ star the repo
→ github.com/ahmadawais/...
→ npmjs.com/package/mmm...
→ built with @CommandCodeAI
→ $ npm i -g command-code
No dashboards.
No browser spelunking. Big win. Need I say more.
Just:
$ npx mmmodels
Or install it globally:
$ npm i -g mmmodels
If you work with models a lot and prefer terminals over tabs, try it.
Tables are width-aware. Each column has min/max widths and alignment. The renderer fits the table to the current terminal width, and if the requested columns still do not fit, it errors instead of wrapping into unreadable soup.
`--plain` actually means plain:
- no banner
- no color
- no spinner
- booleans rendered as `yes/no`
- ASCII connectors instead of box-drawing glyphs
Use `--sync` (or `-s`) with any command to fetch live.
Normal mode goes memory -> disk -> network.
If the fetch fails, it falls back to the disk cache instead of hard-failing.
Queries are tokenized, normalized, version-aware, and AND-matched across candidates. Version tokens like `4.6` are handled carefully so they do not accidentally match `4.5`.
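That tokenize-then-AND-match behavior could be sketched roughly like this (hypothetical helper names and regexes, not the actual mmmodels source):

```typescript
// Sketch of version-aware, AND-matched search. Query and candidate are
// tokenized the same way; every query token must match some candidate token,
// and version-like tokens ("4.6") require an exact match so "4.6" can never
// match "4.5".
const VERSION = /^\d+(?:\.\d+)*$/;

function tokenize(s: string): string[] {
  // Lowercase, then split on anything that is not a letter, digit, or dot.
  return s.toLowerCase().split(/[^a-z0-9.]+/).filter(Boolean);
}

function tokenMatches(query: string, candidate: string): boolean {
  if (VERSION.test(query)) return candidate === query; // exact for versions
  return candidate.includes(query); // substring for plain words
}

function matches(query: string, candidateId: string): boolean {
  const qTokens = tokenize(query);
  const cTokens = tokenize(candidateId);
  // AND semantics: every query token must match at least one candidate token.
  return qTokens.every((q) => cTokens.some((c) => tokenMatches(q, c)));
}
```

With this shape, `matches("claude 4.6", "anthropic/claude-sonnet-4.5")` is false even though every non-version token matches, which is the behavior the post describes.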
Caching is simple on purpose:
- in-process cache
- disk cache in tmp
- network fetch from source
If the same model family appears from multiple providers, search prefers the default source for that family instead of returning an arbitrary duplicate first.
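The memory -> disk -> network flow with a disk fallback could be sketched like this (a toy in-memory stand-in for the tmp-dir cache, not the real mmmodels code):

```typescript
// Tiered cache sketch: memory, then disk, then network, with a stale-disk
// fallback when the network fetch fails. A Map stands in for files in tmp.
type Fetcher = () => string;

class TieredCache {
  private memory = new Map<string, string>();
  private disk = new Map<string, string>();

  get(key: string, fetchLive: Fetcher, sync = false): string {
    if (!sync) {
      // Normal mode: memory -> disk before touching the network.
      const hit = this.memory.get(key) ?? this.disk.get(key);
      if (hit !== undefined) {
        this.memory.set(key, hit); // promote disk hits into memory
        return hit;
      }
    }
    try {
      const fresh = fetchLive(); // network fetch (forced when sync is true)
      this.memory.set(key, fresh);
      this.disk.set(key, fresh);
      return fresh;
    } catch (err) {
      // Offline-friendly: fall back to any disk copy instead of hard-failing.
      const stale = this.disk.get(key);
      if (stale !== undefined) return stale;
      throw err;
    }
  }
}
```

The `sync` flag mirrors what `--sync`/`-s` is described as doing: skip the caches, fetch live, and only fall back to disk if the fetch fails.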
One subtle feature I like: provider-aware ranking.
Under the hood the search is custom-scored. PRs welcome.
What it does well:
- no-arg interactive TUI for browsing
- fuzzy search across model IDs, model names, and provider names
- filtering by provider, capabilities, and status
- explicit sorting and limiting with `--sort` and `--limit`
- agent-friendly output with `--fields`, `--ids-only`, `--ndjson`, and `--json`
- width-aware terminal tables that fail cleanly instead of overflowing
- `--plain` mode for scripts, CI, and remote boxes
- local disk cache with offline-friendly fallback behavior
A few examples:
$ mmmodels claude
$ mmmodels list --provider anthropic --table
$ mmmodels search gpt --provider openai --json
$ mmmodels search claude --fields id,provider_id,limit.context,cost.input
`mmmodels`: all you need
The cognitive load comes from issues like these: names drift, IDs collide across providers, and pricing and capability metadata changes constantly. The browser/tab workflow is too slow if you do this often (as someone building a frontier coding agent).
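The "fail cleanly instead of overflowing" table behavior could be sketched as a column-fitting pass (hypothetical shapes and names, not the actual renderer):

```typescript
// Sketch of width-aware column fitting: each column has a min and max width;
// shrink from max toward min to fit the terminal, and throw instead of
// wrapping if even the minimum widths do not fit.
interface Column {
  name: string;
  min: number;
  max: number;
}

function fitColumns(cols: Column[], termWidth: number, gap = 2): number[] {
  const gaps = gap * (cols.length - 1);
  const minTotal = cols.reduce((sum, c) => sum + c.min, 0) + gaps;
  if (minTotal > termWidth) {
    // Error instead of wrapping into unreadable soup.
    throw new Error(`table needs ${minTotal} columns, terminal has ${termWidth}`);
  }
  const widths = cols.map((c) => c.max);
  let total = widths.reduce((sum, w) => sum + w, 0) + gaps;
  // Shrink the column with the most slack, one cell at a time, until it fits.
  while (total > termWidth) {
    let widest = 0;
    for (let i = 1; i < cols.length; i++) {
      if (widths[i] - cols[i].min > widths[widest] - cols[widest].min) widest = i;
    }
    widths[widest]--;
    total--;
  }
  return widths;
}
```

The loop always terminates: while the table is too wide, at least one column still has slack above its minimum, so each iteration makes progress.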
Again built with Command Code, with my CLI taste.
I wanted one terminal-native place to answer questions like:
- what models exist (fuzzy search)
- who ships them at what price
- how much context they have
- what they cost (esp caching)
- which ones support tools, reasoning, files, or structured output
The data is all there. The workflow as a CLI was missing. Introducing mmmodels.
mmmodels is a CLI for browsing, filtering, and exploring AI models from hundreds of providers.
Built for both humans and agents.
$ npx mmmodels
AI use case: Code search
AI can instantly find functions, files, or patterns across huge codebases. It helps you jump straight to the code that matters.
How do you search through your codebase?
Does anyone have any cool blogs they follow? Updating my feed reader with new stuff, what should I add?
new cli-driven llm tools are looking more 'hardcore hacker' than ever, the programmer identity crisis means that these glorified chatbots have to be super butch
On March 12 at 1 pm ET, we're handing the mic to builders in our community to demo what they've shipped, share their tech stacks, and walk through the story behind the deploy.
⏰ Set a reminder and tune in: www.youtube.com/live/twtN2kW...
ICYMI: we rebuilt our docs!
- New design
- Better search & AI chat
- Cleaner navigation
- Versioned docs
Read about how we pulled it off without taking anything down 👇
pris.ly/b/docs-rebuild
Anything using SQLite on zfs has huge latency with writes: Atuin, package managers, browser history, etc.
There are some Atuin-focused fixes here: github.com/atuinsh/atui...
you're welcome! also, ants are her favourite food
AI Terminology #19: Quantization
↳ A technique that reduces the precision of a model's numbers (e.g., from 32-bit to 8-bit) to make it smaller, faster, and more efficient to run.
btw the turtle is called Hex
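The 32-bit-to-8-bit idea in that definition can be illustrated with a toy linear quantizer (a simple scale-only scheme for illustration, not any particular library's method):

```typescript
// Toy linear quantization: map floats into 8-bit integers using a single
// scale factor derived from the largest magnitude, then map back. Real
// schemes add zero-points, per-channel scales, clipping, etc.
function quantize(values: number[], bits = 8): { q: number[]; scale: number } {
  const maxAbs = Math.max(...values.map(Math.abs));
  const qmax = 2 ** (bits - 1) - 1; // 127 for int8
  const scale = maxAbs / qmax || 1; // guard against an all-zero input
  const q = values.map((v) => Math.round(v / scale)); // small ints now
  return { q, scale };
}

function dequantize(q: number[], scale: number): number[] {
  // Reconstruction is approximate: each value is off by at most scale / 2.
  return q.map((n) => n * scale);
}
```

Each stored number now fits in 8 bits instead of 32, at the cost of a small, bounded rounding error, which is exactly the smaller/faster trade the definition describes.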