2025-12-01//LOG
M4 Max: First Impressions After Real Work
I've had the M4 Max MacBook Pro for about two weeks now. This is not a review. I don't care about Geekbench scores. I care about what changes when your machine stops being the bottleneck.
Here's what actually happened.
First thing I did was clone TOPO Contabil: 42 NestJS modules across a monorepo. On my old machine, a cold start with full compilation took about 90 seconds. On the M4 Max, it takes 23 seconds. That's not an incremental improvement. That's a category change. The difference between "I'll check my phone while it compiles" and "oh, it's done."
Docker builds are where things get absurd. Multi-stage builds for our production images went from 4+ minutes to under 70 seconds. I rebuilt everything three times just to make sure I wasn't hallucinating. The unified memory architecture means Docker isn't fighting for RAM anymore. I gave it 24GB and the system didn't even flinch.
The fan. Let me talk about the fan. I ran our entire test suite, 42 modules, all specs in parallel, while Docker was building in another terminal, while Claude Code was indexing the codebase. The fan rose to MAYBE a whisper. On my Intel MacBook Pro this would have sounded like a jet engine preparing for takeoff, and then the machine would have thermally throttled itself into uselessness.
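That "everything at once" load is just concurrent child processes. A toy sketch of the idea in Node/TypeScript, with `echo` commands standing in for the real test, build, and indexing commands (the names here are placeholders, not our actual scripts):

```typescript
import { exec } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(exec);

// Launch every command at once and wait for all of them to finish.
// Promise.all preserves input order and rejects on the first failure.
async function runAll(commands: string[]): Promise<string[]> {
  const results = await Promise.all(commands.map((cmd) => run(cmd)));
  return results.map((r) => r.stdout.trim());
}

// Placeholders: swap in `npm test`, `docker build ...`, etc.
runAll(["echo tests", "echo build", "echo index"]).then((out) =>
  console.log(out.join(", "))
);
```

On a machine with headroom, the wall-clock time of `runAll` is roughly the slowest command, not the sum.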
Claude Code specifically benefits from this. Running an LLM-powered coding assistant means constant context processing, file indexing, large prompt construction. The M4 Max handles this without any perceptible lag. Responses feel snappier, not because the API is faster, but because the local processing around it is instant.
One thing that surprised me: compilation isn't the bottleneck anymore, so now I notice OTHER bottlenecks I never saw before. Network latency to our staging environment. Slow database seeds. That one integration test that takes 8 seconds because it actually hits an external API (shame on us). When the machine is fast enough, you finally see where the REAL slowness lives.
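That 8-second test is exactly the kind of thing a fast machine exposes. A hedged sketch of the usual fix, with hypothetical names (`ExchangeRateClient`, `convert`) since this isn't our real code: put the external call behind an interface and inject an instant stub in tests, so the suite never touches the network.

```typescript
// Hypothetical: hide the external API behind an interface so tests
// can inject a stub instead of making a real HTTP round trip.
interface ExchangeRateClient {
  getRate(from: string, to: string): Promise<number>;
}

// In production this would wrap the real API call (omitted here).
// In tests, a stub resolves immediately:
const stubClient: ExchangeRateClient = {
  getRate: async (_from, _to) => 5, // fixed rate, zero network latency
};

async function convert(
  amount: number,
  client: ExchangeRateClient
): Promise<number> {
  const rate = await client.getRate("USD", "BRL");
  return amount * rate;
}

convert(10, stubClient).then((total) => console.log(total)); // prints 50
```

The 8 seconds disappear, and the real API gets exercised once, in a dedicated integration job, instead of on every test run.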
The 40-core GPU is overkill for development. I don't train models locally. But having 128GB of unified memory means I can run our entire infrastructure stack in Docker, have 30 browser tabs open in Arc, run VS Code AND a JetBrains IDE simultaneously, and still have headroom. That matters.
What I didn't expect: it changed my workflow. I used to batch tasks because context switching had a compile-time cost. Now I jump between modules freely. I run tests more often because the feedback loop is near instant. I experiment more because rebuilding is cheap.
Is it worth the price? If you write code for a living and you're on anything older than an M2, yes. Not because it's faster on benchmarks. Because it removes friction you didn't know was slowing you down.
The best tool is the one you don't think about. The M4 Max is the first machine where I forgot about the machine.