LLMs Redefined: Local Language Models

We’ve been hearing that the cloud is the future of everything for over a decade now. I disagree. Even when I set up my Windows 11 workstation — I know, yikes, right? — I created a local account. I had no interest in syncing everything with Microsoft’s servers. I’ve always been able to see things coming. Ever since I placed the order for that PC, I knew eventually Windows 11 would eat its own tail. That was three years ago, and boy am I happy that I relied solely on my gut. I trusted my inner knowing. Now there’s no option to create a local account when you set up a new PC. I unplugged that computer a couple of months ago. Until Microsoft fixes Windows 11, it will remain in cold storage. Then again, there’s always Linux, if necessary.

Luckily I’m a Mac guy at heart. I have five other computers — all Macs: a 1999 iMac, a 2010 iMac, a 2020 iMac, a 2009 MacBook Pro, and a 2023 M3 Pro MacBook Pro. I also own my Adobe, Maxon, and Red Giant software licenses, so no subscriptions are necessary. I spent two years slowly migrating to Affinity, replacing Photoshop, Illustrator, and InDesign, and once again my foresight was right. When given the chance to purchase a perpetual license for Cinema 4D, I jumped at it. It’s version R21, but it’s good enough. Plus, Blender is far more capable. My design workflows don’t require the latest ongoing alpha/beta updates. I’m a fan of stability, not the latest dog and pony show. I still sketch on napkins.

So when it comes to my stance on Large Language Models, I have to agree that going local, non-cloud, is the future. Apple has the fastest processors and buses, plus incredibly well-designed frameworks and infrastructure built into each product. I’m old school, and that means I support local LLMs: Local Language Models. I own my data, and no prying eyes or chatbots have access to my most precious secrets. They’re still spinning their wheels trying to understand what I do share. Nvidia pushes power-hungry GPU server farms to the masses, gobbling up our power grids, while Apple is quietly making strides in the LLM space. I want my data local, and that means no cloud AI for me. When the time comes, I’ll build out my own cluster to train my localized language model. I’ve gone nearly off-grid, leaving all of the distractions and shiny things behind. I sleep well at night now, too. I guess I could call out “eureka!” The Fountain of Youth was always living inside me.

“Everyone I meet thinks I’m ten years younger. I’m a reformed narcissist, now a deeply spiritual empath. How does this relate to LLMs? In every way, because what we plant and harvest is not only how we feed ourselves, but what we feed our souls. There is light out there. All we have to do is take a moment, close our eyes, and breathe… the rest will come in due time. Slow down, wash your dishes by hand, and celebrate the moment.”