Your computer is your own world. It’s a digital home, and its structure, content and aesthetics reflect the way we inhabit it. Or at least… it was meant to be that way. We have fallen far from the early 2000s promises of “Personal Computing” and, with each year, we are pushed further towards renting our lives in totality.

Finally, a computer for me!

That has to change. A home should take on the shape of your life through the dialogue between you and your (shared) environment, on your terms. You can glimpse the potential if you browse your old downloads, inbox or camera roll: you’ll see the fragmented and obfuscated story of your life in the audit trail. It’s all there, but it’s in the wrong shape. I’ve spent at least a decade trying to use the computer to understand myself and met increasing resistance along the way.

As a kid, I was never a “CLI-first” developer but… over the years I’ve been forced to learn many arcane aspects of software development just to do what I actually want. At least in the terminal I have a home directory and my environment to explore my projects. What was once a place of confusion and apprehension (should I fuck something up irreparably) eventually became the default: it stays out of my way[^1] and, until recently, it was the main way to engage in dialogue with my computer[^2].

The terminal is one of the only places software composes[^3]. Every result can be saved, piped, branched. My shell is mine[^4]. And now an LLM can smooth the parts that used to require rote memorization while I learn new grammar through dialogue.


Modelling Reality

For developers, this works. But what about personal computing in general? Developers know software is made up of programs, instructions, protocols, clients, servers, databases, files, queues and so on, and our tooling reflects the way we think about our systems.

But what about our lives? To be honest, I think you only need three concepts: people, places, and things[^5].

Things are fractal: a thing made of more things. A recipe contains ingredients. A project contains tasks. A memory contains moments. Similarly, places contain other places. A home has rooms, a company has teams, a folder has folders. People belong to places, own things, share both.
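If that sounds hand-wavy, here’s roughly how little it takes to write down. This is a sketch only; the names and fields are mine, purely illustrative, not a schema for any real system:

```typescript
// Purely illustrative: one possible shape for the three primitives.

interface Person {
  id: string;
  name: string;
  memberOf: Place[];    // people belong to places
  owns: Thing[];        // people own things
}

interface Place {
  id: string;
  name: string;
  places: Place[];      // places contain other places: a home has rooms
  things: Thing[];      // ...and the things kept there
}

interface Thing {
  id: string;
  name: string;
  parts: Thing[];       // things are fractal: a recipe contains ingredients
  sharedWith: Person[]; // people share things
}
```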

With creative use of these primitives almost anything is expressible... but in practice I don’t really see these concepts on my computer. I see applications, files, folders, emails and websites instead. Predetermined endpoints.

The problem with files, databases, apps, silos etc. is that they ask you to determine the structure before you have any way of knowing what it should be. You have to declare your schema, choose your folder name & hierarchy, pick your app, commit to a shape and a subscription plan. And then once you’ve made that choice, there’s friction to changing it. The system ossifies around your first guess.

Real dialogue is iterative. Every interaction builds understanding: yours of the system, the system’s of you. People don’t know what they mean until they’ve said it wrong a few times. Structure should emerge from use, not precede it. Instead we have optimised for upfront convenience (and long-term frustration).

Everything is a silo. Your emails in Gmail, your docs in Drive, your notes in Notion, your recipes screenshotted into your camera roll. To the computer, a booking confirmation is the same “kind of thing” as a cherished memory or a shopping list. They’re flattened into formats that don’t know what they mean. A recipe isn’t a PDF. A memory isn’t a note. These things have structure, relationships, ways they want to connect.

To see all the ways the puzzle fits together, you have to let go of knowing the answer (me, 2025)

Ignorant Design

If you build software this should unsettle you.

The way we’ve designed for the past two decades: feature specs, user stories, backlogs, predetermined flows (created by “experts”)... It’s an approach that assumes we know in advance what users need. It leads to a smug attitude of believing you know “what is best” for them. You work backwards from constraints. You railroad. You decide what’s possible and ship it. Users learn your system or they leave. We let ourselves become intentionally ignorant of the details to make the problem easier to work on.

Let me be clear: I have always hated this approach[^6], but, with LLMs in the picture it’s clearly a dead end.

To realise Personal Computing we must enable users to build structure themselves, share it and iterate upon it. While rare, successful “vibecoding” means someone who understands their problem deeply enough can now assemble their own solution, provided the medium underneath supports them. So, while the LLM can help translate our ideas into technical solutions, we’re still missing a substrate that can keep up with open-ended thought.

This is really hard[^7]… but mostly because we start from the wrong end: creating a perfect logic model. Human thought is messy, multi-modal and requires equal parts sketching (think Figjam), research (think Notion), experimentation (think After Effects) and analysis (think Excel). Today, in software, there are hard disjunctions and barriers between these modes.

When we invent new technical primitives (operating systems, databases, CRDTs, dynamic views) and then try to retrofit human thought onto them... it never works. The tools stay powerful but alien, or they get simplified into something that loses the power entirely.

So the research question becomes: “what does the imagination actually need in order to build a world?” Our technical primitives must afford that ability. Technology is a byproduct of humans doing what matters to them, not something invented first and justified later.

Stop designing abstract systems for abstract people. Start from the story the user is trying to tell themselves about their own reality. The characters, the places, the things that matter to them. Then ask what the system needs to express in order to support that story as it evolves... without knowing it in advance.

Lack of clarity is a feature (artwork, me, 2025)

Beginner’s Mind

LLMs are no silver bullet but they do enable something useful: fuzziness in the dialogue. Now, we can gesture toward intent and watch structure stochastically emerge. You can say “group these somehow” and negotiate what “somehow” means. You can be vague and get somewhere[^8].

This is powerful but only if the substrate can hold what emerges (and insulate you from footguns). You need fractional subdivision (things that break into smaller things without losing coherence). You need derivative definition (the ability to say “this is like that, but different in these ways”) without copying everything. You need to understand who can see what, when, and where. Yes, for security, for privacy, but also for coherence. For the story to make sense.
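To make those three requirements a bit more concrete, here’s one naive way they could be written down. Again, a sketch under my own assumptions, not a proposal for what the substrate should actually look like:

```typescript
// A sketch only; none of these names refer to a real system or API.

type Audience = "just-me" | "household" | "team" | "public";

interface Item {
  id: string;
  kind: string;                    // "recipe", "memory", "project"...
  fields: Record<string, unknown>;
  children: Item[];                // subdivision: smaller things that keep their coherence inside the whole
  visibleTo: Audience;             // who can see what, when, and where
}

// Derivative definition: point at the original and record only the differences,
// instead of copying everything.
interface Derivation {
  baseId: string;
  overrides: Record<string, unknown>;
}

function derive(base: Item, overrides: Record<string, unknown>): Derivation {
  return { baseId: base.id, overrides };
}
```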

The real value LLMs can bring: they can tell the story of your information system in your terms. The characters, the objects, the places you’re already familiar with can be rolled up into a narrative about what’s happening. What changed. What matters.

A world generates a story. And world-building is what the computer is for. Not in the fantasy sense, but in the practical one. Running your business is world-building. Raising a family is world-building. Any creative project, any logistical challenge, any attempt to understand your own life. All of it is the same process: making sense of people, places, and things as they change over time.

I have been slamming my head into this design problem for years and I finally see what stands between the computer and the unbounded imagination: the courage to not know what’s best for the user.

✌️ Ben

Things I’ve been thinking about

[^1] …and it’s not using an entire web browser just to render a text editor view

[^2] Yes, this extends to any REPL

[^3] Yes, HTTP technically composes, but have you tried using an OAuth REST API lately?

[^4] And it gets really good!

[^5] Before you @ me, I have spent my entire career obsessed with data modelling; I know it’s not easy

[^6] Further: this is not even design in my view, design is for beings not flowcharts

[^7] Seriously, I would say we have all collectively failed at this in the Tools for Thought space

[^8] Is it somewhere good? That’s a further discussion