(Text) Input, Meet (Text) Output
Rethinking User Interfaces in the Age of Large Language Models
Large language models (LLMs), and the chat interfaces through which we most often encounter them, have quickly become mainstays. If you use a computer, you probably use an LLM. If you’re a student, you might be “cheating” with one (though realistically, at this point, it’s barely considered cheating; it’s just a ‘tool’). If you have accessibility challenges, like me, you might find yourself dictating this introduction using voice-to-text, and when you’re done, you’ll run it through ChatGPT to clean up the punctuation before pasting it into Substack for final edits, including fishing out all the gibberish you didn’t say, the kind of hallucinatory insertions LLMs autocomplete on your behalf. (It’s probably time to return to my earlier workflow: dictate to Otter.ai, then refine in a proper editor.) However you may or may not use it, it’s all normal now.
And then there are the people who refuse to use them … and that’s a whole drama in and of itself. “Oh god, I would never,” they say. “Woah! I never expected you to,” others gasp. Which is to say: everyone (understandably) is negotiating their position in relation to these systems. (This tech is, after all, not a speculative fiction hovering proverbially around the corner; no, it’s here and it’s revolutionary right now because it’s right at your fingertips, and it can do so much, much more than it promises, even. In a way, you, the user, are revealing what it can do.) And that negotiation has become social, rhetorical, even existential. For example, some people I’ve talked to describe their experience with LLMs in language that is, frankly, spiritual; it’s as if they’ve discovered a new dimension of the meaning of everything through a machine. And maybe they have. (Maybe I have too. I am seduced by its hallucinatory insertions, as it were; I see them as provocations to my thinking, to my writing … or am I just kidding myself?)
So using chat in ways that extend my existing accessibility workarounds for my hypermobile joints, which can make typing taxing (voice-to-text, then a text editor), has become something alive for me. Something I use every day beyond accessibility (or, possibly, it has shown me how far my desire for accessibility extended, further than I had otherwise imagined). Something I recently realized I’m at the beginning of a lifelong relationship with. (And frankly, it’s that same aliveness I see even in the folks who naysay it. And for what it’s worth, I admire their stamina. It’s just that I didn’t get a MySpace in 2004, and I am forever regretful I didn’t jump on the train. And so: here I am, jumping on this LLM train.)
In any case, this realization surfaced during a text exchange with a friend when we were sharing our roses, thorns, and buds for that week, and my bud was:
Which, I suppose, is exactly what I’m documenting here.
As I mentioned in the first post of what I feel is bound to become a series, The Friction Myth, I’ve been critical of this tech by watching my students use it, watching my colleagues frame dissertations around it, and watching myself fall deeper into its provocations, which feel both generative and aggravating to my thinking and process.
I’ve been studying buildings since undergrad. What drew me to them wasn’t their static form, but their capacity to facilitate … to host, shape, and respond to the experience of the inhabitants. Through serendipity I have written about before, I stumbled across the term “ubiquitous computing” and came to understand that what I was really interested in wasn’t architecture in the traditional sense, but the computational layer that might allow built environments to behave more like systems. The imaginaries that excited me had always been the ones in which buildings did something: sensed, computed, adapted. I left design strategy consulting and got a master’s at the Technical University Delft, where I completed my thesis on UbiComp/ambient intelligence. Now, my doctoral research is in cyber-physical systems for future manufacturing. This has been my arc.
And what I now know is that many of the most iconic speculative visions of smart environments (the responsive building, the sentient home) were, at their core, predicated on a kind of ‘intelligence’ we didn’t yet have. The idea that a space could understand you, parse intent, hold context, reason semantically was always somewhat fictional. Until now. Because what LLMs make possible is exactly that: the underlying functionality we always assumed these environments would eventually include. And yet, instead of being embedded in space, they’re trapped in the narrowest possible format: a chat interface.
What (perhaps) might surprise you is that I didn’t start using LLMs because they aligned with my research. I started using them because I wasn’t going to miss MySpace twice. I saw my students using them. I don’t believe in abstinence-only education. I felt a professional responsibility to stay in relationship with these tools. But the more I’ve used them, the more I’ve realized: this is the missing layer. This is the ambient, generative, context-sensitive infrastructure that smart environments were always gesturing toward. And we’ve built it. It’s here. But we’re still talking to it through a chat box.
Right now, the dominant way people interact with LLMs is through chat. Text in, text out. The most widespread use case is built on a service metaphor — a help desk that gives you answers. And when we imagine what might come next, we reach for science fiction: the fully ambient, emotionally attuned assistants like Samantha (Her) or J.A.R.V.I.S. (Iron Man).
Given my background in ubiquitous computing and ambient cyber-physical systems, you might expect me to cheer on that trajectory — to argue that LLMs should be moving toward full ambience. The thing is, I don’t think that’s where the real design opportunity lies, at least not right now. That leap toward ambience skips over a critical space we haven’t even begun to explore: the graphical user interface (GUI).
We already have screens. We already have keyboards and mice. What we don’t have are graphical user interfaces, leveraging existing, ubiquitous (but not that kind of ubiquitous) hardware, that are meaningfully designed for what LLMs actually are: generative and probabilistic. The GUI we are currently using to engage them, the chat, reduces interaction to a linear prompt-response loop. Text in, text out. And that simplicity, while seductive, is shaping a culture of copy-paste dependency, which leads to shallow critical engagement and, in turn, to skill atrophy. In the context of academic production, that is, students completing assignments, I concede that the chat interface is at least partially responsible for the flattening of what could otherwise be a dialogic, iterative, interpretive process.
And in the relational use of LLMs, which is less my wheelhouse, I still see the interface as a problem. While the system is specifically not sentient or anthropomorphic, the chat interface frames language generation as conversation, reinforcing ideas of sentience, coherence, even identity, as if you’re speaking with a self. That illusion is powerful. And also wild.
What follows is a short provocation based on that in-between, what we call the ‘design space.’ A sketch of the interface landscape for LLMs. What we have, what we’ve imagined, and what we’re failing to invent. I look at four modes: the chat interface, the ambient assistant, the wearable device, and the missing GUI category no one seems to be exploring. This is not a product pitch, a prediction, or a polemic. It’s just a recognition: we are finally building the thing that ambient computing always gestured toward. There’s finally a tool to design for, and we have no novel interfaces. What’s up with that?
Chat as Interface
Let’s begin with what we already have. Chat is the dominant interface, the primary way most people interact with LLMs. Whether it’s ChatGPT, Claude, Gemini, or Perplexity, the affordances are consistent: black box UIs, prompt bars, output threads. The interaction is structured around a text field, a blinking cursor, a simulated back-and-forth.
These are reactive systems. They wait for you to prompt them. They answer questions, finish sentences, generate copy. They are powerful, but they’re also stuck inside interface metaphors borrowed from customer service and productivity software. We talk to them like help desks. We click “Regenerate response” instead of actually going back and exploring the logic we have co-produced; when it goes off track, we simply erase it. For all the model’s capacity for complexity management, the interface offers very little of it, right? The irony is that the models themselves are deeply contextual, but the chat UI collapses all of that into a single, constrained flow: input, output. In other words, the interface narrows the possibilities of use.
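To make the contrast concrete, here is a tiny sketch in code (my own toy illustration, not any vendor’s actual data model): a conversation kept as a tree, where “regenerate” adds a sibling branch beside the old answer instead of erasing it, so the logic we have co-produced stays explorable.

```python
# A minimal sketch of a chat history kept as a tree rather than a single
# thread: regenerating adds an alternative branch instead of overwriting.
# (Illustrative only; not how any current chat product stores conversations.)
from dataclasses import dataclass, field
from typing import List

@dataclass
class Turn:
    role: str                                        # "user" or "model"
    text: str
    children: List["Turn"] = field(default_factory=list)  # alternative continuations

    def branch(self, role: str, text: str) -> "Turn":
        """Add a new continuation without discarding the existing ones."""
        child = Turn(role, text)
        self.children.append(child)
        return child

def show(turn: Turn, depth: int = 0) -> None:
    """Render the whole tree, so abandoned branches remain visible."""
    print("  " * depth + f"[{turn.role}] {turn.text}")
    for child in turn.children:
        show(child, depth + 1)

# Usage: two "regenerations" of the same prompt live side by side.
root = Turn("user", "Explain my argument back to me.")
root.branch("model", "Draft A: your argument hinges on accessibility.")
root.branch("model", "Draft B: your argument hinges on interface metaphors.")
show(root)
```

Nothing about current hardware prevents this; it’s a choice of interface structure, not of model capability.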
“Ambient Intelligence”
When people try to imagine something beyond chat, they usually leap straight to sci-fi. Ambient systems. Smart assistants. Think: Samantha from Her, J.A.R.V.I.S. from Iron Man, HAL from 2001: A Space Odyssey.
These figures don’t just answer, they listen, they know. They remember your schedule, anticipate your needs, whisper poetry in your ear. They are embedded in context, and they feel alive.


But the ambient interfaces we actually have (Alexa, Google Home) are far more modest. Clinical, even. They respond to trigger words, carry out commands, play the weather. They don’t really converse. They don’t really remember. They’re not really ambient in the existential sense. They’re just low-resolution endpoints for narrow interaction flows.
So while the sci-fi imaginary suggests intimacy, fluidity, seamlessness, the reality is static, brittle, and heavily constrained. In practice, ambient assistants have been largely flattened into a glorified voice remote.
Wearable / Handheld Devices
In steps the emerging category in this space: devices undergirded by LLMs, tangible interfaces. Devices like the Humane AI Pin, the Rabbit R1, or the rumored OpenAI pendant. These products gesture toward something else: a kind of embodied intelligence that lives with you. Always on, always ready. No screen (or a very minimal one). Microphones, cameras, projectors. You tap, it listens.
There’s something conceptually interesting about this direction. Experimenting with form factors, moving beyond screens, that’s not inherently wrong. Exploring tangibility as a modality for engaging with LLMs could open up real use cases. But the deeper issue isn’t the hardware itself; it’s who’s building it, and why.
Let’s be honest, the world of LLMs and how they might be realized through novel interfaces is a billionaire-class echo chamber. A circle of former Apple execs and hardware visionaries raised on the myth of the iPhone, flush with VC capital, chasing ‘the next big thing’ in a product design landscape that no longer resembles the one they won in. These aren’t design experiments (which we love); they’re speculative wagers dressed up as inevitabilities (which we are more critical of). These bets are being made far above the level of ground-truth context.
The people placing these bets are not designing from a place of need or social embeddedness. They’re building from inside a completely different reality, one shaped by (newfound) generational wealth, which creates distance from the rest of us normies and allows for the unfettered maintenance of a cinematic fantasy of what the future should feel like, as set forth in mid-twentieth-century sci-fi imaginaries. Their imagined future is sleek and frictionless, of course! And quietly authoritarian. These devices aren’t shaped by the complexity of actual life with tools. They’re shaped by a desire to fulfill a sci-fi narrative arc that was never rooted in ‘tech for good’ to begin with.
Additionally, the leap they’re trying to make is far too wide. The iPhone succeeded not by inventing mobile computing from scratch (lol), but by building on decades of prior hardware development, filling in a shape the world already had. These new wearable LLM interfaces are trying to conjure a market that doesn’t yet exist, or at least, not in the way they imagine it.
To be clear: I’m not saying tangible LLM interaction is a dead end. Rather, I am saying that these are niche bets floating in speculative capital. And they’re receiving wildly disproportionate attention relative to the actual site of mass adoption: chat.
People are using LLMs in chat every day. And yet we’ve dropped a generative model into the hands of the public, and left the GUI in 2013. #yolo
So yes, wearables may be part of the long-term future. But they’re not the frontier I’m interested in. The real design space is still the screen.
What the GUI Leaves Behind
So we return to the screen. To the graphical user interface. Disappointingly, despite all this energy and possibility, we haven’t seen much innovation in the GUI itself.
LLMs are a new kind of tool, but the way we interact with them still assumes otherwise. This is not because the GUI is incapable; quite the opposite. It’s because we haven’t asked the GUI to do anything different. Instead, we should be experimenting with questions like:
What are the visual metaphors for probabilistic language?
How could interfaces let you trace or tune a model’s logic while it is producing output?
How might temporality, multimodality, or spatiality be leveraged to inform the GUI?
We could be designing interfaces that make uncertainty legible. That show a model’s reasoning paths, or invite counter prompts in real time. Interfaces that unfold like [fill in metaphor], or [fill in verb] with the user over time. (These fill-in-the-blanks are left open intentionally, because the metaphor has not yet been locked in, and you, dear reader, may very well have an idea that shakes things up.) Interfaces that don’t collapse language into a magic trick, but reveal it as a living, mutable space of possibility.
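To ground “making uncertainty legible” in something more than metaphor, here is a toy sketch: rendering the alternatives a model weighed at a single point, rather than one flattened answer. The candidate words and probabilities below are invented for illustration; a real interface would read them from the model’s token probabilities.

```python
# A toy rendering of "legible uncertainty": each candidate continuation gets a
# bar proportional to its probability, so the reader sees a distribution, not
# a verdict. Numbers here are made up for the sake of the sketch.
def render_alternatives(prefix: str, candidates: dict[str, float], width: int = 20) -> None:
    """Print each candidate next word with a bar scaled to its probability."""
    print(f"…{prefix}")
    for word, prob in sorted(candidates.items(), key=lambda kv: -kv[1]):
        bar = "█" * round(prob * width)
        print(f"  {word:<12} {bar} {prob:.0%}")

# Hypothetical next-word distribution after the phrase "The interface should feel"
render_alternatives(
    "The interface should feel",
    {"alive": 0.41, "ambient": 0.27, "familiar": 0.19, "invisible": 0.13},
)
```

Even something this crude changes the reading posture: you’re scanning a space of possibilities instead of receiving a single answer.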
Instead, we have chat.
This is not to say we won’t build those things. It’s to say: we haven’t yet.
And that gap is where I am keeping my eye.
With that, have fun in the undefined design space,
"People are using LLMs in chat every day. And yet we’ve dropped a generative model into the hands of the public, and left the GUI in 2013. #yolo"
I adore your brain Caseysimone!