
The Future of Personal Computing

I've been thinking a lot about the future of personal computing. By "personal computing", I'm broadly referring to any computing device that we can carry or wear on our body. This mainly includes cellphones and wearables.

I feel confident predicting that it will look very different in the near future than it has over the last 20 years. I think this for a few reasons, some obvious and some more subtle.

A Second Brain

Until recently, our brains acted as the main reasoning engine, breaking down high-level goals into discrete, imperative commands that we would then mechanically execute on our devices.

This cognitive overhead consumed significant mental energy and time. For any task we wanted to complete, we humans did the majority of the work, not in physical execution, but in mental processing.

If you don't believe me, consider how often you procrastinate on "easy" tasks like sending a text message, writing an email, ordering something on Amazon, or making a phone call. These tasks typically take just minutes to execute, yet we delay them constantly. The mechanical execution of typing on a keyboard or tapping a screen isn't what we're avoiding. It's the mental work of translating our high-level intentions into specific actions.

We procrastinate sending that email because we don't want to figure out how to word it. We delay buying that shirt on Amazon because we're not sure exactly what we want and don't want to search through options, compare colors, and select the right size.

But this has changed.

With AI, we can now offload this cognitive burden to a second reasoning engine. AI can word that text message or email for you and send it. An agent can shop for you and, using your shopping history, automatically find the right size and color. A simple voice command can instruct a device to take a picture for you.
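To make that offloading concrete, here is a rough sketch in Python. Everything in it is hypothetical: llm_complete and send_message are stand-ins I made up, not calls to any real product or library. The point is only the shape of the interaction: the human supplies a single sentence of intent, and the second reasoning engine handles the wording and the execution.

```python
# Hypothetical sketch of "intent in, action out".
# llm_complete() stands in for any language-model API and
# send_message() for the device's messaging action; neither
# name refers to a real library.

def llm_complete(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned draft here."""
    return "Hey Sam, I'm running about ten minutes late for dinner. See you soon!"

def send_message(recipient: str, body: str) -> None:
    """Stand-in for the device actually sending the message."""
    print(f"To {recipient}: {body}")

def offload(intent: str, recipient: str) -> None:
    # The human states the goal; the reasoning engine does the wording.
    draft = llm_complete(
        f"Write a short, friendly text to {recipient} that says: {intent}"
    )
    send_message(recipient, draft)

# The user's entire contribution is one sentence of intent:
offload("I'm running ten minutes late for dinner", "Sam")
```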

When we spend less time doing things on our phones, we can spend more time with our friends and families doing the things that we love.

We can be present again.

Technology Fades Away

While companies, app developers, and advertisers are furiously trying to keep users glued to their devices, user preferences are starting to change. Nearly 6 in 10 U.S. adults say they use their smartphone “too much,” up from 39% in 2015 to roughly 58% in more recent surveys. About 38% of teens say they spend too much time on their phone, and a similar share (roughly 36-39%) say they have tried, or are trying, to reduce their time on their phone or social media.

This growing awareness of “too much screen time” isn’t just a passing frustration; it’s the start of a cultural shift. As people become more conscious of the tradeoffs between constant connectivity and well-being, they’re increasingly open to new ways of interacting with technology that don’t require staring at a glowing rectangle all day. At the same time, AI systems are becoming more natural, responsive, and capable, moving from clunky assistants to fluid collaborators. These two forces reinforce each other: the desire to spend less time on phones creates demand for alternatives, and rapid improvements in AI supply exactly that alternative.

In other words, as AI grows more useful, it reduces the need for our phones to be the center of our digital lives. Why tap through screens and apps when a simple voice command or ambient interaction can handle the task faster and with less friction? Over time, this shift will compound. Every improvement in the models makes voice-first and ambient computing more attractive, which pulls people further away from habitual smartphone use. The cycle feeds itself: less reliance on phones drives more reliance on AI, and more reliance on AI accelerates the move away from screens.

As a result, I believe our technology will become more ambient. It will fade away from the spotlight into the human form factor. We will no longer bow our heads to our smartphones but command them, or more precisely, command the intelligence behind them, to do our bidding.

The Rise of Human Technology

So what’s next? I believe the future of personal computing will be distributed across our peripheral wearables. Instead of a single glowing slab in our pocket, we’ll interact with technology through a constellation of devices: a watch, an audio interface, and a vision device. Together, this trifecta forms a seamless interface with the digital world. Glasses let us see texts, maps, messages and more, overlaid in AR/VR. Earbuds give us always-on audio for calls and ambient information. Watches track our biomechanics, time, and health.

But that’s just the beginning. With advancements in brain–computer interfaces, we’ll be able to control technology with thought alone. EEG sensors can already decode simple brain signals into basic commands. Imagine thinking of making a call: your wearable interprets the intent, asks who you want to connect with, you picture the person’s face or name, and the system completes the action, all without lifting a finger or speaking a word.
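As a thought experiment, that interaction loop might look something like the sketch below. None of these functions correspond to a real BCI or wearable API; read_eeg_intent, ask, imagine_contact, and place_call are invented names that only illustrate the shape of the pipeline: decode intent, clarify, then act.

```python
# Hypothetical sketch of a thought-driven "make a call" pipeline.
# None of these functions map to a real BCI or wearable API; they
# only illustrate the intent -> clarification -> action flow.

def read_eeg_intent() -> str:
    """Pretend decoder: returns a coarse intent decoded from EEG."""
    return "make_call"

def ask(question: str) -> None:
    """The wearable asks a follow-up, e.g. spoken through an earbud."""
    print(f"Wearable: {question}")

def imagine_contact() -> str:
    """Pretend decoder: the user pictures a person; we decode a name."""
    return "Mom"

def place_call(contact: str) -> None:
    print(f"Calling {contact}...")

intent = read_eeg_intent()               # 1. decode the high-level intent
if intent == "make_call":
    ask("Who would you like to call?")   # 2. clarify the missing detail
    contact = imagine_contact()          # 3. decode the imagined contact
    place_call(contact)                  # 4. complete the action, hands-free
```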

So why cling to a screen at all? Over time, we won’t need one, aside from a few transitional use cases. The smartphone as we know it will fade, replaced by a compute and battery hub that powers our wearables. Interaction will move from something we hold and look at to something ambient, distributed, and woven directly into the human form factor.
