The Future Mark Zuckerberg Is Trying To Build

Julian Mercer
Senior Fashion Correspondent
13 min read
Mark Zuckerberg has spent the last decade quietly working on something that, if it lands the way he's betting it will, could reshape how nearly half the human population experiences reality. Not metaphorically; literally. The future Mark Zuckerberg is trying to build involves holographic glasses you wear like a normal pair of frames, AI that knows what you're looking at and can respond to what you're thinking about, and a version of social connection that doesn't require you to be in the same room, or even on the same continent, as the people you love.

Whether that future excites or unsettles you probably says something about where you sit on the optimism-about-technology spectrum. But it's worth understanding what he's actually building, not the press-release version and not the skeptic's caricature, before deciding what you think of it.

Ten Years of Work Inside a Pair of Glasses

The thing Zuckerberg is most visibly proud of right now is a prototype called Orion: full holographic augmented-reality glasses. Not a headset, not a heads-up display with a narrow sliver of information, but actual glasses that can place full holograms into the physical world with a wide field of view. Meta has made only a few thousand of them. They are extraordinarily difficult to manufacture, and they are not yet for sale.

Inside those frames is a genuinely impressive stack of technology: micro-projectors that shoot light into waveguides etched with nano-scale patterns that catch and render holographic images, eye-tracking cameras that illuminate your eyes to sync what you're seeing with where you're looking, dual displays that have to stay synchronized as your head moves, radios communicating with external compute, speakers, microphones, environmental sensors, and a wrist-based neural interface that lets you control the device with subtle hand gestures. Getting all of that into something that looks like glasses, not a ski helmet, is the engineering problem Zuckerberg told his team he wanted to solve roughly a decade ago, when many of them weren't sure it was possible.

The demo he described involved playing ping pong against a holographic opponent and feeling the ball hit the paddle through haptic feedback, convincingly enough that at the end he dropped the virtual paddle on the virtual table and watched it shatter, which he took as a sign the illusion was working. That's the bar he's chasing: not just images that look right, but physics that feel right, in a form factor people will actually wear.

Orion is a prototype, and Zuckerberg is clear that it's a stepping stone toward a consumer version. But he's also stopped thinking about this as a single product. His current view is that the market will settle into several permanent product lines: display-less AI glasses like the Ray-Ban Metas (which have sold well and are getting a real-time translation feature), a mid-tier heads-up display with a narrower field of view for text and directions, full holographic AR glasses like Orion at the premium end, and traditional mixed-reality headsets like the Quest 3S, now priced at $299, for people who want maximum immersion without the miniaturization constraint. He's not trying to collapse those into one device. He thinks people will choose based on what they need, the same way they choose between a phone, a laptop, and a tablet.

Presence Is the Product

Strip away the hardware specs and Zuckerberg's pitch comes down to two things: presence and personalized intelligence.

On presence, his argument is straightforward and, honestly, hard to dismiss. The reason people have such a strong reaction the first time they try a convincing VR or mixed-reality experience isn't the graphics; it's that they feel, for the first time through a screen, like they're actually with someone. That's different from a video call. It's different from texting. It activates something older and deeper in how humans process social connection. Zuckerberg has spent twenty years designing social apps, and he describes the sense of presence as the "Holy Grail" of that work: the thing every social platform has been approximating without quite reaching.

The example he keeps coming back to is a simple one: imagine playing a board game with your sister who lives across the country, and it actually feeling like she's in the room. Not a video window of her, but her, as a hologram, across a holographic board, with the physics of the game behaving like a real game. He doesn't think this replaces time you'd otherwise spend with people in person. His read on the data, and it's a fair read, is that most people already have fewer social connections than they want, so the question isn't whether digital presence will crowd out physical connection, but whether it can fill a gap that's currently just empty.

The data on that gap is worth sitting with. In the American Time Use Survey, the amount of time American adults spend socializing in person has dropped by nearly 30% over the last two decades. For people aged 15 to 24, the Surgeon General's office puts that decline at close to 70%. The share of Americans who report having no close friends at all has jumped from 3% to 12% over the last thirty years. Zuckerberg's honest answer to why is that a lot of it predates social media and smartphones: it's economic, it's geographic, it's the erosion of what sociologists call social capital. His point is that digital presence technology isn't the cause of that trend, and it might be one of the tools that helps reverse it.

Touch is the honest limitation here, and Zuckerberg doesn't oversell it. Haptics for hands is tractable: you can simulate the feeling of catching something or the impact of a ping pong ball. But full-body force feedback, the kind you'd need to make a virtual hug feel like a hug, is a genuinely hard problem. He mentioned smell as another frontier that's probably not coming in any near-term device. His framing is that presence isn't a binary you either achieve or you don't; it's a spectrum, and the technology is moving along it, with some dimensions (eye contact, spatial audio, realistic avatars) arriving much sooner than others (haptic force feedback, olfactory simulation).

The AI Layer: Personalized Intelligence at Scale

The second pillar is AI, and this is where Zuckerberg's vision gets both more compelling and more complicated to think through.

His argument for why glasses are the ideal AI form factor is straightforward: glasses sit on your face, they see what you see, they hear what you hear. Those are the two primary senses through which humans take in context about the world. An AI with access to that stream of information (what you're looking at, who you're talking to, what's happening around you) can be genuinely personalized in a way that a chatbot on your phone cannot. The phone version of AI knows what you type. The glasses version knows what you experience.

Where this gets interesting, and where I think the conversation deserves more scrutiny than it usually gets, is in the question of what that personalization actually means for the AI-mediated social media environment most people already live in. Zuckerberg notes that social media has already shifted significantly: at least half of what people see on platforms like Instagram and Facebook now comes from creators or algorithmic recommendations rather than people they personally know. AI will accelerate that shift, he says, through better creator tools, AI-generated summaries and content, and eventually AI versions of creators themselves, digital artifacts that fans can interact with when the actual creator isn't available.

He's not wrong that this is coming. He's also not particularly troubled by it, in a way that I think he probably should be. The question of what it means for human communication when a meaningful share of the "people" you interact with online are AI proxies, even well-labeled, consensually built ones, is not one that has a comfortable answer yet. I couldn't find any serious research on what long-term exposure to AI social proxies does to people's expectations of real relationships, and I suspect nobody has that data because the phenomenon is too new. That's a knowledge gap that matters, and it's one where the industry is moving faster than the science.

The Struggle Question

One of the more genuinely interesting threads in Zuckerberg's thinking is what you might call the struggle question: when does removing friction from a human activity make life better, and when does it remove something essential?

He's thoughtful about this. His answer on coding is that teaching kids to code is still worth doing even in a world where AI writes most software, because coding teaches a rigorous way of thinking, and that cognitive habit has value independent of whether you ever manually write a for-loop again. He draws an analogy to calculators: you still want kids to develop numeracy, not because they'll need to do long division by hand, but because the underlying sense of how numbers behave is what lets you catch errors and evaluate arguments.

On language learning, he's more genuinely uncertain, which is refreshing. He can see the argument that learning another language teaches you something about your own, about culture, about the structure of thought. But he also acknowledges that with real-time translation now available on the Ray-Ban Metas, the functional case for language learning is weakening, and people only have so many hours. He doesn't pretend to have resolved that tension.

His broader answer, that we'll always find new things to struggle with, that human creativity and ambition tend to expand to fill whatever tools are available, is probably true. But it's also the kind of answer that's hard to falsify, which makes it worth holding with some skepticism. History does support it, mostly. Whether it holds when the tools are as general-purpose as large language models is genuinely unknown.

The Open Source Bet, and What It Actually Means

Zuckerberg's decision to make Meta's Llama models open source is probably the most consequential strategic choice he's made in AI, and it's one that puts him in direct philosophical opposition to OpenAI and Google. His argument is that a future where one or two AI systems dominate everything is worse, for users, for safety, for innovation, than a future where thousands of developers build specialized AI systems on top of a shared open foundation, the same way the web produced a diversity of applications rather than one monolithic platform.

The safety argument for open source is counterintuitive but has historical support. In traditional software, open source projects have generally proven more secure over time, not less, because more eyes on the code means vulnerabilities get found and patched faster than they would in a closed system where only internal teams are looking. Zuckerberg applies the same logic to AI models: Llama 3, Llama 3.1, and Llama 3.2 iterate quickly in part because the broader developer community is stress-testing them. He's betting Llama 4 and 5 will follow the same pattern, with training runs scaling from tens of thousands of GPUs to hundreds of thousands.

Whether that logic holds as models become dramatically more capable is the part of the debate he acknowledges but doesn't fully resolve. The critics of open-sourcing frontier AI aren't primarily worried about bugs; they're worried about misuse at a level of capability that doesn't have a software-security analogy. Zuckerberg's answer is essentially that the benefits of distributed scrutiny outweigh the risks of distributed access, and that history supports him. That's a defensible position. It's also a position that's easier to hold when your company benefits commercially from being the open-source provider.

The Scaling Question Nobody Can Answer

The biggest open question Zuckerberg keeps returning to is whether current AI architectures will keep scaling: whether throwing more compute and more data at transformer-based models will keep producing meaningfully smarter systems, or whether there's a ceiling approaching that nobody can currently see.

He's transparent about the stakes: Meta is making infrastructure investments in the hundreds of billions of dollars on the assumption that scaling continues. Llama 3 was trained on 10,000 to 20,000 GPUs. Llama 4 is planned for more than 100,000. Llama 5 is planned to go further still. The interesting and somewhat vertiginous thing about current AI development is that, unlike previous architectures that hit clear performance plateaus, transformer models haven't shown a ceiling yet, which could mean the ceiling is very high, or could mean it's just not visible from where we're standing.

Zuckerberg's honest answer is that he doesn't know, and neither does anyone else. He's betting on continued scaling, but he acknowledges that if the scaling laws break down, the industry slows down and waits for a new architectural breakthrough. That's not a catastrophe, it's a pause, but it would make the next decade look quite different from what he's currently planning for.

What the Future Mark Zuckerberg Is Building Actually Asks of Us

The most useful frame for thinking about all of this isn't whether Zuckerberg's vision is good or bad; it's whether it's inevitable, and if so, whether the choices being made now about how to build it are the right ones.

On inevitability: something like what he's describing is almost certainly coming. The computing trend toward more ubiquitous, more natural, more socially embedded devices is real. AI that can see and hear what you do is already being built by multiple companies. The question is which version of this future we end up in, and who makes the decisions that shape it.

On those decisions: Zuckerberg is making enormous bets, on open source, on presence as a product, on the idea that more connection is better than less, that will affect billions of people who have no vote in the matter. Some of those bets seem well-reasoned and worth being optimistic about. The open-source approach to AI has real merits. The goal of making people feel genuinely present with people they love is a good goal. The commitment to driving down hardware costs so that mixed reality isn't just a luxury product is, frankly, more important than it gets credit for. The Quest 3S at $299 is a more significant development than most tech coverage suggests, because that's the price point where a technology starts to actually diffuse through society rather than sitting in early-adopter households.

What I'm less sure about is whether the speed of this particular build is matched by sufficient thinking about what it does to the humans inside it: not users in a product-feedback sense, but people in a psychological and social sense. Zuckerberg is clearly curious about that question. Whether curiosity is enough, at this scale and at this pace, is the thing I keep coming back to and can't fully resolve.