Apple Has Always Been at War with Physics — and It's Happening Again
There is a pattern to how Apple makes its most interesting bets. It finds a hard physical constraint, something that looks like an immovable wall, and decides that software is the answer. It happened with the camera. It is happening right now with artificial intelligence.
Act I: The Lens Problem
Great photography has always been, at its core, a physics problem. A camera captures light. The more light it captures, the richer the image: more detail in the shadows, less noise, better colour fidelity. The way you capture more light is with a large lens and a large sensor. This is why professional cameras are large. Size is not a design failure; it is a feature.
Then came the smartphone. A device engineered to slip into a pocket carries a sensor roughly the size of a fingernail and a lens that barely protrudes from the chassis. By every classical rule of optics, the images it produces should be terrible. For a while, they were.
Apple changed that, not by cheating physics, but by supplementing it. The camera hardware still operates under the same laws. But surrounding that hardware, Apple built a dense layer of software: multi-frame processing, computational noise reduction, Smart HDR, Deep Fusion, Photonic Engine. The phone takes a burst of frames in the milliseconds around a shutter press, computes across all of them simultaneously, and synthesizes an image no single frame could have produced. It is photography augmented by computation.
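To make that concrete, here is a deliberately simplified toy sketch, not Apple's pipeline (whose alignment and fusion logic is proprietary). It shows only the core intuition: averaging several noisy exposures of the same scene suppresses the random sensor noise that no single frame can escape.

```swift
// Toy illustration of multi-frame noise reduction: average N noisy
// "exposures" of the same scene. Random noise cancels out across frames,
// so the fused result is cleaner than any single capture.
// (Real pipelines also align frames and fuse per-region; this skips that.)
struct Frame {
    var pixels: [Double]   // simplified: one luminance value per pixel
}

func fuse(_ frames: [Frame]) -> Frame {
    precondition(!frames.isEmpty)
    let pixelCount = frames[0].pixels.count
    var sums = [Double](repeating: 0, count: pixelCount)
    for frame in frames {
        for i in 0..<pixelCount {
            sums[i] += frame.pixels[i]
        }
    }
    // Mean across the burst: signal adds coherently, noise averages toward zero.
    return Frame(pixels: sums.map { $0 / Double(frames.count) })
}

// Simulate a burst: a true scene plus per-frame random noise.
let trueScene = [0.2, 0.5, 0.8, 0.3]
let burst = (0..<8).map { _ in
    Frame(pixels: trueScene.map { $0 + Double.random(in: -0.1...0.1) })
}
print(fuse(burst).pixels)   // values cluster near the true scene
```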
The result is photographs that routinely surpass what dedicated cameras costing thousands of dollars could achieve a decade ago. Apple did not repeal the laws of optics. It worked around them, cleverly and relentlessly, in software.
That discipline, computational photography, is now an industry. Every major phone manufacturer pursues it. Apple invented the category.
Act II: The Cloud Assumption
Artificial intelligence has its own version of the lens problem. The assumption baked into the industry is that powerful AI requires massive infrastructure: data centres, high-end GPUs, enormous amounts of memory and bandwidth. The corollary is obvious: the user's device cannot be where the real work happens. The device is a terminal. The cloud is the brain.
This assumption is not unreasonable. Large language models carry billions of parameters. Running inference on them demands hardware that, until recently, only existed in server farms. The economics pointed in one direction: send the data up, compute in the cloud, send the answer back.
Apple, characteristically, looked at that constraint and started building silicon.
The evidence has been accumulating quietly for eight years. When Apple introduced the iPhone X in 2017, the A11 Bionic contained a small, barely-mentioned addition: a 2-core Neural Engine, dedicated hardware for machine learning inference. No killer feature required it. No reviewer headlined it. Then the next chip had a larger one. Then the next. Generation after generation, Apple kept expanding this component with the same patient, unexplained determination. The M5, released in 2025, carries a 16-core Neural Engine. That is not a spec-sheet flourish. It is a long-term architectural bet, written in transistors, years before the payoff was visible.
And Apple did not keep this hardware to itself. Almost from the start, the Neural Engine has been available to third-party developers through Core ML. This is the move that matters most: Apple was not building a feature. It was building a platform, the same way it built camera hardware that any app could access, long before computational photography became a household term. Every iPhone sold is a node in a distributed AI compute network that no cloud provider can match for scale or proximity to the user.
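For a developer, reaching that hardware is a configuration detail rather than a research project. The sketch below shows the general shape of it with Core ML; the model file is hypothetical and the exact scheduling is up to the system, but the request amounts to a single property.

```swift
import CoreML

// Sketch: load a compiled Core ML model and let the system schedule
// inference on the Neural Engine where it can.
// "SceneClassifier.mlmodelc" is a hypothetical model, not an Apple asset.
func loadSceneClassifier(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    // .all allows CPU, GPU, and Neural Engine; Core ML decides where each
    // operation runs. .cpuAndNeuralEngine (iOS 16+ / macOS 13+) skips the GPU.
    config.computeUnits = .all
    return try MLModel(contentsOf: url, configuration: config)
}
```

The same few lines run on every device that ships a Neural Engine, which is the point: the capability lives in the platform, not in any one app.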
What makes this bet more significant is that Apple is not the only one reading the same signal. Qualcomm's Snapdragon X Elite and Intel's Lunar Lake both feature dedicated neural processing hardware. Three companies with entirely different business models, all converging on the same structural decision. When that happens across an entire industry, it stops being a hunch and starts being a thesis. The question is not whether on-device AI is coming; the chips already say it is. The question is who built the platform first.
The parallel writes itself. In computational photography, the constraints were light and the lens. In on-device AI, they are compute and memory. In both cases, Apple's answer was not a product announcement. It was infrastructure, laid down quietly, generation by generation, until the moment the world was ready to use it.
The Pattern Underneath
What makes this parallel more than a coincidence is the underlying philosophy it reveals. Apple is, at its core, a company that believes the most interesting problems sit at the intersection of hardware and software, and that controlling both gives you leverage no one else has.
Computational photography was only possible because Apple designed the chip, the image signal processor, the operating system, and the camera APIs that third-party apps could call. No single piece works without the others. On-device AI is the same stack: custom silicon with dedicated inference hardware, a tightly integrated OS, and developer frameworks that expose that hardware to anyone who wants to use it.
Crucially, neither of these was primarily a consumer feature. They were platform investments. Apple did not build the image signal processor to win a camera shootout in a tech review. It built the infrastructure for a whole ecosystem of imaging capabilities, then let developers, and eventually competitors, define what was possible on top of it. The Neural Engine is following the same trajectory.
The question, in both cases, is the same: what becomes possible when every device in the world ships with dedicated AI compute, and any developer can reach it?
Apple's answer, then and now, is: more than you currently imagine.
Why This Bet Could Win
Neither of these bets required perfection. Computational photography on an iPhone does not match a medium-format camera. It never will. But it crossed a threshold: good enough, instantly, in your pocket, for almost any situation. Once it crossed that threshold, the market for dedicated point-and-shoot cameras collapsed. The winning condition was not technical supremacy; it was sufficient capability, everywhere, all the time.
On-device AI is chasing the same threshold. Cloud models will remain more powerful for heavy, open-ended tasks for the foreseeable future. But the question is not whether on-device AI can beat GPT-5 in a benchmark. The question is whether it can be good enough for the things people actually reach for dozens of times a day: understanding what is on your screen, drafting a reply, answering a question about something in your document. All without a network round-trip, without sending your data anywhere, and without a subscription to yet another cloud service.
If it can, it wins on every dimension that matters: speed, privacy, cost, reliability, and the quiet trust that the intelligence running on your device is working for you, not for a server farm somewhere.
Apple fought the laws of physics once, with a camera, by building infrastructure that an entire ecosystem could use. It is making the same bet now, with intelligence, in the same patient and deliberate way. The Neural Engine has been growing for eight years. The platform is already in a billion pockets.
That is not a product launch. That is a foundation.
This piece was inspired by "The M5's Most Important Feature Isn't in Any Benchmark" by Tiff In Tech: a sharp breakdown of Apple's Neural Engine evolution and what the chip roadmaps of Apple, Qualcomm, and Intel say about where AI is heading.