Expert reveals phone secrets that AI fans need to push Gemini and ChatGPT

One of the most obvious (and honestly, the dullest) trends within the smartphone industry over the past couple of years has been the incessant talk about AI experiences. Silicon warriors, in particular, often touted how their latest mobile processor would enable on-device AI processes such as video generation.

We’re already there, albeit not completely. Amid all the hype around hit-and-miss AI tricks for smartphone users, the conversation rarely went beyond glitzy presentations about new processors and ever-evolving chatbots.

It was only when Gemini Nano’s absence on the Google Pixel 8 raised eyebrows that the masses came to appreciate just how critical RAM capacity is for AI on mobile devices. Soon after, Apple made it clear that it was keeping Apple Intelligence locked to devices with at least 8GB of RAM.

But the “AI phone” picture is not all about memory capacity. How well your phone performs AI-powered tasks also depends on invisible RAM optimizations, as well as on the storage modules. And no, I’m not just talking about capacity.

Memory innovations headed to AI phones

Digital Trends sat down with Micron, a global leader in memory and storage solutions, to break down the role of RAM and storage in AI processes on smartphones. The advancements made by Micron should be on your radar the next time you go shopping for a top-tier phone.

The latest from the Idaho-based company includes the G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM modules for flagship smartphones. So, how exactly do these memory solutions advance AI on smartphones, apart from boosting capacity?

Let’s start with the G9 NAND UFS 4.1 storage solution. The overarching promise is frugal power consumption, lower latency, and high bandwidth. The UFS 4.1 standard can reach peak sequential read and write speeds of 4,100MB/s, a roughly 15% gain over the UFS 4.0 generation, while trimming latency numbers, too.
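To put that peak figure in perspective, here’s some rough back-of-the-envelope math, using the 7GB on-device model footprint that comes up later in this piece (real-world speeds will land below the quoted peak):

```python
# Rough, best-case math: how long it takes to stream a large
# on-device AI model from storage at UFS 4.1's quoted peak
# sequential read speed. Assumes sustained peak throughput,
# which real workloads won't hit.

model_size_gb = 7          # local model footprint, e.g. Apple Intelligence
peak_read_mbps = 4100      # UFS 4.1 peak sequential read, in MB/s

seconds = (model_size_gb * 1024) / peak_read_mbps
print(f"Best-case full read: {seconds:.1f} s")   # ~1.7 s
```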

Another crucial benefit is that Micron’s next-gen mobile storage modules go all the way up to 2TB capacity. Moreover, Micron has managed to shrink their size, making them an ideal solution for foldable phones and next-gen slim phones such as the Samsung Galaxy S25 Edge. 

Moving over to RAM, Micron has developed what it calls 1γ (1-gamma) LPDDR5X RAM modules. They deliver a peak speed of 9200 MT/s, pack 30% more transistors thanks to a process shrink, and consume 20% less power while at it. Micron’s slightly slower 1β (1-beta) RAM already ships inside the Samsung Galaxy S25 series of smartphones.

The interplay of storage and AI 

Ben Rivera, Director of Product Marketing in Micron’s Mobile Business Unit, tells me that Micron has made four crucial enhancements to its latest storage solutions to ensure faster AI operations on mobile devices: Zoned UFS, Data Defragmentation, Pinned WriteBooster, and Intelligent Latency Tracker.

“This feature enables the processor or host to identify and isolate or “pin” a smartphone’s most frequently used data to an area of the storage device called the WriteBooster buffer (akin to a cache) to enable quick, fast access,” explains Rivera about the Pinned WriteBooster feature. 

Every AI model (think Google Gemini or ChatGPT) that seeks to perform tasks on-device needs its own set of instruction files stored locally on the phone. Apple Intelligence, for example, needs 7GB of storage for all its shenanigans.

To perform a task, you can’t hand the entire AI package over to the RAM, because the RAM also needs room for other critical chores, such as handling calls or keeping other important apps running. To work within that constraint, a memory map is created on the Micron storage module, so that only the AI weights needed for the task at hand are loaded from storage into RAM.
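As a rough illustration of how a memory map avoids loading everything at once, here’s a minimal Python sketch; the file and its contents are stand-ins, and real inference engines do this with far more sophistication:

```python
import mmap, os, tempfile

# Minimal sketch: memory-map a weights file so the OS pages in
# only the regions that are actually read, instead of loading
# the whole file into RAM up front. File contents are dummy data.
path = os.path.join(tempfile.gettempdir(), "model_weights.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))          # 1MB stand-in for a 7GB model

with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    layer = mapped[0:4096]                # faults in just these pages
    print(f"{len(layer)} bytes paged in for this 'layer'")
    mapped.close()
```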

When resources get tight, what you need is faster data swapping and reading, which ensures your AI tasks are executed without slowing down other important work. Thanks to Pinned WriteBooster, this data exchange is sped up by 30%, so AI tasks are handled without delays.
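Micron hasn’t detailed the buffer’s internals, but conceptually, pinning behaves like a small cache whose hot entries are protected from eviction. A toy sketch, with invented names and sizes:

```python
# Toy model of "pinning": frequently used items live in a fast
# buffer and are protected from eviction, so reads of hot AI data
# skip the slower main storage path. Purely illustrative; the real
# WriteBooster buffer lives in the UFS controller, not in Python.

class PinnedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pinned = {}                  # hot data, never evicted
        self.regular = {}                 # evictable staging area

    def pin(self, key, value):
        if len(self.pinned) < self.capacity:
            self.pinned[key] = value      # e.g., frequently read AI weights

    def read(self, key):
        if key in self.pinned:
            return self.pinned[key], "fast buffer hit"
        return self.regular.get(key), "slow path to main NAND"

buf = PinnedBuffer(capacity=2)
buf.pin("gemini_weights_chunk_0", b"...")
print(buf.read("gemini_weights_chunk_0")[1])   # fast buffer hit
```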

So, let’s say you need Gemini to pull up a PDF for analysis. The fast memory swap ensures that the needed AI weights are quickly shifted from the storage to the RAM module. 

Next, we have Data Defrag. Think of it as a desk or wardrobe organizer, one that ensures objects are neatly grouped by category and placed in their own cabinets so that they’re easy to find.

In the context of smartphones, as more data is saved over an extended period of use, it usually ends up stored in a rather haphazard manner. The net effect is that when the onboard system needs a certain kind of file, those files become harder to track down, which leads to slower operation.

According to Rivera, Data Defrag not only helps with orderly storage of data, but also changes the route of interaction between the storage and device controller. In doing so, it enhances the read speed of data by an impressive 60%, which naturally hastens all kinds of user-machine interactions, including AI workflows. 

“This feature can help expedite AI features such as when a generative AI model, like one used to generate an image from a text prompt, is called from storage to memory, allowing data to be read faster from storage into memory,” the Micron executive told Digital Trends. 
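To get a feel for why contiguity matters, this toy model contrasts reading a file whose blocks are scattered with reading the same blocks after compaction (the layout and seek accounting are simplified for illustration):

```python
# Toy model: a fragmented file needs a seek every time its next
# block isn't physically adjacent; after defragmentation the same
# data is contiguous and reads back in a single sweep.

scattered = {17: "A", 3: "B", 42: "C", 8: "D"}    # physical addr -> block

def read_file(blocks):
    addrs = sorted(blocks, key=lambda a: blocks[a])   # logical order A..D
    extra = sum(1 for p, c in zip(addrs, addrs[1:]) if c != p + 1)
    return "".join(blocks[a] for a in addrs), extra + 1  # data, seek count

def defragment(blocks):
    # Rewrite blocks contiguously, in logical order.
    return {i: v for i, v in enumerate(sorted(blocks.values()))}

print(read_file(scattered))              # ('ABCD', 4) -> four seeks
print(read_file(defragment(scattered)))  # ('ABCD', 1) -> one sweep
```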

Intelligent Latency Tracker is another feature that essentially keeps an eye on lag events and on factors that might be slowing down your phone’s usual pace. It then helps with debugging and optimizing the phone’s performance so that regular as well as AI tasks don’t run into speed bumps.
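Micron hasn’t published the tracker’s internals, but you can picture it as a monitor that times operations, keeps a history, and flags outliers for later debugging. A hypothetical sketch:

```python
import time
import statistics

# Hypothetical sketch of a latency tracker: time each I/O-like
# operation, keep a history, and flag outliers for debugging.
# This models the idea, not Micron's actual implementation.

class LatencyTracker:
    def __init__(self, threshold_factor=3.0):
        self.samples = []
        self.flagged = []
        self.threshold_factor = threshold_factor

    def record(self, op_name, duration_s):
        self.samples.append(duration_s)
        if len(self.samples) > 10:
            typical = statistics.median(self.samples)
            if duration_s > typical * self.threshold_factor:
                self.flagged.append((op_name, duration_s))  # lag event

tracker = LatencyTracker()
for i in range(20):
    start = time.perf_counter()
    sum(range(10_000))                    # stand-in for a storage read
    tracker.record(f"read_{i}", time.perf_counter() - start)
print(f"{len(tracker.flagged)} lag events flagged")
```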

The final storage enhancement is Zoned UFS. This system ensures that data with a similar I/O nature is stored together in an orderly fashion. That’s crucial because it makes it easier for the system to locate the necessary files, instead of wasting time rummaging through every folder and directory.

“Micron’s ZUFS feature helps organize data so that when the system needs to locate specific data for a task, it’s a faster and smoother process,” Rivera told us. 
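The underlying idea of zoned storage is to route writes with similar I/O profiles into their own zones. Here’s a simplified sketch of such a placement policy, with invented zone names and thresholds:

```python
# Simplified sketch of zoned placement: writes are routed into a
# zone based on their I/O profile, so similar data stays together
# and lookups don't have to scan unrelated regions. Zone names and
# thresholds are invented for illustration.

ZONES = {"sequential_media": [], "random_small": [], "ai_model_data": []}

def classify(size_kb, access_pattern):
    if access_pattern == "sequential" and size_kb > 512:
        return "sequential_media"        # e.g., video files
    if access_pattern == "read_mostly":
        return "ai_model_data"           # e.g., model weights
    return "random_small"                # e.g., app databases

def write(name, size_kb, access_pattern):
    ZONES[classify(size_kb, access_pattern)].append(name)

write("vacation.mp4", 2_000_000, "sequential")
write("gemini_weights.bin", 4_000_000, "read_mostly")
write("contacts.db", 256, "random")
print(ZONES)
```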

Going beyond the RAM capacity

When it comes to AI workflows, you need a certain amount of RAM. The more, the better. While Apple has set the baseline at 8GB for its Apple Intelligence stack, players in the Android ecosystem have moved to 12GB as the safe default. Why so? 

“AI experiences are also extremely data-intensive and thus power-hungry. So, in order to deliver on the promise of AI, memory and storage need to deliver low latency and high performance at the utmost power efficiency,” explains Rivera. 

With its next-gen 1γ (1-gamma) LPDDR5X RAM solution for smartphones, Micron has managed to reduce the operational voltage of the memory modules. Then there’s the all-important question of raw performance. Rivera says the new memory modules can hum along at up to 9.6 gigabits per second, ensuring top-notch AI performance.
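To translate a per-pin figure like that into usable bandwidth, and then into AI performance, here’s some rough math; the 64-bit bus width and the model size are assumptions for illustration, not Micron-confirmed specs:

```python
# Rough, assumption-laden math: per-pin data rate x bus width
# gives theoretical peak memory bandwidth. A 64-bit bus is a
# common flagship configuration, assumed here for illustration.

per_pin_gbps = 9.6        # quoted per-pin speed
bus_width_bits = 64       # assumed LPDDR5X bus width

peak_gbs = per_pin_gbps * bus_width_bits / 8
print(f"Theoretical peak: {peak_gbs:.1f} GB/s")   # 76.8 GB/s

# Why this matters for AI: generating each token typically means
# reading most of the model's weights once, so memory bandwidth
# caps the token rate of a local chatbot.
model_gb = 3.5            # hypothetical on-device model footprint
tokens_per_s = peak_gbs / model_gb
print(f"Rough upper bound: {tokens_per_s:.0f} tokens/s")  # ~22
```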

Micron says improvements in its extreme ultraviolet (EUV) lithography process have opened the door not only to higher speeds, but also to a healthy 20% jump in energy efficiency.

The road to more private AI experiences? 

Micron’s next-gen RAM and storage solutions for smartphones are targeted not just at improving AI performance, but also the general pace of your day-to-day smartphone chores. I was curious whether the G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM enhancements would also speed up offline AI processing.

Smartphone makers as well as AI labs are increasingly shifting towards local AI processing. That means instead of sending your queries to a cloud server where the operation is handled, and then the result is sent to your phone using an internet connection, the entire workflow is executed locally on your phone.

From transcribing calls and voice notes to processing your complex research material in PDF files, everything happens on your phone, and no personal data ever leaves your device. It’s a safer approach that is also faster, but at the same time, it requires beefy system resources. A faster and more efficient memory module is one of those prerequisites. 

Can Micron’s next-gen solutions help with local AI processing? They can. In fact, they will also speed up processes that require a cloud connection, such as generating videos using Google’s Veo model, which still requires powerful compute servers.

“A native AI app running directly on the device would have the most data traffic since not only is it reading user data from the storage device, it’s also conducting AI inferencing on the device. In this case, our features would help optimize data flow for both,” Rivera tells me. 

So, how soon can you expect phones equipped with the latest Micron solutions to land on the shelves? Rivera says all major smartphone manufacturers will adopt Micron’s next-gen RAM and storage modules. As far as market arrival goes, “flagship models launching in late 2025 or early 2026” should be on your shopping radar.





