We here at Nasi Lemak Tech are all about the tech, while mixing it up with other spices of life – like nasi lemak. We previously covered what Huawei’s Machine Learning Algorithm (MLA) is all about in this post. Of course, that’s not enough – Huawei wants to take things to the “Future of Mobile AI” by combining on-device AI and cloud AI with the help of a dedicated Neural Processing Unit (NPU).
In short, during the Kirin 960 era, all of the MLA processing was done by the general-purpose processor itself. This is inefficient and drains the battery, as general-purpose processors aren’t the best fit for such specific tasks – just like software video codecs versus the dedicated hardware video codecs we’ve discussed here.
With a dedicated processor handling the task, its physical wiring and layout can be designed to perform that one job with utmost efficiency. Hence, Huawei’s decision to create a dedicated neural processing unit (NPU) is definitely a step forward. Is it going to be effective? I think yes.
Let’s take a look back at the MLA implemented in the Huawei P10/P10 Plus. For MLA, all learning, deciphering, and performance/resource allocation and optimization were done locally. No cloud AI was involved, so it took a long time to learn and optimize enough to make a noticeable difference to the user experience.
What Huawei has done for what they dub “mobile AI”, with the help of the new neural processing unit (NPU), is to combine local (on-device) AI with cloud AI. This creates a platform to share and gather everything the NPU has learned, forming a huge hive mind. Think of it like Unity from Rick and Morty, where all of the “citizens” share a single brain.
What value or difference does the new NPU with mobile AI actually bring? I’m not sure – we’ll have to test it out when it hits the market. So far, it sounds like Huawei has recognized the shortcomings of its first generation of AI and is now fixing them.