
If you look at current trends in the mobile market, many phone manufacturers portray artificial intelligence as the chief feature of their devices. The total number of developers who build for mobile is expected to hit the 14 million mark by 2020, according to an Evans Data survey. With this level of competition, developers have turned to artificial intelligence to distinguish their apps, and manufacturers to make their devices stand out. AI on Mobile is the next big thing.

AI on Mobile can be incorporated in multiple forms. It may be hardware-based, such as the AI chip in Apple's iPhone X, or software-based, such as Google's TensorFlow for mobile. Let's look in detail at how smartphone manufacturers and mobile developers are leveraging the power of AI on both the hardware and software fronts.

Embedded chips and In-device AI

Mobile handsets nowadays come equipped with specialized AI chips. These chips sit alongside the CPU and handle the heavy lifting required to bring AI on Mobile. The built-in AI engines can not only respond to your commands but also take the lead and make decisions about what they believe is best for you.

So, when you take a picture, the smartphone software, leveraging the power of the AI hardware, correctly identifies the person, object, or location being photographed, and compensates for low-resolution shots by predicting the missing pixels. When it comes to battery life, AI allocates power to the functions that need it, eliminating unnecessary drain. In-device AI also reduces the dependency on cloud-based AI for data processing, saving energy, time, and the associated costs.

The past few months have seen a wave of AI-focused silicon popping up everywhere. The trend began with Apple's neural engine, part of the new A11 processor Apple developed to power the iPhone X. This neural engine powers the machine learning algorithms that recognize faces and transfer facial expressions onto animated emoji.

Competing head-on with Apple, Samsung revealed the Exynos 9 Series 9810. The chip features an upgraded processor with neural network capabilities for AI-powered apps. Huawei also joined the party with the Kirin 970 processor, which packs a dedicated Neural Network Processing Unit (NPU) that processed 2,000 images per minute in a benchmark image recognition test.

Google announced the open beta of its second-generation Tensor Processing Unit. ARM announced its own AI hardware, Project Trillium, a mobile machine learning processor. Amazon is also reportedly working on a dedicated AI chip for its Echo smart speaker.

Google's Pixel 2 features the Pixel Visual Core, a co-processor for AI. It powers an AI-based song recognizer and superior imaging capabilities, and even helps Google Assistant understand user commands and questions better.

The arrival of AI APIs for Mobile

Apart from in-device hardware, smartphones have also witnessed the arrival of artificially intelligent APIs. These APIs extend a smartphone's capabilities by offering personalization, efficient search, accurate video and image recognition, and advanced data mining. Let's look at a few powerful machine learning APIs and libraries targeted solely at mobile devices.

It all began with Facebook announcing Caffe2Go in 2016, a version of Caffe designed for running deep learning models on mobile devices. It condensed image and video processing AI models by 100x so that neural networks could run with high efficiency on both iOS and Android. Caffe2Go became the core of Style Transfer, Facebook's real-time photo stylization tool.

Then came Google's TensorFlow Lite, announced at the Google I/O conference in 2017. TensorFlow Lite is a feather-light version of TensorFlow for mobile and embedded devices. It is designed to be lightweight, speedy, and cross-platform (the runtime is tailor-made to run on various platforms, starting with Android and iOS). TensorFlow Lite also supports the Android Neural Networks API, which can run computationally intensive machine learning operations on mobile devices.
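To make this concrete, here is a minimal sketch of running an on-device model with TensorFlow Lite's Swift API. The model file name ("model.tflite") and the raw input Data are placeholders for illustration; a real app would preprocess an image or audio buffer to match the model's expected input tensor.

```swift
import TensorFlowLite

enum InferenceError: Error { case modelNotFound }

// Minimal sketch: load a bundled .tflite model and run a single inference.
// "model.tflite" is a placeholder name; inputData must already be formatted
// to match the model's input tensor.
func runInference(on inputData: Data) throws -> Data {
    guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
        throw InferenceError.modelNotFound
    }

    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()              // allocate input/output buffers
    try interpreter.copy(inputData, toInputAt: 0)  // feed the input tensor
    try interpreter.invoke()                       // run the model on-device

    // Read back the raw output tensor; how it is decoded depends on the model.
    return try interpreter.output(at: 0).data
}
```

On Android, the equivalent Java/Kotlin Interpreter can additionally be handed an NNAPI delegate, so the heavy math is offloaded to the phone's dedicated AI silicon rather than the CPU.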

Following TensorFlow Lite came Apple's Core ML, a programming framework designed to make it easier to run machine learning models on iOS. Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML makes it easier for apps to process data locally using machine learning without sending user information to the cloud. It also optimizes models for Apple mobile devices, reducing RAM and power consumption.
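As a sketch of the Core ML plus Vision workflow described above, the snippet below classifies an image entirely on-device. ImageClassifier is a hypothetical class of the kind Xcode generates when a compiled .mlmodel file is added to a project; any real model would bring its own generated class and label set.

```swift
import CoreGraphics
import CoreML
import Vision

// "ImageClassifier" is a hypothetical generated model class standing in for
// any .mlmodel added to the Xcode project.
func classify(_ image: CGImage) {
    guard let mlModel = try? ImageClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else {
        return
    }

    // Vision wraps the Core ML model and handles image scaling and cropping.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("Top label: \(top.identifier) (\(top.confidence))")
        }
    }

    // All processing happens locally; no image data leaves the device.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```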

Artificial intelligence is finding its way into every aspect of the mobile device, whether through hardware in the form of dedicated AI chips or through APIs for running AI-enabled services on handheld devices. And this is just the beginning. In the near future, AI on Mobile will play a decisive role in driving smartphone innovation, possibly becoming the only distinguishing factor consumers consider when buying a mobile device.

