
OpenAI, an artificial intelligence research firm, has released a new set of GPU kernels, block-sparse GPU kernels: software routines optimized for building sparse neural networks on Nvidia's hardware.

These help in building faster yet efficient neural networks, without eating up much memory on your GPUs.

Neural networks are a complex branch of AI, built from layers of connected nodes. However, their processing power is restricted by the architecture of the GPUs they run on, which until now have lacked an efficient implementation for sparse linear operations.

Researchers at OpenAI say it is now possible to make neural networks far more efficient by bringing sparse matrices into their design.

How sparse matrices help GPUs

A sparse matrix is simply a mathematical matrix in which most entries are zero. Such zero-valued elements can be easily compressed and skipped during matrix multiplications, which in turn saves computation time and takes up less memory on GPUs.
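To see why this helps, here is a minimal NumPy sketch (purely illustrative, not OpenAI's kernels) of a block-wise matrix multiplication that simply skips the all-zero blocks:

```python
import numpy as np

# Purely illustrative: multiply a block-sparse weight matrix by a dense input,
# skipping the all-zero blocks instead of computing with them.
block = 2
W = np.array([[1., 2., 0., 0.],
              [3., 4., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 5., 6.]])
x = np.random.rand(4, 3)

out = np.zeros((4, 3))
for i in range(0, 4, block):          # block rows of W
    for j in range(0, 4, block):      # block columns of W
        w_block = W[i:i+block, j:j+block]
        if not w_block.any():         # zero block: no compute, no memory traffic
            continue
        out[i:i+block] += w_block @ x[j:j+block]

assert np.allclose(out, W @ x)        # same answer as the full dense multiply
```

Half of the blocks in this toy example are never touched at all; on a GPU the same idea means the zero blocks are never even loaded from memory, which is where both the speed and the memory savings come from.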

Source: https://blog.openai.com/block-sparse-gpu-kernels/

The saved computational power can later be used to train deep neural networks more efficiently. This means neural networks can perform inference and run their algorithms simultaneously, up to 10 times faster than with regular dense matrices.

The problem OpenAI faced with these sparse matrices is that Nvidia, the biggest name in GPUs for neural networks, does not support sparse matrix models in its hardware.

Enter block-sparse GPU kernels…

Block-sparse GPU kernels: the sparse matrix gets an upgrade

To overcome the lack of sparsity support in Nvidia hardware, a team of researchers at OpenAI developed block-sparse GPU kernels.

Source: https://blog.openai.com/block-sparse-gpu-kernels/

Key points to note about block-sparse GPU kernels:

  • They are written in Nvidia’s CUDA programming language.
  • At present, they are only compatible with TensorFlow (see the usage sketch after this list).
  • They only support Nvidia’s GPUs.
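The usage sketch below is adapted from the example in OpenAI's announcement; it assumes the released blocksparse Python package and the TensorFlow 1.x API of the time (placeholders and sessions), so details may differ in later versions.

```python
# Usage sketch adapted from the example in OpenAI's announcement. Assumes the
# released `blocksparse` package and the TensorFlow 1.x API of the time.
import numpy as np
import tensorflow as tf
from blocksparse.matmul import BlocksparseMatMul

hidden_size = 4096
block_size = 32
minibatch_size = 64

# Block-level sparsity pattern: 1 keeps a 32x32 block of weights, 0 prunes it.
sparsity = np.random.randint(2, size=(hidden_size // block_size,
                                      hidden_size // block_size))

# Compiles a block-sparse matmul kernel specialised to this pattern.
bsmm = BlocksparseMatMul(sparsity, block_size=block_size)

x = tf.placeholder(tf.float32, shape=[None, hidden_size])
w = tf.get_variable("w", bsmm.w_shape, dtype=tf.float32)  # only non-zero blocks stored
y = bsmm(x, w)                                            # block-sparse matrix multiply

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((minibatch_size, hidden_size), np.float32)})
    print(out.shape)
```

Note that only the non-zero blocks of the weight matrix are ever stored, which is where the memory savings mentioned above come from.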

OpenAI also announced that it is sharing its block-sparse GPU kernels with the wider research community so they can be put to use in other projects. The kernels will also be expanded to support other hardware and frameworks.

OpenAI used neural networks enhanced with the block-sparse GPU kernels to carry out sentiment analysis on IMDB and Amazon reviews. The result: the sparse models outperformed the dense models on all of the sentiment datasets.

Source: https://s3-us-west-2.amazonaws.com/openai-assets/blocksparse/blocksparsepaper.pdf

OpenAI also mentioned that its sparse model improved the state of the art on the IMDB dataset, reducing the error from 5.91% to 5.01%. The team calls this a promising improvement over its previous results, which performed extremely well on shorter, sentence-level datasets.

Promising as these new kernels seem, the OpenAI research team does not yet have a definitive view on when and where they will prove most useful, and the wider research community is set to explore this space further.

To learn how to install and develop with the block-sparse GPU kernels, click on the GitHub link here.

 
