
DeepMind's AlphaZero beating the top chess engine, Numba 0.36, Apple's machine learning framework Turi Create, and Gensim 3.2.0 are among today's top stories in machine learning, artificial intelligence, and data science news.

DeepMind’s AlphaZero is now the most dominant chess-playing entity on the planet!

Google’s self-learning AI AlphaZero teaches itself chess from scratch in four hours and beats the previous champion

A few months after demonstrating its dominance over the game of Go, DeepMind’s AlphaZero AI has trounced the world’s top-ranked chess engine—and it did so without any prior knowledge of the game and after just four hours of self-training. In a one-on-one tournament against Stockfish 8, the reigning computer chess champion, the DeepMind-built system didn’t lose a single game, winning or drawing all of the 100 matches played. AlphaZero is a modified version of AlphaGo Zero, the AI that recently won all 100 games of Go against its predecessor, AlphaGo. The system works nearly identically to AlphaGo Zero, but instead of playing Go, the machine is programmed to play chess and shogi.

Shape of things to come: IoT heralds ‘smart’ gymming!

Practix uses IoT tracking devices in gyms for real-time data analytics

Practix said it has developed an activity-tracking system for gyms, using an IoT tracking device with real-time data analytics powered by AI and machine learning algorithms. The Berlin-based group offers gym customers automatic, real-time logging of their workouts along with data-based analytics and metrics. Gym members receive a wristband that can connect to any existing gear in the gym by scanning Practix’s NFC patch. The wristband tracks the user’s workout data and runs it through the company’s algorithms. The output is presented through the Practix app and website, where both the gym operator and the customer can see detailed information about the workout.

Numba 0.36 released

Numba 0.36.1 announced with LLVM 5 support, a new stencil decorator, and builds using the Anaconda Distribution 5 compilers

Numba 0.36.1 has been released with new features and fixes for user-reported bugs (version 0.36.0 was never released). Numba now requires llvmlite 0.21, which raises the required LLVM version to 5.0 and brings minor improvements to code generation, especially for AVX-512. LLVM 5 also adds support for AMD Ryzen CPUs. In addition, a new compiler decorator has been introduced in this release: stencil. Similar to vectorize, it lets you write a simple kernel function that is expanded into a larger array calculation.

According to the developers, stencil is for implementing “neighborhood” calculations, like local averages and convolutions. The kernel function accesses a view of the full array using relative indexing (i.e. a[-1] means one element to the left of the central element) and returns a scalar that goes into the output array. The ParallelAccelerator compiler passes can also multithread a stencil the same way they multithread an array expression. “The current @stencil implements only one option for boundary handling, but more can be added, and it does allow for asymmetric neighborhoods, which are important for trailing averages,” the developers said.
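To make the relative-indexing idea concrete, here is a minimal sketch of a @stencil kernel; the three-point local-average kernel below is illustrative and not taken from the release notes.

```python
# Illustrative @stencil kernel (Numba >= 0.36); boundary elements fall
# back to the default constant value (zero).
import numpy as np
from numba import stencil

@stencil
def local_mean(a):
    # Relative indexing: a[-1] is the element to the left of the
    # central element, a[1] the element to its right.
    return (a[-1] + a[0] + a[1]) / 3

data = np.arange(10.0)
print(local_mean(data))  # returns a new array of local averages
```

Called directly, the decorated kernel returns a new array of the same shape; used inside a parallel-compiled function, it should pick up the multithreaded expansion described above.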

Anaconda has started using custom builds of GCC 7.2 (on Linux) and clang 4.0 (on OS X) to build conda packages, ensuring that the latest compiler performance and security features are enabled even on older Linux distributions like CentOS 6. The developers have therefore migrated the build process for Numba’s conda packages on Mac and Linux over to these compilers, for consistency with the rest of the distribution. “When doing AOT compilation in Numba, it uses the same compiler that was used for NumPy, so on Anaconda it will remind you to install the required compiler packages with conda,” the Numba team said.
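For context, ahead-of-time (AOT) compilation in Numba goes through the numba.pycc module, which is where that system compiler comes into play; the module name and exported function below are purely illustrative.

```python
# Hedged sketch of Numba AOT compilation via numba.pycc; building this
# module invokes the same compiler toolchain used to build NumPy.
from numba.pycc import CC

cc = CC('demo_aot')  # name of the extension module to produce (illustrative)

@cc.export('square', 'f8(f8)')  # export square() with a double -> double signature
def square(x):
    return x * x

if __name__ == '__main__':
    cc.compile()  # writes an importable extension module, e.g. demo_aot.so
```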

Apple’s ambitious foray into ML and AI

Apple open sources ‘Turi Create’ machine learning framework on GitHub

After acquiring machine learning startup Turi last year, Apple has now created a new machine learning framework called Turi Create. The tech giant has shared the framework on GitHub. According to Apple, Turi Create is designed to simplify the development of custom machine learning models. Apple says Turi Create is easy to use, has a visual focus, is fast and scalable, and is flexible. Turi Create is designed to export models to Core ML for use in iOS, macOS, watchOS, and tvOS apps. With Turi Create, developers can quickly build a feature that allows their app to recognize specific objects in images. Doing so takes just a few lines of code. Turi Create covers several common scenarios including recommender systems, image classification, image similarity, object detection, activity classification, and text classification.
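As a hedged illustration of that “few lines of code” claim, an image classifier in Turi Create can be sketched roughly as follows; the folder layout and exported model name are assumptions made for the example, not part of Apple’s announcement.

```python
# Illustrative Turi Create image-classification flow; 'images/' is assumed
# to contain one sub-folder per object class.
import turicreate as tc

# Load labelled images and derive each label from its parent folder name
data = tc.image_analysis.load_images('images/', with_path=True)
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# Train an image classifier and export it to Core ML for use in an app
model = tc.image_classifier.create(data, target='label')
model.export_coreml('ObjectRecognizer.mlmodel')
```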

Gensim’s “Christmas Come Early”

Gensim 3.2.0 released: new Poincaré embeddings, a FastText speed-up, pre-trained models for download, Linux/Windows/macOS wheels, and performance improvements

Gensim has announced the release of version 3.2.0 (codenamed Christmas Come Early). The new version comes with pre-trained models for download and implements Poincaré embeddings. FastText has been significantly optimized with a fast, multithreaded implementation written natively in Python/Cython; the release deprecates the existing wrapper for Facebook’s C++ implementation. There are also pre-compiled binary wheels for Windows, OS X, and Linux, so users no longer need a C compiler to use the fast (Cythonized) versions of word2vec, doc2vec, fastText, etc. Gensim 3.2.0 also adds DeprecationWarnings to deprecated methods and parameters, with a clear schedule for removal. There are other performance improvements and bug fixes, the details of which are available on GitHub.
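As a rough sketch of the native FastText implementation that replaces the deprecated wrapper (the toy corpus below is made up, and parameter names such as size and iter are those used in the Gensim 3.x API):

```python
# Minimal sketch of Gensim's native (Python/Cython) FastText; with the
# pre-built wheels, no external fastText binary or C compiler is needed.
from gensim.models import FastText

sentences = [
    ['machine', 'learning', 'with', 'gensim'],
    ['fasttext', 'handles', 'out', 'of', 'vocabulary', 'words'],
]

# Train subword-aware embeddings on the toy corpus
model = FastText(sentences, size=32, window=3, min_count=1, iter=10)
print(model.wv.most_similar('learning', topn=3))
```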
