New machine learning language Tile, new HPC systems from Dell EMC and HPE, Microsoft’s Neural Fuzzing, and Amazon’s project Ironman in today’s data science news.

Introducing Tile

Tile: A new language for machine learning from Vertex.AI

Vertex.AI has released a new machine learning language called Tile. It is a tensor manipulation language used in PlaidML's backend to generate custom kernels for each specific operation on each GPU. The automatically produced kernels make it easier to add support for new GPUs and other processors, and save time and effort overall. Tile's syntax balances expressiveness and optimization to cover the widest range of operations needed to build neural networks. It closely resembles mathematical notation for describing linear algebra operations, and fully supports automatic differentiation. Vertex.AI said in its official blog that Tile was designed to be both parallelizable and analyzable: in Tile, it is possible to analyze issues such as cache coherency, shared memory usage, and memory bank conflicts.
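For a sense of the notation, a matrix multiply in Tile is written as a contraction over an implicit index range; the sketch below is adapted from PlaidML's public documentation, so treat the exact syntax as approximate. Each output element C[i, j] is the sum over k of A[i, k] * B[k, j], with index bounds inferred from the tensor dimensions:

```
function (A[M, L], B[L, N]) -> (C) {
    C[i, j: M, N] = +(A[i, k] * B[k, j]);
}
```

The `+(...)` aggregation is what makes kernels like this both differentiable and analyzable: the compiler sees the whole reduction rather than a hand-written loop nest.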

Dell EMC announces new HPC systems

Dell EMC announces high-performance computing bundles aimed at AI, deep learning

At the SuperComputing 2017 conference in Denver, Dell EMC introduced a set of high-performance computing (HPC) systems, the Dell EMC Ready Bundles for Machine and Deep Learning. These systems, the company said, are intended to bring HPC and data analytics into the mainstream, helping with fraud detection, image processing, financial investment analysis and personalized medicine. The bundles are expected to be available in the first half of 2018.

Dell EMC announces new PowerEdge server designed specifically for HPC workloads

Dell EMC introduced a new PowerEdge server designed specifically for HPC workloads: the Dell EMC PowerEdge C4140. Developed under a joint agreement with NVIDIA, the new server supports up to four NVIDIA Tesla V100 GPU accelerators with PCIe and NVLink high-speed interconnect technology. The server also features two Intel Xeon Scalable processors, and is thus “ideal for intensive machine learning and deep learning applications to drive advances in scientific imaging, oil and gas exploration, financial services and other HPC industry verticals.” The Dell EMC PowerEdge C4140 is expected to be available worldwide in December 2017.

Hewlett Packard Enterprise announces set of upgraded HPC systems for AI

HPE Apollo 2000 Gen10

In a bid to make high-performance computing (HPC) and AI more accessible to enterprises, Hewlett Packard Enterprise has announced a set of upgraded high-density compute and storage systems. The HPE Apollo 2000 Gen10 is a multi-server platform for enterprises looking to support HPC and deep learning applications with limited datacenter space. The platform supports NVIDIA Tesla V100 GPU accelerators to enable deep learning training and inference for use cases such as real-time video analytics for public safety. Enterprises deploying the HPE Apollo 2000 Gen10 system can start small with a single 2U shared infrastructure and scale out up to 80 HPE ProLiant Gen10 servers in a 42U rack.

HPE Apollo 4510 Gen10

The HPE Apollo 4510 Gen10 system is designed for enterprises with data-intensive workloads that are using object storage as an active archive. The system has 16 percent more cores than the previous generation, HPE said, and offers storage capacity of up to 600TB in a 4U form factor with standard server depth. It also supports NVMe cards.

HPE Apollo 70

Hewlett Packard Enterprise has announced the launch of HPE Apollo 70, its first ARM-based HPC system using Cavium’s 64-bit ARMv8-A ThunderX2 server processor. Set to become available in 2018, the system is designed for memory-intensive HPC workloads, and is compatible with HPC components from HPE’s ecosystem partners including Red Hat Enterprise Linux, SUSE Linux Enterprise Server for ARM, and Mellanox InfiniBand and Ethernet fabric solutions.

HPE LTO-8 Tape

Hewlett Packard Enterprise announced HPE LTO-8 Tape, which allows enterprises to offload primary storage to tape, with a storage capacity of 30 terabytes per tape cartridge, double that of the previous LTO-7 generation. The HPE LTO-8 Tape is slated for general availability in December 2017.

HPE T950

The HPE T950 tape library now stores up to 300 petabytes of data, Hewlett Packard Enterprise announced, while the HPE TFinity ExaScale tape library provides storage capacity for up to 1.6 exabytes of data, the company said.

Announcing Microsoft’s Neural Fuzzing

Neural Fuzzing: Microsoft uses machine learning, deep neural networks for new vulnerability testing

Microsoft has announced a new method for discovering software security vulnerabilities, called ‘neural fuzzing.’ The method combines machine learning and deep neural networks, using past experience to identify overlooked issues more effectively. Neural fuzzing takes traditional fuzz testing and inserts a deep neural network into the feedback loop of a ‘greybox’ fuzzer. Development lead William Blum said the approach is simple because it is not based on sophisticated handcrafted heuristics; instead, it simply learns from an existing fuzzer. He also argued that the new method explores data more quickly than a traditional fuzzer, and that it could be applied to any fuzzer, including blackbox and random fuzzers. “Right now, our model only learns fuzzing locations, but we could also use it to learn other fuzzing parameters such as the type of mutation or strategy to apply,” Blum said.
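The greybox loop Blum describes, mutate an input, run the target, keep any mutant that reaches new code, can be sketched in a few lines of Python. This is an illustrative toy, not Microsoft's implementation: the `target` program and the mutation strategy are assumptions, and the comment in `mutate` marks the spot where a learned model would take over.

```python
import random

def target(data: bytes) -> set:
    """Toy program under test: returns the set of branches it covers."""
    cov = set()
    if len(data) > 0 and data[0] == ord('F'):
        cov.add("b1")
        if len(data) > 1 and data[1] == ord('U'):
            cov.add("b2")
            if len(data) > 2 and data[2] == ord('Z'):
                cov.add("b3")
    return cov

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip one byte at a uniformly random offset."""
    # A neural fuzzer would replace this uniform choice with
    # model-predicted offsets most likely to reach new code.
    if not data:
        return bytes([rng.randrange(256)])
    out = bytearray(data)
    out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def greybox_fuzz(seed: bytes, iterations: int = 20000) -> set:
    """Coverage-guided loop: keep inputs that exercise new branches."""
    rng = random.Random(0)
    corpus = [seed]
    seen = target(seed)
    for _ in range(iterations):
        child = mutate(rng.choice(corpus), rng)
        cov = target(child)
        if not cov <= seen:   # new coverage -> add to corpus
            seen |= cov
            corpus.append(child)
    return seen
```

Running `greybox_fuzz(b"AAA")` gradually discovers inputs beginning with `F`, then `FU`, then `FUZ`, because each partial match is retained and mutated further; the feedback loop, rather than blind random search, is what the neural model accelerates.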

Amazon to launch Ironman

Amazon Web Services set to launch AI project Ironman, ease the use of Google’s TensorFlow

Amazon Web Services could introduce a new service code-named ‘Ironman’ that will make it easier to do artificial intelligence work involving many different kinds of data, according to a report published in The Information. The Ironman program includes a new AWS cloud “data warehouse” service that collects data from multiple sources within a company and stores it in a central location. In addition, AWS plans to make it easier to use TensorFlow. Google made TensorFlow available under an open-source license in 2015, and the library is now widely used among researchers.
