Users can get started with Mars tensor in just a few lines of code:

```python
import mars.tensor as mt

a = mt.random.rand(1000, 2000)
(a + 1).sum(axis=1).execute()
```
According to a Medium post by Synced, “Mars can simply tile a large tensor into small chunks and describe the inner computation with a directed graph, enabling the running of parallel computation on a wide range of distributed environments, from a single machine to a cluster comprising thousands of machines.”
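The tiling idea can be sketched in plain NumPy: split a large array into fixed-size chunks, compute a partial result per chunk (each of which could run on a separate worker in a distributed setting), then combine the partials. The `chunked_sum` helper and the chunk size below are illustrative only, not Mars internals.

```python
import numpy as np

def chunked_sum(a, chunk_rows):
    """Sum a 2-D array by reducing fixed-size row chunks and then
    combining the partial results. In a framework like Mars, each
    chunk's reduction is a node in a directed computation graph
    that a scheduler can execute in parallel."""
    partials = [a[i:i + chunk_rows].sum()
                for i in range(0, a.shape[0], chunk_rows)]
    return sum(partials)

a = np.random.rand(1000, 200)
# The chunked result matches the direct computation.
assert np.isclose(chunked_sum(a, 250), a.sum())
```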
Xuye Qin, Alibaba Cloud Senior Engineer, bragged about Mars’ performance by stating, “Mars can complete the computation on a 2.25T-size matrix and a 2.25T-size matrix multiplication in two hours.”
Unlike NumPy, Mars lets users run matrix computations at a very large scale. Alibaba developers carried out a simple experiment to test Mars' performance: in the graph below, NumPy (the red cross at the upper left) lags far behind Mars tensors, which come close to ideal scaling.
Mars supports a subset of NumPy interfaces, which include:
- Arithmetic and mathematics: +, -, *, /, exp, log, etc.
- Reduction along axes (sum, max, argmax, etc.).
- Most of the array creation routines (empty, ones_like, diag, etc.). Mars not only supports creating arrays/tensors on GPU, but also supports creating sparse tensors.
- Most of the array manipulation routines such as reshape, rollaxis, concatenate, etc.
- Basic indexing (indexing by ints, slices, newaxes, and Ellipsis).
- Fancy indexing along a single axis with lists or NumPy arrays, e.g. x[[1, 4, 8], :5].
- Universal functions for elementwise operations.
- Linear algebra functions including product (dot, matmul, etc.) and decomposition (cholesky, svd, etc.).
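Because Mars mirrors these NumPy interfaces, NumPy code maps over almost mechanically: import mars.tensor instead of numpy and call .execute() to trigger the computation. The snippet below exercises several of the listed operations in plain NumPy, so it runs without a Mars installation.

```python
import numpy as np

# Array creation and elementwise universal functions
x = np.ones((4, 5))
y = np.exp(x) + np.log(x + 1)

# Reduction along an axis
row_sums = y.sum(axis=1)    # one value per row, shape (4,)

# Fancy indexing along a single axis combined with a slice
sub = y[[0, 2, 3], :3]      # rows 0, 2, 3; first three columns

# Linear algebra: matrix product
z = y.dot(y.T)              # shape (4, 4)

assert row_sums.shape == (4,)
assert sub.shape == (3, 3)
assert z.shape == (4, 4)
```

With Mars, the same operations build a lazy computation graph; appending `.execute()` to a result evaluates it.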
To learn more about Mars in detail, visit its official GitHub page.