
Last week, researchers from UC Berkeley and UW Madison published a research paper presenting a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of pywren, a stateless computation framework that uses AWS Lambda to execute Python functions remotely and in parallel.
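To give a feel for that model, here is a minimal sketch of the pywren-style programming model: a plain Python function is mapped over a list of inputs, with each call running as its own Lambda invocation. It assumes pywren is installed and AWS credentials are configured; exact API details may differ across versions.

```python
# Minimal sketch of the pywren-style programming model. Each call to
# `square` runs as a separate, stateless AWS Lambda invocation.
# Assumes pywren is installed and configured; API details may vary.
import pywren

def square(x):
    return x * x

pwex = pywren.default_executor()          # executor backed by AWS Lambda
futures = pwex.map(square, range(10))     # one stateless invocation per input
results = [f.result() for f in futures]   # gather results back locally
print(results)
```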

What is numpywren?

numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. It runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works with the abstractions of “cores” and “memory”. It currently uses Amazon EC2 and Lambda for computation, and Amazon S3 as a distributed memory abstraction.
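To make the “S3 as memory” idea concrete, the sketch below shows one way stateless workers could share matrix blocks through S3, with each block stored under a key. This is illustrative only, not numpywren’s actual implementation; the bucket name and key scheme are hypothetical.

```python
# Illustrative sketch (not numpywren's actual code) of using S3 as shared
# "memory" for matrix blocks: each block is serialized under a key, so any
# stateless worker can read or write it. Bucket and key scheme are hypothetical.
import io
import boto3
import numpy as np

s3 = boto3.client("s3")
BUCKET = "my-matrix-blocks"  # hypothetical bucket name

def put_block(name, i, j, block):
    """Write block (i, j) of matrix `name` to S3."""
    buf = io.BytesIO()
    np.save(buf, block)
    s3.put_object(Bucket=BUCKET, Key=f"{name}/{i}_{j}.npy", Body=buf.getvalue())

def get_block(name, i, j):
    """Read block (i, j) of matrix `name` from S3."""
    obj = s3.get_object(Bucket=BUCKET, Key=f"{name}/{i}_{j}.npy")
    return np.load(io.BytesIO(obj["Body"].read()))
```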

numpywren can scale to run Cholesky decomposition (a linear algebra algorithm) on a 1Mx1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours.

The researchers have also introduced LAmbdaPACK, a domain-specific language designed for implementing highly parallel linear algebra algorithms in a serverless setting.
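For a sense of the parallelism such a language is meant to express, below is a standard blocked Cholesky factorization written in plain numpy (illustrative only, not LAmbdaPACK syntax): at each step k, every triangular solve and every trailing-matrix update is independent of the others, so each could run as a separate stateless task.

```python
# Illustrative blocked Cholesky in plain numpy (not LAmbdaPACK syntax).
# At each step k, the solves in the second loop and the updates in the
# third loop are independent of one another, so each could run as its
# own stateless task.
import numpy as np

def blocked_cholesky(A, block=2):
    n = A.shape[0]
    nb = n // block                      # assumes n is divisible by block
    L = A.copy()
    B = lambda i, j: L[i*block:(i+1)*block, j*block:(j+1)*block]  # block view
    for k in range(nb):
        B(k, k)[:] = np.linalg.cholesky(B(k, k))        # factor diagonal block
        for i in range(k + 1, nb):                      # independent solves
            B(i, k)[:] = np.linalg.solve(B(k, k), B(i, k).T).T
        for i in range(k + 1, nb):                      # independent updates
            for j in range(k + 1, i + 1):
                B(i, j)[:] -= B(i, k) @ B(j, k).T
    return np.tril(L)

A = np.random.rand(8, 8)
A = A @ A.T + 8 * np.eye(8)              # symmetric positive definite test matrix
L = blocked_cholesky(A, block=2)
print(np.allclose(L @ L.T, A))           # True
```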

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity provided by serverless computing also allows numpywren to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What’s next for numpywren?

One of the main drawbacks of the serverless model is the high communication overhead it incurs, due to the lack of locality and of efficient broadcast primitives.

The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data. They also want to develop services that provide efficient collective communication primitives like broadcast to help address this problem.

The researchers want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems. They are also working on automatically translating numpy code directly into LAmbdaPACK instructions that can be executed in parallel.

The researchers point out that, as data centers continue their push towards disaggregation, platforms like numpywren open up a fruitful area of research.

For further details, read the research paper.

Read Next

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework

Azure Functions 2.0 launches with better workload support for serverless

How Serverless computing is making AI development easier
