
Baidu's Duer OS Prometheus Project, AI.io's PLATO platform, HPE's neural network chip, and Atos' BullSequana S AI servers, in today's trending data science news.

Baidu’s new OS for AI capabilities

Baidu launches new operating system Duer OS Prometheus Project to advance conversational AI

Baidu Inc. has officially launched a new operating system to accelerate its conversational AI capabilities. Known as the "Duer OS Prometheus Project," the operating system already provides conversational support across 10 major domains and more than 100 subdomains in China. Baidu has also announced a $1 million fund to invest in efforts in this space.

Announcing AI platform PLATO

AI.io launches PLATO, an AI-based operating platform for enterprise

AI.io has announced PLATO, an AI-based neural network platform whose name expands to Perceptive Learning Artificial intelligence Technology Operating platform. The platform powers apps in consumer and enterprise use cases for machine learning, deep learning, natural language processing, computer vision, machine reasoning, and cognitive AI. Based on the specific needs of a business, PLATO can be used to extract value from large amounts of structured and unstructured data through an intuitive, easy-to-use interface and toolkit. The data can then be converted, normalized, and enriched with concepts, relationships, sentiment, and tone, giving PLATO the ability to understand content in a fully cognitive manner. Businesses can then embed the enriched data and the new insights into existing applications and workflows, along the lines of the sketch below. PLATO can thus enhance overall decision-making, resulting in revenue growth.
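To make the convert/normalize/enrich flow described above concrete, here is a toy Python sketch. PLATO's actual interface is not public, so every name here (the Record layout, normalize, enrich, and the crude lexicon-based sentiment scoring) is invented purely for illustration, not drawn from AI.io's product.

```python
from dataclasses import dataclass, field

# Tiny illustrative sentiment lexicons; a real platform would use trained models.
POSITIVE = {"growth", "win", "record"}
NEGATIVE = {"loss", "delay", "risk"}

@dataclass
class Record:
    text: str
    concepts: list = field(default_factory=list)
    sentiment: int = 0

def normalize(raw: str) -> Record:
    # Convert raw input into a normalized record: lowercase, collapse whitespace.
    return Record(text=" ".join(raw.lower().split()))

def enrich(rec: Record) -> Record:
    # Tag recognized concepts and attach a crude lexicon-based sentiment score.
    words = set(rec.text.split())
    rec.concepts = sorted(words & (POSITIVE | NEGATIVE))
    rec.sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    return rec

# Downstream applications would consume the enriched record directly.
print(enrich(normalize("  Record GROWTH despite supply delay ")))
```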

HPE’s upcoming processor could well be an accelerator

HPE developing its own neural network chip that is faster than anything on the market

Hewlett Packard Enterprise could be developing an advanced chip for high-performance computing under the intense power and physical space limitations characteristic of space missions. Recently, when VP and GM Tom Bradicich was asked about the processor architecture, he said the Dot Product Engine is less a full processor and more an accelerator, which offloads certain multiplication operations common in neural network inference and broader HPC applications. "DPE is not a neural network per se, in the sense that it's not a fixed configuration, but rather is reconfigurable, and can be used for inference of several types of neural networks like DNN, CNN, RNN. Hence it can do neural network jobs and workloads," he said, adding that DPE executes linear algebra in the analog domain, which is more efficient than digital implementations such as dedicated ASICs. With the added advantage of reconfigurability, DPE is fast because it accelerates vector-matrix math, i.e. dot product multiplication, by exploiting Ohm's Law on a memristor array. In fact, the way it is designed, it is "faster than anything available on the market," Bradicich claimed, with a "much better fit for the performance, power, and space requirements of extreme edge environments."
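Bradicich's Ohm's Law remark is easy to see in miniature: if each memristor's conductance stores a matrix weight and the input vector is applied as voltages on the row lines, the current summed on each column line is exactly the corresponding dot product. Below is a minimal NumPy sketch of that idea; the array names, sizes, and value ranges are invented for illustration, and it models only the arithmetic, not HPE's actual DPE hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductances G (siemens) play the role of matrix weights stored in a
# 4-row x 3-column memristor crossbar; V (volts) is the input vector
# applied on the 4 row lines.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))
V = rng.uniform(0.0, 1.0, size=4)

# Ohm's law per cell: I = G * V. Kirchhoff's current law then sums the
# cell currents flowing into each column line.
I_cells = V[:, None] * G
I_columns = I_cells.sum(axis=0)

# The analog column currents equal the ordinary digital vector-matrix product.
assert np.allclose(I_columns, V @ G)
print("column currents (A):", I_columns)
```

The point of the analogy is that the multiply-and-accumulate happens "for free" in the physics of the array, which is why an analog engine of this kind can be attractive for power- and space-constrained inference workloads.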

Announcing BullSequana S servers

Atos launches next-generation AI servers "BullSequana S" that are ultra-scalable and ultra-flexible

Atos has developed BullSequana S, its in-house next-generation server line optimized for machine learning, business-critical computing applications, and in-memory environments. BullSequana S comes with a unique combination of powerful CPUs and GPUs. Leveraging a modular architecture, the BullSequana S server's flexibility gives customers the agility to add machine learning and AI capacity to existing enterprise workloads, thanks to the introduction of a GPU. GPU, storage, and compute modules can be mixed within a single server. BullSequana S integrates advanced Intel Xeon Scalable (Skylake) processors with an innovative architecture designed by Atos' R&D teams.
