Yesterday, Redis Labs, the provider of Redis Enterprise, announced that its customers can now scale their datasets to multi-petabyte size, cost-effectively and at sub-millisecond speeds, using Intel Optane DC persistent memory.

The announcement coincided with the two-day RedisConf 2019 (April 2-3) in San Francisco, where 1,500 Redis developers, innovators, and contributors shared their use cases and experiences.

Redis Enterprise, a linearly scalable, in-memory multi-model database, supports native and probabilistic data structures, streams, document, graph, time series, search, and AI. It has been designed and optimized to operate in either mode of Intel’s persistent memory technology: Memory Mode and App Direct Mode. This gives customers the flexibility to use the most effective mode for processing their massive datasets quickly and cost-effectively.

Intel Optane DC persistent memory is a memory technology that combines affordable large capacity with support for data persistence. Redis Labs collaborated closely with Intel throughout its development to deliver high performance for the Redis Enterprise database. In benchmark testing, the technology drastically improved performance while offering significant cost savings.

In benchmark testing of Intel Optane DC persistent memory, Redis Enterprise demonstrated that a single cluster node with a multi-terabyte dataset can support over one million operations per second at sub-millisecond latency while serving over 80% of requests from persistent memory. Redis Enterprise on Intel Optane DC persistent memory also delivered cost savings of more than 40 percent compared with traditional DRAM-only deployments.

Key features of Intel Optane DC persistent memory

  • It optimizes in-memory databases for advanced analytics in multi-cloud environments.
  • It reduces the wait time associated with fetching data sets from system storage.
  • It helps transform content delivery networks by bringing greater memory capacity to the intelligent edge, enabling delivery of immersive content and better user experiences.
  • It provides consistent QoS (Quality of Service) levels to serve more customers while keeping TCO (Total Cost of Ownership) in check at both the hardware and operating-cost levels, offering a cost-effective solution.
  • Intel Optane DC persistent memory adds a persistent memory tier between DRAM and SSD, providing up to 6TB of non-volatile memory capacity in a two-socket server alongside up to 1.5TB of DRAM.
  • This extends a standard machine’s memory capacity to 7.5TB of byte-addressable memory (DRAM + persistent memory) while also providing persistence. The technology ships in a DIMM form factor as 128GB, 256GB, and 512GB persistent memory modules.

Alvin Richards, chief product officer at Redis Labs, wrote to us in an email, “Enterprises are faced with increasingly massive datasets that require instantaneous processing across multiple data-models. With Intel Optane DC persistent memory combined with the rich data models supported by Redis Enterprise, global enterprises can now achieve sub-millisecond latency while processing millions of operations per second with affordable server infrastructure costs.” He further added, “Through our close collaboration with Intel, with Redis Enterprise on Intel Optane DC persistent memory our customers will not have to compromise on performance, scale, and budget for their multi-terabyte datasets.”

Redis Enterprise with Intel Optane DC persistent memory support is available on any cloud service or as downloadable software for on-premises hardware.

To know more about Intel Optane DC persistent memory, check out Intel’s page.

Announcements at RedisConf 19

Yesterday at RedisConf19, Redis Labs introduced two new data models and a data programmability paradigm for multi-model operation, making major announcements that included RedisTimeSeries, RedisAI, and RedisGears.

RedisTimeSeries

RedisTimeSeries is designed to collect and store high-volume, high-velocity data and organize it by time intervals. It helps organizations easily process useful data points with built-in capabilities for downsampling, aggregation, and compression, giving them the ability to query and extract data in real time for analytics.
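As a rough, standalone illustration of what downsampling by time interval means (this is plain Python, not the RedisTimeSeries API, and the function and names here are hypothetical):

```python
from collections import defaultdict

def downsample(samples, bucket_ms, agg):
    """Group (timestamp_ms, value) samples into fixed time buckets and
    aggregate each bucket, mirroring the idea behind downsampling rules."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Align each sample to the start of its time bucket.
        buckets[ts - (ts % bucket_ms)].append(value)
    aggs = {"avg": lambda v: sum(v) / len(v), "min": min, "max": max}
    return sorted((bucket, aggs[agg](vals)) for bucket, vals in buckets.items())

# One minute of per-second readings, downsampled to 10-second averages.
raw = [(1000 * s, float(s % 10)) for s in range(60)]
print(downsample(raw, 10_000, "avg"))  # six buckets, each averaging 4.5
```

Storing only the aggregated buckets is what lets a time-series store retain long histories compactly while still answering real-time range queries.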

RedisAI

RedisAI eliminates the need to migrate data to and from different environments, allowing developers to apply state-of-the-art AI models directly to the data. It reduces processing overhead by integrating with common deep learning frameworks, including TensorFlow, PyTorch, and TorchScript, and by utilizing Redis Cluster capabilities over GPU-based servers.

RedisGears

RedisGears, an in-database serverless engine, can operate across multiple data models simultaneously. Built on the efficient Redis Cluster distributed architecture, it offers virtually unlimited programmability and supports both event-driven and transaction-based operations.
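RedisGears scripts chain transformations over keys or events inside the database. The following standalone Python sketch imitates that dataflow style without a Redis server; the Pipeline class is purely illustrative, not the real RedisGears API:

```python
class Pipeline:
    """A tiny standalone imitation of an in-database dataflow engine:
    chain transformations, then run them over a batch of records."""
    def __init__(self):
        self.steps = []

    def filter(self, pred):
        self.steps.append(("filter", pred))
        return self

    def map(self, fn):
        self.steps.append(("map", fn))
        return self

    def run(self, records):
        # Apply each queued step in order to the whole batch.
        for kind, fn in self.steps:
            if kind == "map":
                records = [fn(r) for r in records]
            else:
                records = [r for r in records if fn(r)]
        return records

# Keep only user keys, then extract the length of each value.
events = [{"key": "user:1", "value": "alice"}, {"key": "tmp:9", "value": "x"}]
result = (Pipeline()
          .filter(lambda e: e["key"].startswith("user:"))
          .map(lambda e: len(e["value"]))
          .run(events))
print(result)  # [5]
```

Running such pipelines where the data already lives, rather than exporting it to an external worker, is the appeal of an in-database serverless engine.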

Today at RedisConf19, Redis Labs will show how to get the most out of Redis Enterprise on Intel’s persistent memory.

Read Next

Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)

Redis Labs announces its annual Growth of more than 60% in the Fiscal Year 2019

Redis Labs raises $60 Million in Series E Funding led by Francisco Partners