3 min read

The newest version of V8, v6.9, is now out in beta and is expected to ship with Chrome 69 Stable in a couple of weeks' time. With a new branch of V8 cut every six weeks, this release of the JavaScript engine brings several features that will have developers on the lookout for its full release.

Here are some of the features that improve on previous releases.

Embedded built-ins saving memory

V8 ships with a host of built-in functions: methods on built-in objects such as Array.prototype.sort and RegExp.prototype.exec, as well as a wide range of internal functionality.
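To make concrete what counts as a built-in here, a quick illustration using the two methods mentioned above (plain JavaScript, nothing V8-specific):

    // Array.prototype.sort and RegExp.prototype.exec are two of the built-ins
    // that V8 compiles ahead of time and ships with the engine.
    const numbers = [3, 1, 2].sort((a, b) => a - b);
    console.log(numbers); // [1, 2, 3]

    const match = /v(\d+)\.(\d+)/.exec('V8 v6.9');
    console.log(match[1], match[2]); // '6' '9'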

Built-in functions carry a significant memory overhead: they are compiled at build time, serialized into a snapshot, and deserialized at runtime to create the initial JavaScript heap state. Altogether they consume around 700 KB in each Isolate (an Isolate roughly corresponds to a browser tab in Chrome). To combat this, V8 v6.4 introduced lazy deserialization, which meant each Isolate only paid for the built-ins it actually needs (though each Isolate still kept its own copy).

Embedded built-ins take this a step further. They are shared by all Isolates and embedded into the binary itself rather than copied onto the JavaScript heap, so the built-ins exist in memory only once regardless of how many Isolates are running. This has led to a 9% reduction in V8 heap size across the top 10k websites on x64. Of these sites, 50% save at least 1.2 MB, 30% save at least 2.1 MB, and 10% save 3.7 MB or more.

Improved performance

Liftoff, WebAssembly's new baseline compiler, helps complex websites that use big WebAssembly modules start up faster. Depending on the hardware, startup is more than 10x faster with the latest V8.
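For context, this is the kind of call where Liftoff's faster baseline compilation shows up; the module name and export below are hypothetical placeholders:

    // A minimal sketch, assuming a hypothetical 'big-module.wasm' with a
    // 'main' export: Liftoff's baseline compilation shortens the time this
    // call spends compiling before the instance is ready.
    WebAssembly.instantiateStreaming(fetch('big-module.wasm'), { /* imports */ })
      .then(({ instance }) => {
        console.log(instance.exports.main());
      });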

Faster DataView operations

DataView methods have been revamped in v6.9. Where previous versions made a costly call into C++ for every method, the new implementation avoids that overhead. Moreover, calls to DataView methods are now inlined when TurboFan compiles JavaScript code, resulting in better peak performance for hot code.
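For illustration, a minimal sketch of the kind of DataView reads and writes this optimization targets:

    // Reading and writing through a DataView; these are the method calls
    // that v6.9 can inline in TurboFan instead of routing through C++.
    const buffer = new ArrayBuffer(8);
    const view = new DataView(buffer);

    view.setUint32(0, 0xdeadbeef, /* littleEndian */ true);
    view.setFloat32(4, 1.5, true);

    console.log(view.getUint32(0, true).toString(16)); // 'deadbeef'
    console.log(view.getFloat32(4, true));             // 1.5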

Faster processing of WeakMaps during garbage collection

V8 v6.9 also improves WeakMap processing, reducing Mark-Compact garbage collection pause times.

Concurrent and incremental marking can now process WeakMaps; previously all of this work was done in the final atomic pause of the Mark-Compact GC. Since not all of the work can be moved outside the pause, the GC now also does more work in parallel to further reduce pause times. These optimizations essentially halved the average pause time for Mark-Compact GCs in the Web Tooling Benchmark.

WeakMap processing uses a fixed-point iteration algorithm that can degrade to quadratic runtime behavior in certain cases. If the GC does not finish within a certain number of iterations, the new release can now switch to another algorithm that is guaranteed to finish in linear time. In the worst case, processing previously took a few seconds even with a relatively small heap, whereas the linear algorithm finishes within a few milliseconds.
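For context, a minimal sketch of the WeakMap usage pattern the garbage collector has to process:

    // WeakMap keys are held weakly: once 'obj' becomes unreachable, its
    // entry is one of the ones the Mark-Compact GC has to discover and clear.
    const metadata = new WeakMap();

    let obj = { name: 'example' };
    metadata.set(obj, { created: Date.now() });

    console.log(metadata.get(obj)); // { created: ... }

    // Dropping the last reference makes the entry eligible for collection
    // during a later GC cycle.
    obj = null;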

Head over to the GitHub page for more information on V8. You can also visit the official blog of the V8 JavaScript engine for more details on the release.

Read Next

5 JavaScript machine learning libraries you need to know

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

JavaScript async programming using Promises [Tutorial]
