Monday, February 12, 2018

Lazy deserialization

TL;DR: Lazy deserialization was recently enabled by default in V8 version 6.4, reducing V8’s memory consumption by over 500 KB per browser tab on average. Read on to find out more!

Introducing V8 snapshots

But first, let’s take a step back and have a look at how V8 uses heap snapshots to speed up creation of new Isolates (which roughly correspond to browser tabs in Chrome). My colleague Yang Guo gave a good introduction on that front in his article on custom startup snapshots:

The JavaScript specification includes a lot of built-in functionality, from math functions to a full-featured regular expression engine. Every newly-created V8 context has these functions available from the start. For this to work, the global object (for example, the window object in a browser) and all the built-in functionality must be set up and initialized into V8’s heap at the time the context is created. It takes quite some time to do this from scratch.

Fortunately, V8 uses a shortcut to speed things up: just like thawing a frozen pizza for a quick dinner, we deserialize a previously-prepared snapshot directly into the heap to get an initialized context. On a regular desktop computer, this can bring the time to create a context from 40 ms down to less than 2 ms. On an average mobile phone, this could mean a difference between 270 ms and 10 ms.

To recap: snapshots are critical for startup performance, and they are deserialized to create the initial state of V8’s heap for each Isolate. The size of the snapshot thus determines the minimum size of the V8 heap, and larger snapshots translate directly into higher memory consumption for each Isolate.

A snapshot contains everything needed to fully initialize a new Isolate, including language constants (e.g., the undefined value), internal bytecode handlers used by the interpreter, built-in objects (e.g., String), and the functions installed on built-in objects (e.g., String.prototype.replace) together with their executable Code objects.

Startup snapshot size in bytes from 2016-01 to 2017-09. The x-axis shows V8 revision numbers.

Over the past two years, the snapshot has nearly tripled in size, going from roughly 600 KB in early 2016 to over 1500 KB today. The vast majority of this increase comes from serialized Code objects, which have increased both in count (e.g., through recent additions to the JavaScript language as the specification evolves and grows) and in size (built-ins generated by the new CodeStubAssembler pipeline ship as native code, versus the more compact bytecode or minimized JS formats).

This is bad news, since we’d like to keep memory consumption as low as possible.

Lazy deserialization

One of the major pain points was that we used to copy the entire content of the snapshot into each Isolate. Doing so was especially wasteful for built-in functions, which were all loaded unconditionally but may never have ended up being used.

This is where lazy deserialization comes in. The concept is quite simple: what if we were to only deserialize built-in functions just before they were called?

A quick investigation of some of the most popular websites showed this approach to be quite attractive: on average, only 30% of all built-in functions were used, with some sites only using 16%. This looked remarkably promising, given that most of these sites are heavy JS users and these numbers can thus be seen as a (fuzzy) lower bound of potential memory savings for the web in general.

As we began working in this direction, it turned out that lazy deserialization integrated very well with V8’s architecture, and only a few mostly non-invasive design changes were necessary to get it up and running:

  1. Well-known positions within the snapshot. Prior to lazy deserialization, the order of objects within the serialized snapshot was irrelevant since we’d only ever deserialize the entire heap at once. Lazy deserialization must be able to deserialize any given built-in function on its own, and therefore has to know where it is located within the snapshot.
  2. Deserialization of single objects. V8’s snapshots were initially designed for full heap deserialization, and bolting on support for single-object deserialization required dealing with a few quirks such as non-contiguous snapshot layout (serialized data for one object could be interspersed with data for other objects) and so-called backreferences (which can directly reference objects previously deserialized within the current run).
  3. The lazy deserialization mechanism itself. At runtime, the lazy deserialization handler must be able to a) determine which code object to deserialize, b) perform the actual deserialization, and c) attach the deserialized code object to all relevant functions.

Our solution to the first two points was to add a new dedicated built-ins area to the snapshot, which may only contain serialized code objects. Serialization occurs in a well-defined order and the starting offset of each Code object is kept in a dedicated section within the built-ins snapshot area. Both back-references and interspersed object data are disallowed.

Lazy built-in deserialization is handled by the aptly named DeserializeLazy built-in, which is installed on all lazy built-in functions at deserialization time. When called at runtime, it deserializes the relevant Code object and finally installs it on both the JSFunction (representing the function object) and the SharedFunctionInfo (shared between functions created from the same function literal). Each built-in function is deserialized at most once.
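
To illustrate the idea, here is a small conceptual sketch in plain JavaScript (not V8’s actual implementation): a stub that materializes the real implementation on its first call and then gets out of the way, much like DeserializeLazy replaces itself with the deserialized Code object. The deserialize callback is a hypothetical stand-in.

// Conceptual sketch only: a stub that performs the expensive work on first
// call and caches the result, analogous to how DeserializeLazy swaps itself
// out for the real Code object. `deserialize` is a hypothetical stand-in.
function makeLazy(deserialize) {
  let code = null; // materialized at most once
  return function lazyStub(...args) {
    if (code === null) {
      code = deserialize(); // stands in for deserializing the Code object
    }
    return code.apply(this, args);
  };
}

const lazyGreet = makeLazy(() => (name) => `Hello, ${name}!`);
console.log(lazyGreet('V8')); // the "deserialization" happens only here, on first use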

In addition to built-in functions, we have also implemented lazy deserialization for bytecode handlers. Bytecode handlers are code objects that contain the logic to execute each bytecode within V8’s Ignition interpreter. Unlike built-ins, they neither have an attached JSFunction nor a SharedFunctionInfo. Instead, their code objects are stored directly in the dispatch table into which the interpreter indexes when dispatching to the next bytecode handler. Lazy deserialization works similarly to built-ins: the DeserializeLazy handler determines which handler to deserialize by inspecting the bytecode array, deserializes the code object, and finally stores the deserialized handler in the dispatch table. Again, each handler is deserialized at most once.
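
A similar conceptual sketch (again plain JavaScript, not V8 internals) for the dispatch-table case: each slot initially holds a lazy stub that materializes the real handler once, writes it back into the table, and dispatches to it. The handlerSources argument is a hypothetical stand-in for serialized handler data.

// Conceptual sketch only: table slots start out as lazy stubs; the first
// dispatch through a slot replaces the stub with the real handler.
function makeDispatchTable(handlerSources) {
  const table = new Array(handlerSources.length);
  for (let i = 0; i < handlerSources.length; i++) {
    table[i] = (...args) => {
      const handler = handlerSources[i](); // stands in for deserialization
      table[i] = handler;                  // later dispatches skip the stub
      return handler(...args);
    };
  }
  return table;
}

const table = makeDispatchTable([
  () => (a, b) => a + b, // e.g. a hypothetical Add handler
  () => (a, b) => a * b, // e.g. a hypothetical Mul handler
]);
console.log(table[0](2, 3)); // 5; the Add handler is materialized on first use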

Results

We evaluated memory savings by loading the top 1000 most popular websites using Chrome 65 on an Android device, with and without lazy deserialization.

On average, V8’s heap size decreased by 540 KB, with 25% of the tested sites saving more than 620 KB, 50% saving more than 540 KB, and 75% saving more than 420 KB.

Runtime performance (measured on standard JS benchmarks such as Speedometer, as well as a wide selection of popular websites) has remained unaffected by lazy deserialization.

Next steps

Lazy deserialization ensures that each Isolate only loads the built-in code objects that are actually used. That is already a big win, but we believe it is possible to go one step further and reduce the (built-in-related) cost of each Isolate to effectively zero.

We hope to bring you updates on this front later this year. Stay tuned!

Posted by Jakob Gruber (@schuay)

Thursday, February 1, 2018

V8 release v6.5

Every six weeks, we create a new branch of V8 as part of our release process. Each version is branched from V8’s Git master immediately before a Chrome Beta milestone. Today we’re pleased to announce our newest branch, V8 version 6.5, which is in beta until its release in coordination with Chrome 65 Stable in several weeks. V8 v6.5 is filled with all sorts of developer-facing goodies. This post provides a preview of some of the highlights in anticipation of the release.

Untrusted code mode

In response to the latest speculative side-channel attack called Spectre, V8 introduced an untrusted code mode. If you embed V8, consider leveraging this mode in case your application processes user-generated, untrusted code. Please note that the mode is enabled by default, including in Chrome.

Streaming compilation for WebAssembly code

The WebAssembly API provides a special function to support streaming compilation in combination with the fetch() API:

const module = await WebAssembly.compileStreaming(fetch('foo.wasm'));

This API has been available since V8 v6.1 and Chrome 61, although the initial implementation didn’t actually use streaming compilation. However, with V8 v6.5 and Chrome 65 we take advantage of this API and compile WebAssembly modules while the module bytes are still downloading. As soon as all bytes of a single function have been downloaded, we pass the function to a background thread to compile it.
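
For comparison, here is a sketch of the non-streaming approach next to the streaming one; foo.wasm is the placeholder module name from the snippet above, and the empty import object passed to instantiateStreaming is purely illustrative.

// Non-streaming: the entire module must be downloaded before compilation starts.
const bytes = await (await fetch('foo.wasm')).arrayBuffer();
const moduleNonStreaming = await WebAssembly.compile(bytes);

// Streaming: with V8 v6.5 / Chrome 65, compilation overlaps with the download.
const moduleStreaming = await WebAssembly.compileStreaming(fetch('foo.wasm'));

// WebAssembly.instantiateStreaming works the same way and also instantiates the module.
const { instance } = await WebAssembly.instantiateStreaming(fetch('foo.wasm'), {});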

Our measurements show that with this API, the WebAssembly compilation in Chrome 65 can keep up with download speeds of up to 50 Mbit/sec on high-end machines. This means that if you download WebAssembly code at 50 Mbit/sec, compilation of that code finishes as soon as the download does.

For the graph below, we measured the time it takes to download and compile a WebAssembly module of 67 MB with about 190,000 functions, at download speeds of 25 Mbit/sec, 50 Mbit/sec, and 100 Mbit/sec.

When the download takes longer than compiling the WebAssembly module, e.g. at 25 Mbit/sec and 50 Mbit/sec in the graph above, WebAssembly.compileStreaming() finishes compilation almost immediately after the last bytes are downloaded.

When the download takes less time than compiling, WebAssembly.compileStreaming() takes about as long as compiling the WebAssembly module without downloading it first.

Speed

We continued to work on widening the fast path of JavaScript builtins in general, adding a mechanism to detect and prevent a ruinous situation called a “deoptimization loop.” This occurs when your optimized code deoptimizes and there is no way to learn what went wrong. In such scenarios, TurboFan just keeps trying to optimize, finally giving up after about 30 attempts. This could happen if you did something to alter the shape of the array in the callback function of one of our second-order array builtins, for example by changing the length of the array. In v6.5, we note when that happens and stop inlining the array builtin called at that site on future optimization attempts.
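
As an illustration of the pattern described above, the following hypothetical snippet changes the array’s length from inside the callback of a second-order array builtin:

// Hypothetical example: the callback shrinks the array while
// Array.prototype.forEach is iterating over it, changing the array's shape.
function truncateAtZero(items) {
  items.forEach((item, index) => {
    if (item === 0) {
      items.length = index; // shape change inside the inlined builtin's callback
    }
  });
}

truncateAtZero([1, 2, 0, 3, 4]);
// Previously, such a shape change inside an inlined array builtin could trigger
// repeated deoptimization and re-optimization; as of v6.5, V8 notes the failure
// and stops inlining the builtin at that call site on future optimization attempts.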

We also widened the fast-path by inlining many builtins that were formerly excluded because of a side-effect between the load of the function to call and the call itself, for example a function call. And String.prototype.indexOf got a 10× performance improvement in function calls.

In V8 v6.4, we’d inlined support for Array.prototype.forEach, Array.prototype.map, and Array.prototype.filter. In V8 v6.5 we’ve added inlining support for:

  • Array.prototype.reduce
  • Array.prototype.reduceRight
  • Array.prototype.find
  • Array.prototype.findIndex
  • Array.prototype.some
  • Array.prototype.every

Furthermore, we’ve widened the fast path on all these builtins. At first we would bail out on seeing arrays with floating-point numbers, or (even more so) if the arrays had “holes” in them, e.g. [3, 4.5, , 6]. Now, we handle holey floating-point arrays everywhere except in find and findIndex, where the spec requirement to convert holes into undefined throws a monkey-wrench into our efforts (for now…!).
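
For illustration, here are the array kinds mentioned above (the variable names are descriptive only, not V8 terminology):

const packedInts    = [3, 4, 5, 6];   // integer elements only
const packedDoubles = [3, 4.5, 5, 6]; // floating-point elements
const holeyDoubles  = [3, 4.5, , 6];  // floating-point elements with a hole

// These now stay on the fast path for the inlined builtins listed above,
// except find/findIndex on holey arrays, where the spec requires holes to be
// treated as undefined:
console.log(holeyDoubles.some((x) => x > 4));                // true
console.log(holeyDoubles.findIndex((x) => x === undefined)); // 2 (the hole)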

The following image shows the improvement delta compared to V8 v6.4 in our inlined builtins, broken down into integer arrays, double arrays, and double arrays with holes. Time is in milliseconds.

V8 API

Please use git log branch-heads/6.4..branch-heads/6.5 include/v8.h to get a list of the API changes.

Developers with an active V8 checkout can use git checkout -b 6.5 -t branch-heads/6.5 to experiment with the new features in V8 v6.5. Alternatively you can subscribe to Chrome’s Beta channel and try the new features out yourself soon.

Posted by the V8 team

Monday, January 29, 2018

Optimizing hash tables: hiding the hash code

ECMAScript 2015 introduced several new data structures such as Map, Set, WeakSet, and WeakMap, all of which use hash tables under the hood. This post details the recent improvements in how V8 v6.3+ stores the keys in hash tables.

Hash code

A hash function is used to map a given key to a location in the hash table. A hash code is the result of running this hash function over a given key.

In V8, the hash code is just a random number, independent of the object value. Therefore, we can’t recompute it, meaning we must store it.

Previously, for JavaScript objects used as keys, the hash code was stored as a private symbol on the object. A private symbol in V8 is similar to a Symbol, except that it’s not enumerable and doesn’t leak to userspace JavaScript.

// Simplified sketch of the previous approach; IS_UNDEFINED, MathRandom, and
// hashCodeSymbol are V8-internal helpers, not user-visible JavaScript.
function GetObjectHash(key) {
  let hash = key[hashCodeSymbol];
  if (IS_UNDEFINED(hash)) {
    // Pick a random non-zero 30-bit hash and cache it on the key.
    hash = (MathRandom() * 0x40000000) | 0;
    if (hash === 0) hash = 1;
    key[hashCodeSymbol] = hash;
  }
  return hash;
}

This worked well because we didn’t have to reserve memory for a hash code field until the object was added to a hash table, at which point a new private symbol was stored on the object.

V8 could also optimize the hash code symbol lookup just like any other property lookup using the IC system, providing very fast lookups for the hash code. This works well for monomorphic IC lookups, when the keys have the same hidden class. However, most real-world code doesn’t follow this pattern, and often keys have different hidden classes, leading to slow megamorphic IC lookups of the hash code.
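
As an illustration, keys with different shapes have different hidden classes, so the hash code lookup site sees many classes (the objects below are made up for the example):

// Illustrative only: each key object has a different hidden class (shape),
// so the internal hash-code lookup site sees many hidden classes and the IC
// becomes megamorphic.
const map = new Map();
map.set({ a: 1 }, 'first');        // hidden class with property "a"
map.set({ b: 2, c: 3 }, 'second'); // hidden class with properties "b" and "c"
map.set({ d: 4 }, 'third');        // yet another hidden class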

Another problem with the private symbol approach was that storing the hash code triggered a hidden class transition on the key. This resulted in poor polymorphic code not just for the hash code lookup but also for other property lookups on the key, as well as deoptimization from optimized code.

JavaScript object backing stores

A JavaScript object (JSObject) in V8 uses two words (apart from its header): one word for storing a pointer to the elements backing store, and another word for storing a pointer to the properties backing store.

The elements backing store is used for storing properties that look like array indices, whereas the properties backing store is used for storing properties whose keys are strings or symbols. See this V8 blog post by Camillo Bruni for more information about these backing stores.

const x = {};
x[1] = 'bar';      // ← stored in elements
x['foo'] = 'bar';  // ← stored in properties

Hiding the hash code

The easiest solution to storing the hash code would be to extend the size of a JavaScript object by one word and store the hash code directly on the object. However, this would waste memory for objects that aren’t added to a hash table. Instead, we could try to store the hash code in the elements store or properties store.

The elements backing store is an array containing its length and all the elements. There’s not much to be done here, as storing the hash code in a reserved slot (like the 0th index) would still waste memory when the object isn’t used as a key in a hash table.

Let’s look at the properties backing store. There are two kinds of data structures used as a properties backing store: arrays and dictionaries.

Unlike the array used in the elements backing store, which has no upper limit, the array used in the properties backing store has an upper limit of 1022 values. V8 transitions to using a dictionary upon exceeding this limit, for performance reasons. (I’m slightly simplifying; V8 can also use a dictionary in other cases, but there is a fixed upper limit on the number of values that can be stored in the array.)

So, there are three possible states for the properties backing store:

  1. empty (no properties)
  2. array (can store up to 1022 values)
  3. dictionary
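
Here is an illustrative sketch of objects in each of the three states (the 1022 limit is the one described above; the exact transition heuristics are more involved):

const emptyProps = {};                 // 1. empty: no properties backing store needed
const arrayProps = { foo: 1, bar: 2 }; // 2. array-backed: up to 1022 values

const dictProps = {};
for (let i = 0; i < 2000; i++) {
  dictProps['prop' + i] = i;           // 3. exceeding the limit switches to a dictionary
}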

The properties backing store is empty

For the empty case, we can store the hash code directly in the properties offset on the JSObject.

The properties backing store is an array

V8 represents integers less than 2³¹ (on 32-bit systems) unboxed, as Smis. In a Smi, the least significant bit is a tag used to distinguish it from pointers, while the remaining 31 bits hold the actual integer value.

Normally, arrays store their length as a Smi. Since we know that the maximum capacity of this array is only 1022, we only need 10 bits to store the length. We can use the remaining 21 bits to store the hash code!
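
As a purely illustrative sketch of the arithmetic (not V8’s actual field layout or code), packing a 10-bit length and a 21-bit hash into one 31-bit payload could look like this:

const LENGTH_BITS = 10; // enough for lengths up to 1023 >= 1022
const HASH_BITS = 21;   // the remaining bits of a 31-bit Smi payload

function pack(length, hash) {
  return (hash << LENGTH_BITS) | length; // length in the low bits, hash above it
}
function unpackLength(packed) {
  return packed & ((1 << LENGTH_BITS) - 1);
}
function unpackHash(packed) {
  return (packed >>> LENGTH_BITS) & ((1 << HASH_BITS) - 1);
}

const packed = pack(1022, 0x1fca55);
console.log(unpackLength(packed)); // 1022
console.log(unpackHash(packed));   // 2083413 (0x1fca55)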

The properties backing store is a dictionary

For the dictionary case, we increase the dictionary size by one word to store the hash code in a dedicated slot at the beginning of the dictionary. We get away with potentially wasting a word of memory in this case, because the proportional increase in size isn’t as big as in the array case.

With these changes, the hash code lookup no longer has to go through the complex JavaScript property lookup machinery.

Performance improvements

The SixSpeed benchmark tracks the performance of Map and Set, and these changes resulted in a ~500% improvement.

This change caused a 5% improvement on the Basic benchmark in ARES6 as well.

This also resulted in an 18% improvement in one of the benchmarks in the Emberperf benchmark suite that tests Ember.js.

Posted by Sathya Gunasekaran, keeper of hash codes

Wednesday, January 24, 2018

Chrome welcomes Speedometer 2.0!

Ever since the initial release of Speedometer 1.0 in 2014, the Blink and V8 teams have been using the benchmark as a proxy for real-world use of popular JavaScript frameworks, and we have achieved considerable speedups on it. We verified independently that these improvements translate to real user benefits by measuring against real-world websites, and observed that improvements in page load times of popular websites also improved the Speedometer score.

JavaScript has rapidly evolved in the meantime, adding many new language features with ES2015 and later standards. The same is true for the frameworks themselves, and as such Speedometer 1.0 has become outdated over time. Hence using Speedometer 1.0 as an optimization indicator raises the risk of not measuring newer code patterns that are actively used.

The Blink and V8 teams welcome the recent release of the updated Speedometer 2.0 benchmark. Applying the original concept to a list of contemporary frameworks, transpilers and ES2015 features makes the benchmark a prime candidate for optimizations again. Speedometer 2.0 is a great addition to our real-world performance benchmarking tool belt.

Chrome's mileage so far

The Blink and V8 teams have already completed a first round of improvements, underlining the importance of this benchmark to us and continuing our journey of focusing on real-world performance. Comparing Chrome 60 from July 2017 with the latest Chrome 64, we have achieved about a 21% improvement on the total score (runs per minute) on a mid-2016 MacBook Pro (4 cores, 16 GB RAM).


Let’s zoom into the individual Speedometer 2.0 line items. We doubled the performance of the React runtime by improving Function.prototype.bind. Vanilla-ES2015, AngularJS, Preact, and VueJS improved by 15%-29% due to speeding up the JSON parsing and various other performance fixes. The jQuery-TodoMVC app's runtime was reduced by improvements to Blink's DOM implementation, including more lightweight form controls and tweaks to our HTML parser. Additional tweaking of V8's inline caches in combination with the optimizing compiler yielded improvements across the board.


A significant change over Speedometer 1.0 is the calculation of the final score. Previously, the final score was an average of all scores, which favoured working only on the slowest line items. When looking at the absolute times spent in each line item, we see for instance that the EmberJS-Debug version takes roughly 35 times as long as the fastest benchmark. Hence, to improve the overall score, focusing on EmberJS-Debug has the highest potential.


Speedometer 2.0 uses the geometric mean for the final score, favouring equal investments into each framework. Consider our recent 16.5% improvement of Preact mentioned above: it would be rather unfair to forgo that improvement just because of its minor contribution to the total time.
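
A quick illustration with made-up runtimes (not actual Speedometer data): improving one fast line item by 16.5% barely moves an arithmetic mean dominated by the slowest item, but shows up clearly in the geometric mean.

// Hypothetical per-item runtimes in milliseconds, for illustration only.
const before = [100, 100, 100, 3500];  // three fast line items and one slow one
const after  = [83.5, 100, 100, 3500]; // a 16.5% improvement on the first item

const arithmeticMean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
const geometricMean = (xs) =>
  Math.exp(xs.reduce((a, b) => a + Math.log(b), 0) / xs.length);

console.log(arithmeticMean(before), arithmeticMean(after)); // 950 vs. 945.875 (~0.4% better)
console.log(geometricMean(before), geometricMean(after));   // ~243.2 vs. ~232.5 (~4.4% better)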

We look forward to bringing further performance improvements to Speedometer 2.0 and, through it, to the whole web. Stay tuned for more performance high-fives!

Posted by the Blink and V8 teams