Wednesday, November 29, 2017

Orinoco: young generation garbage collection

JavaScript objects in V8 are allocated on a heap managed by V8’s garbage collector. In previous blog posts we have already talked about how we reduce garbage collection pause times (more than once) and memory consumption. In this blog post we introduce the parallel Scavenger, one of the latest features of Orinoco, V8’s mostly concurrent and parallel garbage collector, and discuss design decisions and alternative approaches we implemented along the way.

V8 partitions its managed heap into generations where objects are initially allocated in the “nursery” of the young generation. Upon surviving a garbage collection, objects are copied into the intermediate generation, which is still part of the young generation. After surviving another garbage collection, these objects are moved into the old generation (see Figure 1). V8 implements two garbage collectors: one that frequently collects the young generation, and one that collects the full heap including both the young and old generation. Old-to-young generation references are roots for the young generation garbage collection. These references are recorded to provide efficient root identification and reference updates when objects are moved.

Figure 1: Generational garbage collection
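
To make the recording of old-to-young references a bit more concrete, here is a minimal sketch of a write barrier feeding a remembered set, assuming a toy object model; V8’s actual remembered sets are maintained per page and in a far more compact form.

#include <unordered_set>

struct HeapObject {
  bool in_young_generation;  // toy stand-in for V8’s page/space checks
};

using Slot = HeapObject**;                // address of a pointer field
std::unordered_set<Slot> remembered_set;  // slots holding old-to-young pointers

// Pointer stores into the heap run a barrier like this one.
void WriteBarrier(HeapObject* host, Slot slot, HeapObject* value) {
  *slot = value;
  // Only old-to-young references need to be remembered: together with the
  // call stack they form the roots of a young generation collection.
  if (value != nullptr && !host->in_young_generation &&
      value->in_young_generation) {
    remembered_set.insert(slot);
  }
}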

Since the young generation is relatively small (up to 16 MiB in V8), it fills up quickly with objects and requires frequent collections. Until M62, V8 used a Cheney semispace copying garbage collector (see below) that divides the young generation into two halves. During JavaScript execution, only one half of the young generation is available for allocating objects, while the other half remains empty. During a young garbage collection, live objects are copied from one half to the other half, compacting the memory on the fly. Live objects that have already been copied once are considered part of the intermediate generation and are promoted to the old generation.

Starting with M62, V8 switched the default algorithm for collecting the young generation to a parallel Scavenger, similar to Halstead’s semispace copying collector with the difference that V8 makes use of dynamic instead of static work stealing across multiple threads. In the following we explain three algorithms: a) the single-threaded Cheney semispace copying collector, b) a parallel Mark-Evacuate scheme, and c) the parallel Scavenger.

Single-threaded Cheney’s Semispace Copy

Until M62, V8 used Cheney’s semispace copying algorithm which is well-suited for both single-core execution and a generational scheme. Before a young generation collection, both semispace halves of memory are committed and assigned proper labels: the pages containing the current set of objects are called from-space while the pages that objects are copied to are called to-space.

The Scavenger considers references in the call stack and references from the old to the young generation as roots. Figure 2 illustrates the algorithm where initially the Scavenger scans these roots and copies objects reachable in the from-space that have not yet been copied to the to-space. Objects that have already survived a garbage collection are promoted (moved) to the old generation. After root scanning and the first round of copying, the objects in the newly allocated to-space are scanned for references. Similarly, all promoted objects are scanned for new references to from-space. These three phases are interleaved on the main thread. The algorithm continues until no more new objects are reachable from either to-space or the old generation. At this point the from-space only contains unreachable objects, i.e., it only contains garbage.

Figure 2: Cheney’s semispace copying algorithm used for young generation garbage collections in V8
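
For illustration, here is a self-contained sketch of Cheney-style semispace copying over a toy object model (fixed slot count, forwarding pointer stored in the object). It shows the breadth-first copy only; V8’s Scavenger additionally handles variable-sized objects, promotion to the old generation, and remembered-set updates.

#include <cstddef>
#include <utility>
#include <vector>

struct ToyObject {
  ToyObject* slots[2] = {nullptr, nullptr};  // outgoing references
  ToyObject* forwarding = nullptr;           // set once the object is copied
};

struct SemispaceHeap {
  std::vector<ToyObject> from_space;
  std::vector<ToyObject> to_space;

  // Copy an object into to-space the first time it is seen; afterwards the
  // forwarding pointer redirects all further references to the copy.
  ToyObject* Copy(ToyObject* object) {
    if (object == nullptr) return nullptr;
    if (object->forwarding == nullptr) {
      to_space.push_back(*object);  // bump allocation into to-space
      object->forwarding = &to_space.back();
    }
    return object->forwarding;
  }

  void Scavenge(std::vector<ToyObject*>& roots) {
    to_space.clear();
    to_space.reserve(from_space.size());  // keeps copied objects stable
    for (ToyObject*& root : roots) root = Copy(root);
    // Breadth-first scan: every copied object may reference further
    // from-space objects, which are copied in turn.
    for (std::size_t scan = 0; scan < to_space.size(); ++scan) {
      for (ToyObject*& slot : to_space[scan].slots) slot = Copy(slot);
    }
    std::swap(from_space, to_space);  // from-space now only holds garbage
  }
};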

Parallel Mark-Evacuate

We experimented with a parallel Mark-Evacuate algorithm based on V8’s full Mark-Sweep-Compact collector. The main advantage is leveraging the already existing garbage collection infrastructure from the full Mark-Sweep-Compact collector. The algorithm consists of three phases: marking, copying, and updating pointers, as shown in Figure 3. To avoid sweeping pages in the young generation to maintain free lists, the young generation is still maintained using a semispace that is always kept compact by copying live objects into to-space during garbage collection. The young generation is initially marked in parallel. After marking, live objects are copied in parallel to their corresponding spaces. Work is distributed based on logical pages. Threads participating in copying keep their own local allocation buffers (LABs), which are merged upon finishing copying. After copying, the same parallelization scheme is applied for updating inter-object pointers. These three phases are performed in lockstep, i.e., while the phases themselves are performed in parallel, threads have to synchronize before continuing to the next phase.

Figure 3: Young Generation Parallel Mark-Evacuate garbage collection in V8
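
The local allocation buffers can be sketched roughly as follows, assuming a simple bump-pointer target space; the chunk size, the Space interface, and the omitted out-of-memory handling are illustrative simplifications rather than V8’s implementation.

#include <atomic>
#include <cstddef>

struct Space {
  std::atomic<char*> top{nullptr};  // shared bump pointer into the space
  // The only synchronized step: carve a chunk out of the shared space.
  // Bounds and out-of-memory handling are omitted for brevity.
  char* AllocateChunk(std::size_t size) {
    return top.fetch_add(static_cast<std::ptrdiff_t>(size));
  }
};

class LocalAllocationBuffer {
 public:
  explicit LocalAllocationBuffer(Space* space) : space_(space) {}

  // Each copying thread allocates from its private buffer; only refills
  // touch the shared space, so per-object allocation needs no locking.
  char* Allocate(std::size_t size) {
    if (top_ == nullptr || top_ + size > limit_) Refill(size);
    char* result = top_;
    top_ += size;
    return result;
  }

 private:
  void Refill(std::size_t min_size) {
    std::size_t chunk = min_size > kChunkSize ? min_size : kChunkSize;
    top_ = space_->AllocateChunk(chunk);
    limit_ = top_ + chunk;
  }
  static constexpr std::size_t kChunkSize = 4 * 1024;
  Space* space_;
  char* top_ = nullptr;
  char* limit_ = nullptr;
};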

Parallel Scavenge

The parallel Mark-Evacuate collector separates the phases of computing liveness, copying live objects, and updating pointers. An obvious optimization is to merge these phases, resulting in an algorithm that marks, copies, and updates pointers at the same time. By merging those phases we actually get the parallel Scavenger used by V8, which is a version similar to Halstead’s semispace collector with the difference that V8 uses dynamic work stealing and a simple load balancing mechanism for scanning the roots (see Figure 4). Like the single-threaded Cheney algorithm, the phases are: scanning for roots, copying within the young generation, promoting to the old generation, and updating pointers. We found that the majority of the root set usually consists of the references from the old generation to the young generation. In our implementation, remembered sets are maintained per page, which naturally distributes the root set among garbage collection threads. Objects are then processed in parallel. Newly found objects are added to a global work list from which garbage collection threads can steal. This work list provides fast task-local storage as well as global storage for sharing work. A barrier makes sure that tasks do not prematurely terminate when the subgraph currently processed is not suitable for work stealing (e.g. a linear chain of objects). All phases are performed in parallel and interleaved on each task, maximizing the utilization of worker tasks.

Figure 4: Young generation parallel Scavenger in V8
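
A rough sketch of the per-task processing loop is shown below. The worklist layout, the assumed helpers (CopyOrPromote, PointerFields, InFromSpace), and the missing termination barrier and per-task local segments are simplifications; this is not V8’s actual worklist implementation.

#include <deque>
#include <mutex>
#include <vector>

struct HeapObject;
// Assumed helpers: copy an object into to-space or promote it (installing
// and honoring a forwarding pointer so concurrent copies agree), list an
// object’s pointer fields, and test whether a pointer is into from-space.
HeapObject* CopyOrPromote(HeapObject* object);
std::vector<HeapObject**> PointerFields(HeapObject* object);
bool InFromSpace(HeapObject* object);

class SharedWorklist {
 public:
  void Push(HeapObject* object) {
    std::lock_guard<std::mutex> guard(mutex_);
    objects_.push_back(object);
  }
  bool TryPop(HeapObject** object) {
    std::lock_guard<std::mutex> guard(mutex_);
    if (objects_.empty()) return false;
    *object = objects_.back();
    objects_.pop_back();
    return true;
  }
 private:
  std::deque<HeapObject*> objects_;
  std::mutex mutex_;
};

// Each garbage collection task runs a loop like this after seeding the
// worklist with its share of the remembered-set roots.
void ScavengeTask(SharedWorklist* worklist) {
  HeapObject* object;
  while (worklist->TryPop(&object)) {
    for (HeapObject** slot : PointerFields(object)) {
      if (*slot != nullptr && InFromSpace(*slot)) {
        *slot = CopyOrPromote(*slot);  // move to to-space or the old generation
        worklist->Push(*slot);         // scan the copied object later
      }
    }
  }
}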

Results and outcome

The Scavenger algorithm was initially designed with optimal single-core performance in mind. The world has changed since then. CPU cores are often plentiful, even on low-end mobile devices. More importantly, these cores are often actually up and running. To fully utilize these cores, one of the last sequential components of V8’s garbage collector, the Scavenger, had to be modernized.

The big advantage of a parallel Mark-Evacuate collector is that exact liveness information is available. This information can, for example, be used to avoid copying at all by just moving and relinking pages that contain mostly live objects, which is also done by the full Mark-Sweep-Compact collector. In practice, however, this was mostly observable on synthetic benchmarks and rarely showed up on real websites. The downside of the parallel Mark-Evacuate collector is the overhead of performing three separate lockstep phases. This overhead is especially noticeable when the garbage collector is invoked on a heap with mostly dead objects, which is the case on many real-world webpages. Note that invoking garbage collections on heaps with mostly dead objects is actually the ideal scenario, as garbage collection is usually bounded by the size of live objects.

The parallel Scavenger closes this performance gap by providing performance that is close to the optimized Cheney algorithm on small or almost empty heaps while still providing a high throughput in case the heaps get larger with lots of live objects.

V8 supports, among many other platforms, Arm big.LITTLE. While offloading work to little cores benefits battery lifetime, it can lead to stalling on the main thread when work packages for little cores are too big. We observed that page-level parallelism does not necessarily balance the load on big.LITTLE for a young generation garbage collection due to the limited number of pages. The Scavenger naturally solves this issue by providing medium-grained synchronization using explicit work lists and work stealing.

Figure 5: Total young generation garbage collection time (in ms) across various websites

V8 now ships with the parallel Scavenger, which reduces the main thread young generation garbage collection total time by about 20%–50% across a large set of benchmarks (details on our perf waterfalls). Figure 5 shows a comparison of the implementations across various real-world websites, showing improvements of around 55% (2×). Similar improvements can be observed for maximum and average pause time while maintaining minimum pause time. The parallel Mark-Evacuate collector scheme still has potential for optimization. Stay tuned if you want to find out what happens next.

Posted by friends of TSAN: Ulan Degenbaev, Michael Lippautz, and Hannes Payer

Thursday, November 16, 2017

Taming architecture complexity in V8 — the CodeStubAssembler

In this post we’d like to introduce the CodeStubAssembler (CSA), a component in V8 that has been a very useful tool in achieving some big performance wins over the last several V8 releases. The CSA also significantly improved the V8 team’s ability to quickly optimize JavaScript features at a low-level with a high degree of reliability, which improved the team’s development velocity.

A brief history of builtins and hand-written assembly in V8

To understand the CSA’s role in V8, it’s important to understand a little bit of the context and history that led to its development.

V8 squeezes performance out of JavaScript using a combination of techniques. For JavaScript code that runs a long time, V8’s TurboFan optimizing compiler does a great job of speeding up the entire spectrum of ES2015+ functionality for peak performance. However, V8 also needs to execute short-running JavaScript efficiently for good baseline performance. This is especially the case for the so-called builtin functions on the pre-defined objects that are available to all JavaScript programs as defined by the ECMAScript specification.

Historically, many of these builtin functions were self-hosted, that is, they were authored by a V8 developer in JavaScript—albeit a special V8-internal dialect. To achieve good performance, these self-hosted builtins rely on the same mechanisms V8 uses to optimize user-supplied JavaScript. As with user-supplied code, the self-hosted builtins require a warm-up phase in which type feedback is gathered and they need to be compiled by the optimizing compiler.

Although this technique provides good builtin performance in some situations, it’s possible to do better. The exact semantics of the pre-defined functions on Array.prototype, for example, are specified in exquisite detail in the spec. For important and common special cases, V8’s implementers know in advance exactly how these builtin functions should work by understanding the specification, and they use this knowledge to carefully craft custom, hand-tuned versions up front. These optimized builtins handle common cases without warm-up or the need to invoke the optimizing compiler, since by construction baseline performance is already optimal upon first invocation.

To squeeze the best performance out of built-in JavaScript functions (and from other fast-path V8 code that is also somewhat confusingly called builtins), V8 developers traditionally wrote optimized builtins in assembly language. By using assembly, the hand-written builtin functions were especially fast by, among other things, avoiding expensive calls to V8’s C++ code via trampolines and by taking advantage of V8’s custom register-based ABI that it uses internally to call JavaScript functions.

Because of the advantages of hand-written assembly, V8 accumulated literally tens of thousands of lines of hand-written assembly code for builtins over the years… per platform. All of these hand-written assembly builtins were great for improving performance, but new language features are always being standardized, and maintaining and extending this hand-written assembly was laborious and error-prone.

Enter the CodeStubAssembler

V8 developers wrestled with a dilemma for many years: is it possible to create builtins that have the advantage of hand-written assembly without also being fragile and difficult to maintain?

With the advent of TurboFan the answer to this question is finally “yes”. TurboFan’s backend uses a cross-platform intermediate representation (IR) for low-level machine operations. This low-level machine IR is input to an instruction selector, register allocator, instruction scheduler and code generator that produce very good code on all platforms. The backend also knows about many of the tricks that are used in V8’s hand-written assembly builtins—e.g. how to use and call a custom register-based ABI, how to support machine-level tail calls, and how to elide the construction of stack frames in leaf functions. That knowledge makes the TurboFan backend especially well-suited for generating fast code that integrates well with the rest of V8.

This combination of functionality made a robust and maintainable alternative to hand-written assembly builtins feasible for the first time. The team built a new V8 component—dubbed the CodeStubAssembler or CSA—that defines a portable assembly language built on top of TurboFan’s backend. The CSA adds an API to generate TurboFan machine-level IR directly without having to write and parse JavaScript or apply TurboFan’s JavaScript-specific optimizations. Although this fast path to code generation is something that only V8 developers can use to speed up the V8 engine internally, this efficient path for generating optimized assembly code in a cross-platform way directly benefits all developers’ JavaScript code in the builtins constructed with the CSA, including the performance-critical bytecode handlers for V8’s interpreter, Ignition.

The CSA and JavaScript compilation pipelines


The CSA interface includes operations that are very low-level and familiar to anybody who has ever written assembly code. For example, it includes functionality like “load this object pointer from a given address” and “multiply these two 32-bit numbers”. The CSA has type verification at the IR level to catch many correctness bugs at compile time rather than runtime. For example, it can ensure that a V8 developer doesn’t accidentally use an object pointer that is loaded from memory as the input for a 32-bit multiplication. This kind of type verification is simply not possible with hand-written assembly stubs.
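
For illustration, here is a small fragment in the spirit of the CSA API (Load, Int32Mul, Int32Constant and the MachineType values are real CSA names of this era, while base and the offsets are assumed to be Nodes computed earlier); it is a sketch rather than code taken from V8:

Node* object = Load(MachineType::AnyTagged(), base, object_offset);  // tagged pointer
Node* count = Load(MachineType::Int32(), base, count_offset);        // raw 32-bit value
Node* doubled = Int32Mul(count, Int32Constant(2));                   // fine: 32-bit inputs
// Int32Mul(object, Int32Constant(2)) would be flagged by the IR-level
// type verification: |object| is a tagged pointer, not a 32-bit word.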

A CSA test-drive

To get a better idea of what the CSA offers, let’s go through a quick example. We’ll add a new internal builtin to V8 that returns the string length from an object if it is a String. If the input object is not a String, the builtin will return undefined.

First, we add a line to the BUILTIN_LIST_BASE macro in V8’s builtin-definitions.h file that declares the new builtin called GetStringLength and specifies that it has a single input parameter that is identified with the constant kInputObject:

TFS(GetStringLength, kInputObject)

The TFS macro declares the builtin as a TurboFan builtin using standard CodeStub linkage, which simply means that it uses the CSA to generate its code and expects parameters to be passed via registers.

We can then define the contents of the builtin in builtins-string-gen.cc:

TF_BUILTIN(GetStringLength, CodeStubAssembler) {
  Label not_string(this);

  // Fetch the incoming object using the constant we defined for
  // the first parameter.
  Node* const maybe_string = Parameter(Descriptor::kInputObject);

  // Check to see if input is a Smi (a special representation
  // of small numbers). This needs to be done before the IsString
  // check below, since IsString assumes its argument is an
  // object pointer and not a Smi. If the argument is indeed a
  // Smi, jump to the label |not_string|.
  GotoIf(TaggedIsSmi(maybe_string), &not_string);

  // Check to see if the input object is a string. If not, jump to
  // the label |not_string|.
  GotoIfNot(IsString(maybe_string), &not_string);

  // Load the length of the string (having ended up in this code
  // path because we verified it was a string above) and return it
  // using a CSA "macro" LoadStringLength.
  Return(LoadStringLength(maybe_string));

  // Define the location of label that is the target of the failed
  // IsString check above.
  BIND(&not_string);

  // Input object isn't a string. Return the JavaScript undefined
  // constant.
  Return(UndefinedConstant());
}

Note that in the example above, there are two types of instructions used. There are primitive CSA instructions that translate directly into one or two assembly instructions, like GotoIf and Return. There is a fixed set of pre-defined CSA primitive instructions roughly corresponding to the most commonly used assembly instructions you would find on one of V8’s supported chip architectures. Other instructions in the example are macro instructions, like LoadStringLength, TaggedIsSmi, and IsString, which are convenience functions that output one or more primitive or macro instructions inline. Macro instructions are used to encapsulate commonly used V8 implementation idioms for easy reuse. They can be arbitrarily long, and new macro instructions can be easily defined by V8 developers whenever needed.
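
To illustrate what defining such a macro looks like, here is a sketch of a CodeStubAssembler method in the style of V8’s field-loading macros; LoadToyField and kToyFieldOffset are made up for this example, while Load, IntPtrConstant, MachineType::AnyTagged, and kHeapObjectTag are real names from V8 of this era:

Node* CodeStubAssembler::LoadToyField(Node* object) {
  // Compose the primitive Load instruction with V8’s pointer-tagging
  // convention: object fields live at their offset minus the heap object tag.
  return Load(MachineType::AnyTagged(), object,
              IntPtrConstant(kToyFieldOffset - kHeapObjectTag));
}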

After compiling V8 with the above changes, we can run mksnapshot, the tool that compiles builtins to prepare them for V8’s snapshot, with the --print-code command-line option. This option prints the generated assembly code for each builtin. If we grep for GetStringLength in the output, we get the following result on x64 (the code output is cleaned up a bit to make it more readable):

    test al,0x1
    jz not_string
    movq rbx,[rax-0x1]
    cmpb [rbx+0xb],0x80
    jnc not_string
    movq rax,[rax+0xf]
    retl
not_string:
    movq rax,[r13-0x60]
    retl

On 32-bit ARM platforms, the following code is generated by mksnapshot:

    tst r0, #1
    beq +28 -> not_string
    ldr r1, [r0, #-1]
    ldrb r1, [r1, #+7]
    cmp r1, #128
    bge +12 -> not_string
    ldr r0, [r0, #+7]
    bx lr
not_string:
    ldr r0, [r10, #+16]
    bx lr

Even though our new builtin uses a non-standard (at least non-C++) calling convention, it’s possible to write test cases for it. The following code can be added to test-run-stubs.cc to test the builtin on all platforms:

TEST(GetStringLength) {
  HandleAndZoneScope scope;
  Isolate* isolate = scope.main_isolate();
  Heap* heap = isolate->heap();
  Zone* zone = scope.main_zone();

  // Test the case where input is a string
  StubTester tester(isolate, zone, Builtins::kGetStringLength);
  Handle<String> input_string(
      isolate->factory()->
        NewStringFromAsciiChecked("Oktoberfest"));
  Handle<Object> result1 = tester.Call(input_string);
  CHECK_EQ(11, Handle<Smi>::cast(result1)->value());

  // Test the case where input is not a string (e.g. undefined)
  Handle<Object> result2 =
      tester.Call(isolate->factory()->undefined_value());
  CHECK(result2->IsUndefined(isolate));
}

For more details about using the CSA for different kinds of builtins and for further examples, see this wiki page.

A V8 developer velocity multiplier

The CSA is more than just a universal assembly language that targets multiple platforms. It enables much quicker turnaround when implementing new features compared to hand-writing code for each architecture, as we used to do. It does this by providing all of the benefits of hand-written assembly while protecting developers against its most treacherous pitfalls:

  • With the CSA, developers can write builtin code with a cross-platform set of low-level primitives that translate directly to assembly instructions. The CSA’s instruction selector ensures that this code is optimal on all of the platforms that V8 targets without requiring V8 developers to be experts in each of those platform’s assembly languages.
  • The CSA’s interface has optional types to ensure that the values manipulated by the low-level generated assembly are of the type that the code author expects.
  • Register allocation between assembly instructions is done by the CSA automatically rather than explicitly by hand, including building stack frames and spilling values to the stack if a builtin uses more registers than are available or makes calls. This eliminates a whole class of subtle, hard-to-find bugs that plagued hand-written assembly builtins. By making the generated code less fragile, the CSA drastically reduces the time required to write correct low-level builtins.
  • The CSA understands ABI calling conventions—both standard C++ and internal V8 register-based ones—making it possible to easily interoperate between CSA-generated code and other parts of V8.
  • Since CSA code is C++, it’s easy to encapsulate common code generation patterns in macros that can be easily reused in many builtins.
  • Because V8 uses the CSA to generate the bytecode handlers for Ignition, it is very easy to inline the functionality of CSA-based builtins directly into the handlers to improve the interpreter’s performance.
  • V8’s testing framework supports testing CSA functionality and CSA-generated builtins from C++ without having to write assembly adapters.

All in all, the CSA has been a game changer for V8 development. It has significantly improved the team’s ability to optimize V8. That means we are able to optimize more of the JavaScript language faster for V8’s embedders.

Posted by Daniel Clifford, CodeStubAssembler assembler

Monday, November 6, 2017

Announcing the Web Tooling Benchmark

JavaScript performance has always been important to the V8 team, and in this post we would like to discuss a new JavaScript Web Tooling Benchmark that we have been using recently to identify and fix some performance bottlenecks in V8. You may already be aware of V8’s strong commitment to Node.js and this benchmark extends that commitment by specifically running performance tests based on common developer tools built upon Node.js. The tools in the Web Tooling Benchmark are the same ones used by developers and designers today to build modern web sites and cloud-based applications. In continuation of our ongoing efforts to focus on real-world performance rather than artificial benchmarks, we created the benchmark using actual code that developers run every day.

The Web Tooling Benchmark suite was designed from the beginning to cover important developer tooling use cases for Node.js. Because the V8 team focuses on core JavaScript performance, we built the benchmark in a way that focuses on the JavaScript workloads and excludes measurement of Node.js-specific I/O or external interactions. This makes it possible to run the benchmark in Node.js, in all browsers, and in all major JavaScript engine shells, including ch (ChakraCore), d8 (V8), jsc (JavaScriptCore) and jsshell (SpiderMonkey). Even though the benchmark is not limited to Node.js, we are excited that the Node.js benchmarking working group is considering using the tooling benchmark as a standard for Node performance as well (nodejs/benchmarking#138).

The individual tests in the tooling benchmark cover a variety of tools that developers commonly use to build JavaScript-based applications; see the in-depth analysis for details on all the included tests.

Based on past experience with other benchmarks like Speedometer, where tests quickly become outdated as new versions of frameworks become available, we made sure it is straightforward to update each of the tools in the benchmark to more recent versions as they are released. By basing the benchmark suite on npm infrastructure, we can easily update it to ensure that it is always testing the state of the art in JavaScript development tools. Updating a test case is just a matter of bumping the version in the package.json manifest.

We created a tracking bug and a spreadsheet to contain all the relevant information that we have collected about V8’s performance on the new benchmark up to this point. Our investigations have already yielded some interesting results. For example, we discovered that V8 was often hitting the slow path for instanceof (v8:6971), incurring a 3–4× slowdown. We also found and fixed performance bottlenecks in certain cases of property assignments of the form obj[name] = val where obj was created via Object.create(null). In these cases, V8 would fall off the fast path despite being able to utilize the fact that obj has a null prototype (v8:6985). These and other discoveries made with the help of this benchmark improve V8, not only in Node.js, but also in Chrome.

We not only looked into making V8 faster, but also fixed and upstreamed performance bugs in the benchmark’s tools and libraries whenever we found them. For example, we discovered a number of performance bugs in Babel where code patterns like

value = items[items.length - 1];
lead to accesses of the property "-1", because the code didn’t check whether items is empty beforehand. This code pattern causes V8 to go through a slow path due to the "-1" lookup, even though a slightly modified, equivalent version of the JavaScript is much faster. We helped to fix these issues in Babel (babel/babel#6582, babel/babel#6581 and babel/babel#6580). We also discovered and fixed a bug where Babel would access beyond the length of a string (babel/babel#6589), which triggered another slow path in V8. Additionally, we optimized out-of-bounds reads of arrays and strings in V8. We look forward to continuing to work with the community on improving the performance of this important use case, not only when run on top of V8, but also when run on other JavaScript engines like ChakraCore.

Our strong focus on real-world performance and especially on improving popular Node.js workloads is shown by the constant improvements in V8’s score on the benchmark over the last couple of releases:


Since V8 5.8, which is the last V8 release before switching to the Ignition+TurboFan architecture, V8’s score on the tooling benchmark has improved by around 60%.

Over the last several years, the V8 team has come to recognize that no one JavaScript benchmark — even a well-intentioned, carefully crafted one — should be used as a single proxy for a JavaScript engine’s overall performance. However, we do believe that the new Web Tooling Benchmark highlights areas of JavaScript performance that are worth focusing on. Despite the name and the initial motivation, we have found that the Web Tooling Benchmark suite is not only representative of tooling workloads, but also of a large range of more sophisticated JavaScript applications that are not tested well by front-end-focused benchmarks like Speedometer. It is by no means a replacement for Speedometer, but rather a complementary set of tests.

The best news of all is that given how the Web Tooling Benchmark is constructed around real workloads, we expect that our recent improvements in benchmark scores will translate directly into improved developer productivity through less time waiting for things to build. Many of these improvements are already available in Node.js: at the time of writing, Node 8 LTS is at V8 6.1 and Node 9 is at V8 6.2.

The latest version of the benchmark is hosted at https://v8.github.io/web-tooling-benchmark/.

Benedikt Meurer, @bmeurer, JavaScript Performance Juggler