About

This page contains results from an ETS benchmark showcasing the performance and scalability of the new ETS ordered_set implementation. The new implementation is only enabled when the option {write_concurrency, true} is passed to the ets:new function (an example follows the references below). Before the new implementation was added, the write_concurrency option had no effect on tables of type ordered_set. The following papers describe the data structure that the new implementation is based on, the contention adapting search tree:

Kjell Winblad and Konstantinos Sagonas. A Contention Adapting Approach to Concurrent Ordered Sets. Journal of Parallel and Distributed Computing, 2018.

Kjell Winblad and Konstantinos Sagonas. More Scalable Ordered Set for ETS Using Adaptation. In Proceedings of the Thirteenth ACM SIGPLAN Workshop on Erlang, 2014.
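As a minimal illustration (the module name, table names, and exact option combinations below are only an example), the following Erlang snippet creates one ordered_set table that uses the new contention adapting implementation and one that keeps the previous implementation:

-module(ca_tree_example).
-export([new_tables/0]).

new_tables() ->
    %% With {write_concurrency, true}, an ordered_set table is backed
    %% by the new contention adapting search tree.
    CaTree = ets:new(ca_tree_table,
                     [ordered_set, public,
                      {write_concurrency, true},
                      {read_concurrency, true}]),
    %% Without {write_concurrency, true}, the table uses the previous
    %% ordered_set implementation.
    Plain = ets:new(plain_ordered_set, [ordered_set, public]),
    {CaTree, Plain}.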

Benchmark Description

The benchmark measures how many ETS operations per second X Erlang processes can perform on a single table. Each of the X processes repeatedly selects an operation to perform from a given set of operations. The probability with which each operation is selected is also given to the benchmark. The table that the processes operate on is prefilled with 500K items before each benchmark run starts. The source code for the benchmark is located in the function ets_SUITE:throughput_benchmark/0 (see "$ERL_TOP/lib/stdlib/test/ets_SUITE.erl"). A simplified sketch of this measurement loop is shown below.
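The following is only a minimal sketch of such a throughput measurement loop, not the actual ets_SUITE:throughput_benchmark/0 code; the module name, the operation mix, and the probabilities are illustrative. Each worker process picks a weighted-random operation and applies it to a shared ordered_set table until the measurement time is up:

-module(ets_throughput_sketch).
-export([run/2]).

%% run(NumProcesses, Seconds) spawns NumProcesses worker processes that
%% repeatedly apply weighted-random operations to one shared table and
%% returns the total number of operations per second.
run(NumProcesses, Seconds) ->
    Table = ets:new(bench_table,
                    [ordered_set, public,
                     {write_concurrency, true},
                     {read_concurrency, true}]),
    %% Prefill the table before the measurement starts.
    lists:foreach(fun(Key) -> ets:insert(Table, {Key, Key}) end,
                  lists:seq(1, 500000)),
    %% {Probability, Operation} pairs; the probabilities sum to 1.0.
    Ops = [{0.50, fun(T) -> ets:lookup(T, rand:uniform(1000000)) end},
           {0.25, fun(T) -> ets:insert(T, {rand:uniform(1000000), v}) end},
           {0.25, fun(T) -> ets:delete(T, rand:uniform(1000000)) end}],
    Parent = self(),
    Pids = [spawn_link(fun() -> worker(Parent, Table, Ops, Seconds) end)
            || _ <- lists:seq(1, NumProcesses)],
    Counts = [receive {Pid, Count} -> Count end || Pid <- Pids],
    lists:sum(Counts) / Seconds.

worker(Parent, Table, Ops, Seconds) ->
    Deadline = erlang:monotonic_time(millisecond) + Seconds * 1000,
    Parent ! {self(), loop(Table, Ops, Deadline, 0)}.

loop(Table, Ops, Deadline, Count) ->
    case erlang:monotonic_time(millisecond) >= Deadline of
        true ->
            Count;
        false ->
            Op = pick_op(Ops, rand:uniform()),
            Op(Table),
            loop(Table, Ops, Deadline, Count + 1)
    end.

%% Select an operation according to the given probabilities.
pick_op([{Prob, Op} | Rest], Rnd) when Rnd =< Prob; Rest =:= [] ->
    Op;
pick_op([{Prob, _Op} | Rest], Rnd) ->
    pick_op(Rest, Rnd - Prob).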

Benchmark Machine and Erlang Parameters

Machine Configuration

Machine:	Microsoft Azure VM instance Standard D64s v3 (64 vcpus, 256 GB memory)

Operating System:

Description:	Ubuntu 18.04.2 LTS
Linux version:	4.18.0-1014-azure

Erlang Parameters

erl +sbt tnnps

The +sbt tnnps flag sets the scheduler bind type to thread_no_node_processor_spread, so the Erlang schedulers are bound to hardware threads.
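As a sketch only, and assuming that ets_SUITE.erl has been compiled and that its throughput_benchmark/0 function is exported and on the code path, the benchmark could be started from a shell launched with the parameters above:

$ erl +sbt tnnps
1> ets_SUITE:throughput_benchmark().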

Benchmark Results

ETS Benchmark Result Viewer

The benchmark result viewer generates graphs from data produced by the ETS benchmark, which is defined in the function ets_SUITE:throughput_benchmark/0 (see "$ERL_TOP/lib/stdlib/test/ets_SUITE.erl").

Note that results from several benchmark runs can be pasted into the viewer at the same time. Results for the same scenario but from different benchmark runs are relabeled and plotted in the same graph automatically, which makes it easy to compare different ETS versions.

Note also that lines can be hidden by clicking on the corresponding label.

To use the viewer, paste the generated data into its input field and press the Render button. Options control which plots are included (throughput, "% more throughput than worst", "% less throughput than best"), whether a bar plot is used, whether points get the same X spacing, and which table configurations are shown, for example [ordered_set,public,{write_concurrency,true},{read_concurrency,true}] and [set,public,{write_concurrency,true},{read_concurrency,true}].