Meltdown and Spectre Performance Implications

  Jan 8, 2018   |   Brand Hunt

performance linux kernel meltdown spectre

Over the last several days, the technology world has been focused on the impact of the Meltdown and Spectre vulnerabilities. Several good articles have been published about these vulnerabilities, among them coverage from The Register and an overview from Red Hat.

In all of these discussions, there’s a common thread: the kernel fixes for these vulnerabilities will carry a performance cost. The question is – how much of a cost?

Now that some of the patches are available for Meltdown and Spectre, we’re able to provide guidance on how these patches will impact performance-critical AMPS deployments. Let’s start by saying that the degradation you’ll see from these patches is highly dependent on your workload and on the AMPS features you’re using.

Red Hat’s Performance Team has provided guidance on workloads and the expected performance degradation, which I’m copying here for easier reference: https://access.redhat.com/articles/3307751.

In order to provide more detail, Red Hat’s performance team has categorized the performance results for Red Hat Enterprise Linux 7 (with similar behavior on Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 5) across a wide variety of benchmarks, grouped by performance impact (a quick way to estimate where your own workload falls is sketched after the list):

  • Measurable: 8-19% - Highly cached random memory, with buffered I/O, OLTP database workloads, and benchmarks with high kernel-to-user space transitions are impacted between 8-19%. Examples include OLTP Workloads (tpc), sysbench, pgbench, netperf (< 256 byte), and fio (random I/O to NVMe).

  • Modest: 3-7% - Database analytics, Decision Support System (DSS), and Java VMs are impacted less than the “Measurable” category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests to a moderate level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour, and overall analytic timing (sec).

  • Small: 2-5% - HPC (High Performance Computing) CPU-intensive workloads are affected the least with only 2-5% performance impact because jobs run mostly in user space and are scheduled using cpu-pinning or numa-control. Examples include Linpack NxN on x86 and SPECcpu2006.

  • Minimal: <2% - Linux accelerator technologies that generally bypass the kernel in favor of direct user access are the least affected, with less than 2% overhead measured. Examples tested include DPDK (VsPERF at 64 byte) and OpenOnload (STAC-N). User-space accesses to the vDSO, such as gettimeofday(), are not impacted. We expect similar minimal impact for other offloads.
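
Because the impact scales with how often a workload crosses between user space and the kernel, one rough way to estimate where your own deployment falls in these categories is to count system calls over a fixed window. The commands below are only a sketch: <pid> is a placeholder for the process you want to observe, the 10-second window is arbitrary, and both tools must be installed and run with sufficient privileges.

# Attach for ~10 seconds; on SIGINT, strace detaches and prints a
# summary of system call counts and kernel time per call:
$ timeout -s INT 10 strace -c -f -p <pid>

# Alternatively, count raw kernel entries for the same process with perf:
$ perf stat -e 'raw_syscalls:sys_enter' -p <pid> sleep 10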

AMPS Workload Estimates

Transaction Log/Message Queues/Replication: Performance-sensitive use cases with a Transaction Log typically use a PCIe or NVMe storage device and involve significant networking, both of which cross into kernel space frequently. Our performance simulator for AMPS within a large trading system sees just under a 12% performance degradation (both in maximum achievable throughput and in increased median latency); however, this extreme simulation spends significant time in the kernel and minimal time in user mode. We expect these use cases to fall between the upper end of the Modest range and the lower end of the Measurable range, roughly 5-12%.
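
If you want to see the effect on your own transaction log storage before and after patching a host, a random-I/O benchmark against the log device gives a reasonable proxy. The fio run below is only an illustrative sketch; the file path, block size, queue depth, and runtime are placeholder values to adapt to your environment, and it should point at a scratch file, never at a live transaction log.

# Run identical jobs on the unpatched and patched kernel and compare
# IOPS and latency percentiles:
$ fio --name=txlog-sim --filename=/path/to/scratch/file --rw=randwrite \
      --ioengine=libaio --direct=1 --bs=4k --iodepth=32 \
      --runtime=60 --time_based --size=1G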

Large Scale View Server: Where AMPS spends most of its time executing user-mode code (SOW queries, content filtering, delta publish/subscribe, etc.), we expect the impact to fall in the Small range.

Other Considerations and Mitigation

PCID support: To minimize the impact of these patches, verify that your systems have PCID enabled. Without this feature, the patches will carry a higher performance cost. You can confirm that your host environment supports PCID by checking for the pcid flag in the flags row of /proc/cpuinfo (or in the flags output of lscpu, if that command is available):

$ lscpu | grep pcid

Flags:                 fpu vme de pse tsc msr pae mce cx8 apic
sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse
sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc
arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc
aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm
abm epb invpcid_single spec_ctrl ibpb_support tpr_shadow vnmi
flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2
erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm arat pln
pts
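
If lscpu isn’t available, you can read the same flags directly from /proc/cpuinfo, as mentioned above. The check below is a quick sketch assuming GNU grep; it counts how many logical CPUs report the pcid and invpcid flags:

# Each count should match the number of logical CPUs if the feature is present:
$ grep -o -w -e pcid -e invpcid /proc/cpuinfo | sort | uniq -c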

Kernel Bypass Technologies: Using kernel bypass technologies such as OpenOnload is a great way to minimize the impact of these patches. If you’re already using these technologies, the impact you see from these patches will be far smaller than for deployments that aren’t.
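
For reference, OpenOnload is normally enabled by launching the application under the onload wrapper so that supported sockets are serviced in user space rather than in the kernel network stack. The lines below are only an illustrative sketch: the application name is a placeholder, and tuning variables such as EF_POLL_USEC vary by OpenOnload release, so consult the documentation for your version.

# Run an application with its network traffic handled by OpenOnload:
$ onload ./your_application

# Spin/poll behavior is commonly tuned through environment variables:
$ EF_POLL_USEC=100000 onload ./your_application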

What’s Next

We’ll keep this post updated with any new findings or suggestions. If you have any questions, please don’t hesitate to leave a comment below.

