IWETHEY v. 0.3.0

New has anyone looked at the source code of the Linux patches to see what they are doing yet?
I haven't had time with all of the meetings, but would like to take a boo to see what kind of impact on throughput it will have.
From what I have read, a system call will jump to a new area blocked off from user space and execute there.
Looking at how they are doing that may shed some light on performance.
Been so long I don't even remember where to get the source for the patches.
"Science is the belief in the ignorance of the experts" – Richard Feynman
New I wouldn't know what I was looking at
But according to the people I was reading who claim to know, this is basically throwing away all the performance gains from speculative execution, and possibly worse than that. I haven't seen anyone describe in simple terms* how to determine whether your workload will be affected.


* Simple enough for me, that is. People who know what they're doing probably understand the summaries.
--

Drew
New I understand the summaries and I used to build drivers (a long time ago)
There is a link on Red Hat on tuning the fixes:
https://access.redhat.com/articles/3311301

https://access.redhat.com/articles/3307751

Measurable: 8-19% - Highly cached random memory, with buffered I/O, OLTP database workloads, and benchmarks with high kernel-to-user space transitions are impacted between 8-19%. Examples include OLTP workloads (TPC), sysbench, pgbench, netperf (less than 256 byte), and fio (random I/O to NVMe).

Modest: 3-7% - Database analytics, Decision Support System (DSS), and Java VMs are impacted less than the "Measurable" category. These applications may have significant sequential disk or network traffic, but kernel/device drivers are able to aggregate requests to a moderate level of kernel-to-user transitions. Examples include SPECjbb2005, Queries/Hour, and overall analytic timing (sec).

Small: 2-5% - HPC (High Performance Computing) CPU-intensive workloads are affected the least with only 2-5% performance impact because jobs run mostly in user space and are scheduled using cpu-pinning or numa-control. Examples include Linpack NxN on x86 and SPECcpu2006.

Minimal: Linux accelerator technologies that generally bypass the kernel in favor of user direct access are the least affected, with less than 2% overhead measured. Examples tested include DPDK (VsPERF at 64 byte) and OpenOnload (STAC-N). Userspace accesses to VDSO like get-time-of-day are not impacted. We expect similar minimal impact for other offloads.

NOTE: Because microbenchmarks like netperf/uperf, iozone, and fio are designed to stress a specific hardware component or operation, their results are not generally representative of customer workload. Some microbenchmarks have shown a larger performance impact, related to the specific area they stress.
"Science is the belief in the ignorance of the experts" – Richard Feynman
Edited by boxley Jan. 4, 2018, 10:56:17 PM EST
New some code snippets from Google explaining the issue
"Science is the belief in the ignorance of the experts" – Richard Feynman
New A site with lots of information about the two flaws, links to vendor info, etc.
New Patch source
https://lkml.org/lkml/2017/12/4/709

I haven't had the chance yet to take a look (too much snow to shovel ;-)
     Intel keeps on giving - (scoenye) - (23)
         17-33% hit to processing speed? This is going to hurt me -NT - (boxley) - (2)
             Same. - (malraux) - (1)
                 we have both kinds of VMs oversubscribed and thin provisioned -NT - (boxley)
         "Speculative Execution™" ... whazzup? with a self-parody like That. Love. It. Roll dice, croupier! - (Ashton)
         Not just Intel: everyone gets to play with Spectre! -NT - (pwhysall)
         has anyone looked at the source code of the linux patches to see what they are doing yet? - (boxley) - (5)
             I wouldn't know what I was looking at - (drook) - (3)
                 I understand the summaries and I used to build drivers (a long time ago) - (boxley) - (2)
                     some code snippets from Google explaining the issue - (boxley) - (1)
                         A site with lots of information about the two flaws, links to vendor info, etc. - (Another Scott)
             Patch source - (scoenye)
         Given the accelerating, historical skull-buggery of the species, immanent-in and causal - (Ashton) - (7)
             Was it accident or malice? - (drook) - (6)
                 Most of the informed speculation I've seen seems to lean toward "accident". - (CRConrad) - (4)
                     Oh, who wants "informed" speculation ... I'll take the good old "wild" myself -NT - (drook)
                     This. - (Another Scott) - (2)
                         No, that's not their job - (drook) - (1)
                             But "our" stuff _i_s_ "their" stuff nowadays. - (CRConrad)
                 Perspicuous fork, there - (Ashton)
         Some more benchmarks - (malraux)
         Once again, die intel die! -NT - (a6l6e6x) - (2)
             Once again.. we'unses placed Too-Many eggs in one human-flawed basket. -NT - (Ashton)
             And AMD, and Apple, and POWER... -NT - (pwhysall)
