L1TF - L1 Terminal Fault
========================

L1 Terminal Fault is a hardware vulnerability which allows unprivileged
speculative access to data which is available in the Level 1 Data Cache
when the page table entry controlling the virtual address, which is used
for the access, has the Present bit cleared or other reserved bits set.

Affected processors
-------------------

This vulnerability affects a wide range of Intel processors. The
vulnerability is not present on:

   - Processors from AMD, Centaur and other non-Intel vendors

   - Older processor models, where the CPU family is < 6

   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
     Penwell, Pineview, Silvermont, Airmont, Merrifield)

   - The Intel XEON PHI family

   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is also not
     affected by the Meltdown vulnerability. These CPUs should become
     available by the end of 2018.

Whether a processor is affected can be read out from the L1TF
vulnerability file in sysfs. See :ref:`l1tf_sys_info`.

Related CVEs
------------

The following CVE entries are related to the L1TF vulnerability:

   =============  =================  ==============================
   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
   =============  =================  ==============================

Problem
-------

If an instruction accesses a virtual address for which the relevant page
table entry (PTE) has the Present bit cleared or other reserved bits set,
then speculative execution ignores the invalid PTE and loads the referenced
data if it is present in the Level 1 Data Cache, as if the page referenced
by the address bits in the PTE was still present and accessible.

While this is a purely speculative mechanism and the instruction will
eventually raise a page fault when it is retired, the mere act of loading
the data and making it available to other speculative instructions opens up
the opportunity for side channel attacks by unprivileged malicious code,
similar to the Meltdown attack.

While Meltdown breaks the user space to kernel space protection, L1TF
allows attacks on any physical memory address in the system, across all
protection domains. It allows attacks on SGX and also works from inside
virtual machines because the speculation bypasses the extended page table
(EPT) protection mechanism.


Attack scenarios
----------------

1. Malicious user space
^^^^^^^^^^^^^^^^^^^^^^^

   Operating systems store arbitrary information in the address bits of a
   PTE which is marked non-present. This allows a malicious user space
   application to attack the physical memory to which these PTEs resolve.
   In some cases user space can maliciously influence the information
   encoded in the address bits of the PTE, thus making attacks more
   deterministic and more practical.

   The Linux kernel contains a mitigation for this attack vector, PTE
   inversion, which is permanently enabled and has no performance
   impact. The kernel ensures that the address bits of PTEs, which are not
   marked present, never point to cacheable physical memory space.

   A system with an up-to-date kernel is protected against attacks from
   malicious user space applications.

2. Malicious guest in a virtual machine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   The fact that L1TF breaks all domain protections allows malicious guest
   OSes, which can control the PTEs directly, and malicious guest user
   space applications, which run on an unprotected guest kernel lacking the
   PTE inversion mitigation for L1TF, to attack physical host memory.

   A special aspect of L1TF in the context of virtualization is
   simultaneous multi-threading (SMT). The Intel implementation of SMT is
   called HyperThreading. The fact that Hyperthreads on the affected
   processors share the L1 Data Cache (L1D) is important here. As the flaw
   only allows attacking data which is present in the L1D, a malicious
   guest running on one Hyperthread can attack the data which is brought
   into the L1D by the context which runs on the sibling Hyperthread of
   the same physical core. This context can be the host OS, host user
   space or a different guest.

   If the processor does not support Extended Page Tables, the attack is
   only possible when the hypervisor does not sanitize the content of the
   effective (shadow) page tables.

   While solutions exist to mitigate these attack vectors fully, these
   mitigations are not enabled by default in the Linux kernel because they
   can affect performance significantly. The kernel provides several
   mechanisms which can be utilized to address the problem depending on the
   deployment scenario. The mitigations, their protection scope and impact
   are described in the next sections.

   The default mitigations and the rationale for choosing them are explained
   at the end of this document. See :ref:`default_mitigations`.

.. _l1tf_sys_info:

L1TF system information
-----------------------

The Linux kernel provides a sysfs interface to enumerate the current L1TF
status of the system: whether the system is vulnerable, and which
mitigations are active. The relevant sysfs file is:

   /sys/devices/system/cpu/vulnerabilities/l1tf

The possible values in this file are:

  ===========================   ===============================
  'Not affected'                The processor is not vulnerable
  'Mitigation: PTE Inversion'   The host protection is active
  ===========================   ===============================

If KVM/VMX is enabled and the processor is vulnerable then the following
information is appended to the 'Mitigation: PTE Inversion' part:

  - SMT status:

    =====================  ================
    'VMX: SMT vulnerable'  SMT is enabled
    'VMX: SMT disabled'    SMT is disabled
    =====================  ================

  - L1D Flush mode:

    ================================  ====================================
    'L1D vulnerable'                  L1D flushing is disabled

    'L1D conditional cache flushes'   L1D flush is conditionally enabled

    'L1D cache flushes'               L1D flush is unconditionally enabled
    ================================  ====================================

The resulting grade of protection is discussed in the following sections.
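
On a running system the status can be read with standard tools. A minimal
sketch (read-only; the fallback message is ours, not kernel output):

```shell
#!/bin/sh
# Read the L1TF vulnerability status from sysfs (read-only, safe to run).
f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    status=$(cat "$f")
else
    # Kernel too old, or not an x86 system: the file does not exist.
    status="unknown ($f not present)"
fi
echo "L1TF status: $status"
```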


Host mitigation mechanism
-------------------------

The kernel is unconditionally protected against L1TF attacks from malicious
user space running on the host.


Guest mitigation mechanisms
---------------------------

.. _l1d_flush:

1. L1D flush on VMENTER
^^^^^^^^^^^^^^^^^^^^^^^

   To make sure that a guest cannot attack data which is present in the L1D
   the hypervisor flushes the L1D before entering the guest.

   Flushing the L1D evicts not only the data which should not be accessed
   by a potentially malicious guest, it also flushes the guest
   data. Flushing the L1D has a performance impact as the processor has to
   bring the flushed guest data back into the L1D. Depending on the
   frequency of VMEXIT/VMENTER and the type of computations in the guest,
   performance degradation in the range of 1% to 50% has been observed. For
   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
   minimal. Virtio and mechanisms like posted interrupts are designed to
   confine the VMEXITs to a bare minimum, but specific configurations and
   application scenarios might still suffer from a high VMEXIT rate.

   The kernel provides two L1D flush modes:

    - conditional ('cond')
    - unconditional ('always')

   The conditional mode avoids L1D flushing after VMEXITs which execute
   only audited code paths before the corresponding VMENTER. These code
   paths have been verified not to expose secrets or other interesting
   data to an attacker, but they can leak information about the address
   space layout of the hypervisor.

   Unconditional mode flushes the L1D on all VMENTER invocations and
   provides maximum protection. It has a higher overhead than the
   conditional mode. The overhead cannot be quantified exactly as it
   depends on the workload scenario and the resulting number of VMEXITs.

   The general recommendation is to enable L1D flush on VMENTER. The kernel
   defaults to conditional mode on affected processors.

   **Note** that L1D flush does not prevent the SMT problem because the
   sibling thread will also bring its data back into the L1D, which makes
   it attackable again.

   L1D flush can be controlled by the administrator via the kernel command
   line and sysfs control files. See :ref:`mitigation_control_command_line`
   and :ref:`mitigation_control_kvm`.

.. _guest_confinement:

2. Guest VCPU confinement to dedicated physical cores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   To address the SMT problem, it is possible to make a guest or a group of
   guests affine to one or more physical cores. The proper mechanism for
   that is to utilize exclusive cpusets to ensure that no other guest or
   host tasks can run on these cores.

   If only a single guest or related guests run on sibling SMT threads on
   the same physical core then they can only attack their own memory and
   restricted parts of the host memory.

   Host memory is attackable when one of the sibling SMT threads runs in
   host OS (hypervisor) context and the other in guest context. The amount
   of valuable information from the host OS context depends on the context
   which the host OS executes, i.e. interrupts, soft interrupts and kernel
   threads. The amount of valuable data from these contexts cannot be
   declared as non-interesting for an attacker without deep inspection of
   the code.

   **Note** that assigning guests to a fixed set of physical cores affects
   the ability of the scheduler to do load balancing and might have
   negative effects on CPU utilization depending on the hosting
   scenario. Disabling SMT might be a viable alternative for particular
   scenarios.

   For further information about confining guests to a single core or to a
   group of cores consult the cpusets documentation:

   https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v1/cpusets.rst
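
As a sketch of the cpuset approach, assuming a cgroup-v1 cpuset hierarchy
mounted at /sys/fs/cgroup/cpuset and a guest whose threads should be
confined to the SMT siblings 2-3 of one core (the cpuset name 'guest1' and
the CPU numbers are made up; adjust to your topology):

```shell
#!/bin/sh
# Sketch: create an exclusive cpuset for one guest. Requires root and a
# mounted cgroup-v1 cpuset hierarchy; otherwise only a hint is printed.
CS=/sys/fs/cgroup/cpuset
if [ -d "$CS" ] && [ -w "$CS" ]; then
    mkdir -p "$CS/guest1"
    echo 2-3 > "$CS/guest1/cpuset.cpus"        # both SMT siblings of one core
    echo 0   > "$CS/guest1/cpuset.mems"
    echo 1   > "$CS/guest1/cpuset.cpu_exclusive"
    # Then move the VM's process into the set, e.g.:
    #   echo <qemu-pid> > $CS/guest1/tasks
    result="cpuset guest1 configured"
else
    result="cpuset hierarchy not available or not writable; needs root and cgroup v1"
fi
echo "$result"
```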

.. _interrupt_isolation:

3. Interrupt affinity
^^^^^^^^^^^^^^^^^^^^^

   Interrupts can be made affine to logical CPUs. This is not universally
   true because there are types of interrupts which are truly per-CPU
   interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
   devices affine their interrupts to single CPUs or groups of CPUs per
   queue without allowing the administrator to control the affinities.

   Moving the interrupts which can be affinity controlled away from CPUs
   which run untrusted guests reduces the attack vector space.

   Whether the interrupts which are affine to CPUs running untrusted
   guests provide interesting data for an attacker depends on the system
   configuration and the scenarios which run on the system. While for some
   of the interrupts it can be assumed that they won't expose interesting
   information beyond exposing hints about the host OS memory layout, there
   is no way to make general assumptions.

   Interrupt affinity can be controlled by the administrator via the
   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
   available at:

   https://www.kernel.org/doc/Documentation/core-api/irq/irq-affinity.rst
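
The smp_affinity file takes a hexadecimal CPU bitmask. A sketch of building
such a mask and steering a movable interrupt to the host CPUs (the IRQ
number 24 is a made-up example; pick a real one from /proc/interrupts, and
the write is skipped unless the file is writable):

```shell
#!/bin/sh
# Build a hex affinity mask for the CPUs reserved for the host, then
# (if permitted) steer a movable interrupt to them.
HOST_CPUS="0 1"          # keep interrupts on CPUs 0 and 1
mask=0
for cpu in $HOST_CPUS; do
    mask=$(( mask | (1 << cpu) ))
done
printf 'affinity mask for host CPUs: %x\n' "$mask"

IRQ=24                   # hypothetical IRQ number
if [ -w "/proc/irq/$IRQ/smp_affinity" ]; then
    printf '%x' "$mask" > "/proc/irq/$IRQ/smp_affinity"
fi
```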

.. _smt_control:

4. SMT control
^^^^^^^^^^^^^^

   To prevent the SMT issues of L1TF it might be necessary to disable SMT
   completely. Disabling SMT can have a significant performance impact, but
   the impact depends on the hosting scenario and the type of workloads.
   The impact of disabling SMT also needs to be weighed against the impact
   of other mitigation solutions like confining guests to dedicated cores.

   The kernel provides a sysfs interface to retrieve the status of SMT and
   to control it. It also provides a kernel command line interface to
   control SMT.

   The kernel command line interface consists of the following options:

     =========== ==========================================================
     nosmt       Affects the bring-up of the secondary CPUs during boot. The
                 kernel tries to bring all present CPUs online during the
                 boot process. "nosmt" makes sure that from each physical
                 core only one - the so-called primary (hyper) thread - is
                 activated. Due to a design flaw of Intel processors related
                 to Machine Check Exceptions the non-primary siblings have
                 to be brought up at least partially and are then shut down
                 again.  "nosmt" can be undone via the sysfs interface.

     nosmt=force Has the same effect as "nosmt" but it does not allow
                 undoing the SMT disable via the sysfs interface.
     =========== ==========================================================

   The sysfs interface provides two files:

   - /sys/devices/system/cpu/smt/control
   - /sys/devices/system/cpu/smt/active

   /sys/devices/system/cpu/smt/control:

     This file allows reading out the SMT control state and provides the
     ability to disable or (re)enable SMT. The possible states are:

        ==============  ===================================================
        on              SMT is supported by the CPU and enabled. All
                        logical CPUs can be onlined and offlined without
                        restrictions.

        off             SMT is supported by the CPU and disabled. Only
                        the so-called primary SMT threads can be onlined
                        and offlined without restrictions. An attempt to
                        online a non-primary sibling is rejected.

        forceoff        Same as 'off' but the state cannot be changed.
                        Attempts to write to the control file are rejected.

        notsupported    The processor does not support SMT. It's therefore
                        not affected by the SMT implications of L1TF.
                        Attempts to write to the control file are rejected.
        ==============  ===================================================

     The possible states which can be written into this file to control the
     SMT state are:

     - on
     - off
     - forceoff

   /sys/devices/system/cpu/smt/active:

     This file reports whether SMT is enabled and active, i.e. if on any
     physical core two or more sibling threads are online.

   SMT control is also possible at boot time via the l1tf kernel command
   line parameter in combination with L1D flush control. See
   :ref:`mitigation_control_command_line`.
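
A sketch of querying the SMT state through this interface (read-only as
written; the disabling write is shown only as a comment, since it needs
root and is rejected on 'forceoff' systems):

```shell
#!/bin/sh
# Report the SMT control state and whether SMT is currently active.
SMT=/sys/devices/system/cpu/smt
if [ -r "$SMT/control" ]; then
    smt_control=$(cat "$SMT/control")
    smt_active=$(cat "$SMT/active")
else
    # Older kernel without the SMT control interface.
    smt_control="notavailable"
    smt_active="unknown"
fi
echo "SMT control: $smt_control, active: $smt_active"
# To disable SMT at runtime:
#   echo off > /sys/devices/system/cpu/smt/control
```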

5. Disabling EPT
^^^^^^^^^^^^^^^^

  Disabling EPT for virtual machines provides full mitigation for L1TF even
  with SMT enabled, because the effective page tables for guests are
  managed and sanitized by the hypervisor. However, disabling EPT has a
  significant performance impact, especially when the Meltdown mitigation
  KPTI is enabled.

  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
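
For example, EPT can be disabled persistently with a modprobe configuration
fragment (the file name below is our choice; any ``*.conf`` file under
``/etc/modprobe.d/`` works):

```
# /etc/modprobe.d/kvm-l1tf.conf  (file name is arbitrary)
options kvm-intel ept=0
```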

There is ongoing research and development for new mitigation mechanisms to
address the performance impact of disabling SMT or EPT.

.. _mitigation_control_command_line:

Mitigation control on the kernel command line
---------------------------------------------

The kernel command line allows control of the L1TF mitigations at boot
time with the option "l1tf=". The valid arguments for this option are:

  ============  =============================================================
  full          Provides all available mitigations for the L1TF
                vulnerability. Disables SMT and enables all mitigations in
                the hypervisors, i.e. unconditional L1D flushing.

                SMT control and L1D flush control via the sysfs interface
                is still possible after boot.  Hypervisors will issue a
                warning when the first VM is started in a potentially
                insecure configuration, i.e. SMT enabled or L1D flush
                disabled.

  full,force    Same as 'full', but disables SMT and L1D flush runtime
                control. Implies the 'nosmt=force' command line option.
                (i.e. sysfs control of SMT is disabled.)

  flush         Leaves SMT enabled and enables the default hypervisor
                mitigation, i.e. conditional L1D flushing.

                SMT control and L1D flush control via the sysfs interface
                is still possible after boot.  Hypervisors will issue a
                warning when the first VM is started in a potentially
                insecure configuration, i.e. SMT enabled or L1D flush
                disabled.

  flush,nosmt   Disables SMT and enables the default hypervisor mitigation,
                i.e. conditional L1D flushing.

                SMT control and L1D flush control via the sysfs interface
                is still possible after boot.  Hypervisors will issue a
                warning when the first VM is started in a potentially
                insecure configuration, i.e. SMT enabled or L1D flush
                disabled.

  flush,nowarn  Same as 'flush', but hypervisors will not warn when a VM is
                started in a potentially insecure configuration.

  off           Disables hypervisor mitigations and doesn't emit any
                warnings.
                It also drops the swap size and available RAM limit
                restrictions on both hypervisor and bare metal.

  ============  =============================================================

The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
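
Which ``l1tf=`` option (if any) the running kernel was booted with can be
checked from /proc/cmdline; a small sketch:

```shell
#!/bin/sh
# Report the l1tf= boot option of the running kernel, if any.
if [ -r /proc/cmdline ]; then
    opt=$(grep -o 'l1tf=[^ ]*' /proc/cmdline || true)
else
    opt=""
fi
echo "${opt:-l1tf= not set (kernel default: flush)}"
```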


.. _mitigation_control_kvm:

Mitigation control for KVM - module parameter
---------------------------------------------

The KVM hypervisor mitigation mechanism, flushing the L1D cache when
entering a guest, can be controlled with a module parameter.

The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
following arguments:

  ============  ==============================================================
  always        L1D cache flush on every VMENTER.

  cond          Flush L1D on VMENTER only when the code between VMEXIT and
                VMENTER can leak host memory which is considered
                interesting for an attacker. This still can leak host memory
                which allows e.g. determining the host's address space layout.

  never         Disables the mitigation.
  ============  ==============================================================

The parameter can be provided on the kernel command line, as a module
parameter when loading the modules, and modified at runtime via the sysfs
file:

   /sys/module/kvm_intel/parameters/vmentry_l1d_flush

The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
line, then 'always' is enforced, the kvm-intel.vmentry_l1d_flush module
parameter is ignored and writes to the sysfs file are rejected.
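
A sketch of querying the current flush mode (read-only; the runtime change
is shown only as a comment, since it needs root and KVM loaded):

```shell
#!/bin/sh
# Report the KVM L1D flush mode from sysfs, if the module is loaded.
p=/sys/module/kvm_intel/parameters/vmentry_l1d_flush
if [ -r "$p" ]; then
    mode=$(cat "$p")
else
    mode="unknown (kvm_intel not loaded)"
fi
echo "vmentry_l1d_flush: $mode"
# To change it at runtime (not possible with l1tf=full,force):
#   echo always > /sys/module/kvm_intel/parameters/vmentry_l1d_flush
```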

.. _mitigation_selection:

Mitigation selection guide
--------------------------

1. No virtualization in use
^^^^^^^^^^^^^^^^^^^^^^^^^^^

   The system is protected by the kernel unconditionally and no further
   action is required.

2. Virtualization with trusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   If the guest comes from a trusted source and the guest OS kernel is
   guaranteed to have the L1TF mitigations in place, the system is fully
   protected against L1TF and no further action is required.

   To avoid the overhead of the default L1D flushing on VMENTER, the
   administrator can disable the flushing via the kernel command line and
   sysfs control files. See :ref:`mitigation_control_command_line` and
   :ref:`mitigation_control_kvm`.


3. Virtualization with untrusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

3.1. SMT not supported or disabled
""""""""""""""""""""""""""""""""""

  If SMT is not supported by the processor or disabled in the BIOS or by
  the kernel, it's only required to enforce L1D flushing on VMENTER.

  Conditional L1D flushing is the default behaviour and can be tuned. See
  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.

3.2. EPT not supported or disabled
""""""""""""""""""""""""""""""""""

  If EPT is not supported by the processor or disabled in the hypervisor,
  the system is fully protected. SMT can stay enabled and L1D flushing on
  VMENTER is not required.

  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.

3.3. SMT and EPT supported and active
"""""""""""""""""""""""""""""""""""""

  If SMT and EPT are supported and active then various degrees of
  mitigations can be employed:

  - L1D flushing on VMENTER:

    L1D flushing on VMENTER is the minimal protection requirement, but it
    is only potent in combination with other mitigation methods.

    Conditional L1D flushing is the default behaviour and can be tuned. See
    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.

  - Guest confinement:

    Confinement of guests to a single or a group of physical cores which
    are not running any other processes can reduce the attack surface
    significantly, but interrupts, soft interrupts and kernel threads can
    still expose valuable data to a potential attacker. See
    :ref:`guest_confinement`.

  - Interrupt isolation:

    Isolating the guest CPUs from interrupts can reduce the attack surface
    further, but still allows a malicious guest to explore a limited amount
    of host physical memory. This can at least be used to gain knowledge
    about the host address space layout. The interrupts which have a fixed
    affinity to the CPUs which run the untrusted guests can, depending on
    the scenario, still trigger soft interrupts and schedule kernel threads
    which might expose valuable information. See
    :ref:`interrupt_isolation`.

The above three mitigation methods combined can provide protection to a
certain degree, but the risk of the remaining attack surface has to be
carefully analyzed. For full protection the following methods are
available:

  - Disabling SMT:

    Disabling SMT and enforcing the L1D flushing provides the maximum
    amount of protection. This mitigation does not depend on any of the
    above mitigation methods.

    SMT control and L1D flushing can be tuned by the command line
    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
    time with the matching sysfs control files. See :ref:`smt_control`,
    :ref:`mitigation_control_command_line` and
    :ref:`mitigation_control_kvm`.

  - Disabling EPT:

    Disabling EPT provides the maximum amount of protection as well. It
    does not depend on any of the above mitigation methods. SMT can stay
    enabled and L1D flushing is not required, but the performance impact is
    significant.

    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
    parameter.

3.4. Nested virtual machines
""""""""""""""""""""""""""""

When nested virtualization is in use, three operating systems are involved:
the bare metal hypervisor, the nested hypervisor and the nested virtual
machine.  VMENTER operations from the nested hypervisor into the nested
guest will always be processed by the bare metal hypervisor. If KVM is the
bare metal hypervisor it will:

 - Flush the L1D cache on every switch from the nested hypervisor to the
   nested virtual machine, so that the nested hypervisor's secrets are not
   exposed to the nested virtual machine;

 - Flush the L1D cache on every switch from the nested virtual machine to
   the nested hypervisor; this is a complex operation, and flushing the L1D
   cache prevents the bare metal hypervisor's secrets from being exposed to
   the nested virtual machine;

 - Instruct the nested hypervisor to not perform any L1D cache flush. This
   is an optimization to avoid double L1D flushing.


.. _default_mitigations:

Default mitigations
-------------------

  The kernel default mitigations for vulnerable processors are:

  - PTE inversion to protect against malicious user space. This is done
    unconditionally and cannot be controlled. The swap storage is limited
    to ~16TB.

  - L1D conditional flushing on VMENTER when EPT is enabled for
    a guest.

  The kernel does not by default enforce the disabling of SMT, which leaves
  SMT systems vulnerable when running untrusted guests with EPT enabled.

  The rationale for this choice is:

  - Force-disabling SMT can break existing setups, especially with
    unattended updates.

  - If regular users run untrusted guests on their machine, then L1TF is
    just an add-on to other malware which might be embedded in an untrusted
    guest, e.g. spam bots or attacks on the local network.

    There is no technical way to prevent a user from blindly running
    untrusted code on their machine.

  - It's technically extremely unlikely, and from today's knowledge even
    impossible, that L1TF can be exploited via the most popular attack
    mechanisms like JavaScript, because these mechanisms have no way to
    control PTEs. If this were possible and no other mitigation were
    available, then the default might be different.

  - The administrators of cloud and hosting setups have to carefully
    analyze the risk for their scenarios and make the appropriate
    mitigation choices, which might even vary across their deployed
    machines and also result in other changes to their overall setup.
    There is no way for the kernel to provide a sensible default for this
    kind of scenario.