    x86/mce: Always use 64-bit timestamps · bc39f010
    Committed by Arnd Bergmann
    The machine check timestamp uses get_seconds(), which returns an
    'unsigned long' number that might overflow on 32-bit architectures (in
    the distant future) and is therefore deprecated.
    
    The normal replacement would be ktime_get_real_seconds(), but that needs
    to use a sequence lock that might cause a deadlock if the MCE happens at
    just the wrong moment. The __ktime_get_real_seconds() variant skips
    that lock and is safer here, but carries a minuscule risk of returning
    the wrong time when it is read on a 32-bit architecture concurrently
    with an update of the epoch, i.e. from before the y2106 overflow time
    to after, or vice versa.
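    
    For reference, a sketch of the two accessors, simplified from the
    timekeeping code of this era (kernel/time/timekeeping.c; tk_core and
    its seqcount are kernel-internal details, abbreviated here):
    
        /*
         * Locked variant: on 32-bit it loops on the timekeeper
         * seqcount, which can deadlock if an MCE interrupts the
         * writer in the middle of an update.
         */
        time64_t ktime_get_real_seconds(void)
        {
        	struct timekeeper *tk = &tk_core.timekeeper;
        	unsigned int seq;
        	time64_t seconds;
        
        	if (IS_ENABLED(CONFIG_64BIT))
        		return tk->xtime_sec;
        
        	do {
        		seq = read_seqcount_begin(&tk_core.seq);
        		seconds = tk->xtime_sec;
        	} while (read_seqcount_retry(&tk_core.seq, seq));
        
        	return seconds;
        }
        
        /*
         * Lockless variant: reads xtime_sec unprotected, so on 32-bit
         * it may see a torn epoch update, as described above.
         */
        time64_t __ktime_get_real_seconds(void)
        {
        	struct timekeeper *tk = &tk_core.timekeeper;
        
        	return tk->xtime_sec;
        }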
    
    This seems to be an acceptable risk in this particular case, and is the
    same thing we do in kdb.
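    
    The change itself is a one-line substitution in mce_setup(); a sketch
    of the hunk, assuming the timestamp is stored in m->time as in
    mainline mce.c:
    
        --- a/arch/x86/kernel/cpu/mcheck/mce.c
        +++ b/arch/x86/kernel/cpu/mcheck/mce.c
        @@ static void mce_setup(struct mce *m) @@
        -	m->time = get_seconds();
        +	m->time = __ktime_get_real_seconds();
    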
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: linux-edac <linux-edac@vger.kernel.org>
    Cc: y2038@lists.linaro.org
    Link: http://lkml.kernel.org/r/20180618100759.1921750-1-arnd@arndb.de