1. 23 Sep 2014 (5 commits)
  2. 16 Sep 2014 (7 commits)
  3. 15 Sep 2014 (9 commits)
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 · 3630056d
      Committed by Linus Torvalds
      Pull crypto fixes from Herbert Xu:
       "This fixes the newly added drbg generator so that it actually works on
        32-bit machines.  Previously the code was only tested on 64-bit and on
        32-bit it overflowed and simply doesn't work"
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
        crypto: drbg - remove check for uninitialized DRBG handle
        crypto: drbg - backport "fix maximum value checks on 32 bit systems"
    • Linux 3.17-rc5 · 9e82bf01
      Committed by Linus Torvalds
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs · 83373f70
      Committed by Linus Torvalds
      Pull vfs fixes from Al Viro:
       "double iput() on failure exit in lustre, racy removal of spliced
        dentries from ->s_anon in __d_materialise_dentry() plus a bunch of
        assorted RCU pathwalk fixes"
      
      The RCU pathwalk fixes end up fixing a couple of cases where we
      incorrectly dropped out of RCU walking, due to incorrect initialization
      and testing of the sequence locks in some corner cases.  Since dropping
      out of RCU walk mode forces the slow locked accesses, those corner cases
      slowed down quite dramatically.
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
        be careful with nd->inode in path_init() and follow_dotdot_rcu()
        don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu()
        fix bogus read_seqretry() checks introduced in b37199e6
        move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon)
        [fix] lustre: d_make_root() does iput() on dentry allocation failure
    • vfs: avoid non-forwarding large load after small store in path lookup · 9226b5b4
      Committed by Linus Torvalds
      The performance regression that Josef Bacik reported in the pathname
      lookup (see commit 99d263d4 "vfs: fix bad hashing of dentries") made
      me look at performance stability of the dcache code, just to verify that
      the problem was actually fixed.  That turned up a few other problems in
      this area.
      
      There are a few cases where we exit RCU lookup mode and go to the slow
      serializing case when we shouldn't, Al has fixed those and they'll come
      in with the next VFS pull.
      
      But my performance verification also shows that link_path_walk() turns
      out to have a very unfortunate 32-bit store of the length and hash of
      the name we look up, followed by a 64-bit read of the combined hash_len
      field.  That screws up the processor store to load forwarding, causing
      an unnecessary hiccup in this critical routine.
      
      It's caused by the ugly calling convention for the "hash_name()"
      function, and easily fixed by just making hash_name() fill in the whole
      'struct qstr' rather than passing it a pointer to just the hash value.
      
      With that, the profile for this function looks much smoother.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
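
      The store/load mismatch described above is easy to picture with a small,
      hedged userspace sketch.  This is not the kernel's namei.c code; the
      struct layout mirrors the kernel's qstr idea (hash in the low 32 bits of
      hash_len, length in the high 32 bits, little-endian assumed) and
      make_hashlen() is an illustrative stand-in for hashlen_create():

      #include <stdint.h>
      #include <stdio.h>

      struct qstr_like {
          union {
              struct {
                  uint32_t hash;
                  uint32_t len;
              };
              uint64_t hash_len;       /* read back as one 64-bit value */
          };
          const char *name;
      };

      static uint64_t make_hashlen(uint32_t hash, uint32_t len)
      {
          return ((uint64_t)len << 32) | hash;
      }

      /* Problematic pattern: two 32-bit stores immediately followed by a
       * 64-bit load of hash_len; the CPU cannot forward the narrow stores to
       * the wide load and stalls. */
      static void fill_split(struct qstr_like *q, uint32_t hash, uint32_t len)
      {
          q->hash = hash;
          q->len  = len;
      }

      /* Fixed pattern: compute the combined value once and do a single 64-bit
       * store, so a following 64-bit read of hash_len forwards cleanly. */
      static void fill_whole(struct qstr_like *q, uint32_t hash, uint32_t len)
      {
          q->hash_len = make_hashlen(hash, len);
      }

      int main(void)
      {
          struct qstr_like a = { .name = "example" }, b = { .name = "example" };

          fill_split(&a, 0xdeadbeef, 7);
          fill_whole(&b, 0xdeadbeef, 7);
          printf("%016llx\n%016llx\n",
                 (unsigned long long)a.hash_len,
                 (unsigned long long)b.hash_len);
          return 0;
      }
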
    • Merge branch 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux · 5910cfdc
      Committed by Linus Torvalds
      Pull parisc updates from Helge Deller:
       "The most important patch is a new Light Weigth Syscall (LWS) for 8,
        16, 32 and 64 bit atomic CAS operations which is required in order to
        be able to implement the atomic gcc builtins on our platform.
      
        Other than that, we wire up the seccomp, getrandom and memfd_create
        syscalls, fix a minor off-by-one bug and a wrong printk string"
      
      * 'parisc-3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
        parisc: Implement new LWS CAS supporting 64 bit operations.
        parisc: Wire up seccomp, getrandom and memfd_create syscalls
        parisc: dino: fix %d confusingly prefixed with 0x in format string
        parisc: sys_hpux: NUL terminator is one past the end
    • be careful with nd->inode in path_init() and follow_dotdot_rcu() · 4023bfc9
      Committed by Al Viro
      in the former we simply check if dentry is still valid after picking
      its ->d_inode; in the latter we fetch ->d_inode in the same places
      where we fetch dentry and its ->d_seq, under the same checks.
      
      Cc: stable@vger.kernel.org # 2.6.38+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • don't bugger nd->seq on set_root_rcu() from follow_dotdot_rcu() · 7bd88377
      Committed by Al Viro
      return the value instead, and have path_init() do the assignment.  Broken by
      "vfs: Fix absolute RCU path walk failures due to uninitialized seq number",
      which was Cc-stable with 2.6.38+ as destination.  This one should go where
      it went.
      
      To avoid dummy value returned in case when root is already set (it would do
      no harm, actually, since the only caller that doesn't ignore the return value
      is guaranteed to have nd->root *not* set, but it's more obvious that way),
      lift the check into callers.  And do the same to set_root(), to keep them
      in sync.
      
      Cc: stable@vger.kernel.org # 2.6.38+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Merge tag 'ntb-3.17' of git://github.com/jonmason/ntb · 02c1be3d
      Committed by Linus Torvalds
      Pull ntb driver bugfixes from Jon Mason:
       "NTB driver fixes for queue spread and buffer alignment.  Also, update
        to MAINTAINERS to reflect new e-mail address"
      
      * tag 'ntb-3.17' of git://github.com/jonmason/ntb:
        ntb: Add alignment check to meet hardware requirement
        MAINTAINERS: update NTB info
        NTB: correct the spread of queues over mw's
    • Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 8ac19f0d
      Committed by Linus Torvalds
      Pull ARM irq chip fixes from Thomas Gleixner:
       "Another pile of ARM specific irq chip fixlets:
      
         - off by one bugs in the crossbar driver
         - missing annotations
         - a bunch of "make it compile" updates
      
        I pulled the lot today from Jason, but it has been in -next for at
        least a week"
      
      * 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        irqchip: gic-v3: Declare rdist as __percpu pointer to __iomem pointer
        irqchip: gic: Make gic_default_routable_irq_domain_ops static
        irqchip: exynos-combiner: Fix compilation error on ARM64
        irqchip: crossbar: Off by one bugs in init
        irqchip: gic-v3: Tag all low level accessors __maybe_unused
        irqchip: gic-v3: Only define gic_peek_irq() when building SMP
  4. 14 Sep 2014 (14 commits)
    • Merge tag 'irqchip-urgent-3.17' of git://git.infradead.org/users/jcooper/linux into irq/urgent · 938c04a8
      Committed by Thomas Gleixner
      irqchip fixes for v3.17 from Jason Cooper
      
       - GIC/GICV3: Various fixlets
       - crossbar: Fix off-by-one bug
       - exynos-combiner: Fix arm64 build error
    • ntb: Add alignment check to meet hardware requirement · 3cc5ba19
      Committed by Dave Jiang
      The value programmed into the NTB translate register must be aligned to
      the BAR size.  This alignment check makes sure that the allocated DMA
      memory has the proper alignment.  Another requirement for NTB to function
      properly with a memory window BAR size greater than or equal to 4M is to
      use the CMA feature in the 3.16 kernel with the appropriate
      CONFIG_CMA_ALIGNMENT and CONFIG_CMA_SIZE_MBYTES set.
      Signed-off-by: Dave Jiang <dave.jiang@intel.com>
      Signed-off-by: Jon Mason <jdmason@kudzu.us>
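
      Since BAR sizes are powers of two, the alignment requirement boils down
      to a mask test, much like the kernel's IS_ALIGNED() macro.  A hedged,
      self-contained sketch (function names and addresses are illustrative,
      not the driver's):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Power-of-two alignment test for a DMA address against the BAR size. */
      static bool bar_aligned(uint64_t dma_addr, uint64_t bar_size)
      {
          return (dma_addr & (bar_size - 1)) == 0;
      }

      int main(void)
      {
          /* A 4M window: a 0x400000-aligned buffer passes, others fail. */
          printf("%d\n", bar_aligned(0x80400000ULL, 0x400000));  /* 1 */
          printf("%d\n", bar_aligned(0x80401000ULL, 0x400000));  /* 0 */
          return 0;
      }
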
    • MAINTAINERS: update NTB info · 9ef6bf6c
      Committed by Jon Mason
      Update my contact info to my personal email address and add Dave Jiang.
      Signed-off-by: Jon Mason <jon.mason@intel.com>
      Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    • NTB: correct the spread of queues over mw's · a1413cfb
      Committed by Jon Mason
      The detection of an uneven number of queues on the given memory windows
      was not correct.  The mw_num is zero-based, and the modulo should have
      been a division in order to spread the queues evenly over the mw's.
      Signed-off-by: Jon Mason <jon.mason@intel.com>
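
      As a quick illustration of the arithmetic being corrected (the counts
      and variable names below are invented for the example, not taken from
      the driver): with 8 queues spread over 2 memory windows, the modulo
      gives 0 queues per window while the division gives the intended 4.

      #include <stdio.h>

      int main(void)
      {
          unsigned int qp_count = 8, mw_count = 2;

          printf("mod (wrong):      %u\n", qp_count % mw_count);  /* 0 */
          printf("division (fixed): %u\n", qp_count / mw_count);  /* 4 */
          return 0;
      }
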
    • fix bogus read_seqretry() checks introduced in b37199e6 · f5be3e29
      Committed by Al Viro
      read_seqretry() returns true on mismatch, not on match...
      
      Cc: stable@vger.kernel.org # 3.15+
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
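
      The polarity is easy to get backwards, so here is a hedged userspace
      analogue of the seqlock read side (a simplified model with no memory
      barriers and no writer, not the kernel's seqlock.h): the retry predicate
      is true when the sequence number changed, meaning the speculative read
      must be discarded and retried.

      #include <stdio.h>

      struct myseq { unsigned seq; long data; };

      static unsigned my_read_seqbegin(const struct myseq *s)
      {
          return s->seq;
      }

      /* True means "a writer intervened, the data is stale, retry",
       * i.e. true on MISMATCH, not on match. */
      static int my_read_seqretry(const struct myseq *s, unsigned start)
      {
          return s->seq != start;
      }

      int main(void)
      {
          struct myseq s = { .seq = 2, .data = 42 };
          unsigned start;
          long v;

          do {
              start = my_read_seqbegin(&s);
              v = s.data;                      /* speculative read */
          } while (my_read_seqretry(&s, start));

          printf("%ld\n", v);
          return 0;
      }
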
    • move the call of __d_drop(anon) into __d_materialise_unique(dentry, anon) · 6f18493e
      Committed by Al Viro
      and lock the right list there
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • [fix] lustre: d_make_root() does iput() on dentry allocation failure · f77ced66
      Committed by Al Viro
      double-free is a bad thing
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Merge branches 'locking-urgent-for-linus' and 'timers-urgent-for-linus' of... · 1536340e
      Committed by Linus Torvalds
      Merge branches 'locking-urgent-for-linus' and 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
      
      Pull futex and timer fixes from Thomas Gleixner:
       "A oneliner bugfix for the jinxed futex code:
      
         - Drop hash bucket lock in the error exit path.  I really could slap
           myself for intruducing that bug while fixing all the other horror
           in that code three month ago ...
      
        and the timer department is not too proud about the following fixes:
      
         - Deal with a long standing rounding bug in the timeval to jiffies
           conversion.  It's a real issue and this fix fell through the cracks
           for quite some time.
      
         - Another round of alarmtimer fixes.  Finally this code gets used
           more widely and the subtle issues hidden for quite some time are
           noticed and fixed.  Nothing really exciting, just the itty bitty
           details which bite the serious users here and there"
      
      * 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        futex: Unlock hb->lock in futex_wait_requeue_pi() error path
      
      * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        alarmtimer: Lock k_itimer during timer callback
        alarmtimer: Do not signal SIGEV_NONE timers
        alarmtimer: Return relative times in timer_gettime
        jiffies: Fix timeval conversion to jiffies
    • parisc: Implement new LWS CAS supporting 64 bit operations. · 89206491
      Committed by Guy Martin
      The current LWS CAS only works correctly for 32-bit.  The new LWS allows
      for CAS operations of variable size.
      Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
      Cc: <stable@vger.kernel.org> # 3.13+
      Signed-off-by: Helge Deller <deller@gmx.de>
    • vfs: fix bad hashing of dentries · 99d263d4
      Committed by Linus Torvalds
      Josef Bacik found a performance regression between 3.2 and 3.10 and
      narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long'
      accesses for dcache name comparison and hashing"). He reports:
      
       "The test case is essentially
      
            for (i = 0; i < 1000000; i++)
                    mkdir("a$i");
      
        On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
        dir/sec with 3.10.  This is because we spend waaaaay more time in
        __d_lookup on 3.10 than in 3.2.
      
        The new hashing function for strings is suboptimal for <
        sizeof(unsigned long) string names (and hell even > sizeof(unsigned
        long) string names that I've tested).  I broke out the old hashing
        function and the new one into a userspace helper to get real numbers
        and this is what I'm getting:
      
            Old hash table had 1000000 entries, 0 dupes, 0 max dupes
            New hash table had 12628 entries, 987372 dupes, 900 max dupes
            We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
      
        My test does the hash, and then does the d_hash into an integer pointer
        array the same size as the dentry hash table on my system, and then
        just increments the value at the address we got to see how many
        entries we overlap with.
      
        As you can see, the old hash function ended up with all 1 million
        entries in their own buckets, whereas with the new one they are only
        distributed among ~12.5k buckets, which is why we're using so much
        more CPU in __d_lookup".
      
      The reason for this hash regression is two-fold:
      
       - On 64-bit architectures the down-mixing of the original 64-bit
         word-at-a-time hash into the final 32-bit hash value is very
         simplistic and suboptimal, and just adds the two 32-bit parts
         together.
      
         In particular, because there is no bit shuffling and the mixing
         boundary is also a byte boundary, similar character patterns in the
         low and high word easily end up just canceling each other out.
      
       - the old byte-at-a-time hash mixed each byte into the final hash as it
         hashed the path component name, resulting in the low bits of the hash
         generally being a good source of hash data.  That is not true for the
         word-at-a-time case, and the hash data is distributed among all the
         bits.
      
      The fix is the same in both cases: do a better job of mixing the bits up
      and using as much of the hash data as possible.  We already have the
      "hash_32|64()" functions to do that.
      Reported-by: Josef Bacik <jbacik@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
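
      The "adds the two 32-bit parts together" weakness is easy to demonstrate.
      The hedged userspace sketch below is not the kernel diff: fold_naive()
      stands in for the old down-mixing and fold_mixed() for a hash_64()-style
      fold using the GOLDEN_RATIO_PRIME_64 constant of that era; inputs whose
      halves cancel collide under the former but not under the latter.

      #include <stdint.h>
      #include <stdio.h>

      /* Old-style fold: add the high and low 32-bit halves.  Similar patterns
       * in the two halves can cancel each other out. */
      static uint32_t fold_naive(uint64_t hash)
      {
          return (uint32_t)(hash + (hash >> 32));
      }

      /* hash_64(hash, 32)-style fold: multiply by a 64-bit constant and keep
       * the top 32 bits, so every input bit influences the result. */
      static uint32_t fold_mixed(uint64_t hash)
      {
          return (uint32_t)((hash * 0x9e37fffffffc0001ULL) >> 32);
      }

      int main(void)
      {
          uint64_t a = 0x0000000100000000ULL;  /* halves (1, 0) */
          uint64_t b = 0x0000000000000001ULL;  /* halves (0, 1) */

          printf("naive: %08x %08x\n", fold_naive(a), fold_naive(b)); /* collide  */
          printf("mixed: %08x %08x\n", fold_mixed(a), fold_mixed(b)); /* distinct */
          return 0;
      }
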
    • Make hash_64() use a 64-bit multiply when appropriate · 23d0db76
      Committed by Linus Torvalds
      The hash_64() function historically does the multiply by the
      GOLDEN_RATIO_PRIME_64 number with explicit shifts and adds, because
      unlike the 32-bit case, gcc seems unable to turn the constant multiply
      into the more appropriate shift and adds when required.
      
      However, that means that we generate those shifts and adds even when the
      architecture has a fast multiplier, and could just do it better in
      hardware.
      
      Use the now-cleaned-up CONFIG_ARCH_HAS_FAST_MULTIPLIER (together with
      "is it a 64-bit architecture") to decide whether to use an integer
      multiply or the explicit sequence of shift/add instructions.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
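
      For reference, the explicit shift-and-add sequence and the straight
      multiply really do compute the same thing.  A hedged userspace check
      follows; the constant and the shift pattern are reproduced from the
      hash_64() of that era as best I recall, and the assertion verifies that
      the two formulations agree in any case.

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      #define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

      /* The multiply expanded into shifts and adds. */
      static uint64_t hash64_shifts(uint64_t val, unsigned int bits)
      {
          uint64_t hash = val;
          uint64_t n = val;

          n <<= 18; hash -= n;
          n <<= 33; hash -= n;
          n <<= 3;  hash += n;
          n <<= 3;  hash -= n;
          n <<= 4;  hash += n;
          n <<= 2;  hash += n;
          return hash >> (64 - bits);
      }

      /* The single multiply a fast-multiplier machine would rather do. */
      static uint64_t hash64_mul(uint64_t val, unsigned int bits)
      {
          return (val * GOLDEN_RATIO_PRIME_64) >> (64 - bits);
      }

      int main(void)
      {
          for (uint64_t v = 0; v < 1000000; v++)
              assert(hash64_shifts(v, 32) == hash64_mul(v, 32));
          printf("shift/add and multiply agree\n");
          return 0;
      }
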
    • Make ARCH_HAS_FAST_MULTIPLIER a real config variable · 72d93104
      Committed by Linus Torvalds
      It used to be an ad-hoc hack defined by the x86 version of
      <asm/bitops.h> that enabled a couple of library routines to know whether
      an integer multiply is faster than repeated shifts and additions.
      
      This just makes it use the real Kconfig system instead, and makes x86
      (which was the only architecture that did this) select the option.
      
      NOTE! Even for x86, this really is kind of wrong.  If we cared, we would
      probably not enable this for builds optimized for netburst (P4), where
      shifts-and-adds are generally faster than multiplies.  This patch does
      *not* change that kind of logic, though, it is purely a syntactic change
      with no code changes.
      
      This was triggered by the fact that we have other places that really
      want to know "do I want to expand multiplications by constants by hand
      or not", particularly the hash generation code.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
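
      With the option a real Kconfig symbol, generic code can choose its
      strategy at compile time.  A minimal, hedged sketch of the idea follows;
      it is a userspace stand-in (build with -DCONFIG_ARCH_HAS_FAST_MULTIPLIER
      to take the multiply path), not a verbatim copy of include/linux/hash.h,
      and the real kernel test also checks BITS_PER_LONG == 64.

      #include <stdint.h>
      #include <stdio.h>

      #define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

      static uint64_t hash_64_demo(uint64_t val, unsigned int bits)
      {
          uint64_t hash = val;

      #ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
          /* The architecture says a 64-bit multiply is cheap: just do it. */
          hash = hash * GOLDEN_RATIO_PRIME_64;
      #else
          /* No fast multiplier: expand the multiply into shifts and adds,
           * the same sequence shown in the previous sketch. */
          uint64_t n = hash;
          n <<= 18; hash -= n; n <<= 33; hash -= n; n <<= 3; hash += n;
          n <<= 3;  hash -= n; n <<= 4;  hash += n; n <<= 2; hash += n;
      #endif
          return hash >> (64 - bits);
      }

      int main(void)
      {
          printf("%llx\n", (unsigned long long)hash_64_demo(12345, 32));
          return 0;
      }
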
    • Merge tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm · 186cec31
      Committed by Linus Torvalds
      Pull device mapper fix from Mike Snitzer:
       "Fix a race in the DM cache target that caused dirty blocks to be
        marked as clean.  This could cause no writeback to occur or spurious
        dirty block counts"
      
      * tag 'dm-3.17-fix2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
        dm cache: fix race causing dirty blocks to be marked as clean
    • Merge branch 'for-linus' of git://git.kernel.dk/linux-block · 645cc093
      Committed by Linus Torvalds
      Pull block fixes from Jens Axboe:
       "A small collection of fixes for the current rc series.  This contains:
      
         - Two small blk-mq patches from Rob Elliott, cleaning up error case
           at init time.
      
         - A fix from Ming Lei, fixing SG merging for blk-mq where
           QUEUE_FLAG_SG_NO_MERGE is the default.
      
         - A dev_t minor lifetime fix from Keith, fixing an issue where a
           minor might be reused before all references to it were gone.
      
         - Fix from Alan Stern where an unbalanced queue bypass caused SCSI
           some headaches when it does a series of add/del on devices without
            fully registering the queue.
      
         - A fix from me for improving the scaling of tag depth in blk-mq if
           we are short on memory"
      
      * 'for-linus' of git://git.kernel.dk/linux-block:
        blk-mq: scale depth and rq map appropriate if low on memory
        Block: fix unbalanced bypass-disable in blk_register_queue
        block: Fix dev_t minor allocation lifetime
        blk-mq: cleanup after blk_mq_init_rq_map failures
        blk-mq: pass along blk_mq_alloc_tag_set return values
        blk-merge: fix blk_recount_segments
  5. 13 Sep 2014 (5 commits)
    • Merge tag 'stable/for-linus-3.17-b-rc4-arm-tag' of... · fc486b03
      Committed by Linus Torvalds
      Merge tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
      
      Pull Xen ARM bugfix from Stefano Stabellini:
       "The patches fix the "xen_add_mach_to_phys_entry: cannot add" bug that
        has been affecting xen on arm and arm64 guests since 3.16.  They
        require a few hypervisor side changes that just went in xen-unstable.
      
        A couple of days ago David sent out a pull request with a few other
        Xen fixes (it is already in master).  Sorry we didn't synchronize
        better among us"
      
      * tag 'stable/for-linus-3.17-b-rc4-arm-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
        xen/arm: remove mach_to_phys rbtree
        xen/arm: reimplement xen_dma_unmap_page & friends
        xen/arm: introduce XENFEAT_grant_map_identity
    • alarmtimer: Lock k_itimer during timer callback · 474e941b
      Committed by Richard Larocque
      Locks the k_itimer's it_lock member when handling the alarm timer's
      expiry callback.
      
      The regular posix timers defined in posix-timers.c have this lock held
      during timout processing because their callbacks are routed through
      posix_timer_fn().  The alarm timers follow a different path, so they
      ought to grab the lock somewhere else.
      
      Cc: stable@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Sharvil Nanavati <sharvil@google.com>
      Signed-off-by: Richard Larocque <rlarocque@google.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • alarmtimer: Do not signal SIGEV_NONE timers · 265b81d2
      Committed by Richard Larocque
      Avoids sending a signal to alarm timers created with sigev_notify set to
      SIGEV_NONE by checking for that special case in the timeout callback.
      
      The regular posix timers avoid sending signals to SIGEV_NONE timers by
      not scheduling any callbacks for them in the first place.  Although it
      would be possible to do something similar for alarm timers, it's simpler
      to handle this as a special case in the timeout.
      
      Prior to this patch, the alarm timer would ignore the sigev_notify value
      and try to deliver signals to the process anyway.  Even worse, the
      sanity check for the value of sigev_signo was skipped when SIGEV_NONE was
      specified, so the signal number could be bogus.  If sigev_signo was an
      uninitialized value (as it often would be if SIGEV_NONE is used), then
      it's hard to predict which signal will be sent.
      
      Cc: stable@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Sharvil Nanavati <sharvil@google.com>
      Signed-off-by: Richard Larocque <rlarocque@google.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
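
      Seen from userspace, the contract being enforced is that a SIGEV_NONE
      timer is only polled, never signalled.  A small hedged example using the
      standard POSIX timer API (CLOCK_REALTIME is used here for simplicity;
      the alarmtimer path corresponds to the *_ALARM clock ids, and older
      glibc may need -lrt when linking):

      #include <signal.h>
      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          struct sigevent sev = { .sigev_notify = SIGEV_NONE };
          struct itimerspec its = { .it_value = { .tv_sec = 1 } };
          timer_t tid;

          if (timer_create(CLOCK_REALTIME, &sev, &tid) != 0)
              return 1;
          timer_settime(tid, 0, &its, NULL);

          /* No signal is ever expected; the caller polls instead. */
          timer_gettime(tid, &its);
          printf("remaining: %lld.%09ld s\n",
                 (long long)its.it_value.tv_sec, (long)its.it_value.tv_nsec);

          timer_delete(tid);
          return 0;
      }
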
    • alarmtimer: Return relative times in timer_gettime · e86fea76
      Committed by Richard Larocque
      Returns the time remaining for an alarm timer, rather than the time at
      which it is scheduled to expire.  If the timer has already expired or it
      is not currently scheduled, the it_value's members are set to zero.
      
      This new behavior matches that of the other posix-timers and the POSIX
      specifications.
      
      This is a change in user-visible behavior, and may break existing
      applications.  Hopefully, few users rely on the old incorrect behavior.
      
      Cc: stable@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Sharvil Nanavati <sharvil@google.com>
      Signed-off-by: Richard Larocque <rlarocque@google.com>
      [jstultz: minor style tweak]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
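
      A hedged sketch of the semantics the patch restores, as plain arithmetic
      rather than kernel code (the helper and values are illustrative): report
      the time remaining until expiry, and zero once the timer has expired or
      was never armed.

      #include <stdint.h>
      #include <stdio.h>

      /* Remaining time until an absolute expiry, clamped to zero when the
       * timer has already fired (or is not armed at all). */
      static int64_t remaining_ns(int64_t expires_ns, int64_t now_ns)
      {
          int64_t rem = expires_ns - now_ns;

          return rem > 0 ? rem : 0;
      }

      int main(void)
      {
          printf("%lld\n", (long long)remaining_ns(5000000000LL, 3000000000LL)); /* 2000000000 */
          printf("%lld\n", (long long)remaining_ns(1000000000LL, 3000000000LL)); /* 0 */
          return 0;
      }
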
    • jiffies: Fix timeval conversion to jiffies · d78c9300
      Committed by Andrew Hunter
      timeval_to_jiffies tried to round a timeval up to an integral number
      of jiffies, but the logic for doing so was incorrect: intervals
      corresponding to exactly N jiffies would become N+1.  This manifested
      itself particularly when repeatedly stopping/starting an itimer:
      
      setitimer(ITIMER_PROF, &val, NULL);
      setitimer(ITIMER_PROF, NULL, &val);
      
      would add a full tick to val, _even if it was exactly representable in
      terms of jiffies_ (say, the result of a previous rounding.)  Doing
      this repeatedly would cause unbounded growth in val.  So fix the math.
      
      Here's what was wrong with the conversion: we essentially computed
      (eliding seconds)
      
      jiffies = usec  * (NSEC_PER_USEC/TICK_NSEC)
      
      by using scaling arithmetic, which takes the best approximation of
      NSEC_PER_USEC/TICK_NSEC with denominator 2^USEC_JIFFIE_SC, i.e.
      x/(2^USEC_JIFFIE_SC), and computed:
      
      jiffies = (usec * x) >> USEC_JIFFIE_SC
      
      and rounded this calculation up in the intermediate form (since we
      can't necessarily exactly represent TICK_NSEC in usec.) But the
      scaling arithmetic is a (very slight) *over*approximation of the true
      value; that is, instead of dividing by (1 usec/ 1 jiffie), we
      effectively divided by (1 usec/1 jiffie)-epsilon (rounding
      down). This would normally be fine, but we want to round timeouts up,
      and we did so by adding 2^USEC_JIFFIE_SC - 1 before the shift; this
      would be fine if our division was exact, but dividing this by the
      slightly smaller factor was equivalent to adding just _over_ 1 to the
      final result (instead of just _under_ 1, as desired.)
      
      In particular, with HZ=1000, we consistently computed that 10000 usec
      was 11 jiffies; the same was true for any exact multiple of
      TICK_NSEC.
      
      We could possibly still round in the intermediate form, adding
      something less than 2^USEC_JIFFIE_SC - 1, but easier still is to
      convert usec->nsec, round in nanoseconds, and then convert using
      time*spec*_to_jiffies.  This adds one constant multiplication, and is
      not observably slower in microbenchmarks on recent x86 hardware.
      
      Tested: the following program:
      
      #include <stdio.h>
      #include <stddef.h>
      #include <sys/time.h>

      int main() {
        struct itimerval zero = {{0, 0}, {0, 0}};
        /* Initially set to 10 ms. */
        struct itimerval initial = zero;
        initial.it_interval.tv_usec = 10000;
        setitimer(ITIMER_PROF, &initial, NULL);
        /* Save and restore several times. */
        for (size_t i = 0; i < 10; ++i) {
          struct itimerval prev;
          setitimer(ITIMER_PROF, &zero, &prev);
          /* on old kernels, this goes up by TICK_USEC every iteration */
          printf("previous value: %ld %ld %ld %ld\n",
                 prev.it_interval.tv_sec, prev.it_interval.tv_usec,
                 prev.it_value.tv_sec, prev.it_value.tv_usec);
          setitimer(ITIMER_PROF, &prev, NULL);
        }
        return 0;
      }
      
      Cc: stable@vger.kernel.org
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Reviewed-by: Paul Turner <pjt@google.com>
      Reported-by: Aaron Jacobs <jacobsa@google.com>
      Signed-off-by: Andrew Hunter <ahh@google.com>
      [jstultz: Tweaked to apply to 3.17-rc]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
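
      The over-rounding reproduces with the changelog's own numbers.  Below is
      a hedged userspace model of the two conversions; the scale shift SC and
      the way x is derived are illustrative stand-ins rather than the kernel's
      USEC_JIFFIE_SC machinery, but with HZ=1000 the old-style conversion turns
      exactly 10 ms into 11 jiffies while rounding in nanoseconds gives 10.

      #include <stdint.h>
      #include <stdio.h>

      #define HZ            1000ULL
      #define NSEC_PER_USEC 1000ULL
      #define TICK_NSEC     (1000000000ULL / HZ)   /* 1 ms per jiffy */
      #define SC            29                     /* illustrative scale shift */

      /* Old-style conversion: scale by a rounded-up approximation of
       * NSEC_PER_USEC/TICK_NSEC, then round up again before shifting.  The
       * approximation slightly overshoots, so exact multiples of a jiffy
       * gain an extra tick. */
      static uint64_t usecs_to_jiffies_old(uint64_t usec)
      {
          uint64_t x = ((NSEC_PER_USEC << SC) + TICK_NSEC - 1) / TICK_NSEC;

          return (usec * x + (1ULL << SC) - 1) >> SC;
      }

      /* The essence of the fix: convert to nanoseconds first and round up
       * there, where the division is exact. */
      static uint64_t usecs_to_jiffies_new(uint64_t usec)
      {
          uint64_t nsec = usec * NSEC_PER_USEC;

          return (nsec + TICK_NSEC - 1) / TICK_NSEC;
      }

      int main(void)
      {
          printf("old: %llu jiffies\n", (unsigned long long)usecs_to_jiffies_old(10000)); /* 11 */
          printf("new: %llu jiffies\n", (unsigned long long)usecs_to_jiffies_new(10000)); /* 10 */
          return 0;
      }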