1. 29 Apr 2013 (2 commits)
  2. 22 Apr 2013 (3 commits)
  3. 17 Apr 2013 (1 commit)
  4. 15 Apr 2013 (2 commits)
  5. 07 Apr 2013 (3 commits)
  6. 02 Apr 2013 (3 commits)
  7. 26 Mar 2013 (3 commits)
  8. 25 Mar 2013 (3 commits)
  9. 22 Mar 2013 (2 commits)
    • [FIXED JENKINS-13154] AnnotationMapper bug was causing massive lock contention when saving fingerprints. · fdc090a3
      Jesse Glick committed
    • [FIXED JENKINS-7813] · 8a3e909d
      Kohsuke Kawaguchi committed
      Fixed the throughput problem in master/slave communication.
      This fix addresses two independent problems.
      
      One was in remoting. During a large sustained data transfer
      (such as artifact archiving and large test reports), the way we
      were doing flow control and ACKing was penalizing us badly.
      I improved the flow control algorithm in remoting 2.23, and also
      increased the advertised window size so that the transfer can saturate
      the available bandwidth even when latency is large. (And unless
      the reader side is excessively slow, this shouldn't increase
      memory consumption.)
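      The advertised-window idea described above can be sketched as follows. This is a hypothetical illustration, not Jenkins remoting's actual Channel code; the class and method names (`WindowedWriter`, `grant`, `ack`) are invented for this sketch. The writer may have at most `window` unacknowledged bytes in flight, and each ACK reopens the window, so a larger advertised window keeps the pipe full even when round-trip latency is high.

      ```java
      // Minimal sketch of advertised-window flow control (hypothetical,
      // not the actual Jenkins remoting implementation).
      public class WindowedWriter {
          private final int window;   // advertised window size in bytes
          private int inFlight;       // bytes sent but not yet ACKed

          public WindowedWriter(int window) { this.window = window; }

          /** Returns how many bytes of a requested len may be sent right now. */
          public synchronized int grant(int len) {
              int n = Math.min(len, window - inFlight);
              inFlight += n;
              return n;
          }

          /** Called when the reader acknowledges len consumed bytes. */
          public synchronized void ack(int len) {
              inFlight -= len;
              notifyAll(); // wake writers blocked on a closed window
          }

          public synchronized int inFlight() { return inFlight; }
      }
      ```

      With a small window, a high-latency link stalls after `window` bytes until an ACK returns; raising the advertised window is what lets the transfer saturate the link.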
      
      The other fix was in trilead-ssh2, the SSH client implementation
      used by the ssh-slaves plugin. The buffer size for flow control
      was too small. I improved the way buffering is done to reduce the
      memory footprint when the reader closely follows the writer, then
      increased the advertised window size. Again, this shouldn't increase
      memory consumption (in fact it will likely reduce it) unless
      the reader end gets abandoned.
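      One way to get the "memory only when the reader falls behind" property described above is a FIFO buffer backed by fixed-size chunks that are freed as they are drained. This is a hypothetical sketch, not trilead-ssh2's actual code; `ChunkedBuffer` and its chunk size are invented for illustration.

      ```java
      import java.util.ArrayDeque;

      // Hypothetical chunked FIFO buffer: storage grows in 4 KiB chunks as
      // the writer gets ahead and is released as the reader catches up, so a
      // large advertised window only costs memory when data actually backs up.
      public class ChunkedBuffer {
          private static final int CHUNK = 4096;
          private final ArrayDeque<byte[]> chunks = new ArrayDeque<>();
          private int readPos;   // read offset within the first chunk
          private int writePos;  // write offset within the last chunk
          private int size;      // unread bytes

          public void write(byte[] src, int off, int len) {
              while (len > 0) {
                  if (chunks.isEmpty() || writePos == CHUNK) {
                      chunks.addLast(new byte[CHUNK]); // grow only as needed
                      writePos = 0;
                  }
                  int n = Math.min(len, CHUNK - writePos);
                  System.arraycopy(src, off, chunks.peekLast(), writePos, n);
                  writePos += n; off += n; len -= n; size += n;
              }
          }

          public int read(byte[] dst, int off, int len) {
              int total = 0;
              while (len > 0 && size > 0) {
                  byte[] head = chunks.peekFirst();
                  int avail = (chunks.size() == 1 ? writePos : CHUNK) - readPos;
                  int n = Math.min(len, avail);
                  System.arraycopy(head, readPos, dst, off, n);
                  readPos += n; off += n; len -= n; size -= n; total += n;
                  if (readPos == CHUNK) {   // chunk fully drained: free it
                      chunks.removeFirst();
                      readPos = 0;
                  }
              }
              return total;
          }

          public int size() { return size; }
      }
      ```

      When the reader closely follows the writer, at most a chunk or two is ever live, regardless of how large the advertised window is.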
      
      On my simulated latency-injected network, the sustained transfer rate is
      now on par with scp. We win for smaller files because of the TCP slow
start penalty that scp would incur, and we lose a bit as files get
      larger due to additional framing overhead.
      
      If you have manually extracted slave.jar and placed it on slaves, you
      need to update it to 2.23 to see the performance benefits.
  10. 18 Mar 2013 (3 commits)
  11. 11 Mar 2013 (3 commits)
  12. 07 Mar 2013 (3 commits)
    • [FIXED JENKINS-16606] · f373cd5c
      Kohsuke Kawaguchi committed
      Integrated the new version of Stapler that includes the fix.
    • fixed all the regressions · 961517a2
      Kohsuke Kawaguchi committed
    • Fixing a test regression. · fba6be7c
      Kohsuke Kawaguchi committed
      See 010b47d7307a11f0750fc01a00d99b96bbab9234 in stapler/stapler for the
      actual fix.
      
      I first discovered this on my local desktop yesterday as I was verifying
      the fix for json-lib 2.4. From what I can tell, this has been broken all
      along, and it was working only because Facets were discovered in a lucky
      order: JellyFacet got a chance to get at it before GroovyFacet did.
      
      When I rebuilt from clean, the problem stopped appearing, so I assumed
      that some factor causes JellyFacet to get loaded before GroovyFacet
      (maybe the order in which Maven iterates dependencies, or something
      like that).
      
      But I'm no longer so sure, now that I know J-on-J saw this problem at
      about the same time. That's just too many coincidences.
      
      Let's see if this makes J-on-J happy.
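      The hazard described above, where behavior depends on the order in which equally capable handlers happen to be discovered, can be made deterministic by ranking candidates explicitly. This is a hypothetical illustration, not Stapler's actual Facet API; the `Facet` interface, `priority()` method, and `FacetResolver` class are invented for this sketch.

      ```java
      import java.util.Comparator;
      import java.util.List;

      // Hypothetical sketch: when several facets can claim the same view and
      // they are consulted in whatever order the classpath yields them, the
      // winner silently depends on the build environment. Picking the winner
      // by an explicit priority removes the order dependence.
      public class FacetResolver {
          interface Facet {
              boolean handles(String view);
              int priority();      // higher wins; an assumption of this sketch
              String name();
          }

          /** Returns the name of the highest-priority facet that handles the view. */
          static String resolve(List<Facet> discovered, String view) {
              return discovered.stream()
                      .filter(f -> f.handles(view))
                      .max(Comparator.comparingInt(Facet::priority))
                      .map(Facet::name)
                      .orElse(null);
          }
      }
      ```

      With this, shuffling the discovery order (classpath order, Maven dependency iteration, and so on) no longer changes which facet serves the view.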
  13. 06 Mar 2013 (2 commits)
  14. 04 Mar 2013 (3 commits)
  15. 02 Mar 2013 (1 commit)
  16. 28 Feb 2013 (3 commits)