- 10 Feb 2015, 1 commit

Committed by Szabolcs Nagy
just defining the necessary constants:

LD_B1B_MAX is 2^113 - 1 in base 10^9. KMAX is 2048, so the x array can hold up to 18432 decimal digits (the worst case is converting 2^-16495 = 5^16495 * 10^-16495 to binary, which requires processing int(log10(5)*16495)+1 = 11530 decimal digits after discarding the leading zeros; the conversion needs some headroom in x, but KMAX is more than enough for that).

However, this code is not optimal on archs with IEEE binary128 long double, because the arithmetic is software emulated (on all such platforms, as far as I know), which means a big and slow strtod.
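The digit counts quoted above can be checked with a few lines of arithmetic. The following is an illustrative sketch, not code from musl; only KMAX comes from the commit message, the rest is plain math:

```c
#include <math.h>
#include <stdio.h>

#define KMAX 2048   /* base-10^9 slots in the x[] buffer */

int main(void)
{
    /* each base-10^9 slot stores 9 decimal digits */
    printf("decimal digits x[] can hold: %d\n", KMAX * 9);              /* 18432 */

    /* worst case 2^-16495 = 5^16495 * 10^-16495: significant decimal
     * digits of 5^16495 once the leading zeros are discarded */
    printf("digits of 5^16495: %d\n", (int)(log10(5.0) * 16495) + 1);   /* 11530 */
    return 0;
}
```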

- 09 Nov 2012, 1 commit

Committed by Rich Felker
this header evolved to facilitate the extremely lazy practice of omitting explicit includes of the necessary headers in individual stdio source files; not only was this sloppy, but it also increased build time. now, stdio_impl.h is only including the headers it needs for its own use; any further headers needed by source files are included directly where needed.

- 22 Oct 2012, 1 commit

Committed by Rich Felker
this will prevent gnulib from wrapping our strtod to handle this useless feature.

- 18 Aug 2012, 1 commit

Committed by Rich Felker
this affects at least the case of very long inputs, but may also affect shorter inputs that become long due to growth while upscaling. the logic for the circular buffer indices of the initial base-10^9 digit and the slot one past the final digit assumes, for simplicity of the loop logic, the invariant that they are never equal. the upscale loop, which can increase the length of the base-10^9 representation, attempted to preserve this invariant, but was actually only ensuring that the end index did not loop around past the start index, not that the two never become equal. the main (only?) effect of this bug was that subsequent logic treats the excessively long number as having no digits, leading to junk results.
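A minimal sketch of the invariant in question, assuming a power-of-two circular buffer indexed by a (first used slot) and z (one past the last used slot); the variable names and the helper push_digit are illustrative, not musl's actual code or its actual fix:

```c
#include <stdint.h>

#define KMAX 2048              /* power of two */
#define MASK (KMAX - 1)

static uint32_t x[KMAX];       /* base-10^9 digits */

/* append one digit while preserving the invariant a != z; if storing
 * it would make z wrap around onto a (so the number would look empty),
 * the digit must be handled some other way, e.g. dropped or folded
 * into the last stored digit */
static int push_digit(unsigned *a, unsigned *z, uint32_t d)
{
    unsigned nz = (*z + 1) & MASK;
    if (nz == *a)
        return -1;             /* would violate the invariant */
    x[*z] = d;
    *z = nz;
    return 0;
}
```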

- 08 Jun 2012, 1 commit

Committed by Rich Felker

- 30 Apr 2012, 1 commit

Committed by Rich Felker
this caused misreading of certain floating point values that are exact multiples of large powers of ten, unpredictable depending on prior stack contents.

- 23 Apr 2012, 1 commit

Committed by Rich Felker
also be extra careful to avoid wrapping the circular buffer early

- 22 Apr 2012, 2 commits

Committed by Rich Felker
care is taken that the setting of errno correctly reflects the underflow condition. scanning exact denormal values does not result in ERANGE, nor does scanning values (such as the usual string definition of FLT_MIN) which are actually less than the smallest normal number but which round to a normal result. only the decimal case is handled so far; hex floats require a separate fix to come later.
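A usage sketch that observes the errno behaviour described here (a test of the documented semantics, not musl code): "1e-50" is below the smallest denormal float, while "1.17549435e-38" is the usual FLT_MIN string, which is slightly below the true FLT_MIN but rounds to a normal value:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    errno = 0;
    float a = strtof("1e-50", NULL);            /* underflows to zero */
    printf("%g ERANGE=%d\n", (double)a, errno == ERANGE);   /* expect: 0 ERANGE=1 */

    errno = 0;
    float b = strtof("1.17549435e-38", NULL);   /* rounds up to FLT_MIN, a normal number */
    printf("%g ERANGE=%d\n", (double)b, errno == ERANGE);   /* expect: 1.17549e-38 ERANGE=0 */
    return 0;
}
```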

Committed by Rich Felker
in principle this should just be an optimization, but it happens to also fix a nasty bug where values like 0.00000000001 were getting caught by the early zero detection path and wrongly scanned as zero.

- 21 Apr 2012, 1 commit

Committed by Rich Felker
bug detected by glib test suite

- 20 Apr 2012, 1 commit

Committed by Rich Felker

- 18 Apr 2012, 2 commits

Committed by Rich Felker
this was basically harmless, but could have resulted in misreading inputs with more than a few gigabytes' worth of digits.

Committed by Rich Felker
this code worked in strtod, but not in scanf. more evidence that i should design a better interface for discarding multiple tail characters than just calling unget repeatedly...

- 16 Apr 2012, 1 commit

Committed by Rich Felker
this off-by-one error was causing values with just one digit past the decimal point to be handled by the integer case. in many cases it would yield the correct result, but if expressions are evaluated in excess precision, double rounding may occur.

- 12 Apr 2012, 7 commits

Committed by Rich Felker

Committed by Rich Felker

Committed by Rich Felker

Committed by Rich Felker

Committed by Rich Felker
when upscaling, even the very last digit is needed in cases where the input is exact; no digits can be discarded. but when downscaling, any digits less significant than the mantissa bits are destined for the great bitbucket; the only influence they can have is their presence (being nonzero). thus, we simply throw them away early. the result is nearly a 4x performance improvement for processing huge values. the particular threshold LD_B1B_DIG+3 is not chosen sharply; it's simply a "safe" distance past the significant bits. it would be nice to replace it with a sharp bound, but i suspect performance will be comparable (within a few percent) anyway.
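A sketch of the digit-discarding idea, under the assumption that the kept region extends a safe distance past anything that can influence the mantissa; the helper name, the example value of LD_B1B_DIG, and the "sticky" trick are illustrative, not musl's exact code (circular wrap-around is also ignored for brevity):

```c
#include <stdint.h>

#define LD_B1B_DIG 2                    /* example value; depends on the long double format */
#define KEEP (LD_B1B_DIG + 3)           /* "safe" distance past the significant digits */

/* x[a..z) holds base-10^9 digits, most significant first; returns the
 * new end index after folding the insignificant tail into a sticky
 * mark on the last kept digit */
static unsigned discard_tail(uint32_t *x, unsigned a, unsigned z)
{
    if (z - a > KEEP) {
        int nonzero = 0;
        for (unsigned i = a + KEEP; i < z; i++)
            if (x[i]) nonzero = 1;
        z = a + KEEP;
        if (nonzero)
            x[z-1] |= 1;                /* the tail's only influence is "something nonzero is there" */
    }
    return z;
}
```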

Committed by Rich Felker
now that this is the first operation, it can rely on the circular buffer contents not being wrapped when it begins. we limit the number of digits read slightly in the initial parsing loops too so that this code does not have to consider the case where it might cause the circular buffer to wrap; this is perfectly fine because KMAX is chosen as a power of two for circular-buffer purposes and is much larger than it otherwise needs to be, anyway. these changes should not affect performance at all.

Committed by Rich Felker
upscaling by even one step too much creates 3-29 extra iterations for the next loop. this is still suboptimal since it always goes by 2^29 rather than using a smaller upscale factor when nearing the target, but performance on common, small-magnitude, few-digit values has already more than doubled with this change. more optimizations on the way...
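A sketch of one up-scaling pass over the base-10^9 limbs (illustrative; musl's loop also handles the circular buffer and the radix-point bookkeeping). Shifting by at most 29 bits keeps each intermediate product within 64 bits, and the commit's point is to pick a shift no larger than needed rather than always using the maximum step:

```c
#include <stdint.h>

/* multiply the number stored in x[a..z) (most significant limb first)
 * by 2^sh, with sh <= 29 so (limb << sh) + carry fits in 64 bits */
static void upscale_once(uint32_t *x, unsigned a, unsigned z, int sh)
{
    uint64_t carry = 0;
    for (unsigned i = z; i-- > a; ) {          /* least significant limb is at z-1 */
        uint64_t t = ((uint64_t)x[i] << sh) + carry;
        x[i]  = t % 1000000000;
        carry = t / 1000000000;
    }
    /* a real implementation prepends 'carry' as new high limbs and
     * adjusts the decimal point; omitted here */
}
```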

- 11 Apr 2012, 5 commits

Committed by Rich Felker

Committed by Rich Felker
for example, "1000000000" was being read as "1" due to this loop exiting early. it's necessary to actually update z and zero the entries so that the subsequent rounding code does not get confused; before i did that, spurious inexact exceptions were being raised.

Committed by Rich Felker
note that there's no need for a precise cutoff, because exponents this large will always result in overflow or underflow (it's impossible to read enough digits to compensate for the exponent magnitude; even at a few nanoseconds per digit it would take hundreds of years).
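A sketch of such a cutoff, assuming dec_exp is the decimal exponent of the leading nonzero digit (so the magnitude lies in [10^dec_exp, 10^(dec_exp+1))); the function name and the bounds are deliberately loose illustrations, not musl's constants:

```c
#include <errno.h>
#include <math.h>

/* returns 1 and stores the result when the exponent alone decides the
 * outcome, 0 when the digits still have to be examined */
static int exp_decides(long long dec_exp, int sign, double *res)
{
    if (dec_exp > 1000000) {            /* vastly beyond DBL_MAX_10_EXP: certain overflow */
        errno = ERANGE;
        *res = sign * HUGE_VAL;
        return 1;
    }
    if (dec_exp < -1000000) {           /* vastly below the smallest denormal: certain underflow */
        errno = ERANGE;
        *res = sign * 0.0;
        return 1;
    }
    return 0;
}
```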

Committed by Rich Felker

Committed by Rich Felker
the immediate benefit is a significant debloating of the float parsing code, achieved by moving the responsibility for keeping track of the number of characters read to a different module. by linking shgetc with the stdio buffer logic, counting is deferred to buffer refill time, keeping the calls to shgetc fast and light. in the future, shgetc will also be useful for integrating the new float code with scanf, which needs not only to count the characters consumed, but also to limit the number of characters read based on field width specifiers. shgetc may also become a useful tool for simplifying the integer parsing code.
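A minimal sketch of the counting idea behind shgetc (the names cstream, cs_getc, cs_refill, and cs_cnt are hypothetical, not musl's actual stdio internals): the running character count is derived from the buffer position, so the common-case read is just a pointer increment, and the bookkeeping happens only at refill time.

```c
#include <stdio.h>

typedef struct {
    unsigned char buf[512];
    unsigned char *rpos, *rend;   /* read position / end of valid data */
    long long base_cnt;           /* characters consumed in earlier buffers */
    FILE *src;
} cstream;

static int cs_refill(cstream *s)
{
    s->base_cnt += s->rend - s->buf;                    /* account for the exhausted buffer */
    size_t n = fread(s->buf, 1, sizeof s->buf, s->src);
    s->rpos = s->buf;
    s->rend = s->buf + n;
    return n ? *s->rpos++ : EOF;
}

static int cs_getc(cstream *s)
{
    return s->rpos < s->rend ? *s->rpos++ : cs_refill(s);
}

static long long cs_cnt(const cstream *s)
{
    return s->base_cnt + (s->rpos - s->buf);            /* no per-character counter needed */
}

int main(void)
{
    cstream s = { .src = stdin };
    s.rpos = s.rend = s.buf;
    int c;
    while ((c = cs_getc(&s)) != EOF && c != '\n')
        ;
    printf("consumed %lld characters\n", cs_cnt(&s));
    return 0;
}
```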

- 10 Apr 2012, 1 commit

Committed by Rich Felker
this version is intended to be fully conformant to the ISO C, POSIX, and IEEE standards for conversion of decimal/hex floating point strings to float, double, and long double (ld64 or ld80 only at present) values. in particular, all results are intended to be rounded correctly according to the current rounding mode. further, this implementation aims to set the floating point underflow, overflow, and inexact flags to reflect the conversion performed. a moderate amount of testing has been performed (by nsz and myself) prior to integration of the code in musl, but it still may have bugs. so far, only strto(d|ld|f) use the new code. scanf integration will be done as a separate commit, and i will add implementations of the wide character functions later.
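A small usage sketch that observes the behaviour described above from the caller's side; this is a demo of the standard strtod/errno/fenv interfaces, not musl-internal code:

```c
#include <errno.h>
#include <fenv.h>
#include <stdio.h>
#include <stdlib.h>

/* #pragma STDC FENV_ACCESS ON  -- required in principle when testing FP flags */

int main(void)
{
    feclearexcept(FE_ALL_EXCEPT);
    errno = 0;
    double tiny = strtod("1e-400", NULL);       /* below the smallest denormal double */
    printf("1e-400 -> %g, ERANGE=%d, FE_UNDERFLOW=%d, FE_INEXACT=%d\n",
           tiny, errno == ERANGE,
           !!fetestexcept(FE_UNDERFLOW), !!fetestexcept(FE_INEXACT));

    feclearexcept(FE_ALL_EXCEPT);
    errno = 0;
    double tenth = strtod("0.1", NULL);         /* in range, but not exactly representable */
    printf("0.1    -> %.17g, ERANGE=%d, FE_INEXACT=%d\n",
           tenth, errno == ERANGE, !!fetestexcept(FE_INEXACT));
    return 0;
}
```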