From 19b24bba1a02d617443e7316c324c93b56cd042d Mon Sep 17 00:00:00 2001
From: nothings
Date: Sat, 6 Dec 2014 23:19:25 -0800
Subject: [PATCH] Update README.md

---
 tests/oversample/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tests/oversample/README.md b/tests/oversample/README.md
index f2edeca..4b72edd 100644
--- a/tests/oversample/README.md
+++ b/tests/oversample/README.md
@@ -20,9 +20,9 @@ However, if you simply cache a single version of the bitmap and
 draw it at different subpixel positions with a GPU, you will
 get either the exact same result (if you use point-sampling on
 the texture) or linear filtering. Linear filtering will cause a sub-pixel
-positioned bitmap to blur further, causing a visible desharpening
+positioned bitmap to blur further, causing a visible de-sharpening
 of the character. (And, since the character wasn't hinted, it was
-already blurrier than a hinted one would be, and not it gets even
+already blurrier than a hinted one would be, and now it gets even
 more blurry.)
 
 You can avoid this by caching multiple variants of a character which
@@ -32,7 +32,7 @@ horizontal offset, and always require characters to fall on integer
 positions vertically.
 
 When creating a texture atlas for use on GPUs, which support bilinear
-filtering, there is a better approach than caching several indepdent
+filtering, there is a better approach than caching several independent
 positions, which is to allow lerping between the versions to allow
 finer subpixel positioning. You can achieve these by interleaving
 each of the cached bitmaps, but this turns out to be mathematically
--
GitLab
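
For context, the patched file appears to be the README for stb_truetype's oversampling sample, which advocates caching glyphs at a higher resolution and letting the GPU's bilinear filter lerp between the oversampled texels for sub-pixel positioning. Below is a minimal sketch of that technique using stb_truetype's pack API; it is an illustration, not part of the patch, and the font buffer, atlas size, font size, and glyph range are assumptions chosen for the example.

/* Sketch (not part of the patch): build a packed atlas with 2x2
   oversampling via stb_truetype's pack API. font_buffer (raw .ttf
   bytes), the 512x512 atlas, and the ASCII 32..126 range are
   illustrative assumptions. */
#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"

#define ATLAS_W 512
#define ATLAS_H 512

static unsigned char atlas[ATLAS_W * ATLAS_H];  /* 8-bit coverage atlas */
static stbtt_packedchar packed[95];             /* codepoints 32..126 */

int build_atlas(const unsigned char *font_buffer)
{
   stbtt_pack_context pc;
   if (!stbtt_PackBegin(&pc, atlas, ATLAS_W, ATLAS_H, 0, 1, NULL))
      return 0;
   /* Rasterize each glyph at 2x horizontal and 2x vertical resolution;
      bilinear filtering over these extra texels is what provides the
      lerping between sub-pixel positions that the README describes. */
   stbtt_PackSetOversampling(&pc, 2, 2);
   if (!stbtt_PackFontRange(&pc, font_buffer, 0, 20.0f, 32, 95, packed)) {
      stbtt_PackEnd(&pc);
      return 0;
   }
   stbtt_PackEnd(&pc);
   return 1;
}

At draw time, passing align_to_integer = 0 to stbtt_GetPackedQuad keeps the fractional x position of each character, so the GPU's bilinear filter performs the sub-pixel blending rather than snapping glyphs to integer pixels.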