This looks a lot like some line anti-aliasing I had to hack together many years ago when a customer started complaining loudly about the lack of hardware support for it. I think I had something like a week to put together three different alternatives for them to pick from, and this was the winner. It looked the best by far.
Years later my boss was telling me how satisfied he was that he could throw any problem in my general direction and it would be gone in no time. There is nothing like the risk of losing his work permit to motivate a young guy to work himself to a crisp, all for peanuts.
Reminds me that I found an alternative way of sampling an SDF:
First take a sample in each corner of the pixel to be rendered (s1, s2, s3, s4), then compute:

    coverage = 0.5 + (s1 + s2 + s3 + s4) / (abs(s1) + abs(s2) + abs(s3) + abs(s4)) / 2
It is a good approximation, and it keeps working no matter how you scale and stretch the field. Relative to the standard method it is expensive to calculate, but for a modern GPU it is still a very light workload to do this once per screen pixel.
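For concreteness, here is a minimal GLSL sketch of that idea (untested; the sdf() helper, the pixel-unit coordinates, and the positive-inside sign convention are my assumptions):

    // Evaluate the field at the four corners of a 1x1 pixel centred at p.
    // Assumes sdf() is positive inside the shape; flip the sign otherwise.
    float s1 = sdf(p + vec2(-0.5, -0.5));
    float s2 = sdf(p + vec2( 0.5, -0.5));
    float s3 = sdf(p + vec2(-0.5,  0.5));
    float s4 = sdf(p + vec2( 0.5,  0.5));
    float sum    = s1 + s2 + s3 + s4;
    float absSum = abs(s1) + abs(s2) + abs(s3) + abs(s4);
    // 0 or 1 when all corners agree, roughly fractional coverage near an edge.
    float coverage = clamp(0.5 + 0.5 * sum / max(absSum, 1e-6), 0.0, 1.0);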
Technically that only requires calculating one extra row and column of pixels.
It is indeed scale-invariant, but I think you can do better: you should have enough information to make it invariant to any linear transformation. The calculation will be more complex, but that is nothing compared to evaluating the SDF.
I do believe that it is already invariant to linear transformations the way you want, i.e. we can evaluate the corners of an arbitrary parallelogram instead of a square and get a similar coverage estimate.
Similar maybe, but surely it can't be the same? Just pick some function like f(x,y) = x-1 and start rotating it around your centre pixel: the average (s1+s2+s3+s4) will stay the same (since it's a linear function), but there's no way those absolute values will remain constant.
You should be pretty close though. For a linear function you can just calculate the distance to the 0 line, which is invariant to any linear transformation that leaves that line where it is (which is what you want). This is just the function value divided by the norm of the gradient. Both of which you can estimate from those 4 points. This gives something like
    // Assuming the corners are ordered s1 = (0,0), s2 = (1,0), s3 = (0,1), s4 = (1,1):
    dx = (s2 - s1 + s4 - s3) / 2    // average of the two one-sided x differences
    dy = (s3 - s1 + s4 - s2) / 2    // average of the two one-sided y differences
    f = (s1 + s2 + s3 + s4) / 4     // field value at the pixel centre
    dist = f / sqrt(dx*dx + dy*dy)  // first-order distance to the zero line, in pixels
My function approximates coverage of a square pixel, so indeed, if you rotate a line around it at a certain distance, that line will clip the corners at some angles and be clear of the pixel at others.
Would that be done in two passes? 1. Render the image shifted by 0.5 pixels in both directions (plus one additional row & column). 2. Apply above formula to each pixel (4 reads, 1 write).
That'd be one way of doing it.
You don't technically need 4 reads per pixel either; for instance, you can process a 7x7 group with a 64-thread group. Each thread does 1 read, then fetches the other 3 values from its neighbours and calculates the average. Then the 7x7 subset of the 8x8 writes their values.
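A sketch of that scheme as a GLSL compute shader (my own illustration, not anyone's shipped code; it shares values via shared memory, though subgroup shuffles would also work, and the image bindings and names are made up):

    #version 450
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(r32f, binding = 0) uniform readonly  image2D cornerSamples; // pass-1 output
    layout(r32f, binding = 1) uniform writeonly image2D coverageOut;

    shared float s[8][8];

    void main() {
        uvec2 lid  = gl_LocalInvocationID.xy;
        ivec2 base = ivec2(gl_WorkGroupID.xy) * 7;  // 7x7 output tile per group
        // Each of the 64 threads does exactly one read of the corner grid
        // (which has one extra row and column, as noted above).
        s[lid.y][lid.x] = imageLoad(cornerSamples, base + ivec2(lid)).r;
        memoryBarrierShared();
        barrier();
        // Only the 7x7 subset has all four corners available in shared memory.
        if (lid.x < 7u && lid.y < 7u) {
            float s1 = s[lid.y     ][lid.x     ];
            float s2 = s[lid.y     ][lid.x + 1u];
            float s3 = s[lid.y + 1u][lid.x     ];
            float s4 = s[lid.y + 1u][lid.x + 1u];
            float sum    = s1 + s2 + s3 + s4;
            float absSum = abs(s1) + abs(s2) + abs(s3) + abs(s4);
            float cov = clamp(0.5 + 0.5 * sum / max(absSum, 1e-6), 0.0, 1.0);
            imageStore(coverageOut, base + ivec2(lid), vec4(cov));
        }
    }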
You could integrate this into the first pass too, but then there would be duplication on the overlapped areas of each block. Depending on the complexity of the first pass, it still might be more efficient to do that than an extra pass.
Knowing that it's only the edges that are shared between threads, you could expand the work of each thread to cover multiple pixels, so that each thread group covers more pixels and fewer pixels are sampled multiple times. How much you do this depends on register pressure; it's probably not worth doing more than 4 pixels per thread, but YMMV.
You certainly could imagine doing that, but as long as the initial evaluation is fairly cheap (say a texture lookup), I don't see the extra pass being worth it.
Good article, but I believe it lacks information on what specifically these magical dFdx, dFdy, and fwidth = abs(dFdx) + abs(dFdy) functions are computing.
The following stackexchange answer addresses that question rather well: https://gamedev.stackexchange.com/a/130933/3355 As you can see, dFdx and dFdy are not exactly derivatives; they are discrete screen-space approximations of those derivatives, very cheap to compute due to the weird execution model of pixel shaders running on GPU hardware.
If you’ve ever sampled a texture in a shader, then you know what those are, so it’s probably fair to include them in the prerequisites for the article. But yes, those are intended to be approximate screen-space derivatives of whatever quantity you plug into them, and (I believe) on basically any hardware the approximation in question is a single-sided first difference, because the particular fragment (single-pixel contribution) you’re computing always exists in a 2×2 group of screen-space neighbours executing in lockstep.
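For readers who haven't seen them, the typical use in this context looks something like this (a sketch; the sdf() helper and the negative-inside convention are my assumptions):

    float d = sdf(p);       // signed distance, negative inside the shape
    float w = fwidth(d);    // abs(dFdx(d)) + abs(dFdy(d)): roughly how much
                            // d changes across one screen pixel
    float alpha = clamp(0.5 - d / max(w, 1e-6), 0.0, 1.0); // ~1-pixel-wide ramp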
The minute black area on the inner part of the sector getting perceptually boosted with the same ramp width as the outer area is effectively how an outline on a shape would behave, not two shapes with no stroke width. I would expect the output brightness to scale with the volume/depth under a pixel in the 3D visualization.
Is this intentional? To me this is an opinionated (aka artistic preference) feature-preserving method, not a perfect one.
Btw the common visualization has a source and an author:
https://iquilezles.org/articles/distfunctions2d/ https://www.shadertoy.com/playlist/MXdSRf
> The minute black area on the inner part of the sector
I'm not grasping what you're referring to here.
That Pac-Man “mouth” is collapsing to a constant-width line around halfway in for the last four examples for me.
I'm getting some weird mid-length discontinuity in the edge direction, not just perceptually.
Maybe I’m misunderstanding something here or have a different idea of what the goal of that exercise is, but I would expect some pixels to turn near-white near the center towards that gap sector.
A very good example of SDF thinking, using signed distance fields in shaders. Both shaders and SDFs are new to me and very interesting. Another example of what can be done is MSDF: https://github.com/Chlumsky/msdfgen.
That's what I was wondering: for sharp narrow corners like that pac-man mouth center, and in font rendering, a composite/multichannel field is probably the better approach for any situation where there is potential for self-intersection of the distance field in concave regions. https://lambdacube3d.wordpress.com/2014/11/12/playing-around...
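For reference, the reconstruction msdfgen's documentation suggests is a per-channel median, roughly like this (a sketch; the texture name and scale handling are simplified assumptions):

    // Median of three channels keeps the sharp corners that a single
    // channel would round off.
    float median(float r, float g, float b) {
        return max(min(r, g), min(max(r, g), b));
    }

    vec3  msd   = texture(msdfTex, uv).rgb;          // multi-channel SDF
    float sd    = median(msd.r, msd.g, msd.b) - 0.5; // back to a single field
    float alpha = clamp(sd / fwidth(sd) + 0.5, 0.0, 1.0);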
Instead of OKLAB isn't it simpler to just use a linear color space and only do gamma correction at the very end?
Simpler and worse in this application.
If the application is simulating a crisp higher-resolution image that was slightly blurred while downscaling to the output resolution, a linear color space is exactly the right choice. Yes, it means bright objects on a dark background will look larger than the reverse, but that's just a fact of human light sensitivity. If the blurring is purely optical, with no pixels in between, a small light in the dark can still create a large halo, whereas there's no corresponding anti-halo for dark spots in a well-lit room.
On the other hand, if you want something that looks roughly the same no matter which color you use, counteracting such oddities of perception is certainly unavoidable.
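For what it's worth, the grandparent's suggestion in code is just a matter of where the transfer function sits (a sketch, using a plain 2.2 gamma instead of the exact sRGB curve):

    vec3 fg = pow(fgSrgb, vec3(2.2));            // decode inputs to linear light
    vec3 bg = pow(bgSrgb, vec3(2.2));
    vec3 outLin  = mix(bg, fg, coverage);        // blend where intensities add
    vec3 outSrgb = pow(outLin, vec3(1.0 / 2.2)); // encode once, at the very end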
Mathematically, what you want to do here is to calculate the area of the pixel square (or circle; however you want to approximate it) that the shape covers. In this case a linear ramp actually approximates the true value better than smoothstep does. (I had the derivation worked out at some point; I don't have it handy, unfortunately.) Of course, beauty is in the eye of the beholder, and aesthetically one might prefer smoothstep.
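In shader terms the two options differ only in the ramp (a sketch; d is the signed distance in pixel units, negative inside):

    // Linear ramp: exact area for an axis-aligned edge crossing a square pixel.
    float covLinear = clamp(0.5 - d, 0.0, 1.0);
    // Smoothstep: not the true area, but some prefer how it looks.
    float covSmooth = smoothstep(-0.5, 0.5, -d);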
By the way, since the article mentions ellipse distance approximations, the fastest way to approximate distance to an ellipse is to use a trick I came up with based on a paper from 1994 [1]: https://github.com/servo/webrender/blob/c4bd5b47d8f5cd684334... Unless it's changed recently, this is what Firefox uses for border radius.
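I don't know whether it is the same trick as in the linked file, but for comparison, the generic cheap version is the first-order approximation, implicit value over gradient norm (a sketch):

    // First-order approximation of signed distance from p to the ellipse
    // x^2/a^2 + y^2/b^2 = 1. Generic textbook method; not necessarily the
    // formula in the linked webrender source.
    float ellipseDistApprox(vec2 p, vec2 radii) {
        vec2  q = p / radii;
        float f = dot(q, q) - 1.0;        // implicit ellipse function
        vec2  g = 2.0 * q / radii;        // gradient of f
        return f / max(length(g), 1e-6);  // ~ signed distance (negative inside)
    }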
Really interesting write-up! I'm not very familiar with signed distance functions but aliasing is a major part of my PhD and this is really insightful to me!