Very cool book. I think a reason ML has seen so much progress despite benchmark overfitting/abuse is that results are "regularized" by real-world applications and the Lindy effect. Methods or research that abuse benchmarks aren't adopted by follow-up work, so they tend not to survive. And they aren't adopted because people try them and find they don't generalize to other or newer benchmarks. So the system works not because of any specific benchmark, but because of how the community as a whole deals with benchmarks.
If I recall correctly, this was also a keynote at MDS24? That was a great talk; Hardt is an excellent speaker.
A little rule I live by is that if Moritz Hardt writes it, I will read it
Why is that?