xg15
13d
(2021), still very interesting. The "post-overfitting" training strategy in particular is unexpected.
dev_hugepages
12d
This is talking about the double descent phenomenon (https://en.wikipedia.org/wiki/Double_descent).
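For anyone curious, here's a minimal sketch of the model-wise version of that curve, assuming a toy setup of minimum-norm least squares on random ReLU features (every name, constant, and hyperparameter below is illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d = 100, 2000, 10

    # Noisy linear teacher: y = <w*, x> + noise (hypothetical toy data).
    w_star = rng.normal(size=d)
    X_tr = rng.normal(size=(n_train, d))
    X_te = rng.normal(size=(n_test, d))
    y_tr = X_tr @ w_star + 0.5 * rng.normal(size=n_train)
    y_te = X_te @ w_star

    def relu_features(X, W):
        # Random ReLU features: phi(x) = max(0, W x).
        return np.maximum(X @ W.T, 0.0)

    # Sweep the feature count through the interpolation threshold (p == n_train).
    for p in [10, 50, 90, 100, 110, 200, 1000, 5000]:
        W = rng.normal(size=(p, d)) / np.sqrt(d)
        Phi_tr, Phi_te = relu_features(X_tr, W), relu_features(X_te, W)
        # lstsq returns the minimum-norm solution once p > n_train.
        beta, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
        mse = np.mean((Phi_te @ beta - y_te) ** 2)
        print(f"p={p:5d}  test MSE={mse:8.3f}")

In setups like this the test error typically spikes near p == n_train and then falls again as p keeps growing, which is the "double descent" shape the article and the Wikipedia page describe.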
luckystarr
13d
I vaguely remember this being observed when training GPT-3 (probably?) as well. They just kept training, and the error went up and then came back down again, like a phase transition in the model.
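I don't have the GPT-3 logs, but here's a hedged sketch of how you'd look for that epoch-wise effect: keep training an over-parameterized net on noisy labels well past ~zero training error and log the test error, which can rise while the net memorizes the noise and fall again much later. The toy task, noise rate, and hyperparameters below are all assumptions for illustration:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n, d = 256, 20
    X_tr, X_te = torch.randn(n, d), torch.randn(2048, d)

    def label(X):
        # Hypothetical toy task: predict the sign of the first coordinate.
        return (X[:, 0] > 0).long()

    y_tr, y_te = label(X_tr), label(X_te)
    flip = torch.rand(n) < 0.15          # 15% label noise on the train set
    y_tr[flip] = 1 - y_tr[flip]

    # Over-parameterized MLP relative to the 256 training points.
    model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()

    # Train far past the point where the train loss is ~zero and watch
    # whether the test error goes up and then comes back down.
    for epoch in range(1, 20001):
        opt.zero_grad()
        loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()
        if epoch % 2000 == 0:
            with torch.no_grad():
                err = (model(X_te).argmax(dim=1) != y_te).float().mean().item()
            print(f"epoch {epoch:6d}  train loss {loss.item():.4f}  test err {err:.3f}")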
esafak
13d
The low sample efficiency of RL is well explained.