Original tweet: in case you were worried about running out of language data with which to tune larger and larger AI models — humanity produces a volume of spoken language that is ~500x larger than the corpus of internet text that was used to train GPT-3.
https://twitter.com/wintonARK/status/1521227362452471809