@tomshardware I'm with Apple (and folks like Gary Marcus and Rao Kambhampati) on this one, but let's assume Kwon is right. Is that actually better? Is it really better if the reason ChatGPT can't beat an Atari 2600 at tasks outside its training set isn't that ML is inherently limited to the specific things it's trained on, but that it simply needs more computing power to manage it?