I find it interesting that DALL-E still doesn't understand text: look at all the random misspellings of "Dachshund" in the generated images. It knows roughly what the word should look like, but has no framework for interpreting or distinguishing text from the other elements of the image. It looks like trying to spell in a dream.
This definitely seems AGI-related, given the emphasis on acing mathematical problems: I can see why a generative AI model that can learn, solve, and then extrapolate mathematical formulae could be a big breakthrough.