

Re: your last paragraph:
I think the future is likely to be more task-specific, targeted models. I don't have the research handy, but small, targeted LLMs can match or outperform massive LLMs on narrow tasks at a tiny fraction of the compute cost to both train and run, and they can run on much more modest hardware to boot.
Like, an LLM that is targeted only at:
- teaching writing and reading skills
- teaching English writing to English Language Learners
- writing business emails and documents
- writing/editing only resumes and cover letters
- summarizing text
- summarizing fiction texts
- writing & analyzing poetry
- analyzing poetry only (not even writing poetry)
- a counselor
- an ADHD counselor
- a depression counselor
The more specific the targeted task(s), the smaller the model that can do them "well".
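To make the idea concrete, here's a minimal sketch of serving one of those single-purpose models. It assumes the Hugging Face transformers library; "t5-small" (~60M parameters) is just a stand-in for whatever task-specific checkpoint you'd actually fine-tune, not a claim that it beats a frontier model.

```python
# Sketch of the "small, targeted model" idea: a summarizer with tens of
# millions of parameters instead of a general-purpose chat model with billions.
# Assumes the Hugging Face transformers library is installed; "t5-small" is a
# placeholder for a task-specific fine-tune.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")  # small enough to run on CPU

text = (
    "The committee met for three hours to discuss the proposed budget. "
    "After extended debate, members agreed to defer the final vote until "
    "next quarter so that updated revenue projections could be reviewed."
)

# max_length / min_length bound the generated summary in tokens.
summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The same pattern applies to any of the bullets above: swap in a checkpoint fine-tuned for that one task and the serving cost stays tiny.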





Yeah; the response should be that a "Reject all" button must be displayed next to the "Accept all" button with equal prominence, with "prominence" defined to mean the same size, similar contrast to the "Accept all" button, and a clear label.
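The nice thing about defining prominence that way is that it becomes checkable. Here's a hypothetical sketch of such a check; the field names, the WCAG-style contrast formula, and the thresholds are my assumptions, not taken from any actual regulation or tool.

```python
# Hypothetical sketch: "equal prominence" as a machine-checkable rule.
# Field names and thresholds are assumptions for illustration only.

def contrast_ratio(fg_lum: float, bg_lum: float) -> float:
    """WCAG-style contrast ratio between two relative luminances (0.0-1.0)."""
    lighter, darker = max(fg_lum, bg_lum), min(fg_lum, bg_lum)
    return (lighter + 0.05) / (darker + 0.05)

def equally_prominent(accept: dict, reject: dict, max_contrast_gap: float = 1.0) -> bool:
    """Same size, similar contrast against its background, and clearly labelled."""
    same_size = (accept["width"], accept["height"]) == (reject["width"], reject["height"])
    similar_contrast = abs(
        contrast_ratio(accept["text_lum"], accept["bg_lum"])
        - contrast_ratio(reject["text_lum"], reject["bg_lum"])
    ) <= max_contrast_gap
    clearly_labelled = reject["label"].strip().lower() in {"reject all", "reject"}
    return same_size and similar_contrast and clearly_labelled

# Example: same size, but the "Reject all" text is washed out, so it fails.
accept_btn = {"width": 120, "height": 40, "text_lum": 1.0, "bg_lum": 0.05, "label": "Accept all"}
reject_btn = {"width": 120, "height": 40, "text_lum": 0.6, "bg_lum": 0.5, "label": "Reject all"}
print(equally_prominent(accept_btn, reject_btn))  # False: the contrast gap is too large
```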