Phone operators weren’t call center staff; they were literally routers in human form. Secretaries were your email program, your calendar, and your folders full of Word documents.
I’m well aware of switchboard operators. Computers were originally a profession as well.
Secretaries are still all that, using both digital and physical tools. They weren’t replaced by any of those programs; they just changed how they do their job. They schedule your meetings for you on their cell phones now instead of on a desk-sized paper calendar mat.
Alright, since you find this such an important issue, consider the first bullet point cropped off of my humorous list of milestones.
Doesn’t change the underlying point.
The underlying point misses why people have problems with the current AI bubble. I’ll cheer when they replace CEOs with AI - it seems like the job best suited to being replaced by an LLM, and it would save companies billions of dollars that could be used to improve the lives of workers. There’s already tons of AI being used for all kinds of cool things, like spotting cancer in MRIs.
The issue people have with AI isn’t the tech. It’s who’s making it and why. It’s not being used to make life easier and better, it’s being used to cut decent paying jobs and commodify part of the human experience, all while making big profits without paying the people whose work was stolen to make those profits.
It’s just a different flavor of the fast fashion industry stealing high fashion designs and churning out their cheap knockoffs from factories in China where they don’t have to worry about things like safety standards or paying their workers a living wage.
I have multiple issues with the tech:
- It’s based upon a giant theft and a mass violation of copyright laws, as well as the licenses of lots of open source software.
- It’s ClippyGPT, and much of the output is either hallucinations or trite nonsense that sounds like it was cooked up in the most bureaucratic, weak-willed corporate boardroom.
- Its massive energy footprint to inefficiently solve math equations (for instance) is completely and thoroughly ridiculous (see the quick sketch after this list).
- I don’t want to type bullshit into a chat bot in order to look something up…this is a step below even the absurd modern substitute for documentation of “go watch this 2 hour YouTube video on my development framework”.
- “Miniature model” and “fine-tuned model” results could have been much more easily achieved by just having functional site / domain search engines.
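On the energy point, here’s a quick sketch (my own illustration, nothing from this thread) of what directly solving a small equation actually costs, for contrast with pushing a prompt through a multi-billion-parameter model to get the same answer:

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Solve a*x^2 + b*x + c = 0 directly: roughly a dozen float ops total."""
    disc = b * b - 4 * a * c      # discriminant
    root = math.sqrt(disc)        # assumes real roots, just for the sketch
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# A handful of floating-point operations on any CPU, versus generating
# hundreds of tokens through a model with billions of parameters.
print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)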
Further, about that last bullet, the one on search engines: I feel like the open source part of the industry chased Google until it got to Lucene, decided that an open source AltaVista was completely fine and dandy, and stopped pursuing the goal of making its own search engines actually functional. So people had to keep using Google, and now that Google has enshittified into a crappy, worse, AI-addled version of search, all we have left are chat bots that are maybe slightly better than AltaVista but frequently spout inaccurate information that they guess ought to exist.
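To be concrete about what I mean by a “functional site / domain search engine”: the core is just an inverted index with some term weighting, which is the heart of what Lucene provides. Here’s a toy sketch in Python, purely illustrative (the `Index` class and its methods are made up for this comment, not any real library’s API):

```python
import math
import re
from collections import Counter, defaultdict


class Index:
    """Toy inverted index: the core data structure behind Lucene-style search."""

    def __init__(self):
        self.postings = defaultdict(dict)  # term -> {doc_id: term frequency}
        self.doc_count = 0

    def add(self, doc_id: str, text: str) -> None:
        self.doc_count += 1
        for term, tf in Counter(re.findall(r"[a-z0-9]+", text.lower())).items():
            self.postings[term][doc_id] = tf

    def search(self, query: str, k: int = 5):
        scores = defaultdict(float)
        for term in re.findall(r"[a-z0-9]+", query.lower()):
            docs = self.postings.get(term, {})
            if not docs:
                continue
            idf = math.log(self.doc_count / len(docs))  # rarer terms count more
            for doc_id, tf in docs.items():
                scores[doc_id] += tf * idf              # plain TF-IDF scoring
        return sorted(scores.items(), key=lambda hit: -hit[1])[:k]


ix = Index()
ix.add("install.html", "how to install the framework and configure the build")
ix.add("api.html", "API reference for the framework query and search endpoints")
ix.add("faq.html", "frequently asked questions about installing and building")
print(ix.search("install framework"))  # install.html ranks first
```

A few dozen lines of deterministic code, or a mature engine built on Lucene, gives you exact and reproducible results over a site’s own documents, which is roughly what those “miniature” and fine-tuned models are being spun up to approximate.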
See the rest of my post: the people who are making it and why they’re making it.
I have no complaints about the people making LLMs that can spot tumors better than humans can, but I 100% agree with every single one of your points. The grifters and the AI fad of venture capitalism are ruining a useful technology and ruining the world and society along with it for a quick buck.
Are they though? LLMs specifically? Seems like a very strange use case for an LLM.
But yeah, we’re mostly in accordance. I wanted to riff a little bit because, as a long-time tech worker, I actually do have some bones to pick with the tech itself. The inexactitude of its output and the “let the prompter beware” approach to dealing with its obvious inadequacies pisses me off, and it seems like the perfect product for the current “test in production”, “MVP (minimum viable product)”, “pre-order the incomplete version” state software is in generally. The marketing and finance assholes are nearly fully running the show at this point, and it shows.
I think the usefulness of this particular technology (LLMs) is very overblown, and I found its very early usages more harmful than helpful (e.g. autocorrect/autocomplete is wrong for me more often than it is right). It has decent applicability in some areas (machine translation, for instance, is pretty good), but the marketing department got hold of it, and so now everything is AI this and AI that.
I think it’s basically just another over-hyped technology that will eventually shake out to be used only where it is useful enough to justify its cost. If the company has to show profits at any point, it is either going to go the surveillance capitalism ad route, or it’ll have to charge increasingly more per query than the gibberish it generates is really worth. I don’t see most people paying for ChatGPT long-term, so they’ll probably have to enshittify even further beyond their current (already kind of shitty) state.
deleted by creator
Dude, secretaries and assistants still exist.
Yeah we have one for a building of 100+ people. I wonder how many we would’ve needed 50 years ago.
It would depend upon the type of business. Modern office buildings filled with “information workers” weren’t a thing 50 years ago so it is kind of difficult to compare.
You’d be surprised! We already had banks, insurance companies, newspapers, and other kinds of information businesses. They employed a whole lot of secretaries.
Ultimately, the structure of the modern corporation was allowed to take on a lot more complexity due to the advent of computers. So, we have fewer roles where people do full-time work managing inboxes or whatever (though not zero, because that is essentially what my wife still does for work), but more roles have an “inbox management” or other secretarial component to them now.
In practically every job, it became the case that you’re also a part-time secretary. Assistants became mainly a luxury reserved for fat cats, and the rest of us plebs are buried in emails.