- cross-posted to:
- [email protected]
- [email protected]
Copilot key will eventually be required in new PC keyboards, though not yet.
I agree with you, AI is a thing alright, an overhyped chatbot thing. LLMs are going to be neutered by pandering, and their true potential will be limited by investor fear and paranoia.
What makes you think they’ll be neutered? You think China is going to stop what they’re doing with them because the US might do something stupid? The genie is out of the bottle.
It’s a trend lately: because the models occasionally say or output potentially sensitive things, you can see an increasingly crazy set of guardrails getting put around LLMs so that they don’t offend someone by mistake. I’ve seen their usefulness decrease significantly. Their coding assistance is still somewhat good, but their other capabilities have dropped off sharply.
I haven’t had those problems with locally run models (stable diffusion, llamafile)
Agreed, but in the context of this post, that Copilot key on the keyboard will take people to the most inoffensive, walled-garden variety of generative AI, so one-size-fits-all that its usefulness will pale in comparison to locally run models or SaaS-hosted services that give you a dedicated model to run against.
I understand why it would seem unimpressive to someone who doesn’t do something like research or programming in their daily life, but when you do those things, the difference they’re already making is very clear.
The thing I’m coding at the moment, for example: I’ve been using it to test ideas for image processing scripts. It’d have taken me a day to do one before, maybe longer, but even the free GPT can have an idea working after half an hour of fiddling. I get to focus on coming up with ideas rather than the finer details of implementation.
We’re going to see people get used to using them properly, and their uses will spread into many other areas of life. You’ll be customising game UIs and making complex control inputs using natural-language tools: “Linux, remove the clock and put a system resource thermometer there instead, for whatever bits are most likely to overheat.” Ten years from now you’ll look back and wonder how people did anything without AI, just like people often wonder how we lived without the internet and mobile phones.
I use copilot on a daily basis for programming. It has made me much more productive and it’s a real pleasure to use it. Nothing overhyped about it.
Curious to see what it will bring to other domains, e.g. dealing with emails.
I do agree that there’s a lot of filtering happening. Not a huge deal for most applications. Luckily you can run your own models that aren’t filtered. I can definitely see a future where you run your own models locally; afaik Apple recently did some work around that.
Why are you talking about what Apple might do, in relation to locally run models, when that’s what Facebook’s already done? And it’s source available, which is more than the Apple one will likely be.
It’s overhyped but LLMs have become basically an essential part of my daily workflow. I can’t imagine developing without it now and I’ve been using them for less than 12 months. The technology is only going to improve, and that’s both cool and scary to think about.
Opinion discarded.
Why? It absolutely is the case that corporate-provided LLMs are neutered to not provide anything that goes against 21st-century American corporate norms. Try to get ChatGPT to agree with you that capitalism is at the root of most of the world’s problems and it will fight you every step of the way; ask it how capitalism drives innovation and it will write you glowing praise.
They are neutered to comply with current hate speech laws, and the developers err on the side of caution because they don’t have full control over the output, for which they are legally responsible.
Obviously they filter that, yes, but they also go to huge amounts of effort to shape what comes out to fit their ethics. When ChatGPT was new, I spent a good half an hour trying to get it to admit that being created by a for-profit company meant it would have a significant bias towards the status quo, and it wasn’t having any of it. However, when I asked it to imagine an equivalent LLM created by another company called chatPGT, all of a sudden it was agreeing with me that such a model would have pro-capital and anti-public biases embedded into it, due to who was in control of training it. Clearly it had been trained not to admit that ChatGPT would give biased answers.