You’re absolutely right that many times they trade resources for ease of development. Particularly websites.
I’m a programmer and guilty of this myself.
As you say it adds up quickly if you have multiple services.
The tradeoff is extra speed for extra memory use. Things were not fast in the 90s and memory was in short supply.
I don’t have a problem with maxing memory usage to make things faster. It’s what it’s there for. Especially 3d editing software.
I bet you plenty of people absolutely do want this.
Most people can barely use a computer and would love this if it worked well.
You forget a huge percentage of users can barely access their emails.
Lemmy is very techcentric and most users on here are far from the average consumer on technical literacy.
You just aren’t the target audience.
It’s amazingly good at screening user content and flagging it for moderator review. Existing text-analysis tools completely fall down beyond keyword filtering tbh.
It’s really good at sentiment analysis, which is great for things like user reviews. The Amazon AI notes on products are actually brilliant at summarizing the pros and cons of a product. I work for a holiday let company and we experimented with using it to find customers we needed to follow up with, and the results were amazing.
It smashes other automated translating services as well.
I use it a lot as a programmer to very quickly learn new topics. Also as an interactive docs that you can ask follow up questions to. I can pick up a new language as I go much faster than with traditional resources.
It’s honestly a complete game changer.
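As a toy illustration of the follow-up-flagging idea above: a real pipeline would send each review to an LLM for sentiment classification, but in the sketch below `classify_sentiment` is a hypothetical stand-in stubbed with a crude keyword score just so the pipeline runs end to end (the function name and word lists are assumptions, not any real API).

```python
# Sketch of flagging customer reviews for human follow-up.
# classify_sentiment is a hypothetical stand-in for an LLM call;
# here it is stubbed with a crude keyword score so the example runs.

NEGATIVE = {"dirty", "broken", "rude", "refund", "awful"}
POSITIVE = {"lovely", "clean", "great", "perfect"}

def classify_sentiment(text: str) -> str:
    """Stand-in for an LLM sentiment call (assumption, not a real API)."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "negative" if score < 0 else "positive"

def needs_follow_up(reviews: list[str]) -> list[str]:
    """Return the reviews a human should look at."""
    return [r for r in reviews if classify_sentiment(r) == "negative"]

reviews = [
    "Lovely cottage, clean and great location",
    "The heating was broken and staff were rude",
]
print(needs_follow_up(reviews))  # only the broken-heating review is flagged
```

The point of swapping in an actual LLM is exactly the keyword filter’s weakness: the stub above would misread sarcasm or negation, while a model that reads the whole sentence generally won’t.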
I’m guessing they have limited resources for direct intervention so use this to flag up people who have the most risk factors.
It doesn’t sound like this is people asking for help but more trying to predict who might need it.
after an investigation by The Fuller Project and The Markup found the department’s algorithm prioritized White, male veterans. It also gave preference to veterans who are “divorced and male” and “widowed and male” but not to any group of female veterans.
It shouldn’t favour anyone. It should treat each person as an individual and figure out what they need based on their characteristics. If it’s been designed to only work well for white men, it’s been designed poorly.
As a type 1 diabetic with multiple attachments already this sounds great.
402 Payment Required: when they ask you to stay on late
411 Length Required: bad date. Don’t ask
413 Payload Too Large: great date
You have to trust the person you’re communicating with has turned it off. That’s my point. It’s an optional feature
There’s literally an option to turn it off
It can be turned off so it’s up to the person you’re messaging. Once you send something the person at the other end is in control of what happens to it.
No it’s not. It’s pedantic and arguing semantics. It is essentially useless and a waste of everyone’s time.
It applies a statistical model and returns an analysis.
I’ve never heard anyone argue when you say they used a computer to analyse it.
It’s just the same AI bad bullshit and it’s tiring in every single thread about them.
I literally quoted the word for that exact reason. It just gets really tiring when you talk about AIs and someone always has to make this point. We all know they don’t think or understand in the same way we do. No one gains anything by it being pointed out constantly.
I mean they literally do analyze text. They’re great at it. Give it some text and it will analyze it really well. I do it with code at work all the time.
Because they are two completely different tasks. Asking them to recall information from their training is a very bad use. Asking them to analyze information passed into them is what they are great at.
Give it a sample of code and it will very accurately analyse and explain it. Ask it to generate code and the results are wildly varied in accuracy.
I’m not assuming anything; you can literally go and use one right now and see.
One of LLMs’ main strengths over traditional text analysis tools is the ability to “understand” context.
They are bad at generating factual responses. They are amazing at analysing text.
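One way to see the gap with traditional tools: a plain bag-of-words keyword scorer rates “good” and “not good” identically, because it has no notion of context. The snippet below is a minimal illustration of that blind spot, not a real analysis tool.

```python
# A naive keyword scorer has no context: negation flips the meaning,
# but the keyword count stays the same.

POSITIVE = {"good", "great"}

def keyword_score(text: str) -> int:
    """Count positive keywords, ignoring all surrounding context."""
    return sum(1 for w in text.lower().split() if w in POSITIVE)

print(keyword_score("the stay was good"))      # 1
print(keyword_score("the stay was not good"))  # 1 -- same score, opposite meaning
```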
No it’s a neural network. They may be over-hyped but they are 100% an AI.
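“Neural network” here means exactly what it says: at bottom, an LLM is layers of weighted sums passed through non-linearities. A minimal single-neuron forward pass (weights chosen arbitrarily for illustration) looks like:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, squashed by a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Arbitrary example weights -- an LLM has billions of these,
# arranged into many layers, but the building block is the same.
out = neuron([0.5, -1.0], [2.0, 0.5], bias=0.1)
print(out)  # a value between 0 and 1
```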
Just installed this and made my cursor bigger. Single best customisation I’ve ever made on Windows!
A 2.6% increase in thread ops when copying data from user space seems pretty significant.