“curated wallpapers” including randomly generated stuff, and “shares profits” on a 50/50 basis, for a shitty app developed by what looks like three Fiverr freelancers in a trench coat.
The point is, they don’t get “competent”. They get better at assembling pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can’t improve that once the problem gets even a tiny bit complex. Of course, I could end up with a good rough idea of what the code should look like, describe that to an LLM, and have it write the actual code with proper variable names and all, but once I reach the point where I can accurately describe the thing I want, it’s usually just as fast to type it myself. With the added benefit that it’s easier to double-check.
What remains is providing good insight on new things, and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the levels of improvement we’re actually seeing, they certainly have not proven us wrong. The technology is good at some things, but not at getting “competent”.
Also, you brush aside the privacy and licensing issues, which are big no-nos too.
LLMs have their uses; I outlined some. And for those uses, there is clear room for improvement. For reference, the solution I currently use puts me at accepting around 10% of the automatic suggestions. Out of those, I’d say a third need reworking. Obviously, if that moved up to something like 90% of suggestions being decent, with less need to fix them afterwards, it’d be great. Unfortunately, since you can’t trust these tools, you would still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.
Coding doesn’t allow much leeway. Other activities that leave more room for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps are automatically corrected by readers/listeners. But with code, any single mistake will lead to issues down the line.
It is perfectly possible to run anti-cheat that is roughly as good (or as bad, as it often turns out) without full admin privileges and kernel-level drivers. Coupled with server-side validation, which seems to be a dying breed, you’d also weed out a ton of cheaters while only missing the most motivated of them.
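To be concrete about what I mean by “server-side validation”: the server never trusts what the client reports, and re-checks it against the rules of the game before accepting it. A minimal sketch, with all names and thresholds made up purely for illustration (not taken from any real game):

```ts
// Toy sketch of server-side validation: reject client updates that are
// physically impossible instead of trusting whatever the client sends.
interface PlayerState {
  x: number;
  y: number;
  lastUpdateMs: number;
}

const MAX_SPEED_UNITS_PER_SEC = 10; // made-up game constant

function validateMove(prev: PlayerState, nx: number, ny: number, nowMs: number): boolean {
  const dt = (nowMs - prev.lastUpdateMs) / 1000;
  if (dt <= 0) return false; // out-of-order or duplicated update
  const dist = Math.hypot(nx - prev.x, ny - prev.y);
  // Allow a small margin for latency jitter, but flag anything way too fast.
  return dist <= MAX_SPEED_UNITS_PER_SEC * dt * 1.1;
}

// On each client update, either accept it or keep the server's state authoritative.
function onClientUpdate(player: PlayerState, nx: number, ny: number, nowMs: number): void {
  if (validateMove(player, nx, ny, nowMs)) {
    player.x = nx;
    player.y = ny;
  } else {
    // Possible speed hack (or a huge lag spike): ignore the reported position.
  }
  player.lastUpdateMs = nowMs;
}
```

Even something this crude catches a lot of basic speed/teleport cheating without touching the player’s machine at all.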
As someone who lurks around in different communities (to some extent; Steam forums, reddit, lemmy, mastodon, and a few game-centered Discord servers), the issue is not so much with anti-cheat for online play. It’s the nature of these pieces of software that is the issue. It would be a different story if the anti-cheat were also forced on solo gameplay, but that is not the case here.
(bonus points for systems that allow playing on non-protected servers, but that’s asking a bit too much from some publishers I suppose)
Aside from it being code you don’t want on your machine
Code you don’t want on your machine, that sometimes has more permissions than you yourself have on your own files, is completely opaque, and has the legitimacy to keep up a constant stream of outgoing network data that you can’t audit.
Yes, aside from that, no reason at all. No problem with taking a huge risk with your privacy for moderate results that don’t particularly benefit you in the long run.
(and all that is assuming that they’re not nefarious to begin with, which is almost impossible to prove)
Those are the downsides I can think of off the top of my head, having used AI coding assistance (mostly local solutions, for privacy reasons). There are upsides too:
Note the “sometimes”. I don’t have actual numbers because tracking that would be, like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downsides, it’s not even a matter of the tool becoming better; it’s the usefulness to begin with that’s uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare “gains”.
The ethos of Mozilla
That’s the thing that changed.
The goal is not always to “take control” of the whole system. A cryptolocker that makes all your files unreadable will happily run in user space.
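To make that concrete, here’s a harmless little sketch of my own (Node/TypeScript; not from any of the comments above): it only counts how many files under your home directory the current, completely unprivileged process could already overwrite, which is exactly the level of access a cryptolocker needs to do its damage.

```ts
// Counts files writable by the current user under $HOME, without any elevation.
// It modifies nothing; the point is just how much is reachable from user space.
import * as fs from "fs";
import * as path from "path";
import * as os from "os";

function countWritable(dir: string): number {
  let count = 0;
  let entries: fs.Dirent[];
  try {
    entries = fs.readdirSync(dir, { withFileTypes: true });
  } catch {
    return 0; // unreadable directory, skip it
  }
  for (const entry of entries) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      count += countWritable(full);
    } else if (entry.isFile()) {
      try {
        fs.accessSync(full, fs.constants.W_OK);
        count++;
      } catch {
        // not writable, ignore
      }
    }
  }
  return count;
}

console.log(`Files writable without any elevation: ${countWritable(os.homedir())}`);
```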
Also, you’re forgetting that Windows also has UAC, and that people will happily type their device’s admin password when asked to, because badly made software has conditioned them not to care. And while Win+R is unlikely to work in most Linux DEs I know of, triggering a visual prompt that asks for your password is also a thing.
There is not much difference between a common Linux distro and Windows as far as seizing user files with malware is concerned, aside from the fact that no website will bother telling you to “press Alt+Space” instead of “Win+R”.
Any decent person who was merely “overly optimistic” at the time would have supported Epic, and left it at that. There was no need to go out of his way to trash-talk others like a whiny bitch, especially when, at the time, said “others” were the place where they had had a chance to make money before.
This is only a threat to people who took random pictures at face value. Which should not have been a thing for a long while now, generative AI or not.
The source of a piece of information or a picture, as well as how it was verified, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits does not change that.
Do these “companies” include the ones that were outed for just doing computation on plain old processors while claiming they had made huge breakthroughs in quantum computing?
Hey, anything that’s not Silver Energy is ok in my book as far as hell portals are concerned.
Say that to governments that want to locally ban TikTok.
“worse” is debatable, but they certainly are an issue.
However, that doesn’t make it ok in Firefox either. Having a good reputation does not mean you get to burn it away by doing your best to look just like the bad guy you’re supposed to fight. Firefox mobile, for a very plain and simple example, has stuff like “future experiments” and telemetry enabled by default. Sure, I can disable them, but they should either be disabled by default, or there should be a one-time popup offering the choice on first launch.
My position is that if a piece of software becomes increasingly intrusive and tedious to use with each “update”, it’s time to look somewhere else. Whether it’s Firefox, Chrome, or even an OS like Windows. Having to fight back just to get to a decent, usable state means it’s no longer the right tool for you.
Fortunately, some people are doing the heavy lifting with some good forks, providing what would be considered “vanilla” Firefox, as far as being a browser goes.
i don’t know why people are so allergic to firefox but it is the answer.
Basically, because in recent years the development of Firefox has taken some very curious directions, from trying to break decades-old standard features (only to revert when Gmail users, of all things, complained en masse), to integrating useless extensions (Pocket, anyone?) that you can’t remove and that are more and more difficult to disable. To say nothing of the occasional advertisement for irrelevant products. In short, even if it’s on a smaller scale, using Firefox today is starting to look like using Windows: you have to fight it after every update to undo something they broke.
And I’m not even talking about the shit that happens at their parent organization, Mozilla.
All of this is even more infuriating because they could very easily not do it and still pursue their venture. Have Firefox, the web browser, be a thing, and have all the extra shit packaged as separate extensions. Heck, even sell or promote it as “Firefox+” or whatever. Just don’t break core features to add “smart bookmarks” or VPN ads or whatever.
Web browsers made by a single dev or a small team tend not to support nearly as many of the web standards, which are numerous. Using the web today with partial support for some of them is the nightmare we escaped when IE was deprecated, and that some still have with Safari.
reCAPTCHA v2 visual challenge images are all pre-labeled and user input plays no role in image labeling
That’s funny, because when I’m faced with this, I keep randomly adding/removing one of the images, and it keeps accepting my answers as correct.
From the outside, it really seems that a large part of the USA administration is actively working against the USA’s interests. Which sounds weird.
Sometimes I just get tired of having to fight against software to make it behave in a semi-decent way. The same way you technically “can” run a decent Windows installation after removing/disabling/blocking a ton of stuff, I don’t really want a browser that can only be trusted after you’ve tinkered with dozens of settings just to get back to basic, non-intrusive behavior.
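To give an idea of what that tinkering looks like in practice, here’s a partial user.js sketch for desktop Firefox. The pref names below are from memory and should be double-checked against about:config on your version; the point is the sheer number of opt-outs you end up accumulating.

```js
// user.js — dropped in the Firefox profile folder; a partial, from-memory sketch.
// Verify each pref name in about:config before relying on it.
user_pref("toolkit.telemetry.unified", false);                 // telemetry
user_pref("datareporting.healthreport.uploadEnabled", false);  // data reporting upload
user_pref("app.shield.optoutstudies.enabled", false);          // "studies"/experiments
user_pref("app.normandy.enabled", false);                      // remote experiments/rollouts
user_pref("extensions.pocket.enabled", false);                 // bundled Pocket integration
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false); // sponsored tiles
```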
I said this in another thread on the same topic somewhere else, but considering user tracking as an inevitability that we have to accept means we’ve already lost on that front.
No one believed that it could work except some japanese guy
There is a difference between not knowing how to do a thing and someone then coming along and doing it, versus knowing how something works, knowing its by-design limitations, and still hoping it may work out.
You’re right, they aren’t Google. Not for lack of trying, though.
You see posts throwing some shade at Mozilla, and your immediate reaction is “it feels almost coordinated”. Well, that may be. But it would be hard to distinguish a “coordinated attack” from a “these are just the things they’re doing, and there is reporting on it” article, no? Especially when most of it can be fact-checked.
In this particular case, those abandoned projects got picked up by others… sometimes. And sometimes not. But they were abandoned. There’s no denying that.
If you want some more hot water for Mozilla, since you’re talking about privacy and security, you’d be interested in their recent shift on those very points. Sure, the PR is all about protecting privacy and users, but looking at the actions, the message is a bit more diluted. And there’s always a fair number of people ready to do the opposite of what you claim; namely, dismissing all criticism because it’s “Mozilla”, when the same criticism is totally fair game when talking about other big companies.
Being keen on maintaining user privacy, system security, and trust is not the same as picking a “champion” and sticking with it until the end. Mozilla has been doing shady things for half a decade now, and they should not get a free pass just because they’re still the lesser evil for now.