

Unfortunately, all I can say is that the link in the post works for me and it doesn’t appear to be forwarding to anything. Somewhat ironically, it’s on the Internet Archive, though, so maybe that works for you.



There’s no need to pave a road to hell if you’re already in hell because you’ve surrendered to bad intentions and now they’re all that’s left. The logical conclusion of that acceptance is that it’s in nobody’s interest to put anything on the web (or any equivalent medium), and that the web becomes even more of a consumption-only medium than it already is.
Also, what’s happening right now is, in fact, the flow of information being controlled: directed primarily towards a few powerful entities, that is. You’re neglecting to consider the effects of power differentials. Those powerful entities need to be constrained for the flow of information to actually be free.
Granted, the solution proposed in the blog post seems a bit too technical and high-friction to really be feasible, but at least people are thinking about it.


Yes, in principle the same would apply to the MIT license, but in practice it’s pretty much impossible to violate the terms of that license, so it would never get tested. LGPL on the other hand, could lead to real, practical problems. As for why they would insist on MIT, there’s more MIT-licensed code used in production than public domain code. They’re already cosplaying as programmers by producing this slop, who knows what else they’ll do for the sake of appearances?
Is it possible that the license change was the goal and the use of AI was the means to achieve it? Of course. Should I have expressed that what I proposed was only a possible reason? Yeah, probably. But putting it as something like “the devs claim yada yada…” would have been incorrect. While the way the original question was asked meant that, obviously, any answers would be from the hypothetical perspective of the maintainers (which is why the fact that the new version of chardet violates the license of the original code is irrelevant, because they wouldn’t think so), I worded my comment as an assertion because it was an assertion. By me. Because it’s the only possibility that is consistent with current legal precedent. And whether or not that was a reason for the license change, it’s something that would have been a real issue had they kept the license.
Now, you accuse me of insulting people’s intelligence, but when two people respond to my comment in the same way, obviously the problem lies with me. But you have been very unclear in conveying what exactly that problem is. You went from pointing out that the generated code is still in violation of the original license, which, while true, is again irrelevant in this context (and I still don’t really understand why you would think that it was), to not liking how assertively I worded the consequences for the enforceability of the LGPL license when it comes to code that cannot have copyright, I guess?


I’m really having trouble understanding what I’m failing to convey here.
Let’s assume for a moment that chardet is a completely new library that from the start consisted mostly of AI-generated code. Let’s also forget that in order to generate that code, the LLM had to absorb a lot of existing code, and its usage may very well itself constitute many license violations. In this hypothetical situation, would there be any point in licensing that code under the LGPL? No, there wouldn’t be, because it would be impossible to enforce. This is not a claim anyone is making, this is just the logical conclusion from it not being possible to copyright AI-generated works.
Of course in reality, chardet has been around for a long time and previously consisted of human-authored code, and this new version cannot be considered a product of clean room reverse engineering, both because the maintainers had access to the original code and because LLMs are the very opposite of a clean room. So they don’t have the right to change that license, because doing so violates the license of the original code. But the maintainers clearly don’t see it that way, otherwise they wouldn’t have done it. So from their perspective, the fact that it’s pointless for AI-generated code to be licensed under the LGPL could be a reason why they felt the need to change the license.
Was this really not obvious from the context set by the comment I was responding to? I swear, every time I decide to be not so damn wordy, I learn to regret it.


You’re the second person to bring this up in response to my comment. Was my writing really that ambiguous? I was merely answering a question about why the maintainers felt they needed to change the license with a possible reason. Whether they’re actually allowed to do that is completely beside the point in the context of this particular sub-thread.


You’re not wrong, but I don’t see how it’s relevant to what I’m trying to say. Whether or not they’re legally allowed to change the license has nothing to do with why they might want to change the license.


There has already been a ruling in the US that AI-generated art cannot be copyrighted because it lacks human authorship, so it stands to reason that the same is true for code. Even copyleft is ultimately dependent on copyright to be legally enforceable.
And even if all of the rest of the world were to decide otherwise about whether AI-generated works can be copyrighted (which I very much doubt would happen), given how much software development happens in the US, it would still make the license pretty toothless.


LGPL is unenforceable with AI-generated code. LGPL puts certain constraints on how the code can be used, but if someone were to use AI-generated code in a way that violates its LGPL license, all that person has to say is that it’s AI-generated code, so it’s in the public domain and they’re free to do with it whatever they want, and they would legally be right.


Eh, the BIOS is ultimately just some code on a ROM chip; it works the way it does because of design decisions by IBM, not because of the CPU they used. There’s nothing technical standing in the way of making a PC-like ecosystem based on ARM chips.
The reason phones don’t have this is that they’re (almost) always fully integrated systems where you don’t ever swap out any of their parts for different ones, so there’s no incentive to create a flexible system, on both the hardware and software side, that can deal with different hardware at run time, rather than just baking in all the hardware-specific details ahead of time.


For DVDs it wasn’t uncommon for the Japanese releases to have higher video quality because they typically had fewer episodes on one disc, but for BDs, if there is any difference at all, it’s probably not worth the additional cost.
That leaves bonus features and packaging. Often Japan-exclusive bonus features aren’t terribly interesting, just stuff like audio commentary by the voice actors (if you’re lucky, staff like the director might show up too), but occasionally you get things like the famously overpriced Nichijou BDs, which are more bonus feature than show. It’s also still the only way of getting the soundtrack of that show, I think. Some also come with feelies, like each Puniru BD box came with a quarter of a deck of cards, a post card and some booklets.
Packaging-wise, there’s generally just more of it and with made-for-purpose art. While the days of two episodes per disc are over, it’s also not like modern US or European releases where they squeeze as many discs in the space of a standard case as they can. Some releases have special packaging, like the gorgeous CITY BDs, which is also not very common anymore for anime over here. Occasionally the opposite does happen, though, like the YAT Anshin! Uchuu Ryokou BD box that looks like a cheap VHS case and uses a literal screencap from the show as cover art, but on the whole if you like having your favourite show have some actual presence on your shelf and maybe be in a nice box, the Japanese release is often your only option nowadays.
English subtitles on Japanese releases, while they do occasionally happen, aren’t very common. If I recall correctly, the Nanoha movies had English subtitles of mixed quality and the Heybot BD box had subtitles in several languages… for only the first episode.


I don’t know about other local systems, but the Wero branding is already starting to show up when making iDeal payments. I think there’s a good chance that at least in the short-to-medium term, the transition is going to be largely seamless for stores, where support for the local system translates into support for Wero, and all they really have to do is change the name in the order form.


That sounds more like tinkering around the edges to me. Whipping companies like Twitter into behaving, while it absolutely needs to happen, won’t fundamentally change anything about Europe’s dependency on those companies and the pressure the US can exert through that dependency.


The kind of genetic engineering it would take for humans to be able to transform into other animals is very much in the realm of science-fiction, after all.


I have a FRITZ!Box 7583 and all I can really say about it is that it works well enough for me, using it at home. I haven’t really had any problems with it over the last 8 years.
That particular model also has a modem built-in, but they also sell models that are just purely routers.


According to the website, it does not. But it also doesn’t support GPU passthrough yet, so it’s not yet really an option for most games anyway.


Here’s a thought: instead of fighting this, make it a requirement to publish the prompts before making a speech. Speeches by politicians being low in information density is nothing new, and the usage of LLMs will undoubtedly make that worse, but it also means that they had to have written a terse description of the information they want to convey. If that were public, people could just read that and not waste time listening to speeches.
It would be ironic if the first use-case for LLMs that creates positive value for society involves ignoring its output, though.


In order to be able to get information on the web, people need to put it on the web first. And for that to happen, there needs to be something to motivate them to do so. What those motivations are is going to differ between people and situations: it could be a pure desire to contribute to the commons, it could be part of how they make their income, it could be any number of other things. But if putting something on the web means accepting that you’re going to be helping vile companies achieve their goals, that the way most people may see this information is in a perverse form, riddled with falsehoods and with no attribution (or maybe worse, mostly falsehoods attributed to you), and that there’s nothing you can do about it, that’s going to put a damper on a lot of those motivations, and the motivations that aren’t dampened tend to be the less desirable ones.
And it’s not just information that’s on the web, it’s also collaborative efforts like open source software. Why do people release source code under licenses like the GPL? Because they believe those constraints lead to a better outcome than if they had just put it in the public domain. That their contributions to the commons lead to more contributions to the commons, even from people who may not be inclined or incentivised to make them. If it becomes trivial to undermine those licenses (and for the record, those licenses do get enforced, and there have been companies that had to release the source code of their products because they violated the license), that may erode the reasons many people have for contributing to a project in the first place.
You can be all cool and cynical about how social contracts are made up and whatnot, but let’s be honest here: if someone beats you to a pulp because they didn’t like the way you looked at them, you’re not going to just coolly accept your broken nose and displaced ribs as the way things work.