

Would you rather our current administration make their decisions using the lowest-bidder LLM, or their own brains?
Why is your hobby more important than their hobby?
Yeah, but if someone uses those bindings, then you can't just not support them.
By the time this code gets into a large-scale production system it will be 2029. That is when the bugs will come in, if someone leveraged the Rust bindings.
You can ask the big company users at that time to contribute their fixes upstream, but if they get resistance because they have relatively junior Rust devs trying to push up changes that only a handful of maintainers understand, the company will just stop upstreaming their changes.
The primary concern for a major open source project like this is that the big contributors will decide that interacting with it is more trouble than it is worth. That is how open source projects turn into passion projects, and then die when the passion dies.
Yeah, and what if the Rust developers don't show up to the show? Rust is a baby, and it has done so little on its own. This isn't a neat little side project; this is code that a major vendor will want to take up and will demand be maintained. There are implications on a global scale.
It's mostly in that linked thread. The high level of it is a guy wanted to push Rust code. The maintainer said no, it would mean the API for this would be tied to Rust, and that is unacceptable. That caused another big contributor to throw a fit, and Linus said he can't be everyone's mom. They kept fighting for like 2 months apparently? Now Linus has stepped in, looked at the code, and said the Rust code clearly doesn't impact the API in the way the maintainer was claiming; it just breaks itself if the maintainers change the API (something like the sketch below).
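To make that "breaks itself" point concrete, here is roughly what a Rust binding layer over a C API looks like. This is a hypothetical sketch, not the actual kernel API from the dispute; foo_alloc/foo_free and their signatures are invented, and it only builds against a C side that provides those symbols:

```rust
// Hypothetical sketch -- foo_alloc/foo_free are made-up stand-ins for
// whatever C API the binding wraps; this is not the code from the dispute.

#[repr(C)]
pub struct FooDev {
    _private: [u8; 0], // opaque handle; only the C side knows the layout
}

// Declarations mirroring the C prototypes. If the C maintainers change
// a signature here, the C API itself is untouched -- only this binding
// and its users stop compiling and have to chase the change.
extern "C" {
    fn foo_alloc(flags: u32) -> *mut FooDev;
    fn foo_free(dev: *mut FooDev);
}

/// Safe wrapper owned by the Rust side.
pub struct Foo(*mut FooDev);

impl Foo {
    pub fn new(flags: u32) -> Option<Foo> {
        // SAFETY: foo_alloc returns a valid device pointer or NULL.
        let ptr = unsafe { foo_alloc(flags) };
        if ptr.is_null() { None } else { Some(Foo(ptr)) }
    }
}

impl Drop for Foo {
    fn drop(&mut self) {
        // SAFETY: self.0 came from foo_alloc and hasn't been freed yet.
        unsafe { foo_free(self.0) };
    }
}
```

The C side never has to know the wrapper exists; an API change just strands the Rust module until someone updates it, which is exactly the maintenance question being fought over.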
I kinda dislike the idea that it's cool for people to contribute code that is so easy to break. I have a feeling that after it happens a few times, they are going to claim it is being done intentionally, and the slap fights will carry on.
Linus shouldn't have to get involved at all. Each part of the kernel should be handled independently by its maintainers. Linus responding publicly to outside forces is fine, but once he has to step in to handle public fights between individuals who are supposed to work together, it is a problem.
Linux staying C-focused is a valid thing to do. It is very hard to get folks to contribute to the kernel, and if you cut out anyone who doesn't know Rust, a language with at best 5% of C's adoption, you will run into spots where sections of the kernel go unmaintained because no willing and qualified person is covering them.
Adding Rust-based functionality and support is great. Changing APIs in ways that require maintainers to learn Rust to continue maintaining the code they are experts in is unacceptable.
I think this looks great. I'm not going to run a 20-foot USB cable across my living room, so wireless is pretty much a must. The only concern I have is whether it discharges while stored, and if so, what the bring-up time would be.
You are in a theater group and steal a magical princess. Yada yada yada, you find out your twin brother is using magic life mist to build an army of dolls… yada yada yada, the princess turns a castle into a giant robot to fight the doll army… yada yada yada, you go to your alien spaceship to find all of your other clones, yada yada yada, your clone brother kills you and the only way to come back to life is to kill Necron, the god of death, and then the game ends.
Final Fantasy 9, the pinnacle of FF games doing this.
Another favorite for me though would be Breath of Fire.
You are a man, you become a dragon man, you find out you were always a dragon, find the goddess, and have to choose between killing her or becoming a dragon god and killing your friends.
I mean, the issue with TikTok is that they hand the data and videos over to the Chinese government to mass-process and search for vulnerabilities or enable social engineering attacks on the US government.
Meanwhile, Twitter/X is totally different, in that they hand the data and videos over to the US government to mass-process and search for vulnerabilities or enable social engineering attacks on US citizens.
This is a patch from the hardware vendor, so I am assuming the ask is not that the hardware vendor take responsibility, but that they not release buggy hardware. That is what I mean about the validation issue.
The attack vector is shared in the patch so it isn’t entirely a theory.
There is a comment from Linus about how this patch is only needed for some hardware and doesn't apply to others, but I don't see the relevance there, as different hardware validates against different use cases and the source logic might be entirely disparate.
So my validation talk is simply saying that bugs happen. My concern here is: what more should a hardware vendor do beyond submitting a kernel patch? You can't just not have the bug, and if you recall the part, someone else will just keep theirs in the field, take all the market share, and roll the dice that their bugs don't get exploited.
Is this really the hardware vendor's problem though? It's the consumer's problem.
I bring up full validation because the concern here is putting in a speculative fix. If the ask is "why was the hardware like that in the first place?", the answer is that it can't be fully validated. If the ask is "why should a speculative fix go into the kernel?", it is because the consumers are not on top of tree, and even a fix for an issue that may never be exploited needs to be pulled in years ahead so it lands in an LTS release that customers migrate to BEFORE the issue comes up.
Fully validating hardware is an insane task that hasn't really been done in years. It would mean 5 years between chip releases and a 2-5X increase in production cost, and people wouldn't follow the validated configs anyway. If we followed the validated hardware spec, we would have 50-minute boot times and never go past a 3.5GHz clock.
People have the choice today of whether they want to run on validated hardware. You can opt in to a 2.8GHz part that supports 2666 MT/s and is mostly tested and validated, or you can get a 5GHz part that supports 6000 MT/s and is only partially validated. They cost the same. Which do folks think people pick?
Every security feature ever made has basically started by absolutely dumping on S3 resume. S3 resume requires every device in the computer to give you a complete understanding of how to bring it up cold without engaging the boot flow. Sometimes devices don't do this because their vendors are lazy; other times they don't for security reasons.
Every PC will be using AI as we move forward, and thinking they won't seems as head-in-the-sand to me as thinking the Internet would be a fad. Remember how awful the Internet was in the 80s and 90s? AI is in a similar spot today.
Why would I read a manual when I can ask an AI to summarize it and point me to the pages so I can confirm? If I'm trying to do a task I know a million people have solved, like Python code to translate XLSX and CSV to JSON and back (the kind of thing sketched below), why wouldn't I use AI for that?
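And that kind of converter really is boilerplate in any language. As a sketch of just the CSV-to-JSON half, here it is in Rust with only the standard library (no quoting or escaping support, file names invented; real code would reach for the csv and serde_json crates, plus something like calamine for the XLSX half):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Sketch only: assumes a simple comma-separated file with a header
    // row and no quoted or escaped fields. "input.csv"/"output.json"
    // are placeholder names.
    let text = fs::read_to_string("input.csv")?;
    let mut lines = text.lines();
    let headers: Vec<&str> = lines.next().unwrap_or("").split(',').collect();

    // Turn each data row into a {"header":"value", ...} JSON object.
    let rows: Vec<String> = lines
        .map(|line| {
            let pairs: Vec<String> = headers
                .iter()
                .zip(line.split(','))
                .map(|(h, v)| format!("\"{}\":\"{}\"", h, v.trim()))
                .collect();
            format!("{{{}}}", pairs.join(","))
        })
        .collect();

    // Wrap the objects in a JSON array and write it out.
    fs::write("output.json", format!("[{}]", rows.join(",")))?;
    Ok(())
}
```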
Trusting AI outright and not reviewing the answers is silly, but doing research with AI is soooo much faster. Also, the majority of articles and manuals you find online that were written in the past year used AI anyway, and you can have Copilot spit the content out to you WITH the original sources that the website/blog hides.
The idea that AI isn't trustworthy is silly, because no one is trustworthy. You should always have been double-checking things for yourself, but sitting and struggling through something for 2 days is foolish when AI could do 80% of the work for you in seconds.
Year of the Linux desktop pushed out a year due to Linux infighting and intolerable advocates, for the 33rd year running. Clearly the fault of the other distros, as I use Arch.