• 0 Posts
  • 56 Comments
Joined 7 months ago
Cake day: March 3rd, 2024

  • The flaw in the question is that it assumes a clear dividing line between species. Evolutionary change is a continuous process; we only draw dividing lines where we see enough differences, either in long-dead organisms in the fossil record or in living ones. The question has no answer, only a long explanation of why that isn’t how any of this works.


  • Even a hypothetically true artificial general intelligence would still not be a moral agent

    That’s a deep rabbit hole that can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. Whether, when, how, and by what definition it happens has been a long debate that is nowhere near resolved.

    It’s highly possible that AGI/ASI could come about and be both superintelligent and self-conscious and still have no sense of morality. But how can we, at a human level, even comprehend what’s possible? That’s the real danger: we have no idea what we could be heading towards.







  • I’m not comparing them, I’m saying that it’s inaccurate to ignore the effects that solar has.

    The chemicals used in producing PV panels are toxic. Part of why production shifted to countries like China is that, without regulation of waste disposal, the panels are much cheaper to make there. Sucks for the residents, but that’s capitalism.

    Energy is used to make PV. That’s true of everything, but when solar is advertised it leans heavily on the free energy the device generates, not on how much it took to make it. At least that energy could come from solar too…except it comes from fossil fuels.

    The heavy metals that make up part of the other 10% are the later waste problem. I don’t know if you can consider those metals inert since they are considered hazardous waste, but they can’t be discounted either. A recycling program to recover everything possible and then controlling the hazardous leftovers would make this less of a point, but we’re not doing that fully yet, so there are things going in the landfills now that could leach into the environment.

    All of this can be improved of course. I’m just introducing the fact that solar, like anything we do to keep our society at its level, has drawbacks too.

    Nuclear has its problems, as I mentioned. I didn’t pretend that solar is bad and nuclear is all flowers. But the issues nuclear faces are much different and have their own solutions, and its energy density and flexibility are far better than solar’s ever could be.

    I never understand why people pick their sides and then try to make the other choices seem like the antithesis to help their cause. Why not find the best solutions for all of the non-fossil-fuel sources, and use each where it makes the most sense? Diversity and redundancy are far better than a monopoly won by falsehoods.


  • Keep in mind that at its core an LLM is a probabilistic autocompletion mechanism built on the vast training data it was fed. A fine-tuned coding LLM has training data better suited to producing coding solutions. So when you ask it to generate code for a very specific purpose, it’s much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request, and you could get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn’t understand the code it gives you, nor can it reason about whether it will function.

    Think of an analogy where you Googled a coding question, took the first twenty hits, and merged all the results together into one answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy also shows why a much narrower query for coding may work more often: if you Google a niche question you will get more accurate, or at least more relevant, results than if you run a general search and paste together anything that looks close.

    Basically, if you can help the LLM home in on the better data from the start, you’re more likely to get what may be good code.
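    A toy sketch of the idea above, with made-up numbers: next-token generation samples from a probability distribution over continuations, and a more specific prompt concentrates that probability on fewer, better candidates. The continuation names and weights are invented for illustration, not from any real model.

    ```python
    import random

    # Invented toy distributions over possible code completions.
    # A vague prompt spreads probability across many continuations, some broken;
    # a specific prompt concentrates it on the good ones.
    vague_prompt = {
        "sort_with_builtin":     0.25,
        "sort_with_bubble_sort": 0.20,
        "sort_but_subtly_buggy": 0.30,
        "unrelated_snippet":     0.25,
    }
    specific_prompt = {
        "sort_with_builtin":     0.85,
        "sort_with_bubble_sort": 0.10,
        "sort_but_subtly_buggy": 0.05,
    }

    def sample(dist, rng):
        """Pick one continuation, weighted by its probability."""
        return rng.choices(list(dist), weights=list(dist.values()), k=1)[0]

    rng = random.Random(0)
    picks = [sample(specific_prompt, rng) for _ in range(1000)]
    # With the narrow prompt, the sane completion dominates the samples.
    print(picks.count("sort_with_builtin") / len(picks))
    ```

    Real models sample one token at a time rather than whole snippets, but the effect is the same: sharpening the distribution up front is what a well-scoped request buys you.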











  • I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe reexamining the word in a different way?). I then asked how many Rs are in “strawberries”, thinking it would either see a new word and give the same incorrect answer, or, since it was still in the context focus, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.
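    For contrast, plain code counts letters deterministically, which is part of why this failure looks so silly. One common explanation is that the model sees subword tokens rather than characters; the split below is invented for illustration, not any real tokenizer’s output.

    ```python
    # The deterministic answer an LLM often fumbles.
    print("strawberry".count("r"))    # 3
    print("strawberries".count("r"))  # 3

    # Rough intuition for the failure: the model operates on subword tokens,
    # so the individual letters inside a token aren't directly visible to it.
    # (Hypothetical split for illustration; real tokenizers differ.)
    toy_tokens = ["straw", "berry"]
    print(toy_tokens)
    ```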

    LLMs are very useful as long as you know how to maximize their power and you don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as Google, but with far better formatting than if I looked things up myself), yet I’ve found some of the simplest errors in the middle of a lot of helpful output. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.


  • First thing I did was look at updates, both whether anything was pending and what was in the history. There was a quality update on the 13th, but it mentions nothing specific about patching any vulnerabilities. I’ve got IPv6 off for all adapters now, so I’ll wait and see if anything more develops.

    At this point in my life I could probably move over to Linux, as I don’t play many games anyway and don’t have to use Office stuff. I’m just lazy. But I suppose when LTS expires (in Nov I think) I might go that route.