PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.

No, I’m not interested in voting for your candidate.

  • 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • Honest question: Why does it matter if he’s a transphobe when choosing which Fediverse software to use?

    1. Because some people have actually financially supported him. I’m not trans, but I would be devastated to know that my money went to feed someone who wants to destroy me.
    2. I already have trouble convincing transgender people in my social circle that Lemmy, as a piece of software, is safe for them to use even with the variety of trans-inclusive servers like yours, and that it will remain safe and inclusive in the future.

    A great example of (2) is the fate of PolyMC: transphobia put that whole project in jeopardy for a bit, though thankfully the other developers forked it into Prism.

    > The software is FOSS and anyone can make their own instance.

    IMO that’s why I’m not immediately dropping my account and running for the hills, but it’s still not good. Most people don’t have the technical skills or the interest in learning them to run their own instance.

    > I really want to understand what I might be missing.

    IMO it’s that even though he does not personally control how Lemmy instances are run, and even though we do have a good degree of robustness to transphobia because the software is FOSS, it is still both morally and technically ill-advised to have a transphobe at the helm of an open-source software project.

  • Can AI systems have a religious or political bias? Yes, they can and do learn biases from their datasets, and this is probably the toughest problem to solve in AI research, because it’s a social problem rather than a technical one.

    Can an AI agent be programmed to give responses with religious or political beliefs? Sure, just drop it into the system prompt.

    Can an AI agent have religious or political beliefs like a human? No. AI agents as they stand are comparatively crude machines that mimic how humans learn in order to perform a task that’s useful to the machine’s creator; they are not humans or other sentient beings.

    > So I’ve found Facebook pages, possibly run by AI, that keep posting the same text; a number of times it’s political or religious content, and sometimes the pictures aren’t AI-generated.

    If I wanted to do something like that, I would probably start with ordinary chatbot code and plug in a large language model to generate the posts, with a system prompt like:

    You are an ordinary Facebook poster. You are a very religious and devout [insert religion here]. You are also a [insert desired ideology here]. Your religious and political views are core parts of your personality and MUST be a part of everything you do. Your posts MUST be explicitly religious and political. Please respond to all users by trying to bring them in line with your religious and political beliefs. You must NEVER break character or reveal for any reason that you are an AI assistant.

    Then just feed people’s comments into the AI periodically as prompts and spit out the responses. If it is an AI agent, and not just a human propagandist, that’s probably the gist of how they’re doing it.
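    Purely as illustration, here’s a minimal sketch of that loop in Python using the OpenAI client library. The model name, the polling interval, and the get_new_comments()/post_reply() helpers are hypothetical stand-ins for whatever platform plumbing such a bot would actually use; any local or hosted LLM could be swapped in.

    ```python
    # Hypothetical bot loop: poll for new comments, reply in character.
    import time
    from openai import OpenAI

    client = OpenAI()  # assumes an API key in the environment (illustrative)

    SYSTEM_PROMPT = (
        "You are an ordinary Facebook poster. You are a very religious and "
        "devout [insert religion here]. ..."  # the full prompt from above
    )

    def get_new_comments() -> list[str]:
        """Placeholder stub: fetch fresh comments from the target platform."""
        return []

    def post_reply(text: str) -> None:
        """Placeholder stub: post the generated reply back to the platform."""
        print(text)

    def generate_reply(comment: str) -> str:
        # Feed the user's comment in as the prompt, exactly as described above.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": comment},
            ],
        )
        return response.choices[0].message.content

    while True:  # run indefinitely, replying to whatever comes in
        for comment in get_new_comments():
            post_reply(generate_reply(comment))
        time.sleep(300)  # poll every few minutes
    ```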


  • I believe it can use ChatGPT, or you could use a local GPT or one of several other LLM architectures.

    GPTs are trained by “trying to fill in the next word” (more simply, a “spicy autocomplete”), whereas BERTs are trained to “fill in the blanks”. So it might be worth looking into other LLM architectures if you’re not in the market for an autocomplete; the sketch below shows the difference.

    Personally, I’m going to look into this. Also it would furnish a good excuse to learn about Docker and how SearXNG works.
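    To make the two objectives concrete, here’s a small sketch using the Hugging Face transformers pipelines; the model names (gpt2, bert-base-uncased) are just the usual small demo checkpoints, not a recommendation for this project.

    ```python
    # Contrast the two pretraining objectives described above.
    from transformers import pipeline

    # GPT-style: predict the next word ("spicy autocomplete").
    generate = pipeline("text-generation", model="gpt2")
    print(generate("The capital of France is",
                   max_new_tokens=5)[0]["generated_text"])

    # BERT-style: fill in the blanks (masked language modeling).
    fill = pipeline("fill-mask", model="bert-base-uncased")
    for guess in fill("The capital of France is [MASK].")[:3]:
        print(guess["token_str"], round(guess["score"], 3))
    ```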


  • LLMs are not necessarily evil. This project seems to be free and open source, and it allows you to run everything locally. Obviously this doesn’t solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, and the fact that the weights themselves are usually derived from questionably collected datasets), but it seems like it’s worth keeping an eye on.

    > Google using AI, everyone hates it

    Because Google has a long history of immediately doing the worst shit imaginable with technology. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual, because they are known to be the most likely to abuse technology.

    If Google does literally anything, it sucks by default, and it’s going to take a lot more proof to convince me otherwise for a given Google product. The same goes for Meta, Apple, and any other corporation.

  • Yeah, my position is really to recommend any FOSS OS over proprietary ones in general. However, since my experience is primarily with Linux distributions, and I do think that Linux makes sense for a lot of use cases, I usually start by talking about “Linux” first.

    But, in my experience, if a “solution” to a problem forces the user to make a choice, they’ll stick with whatever “currently works” rather than make that choice. So when I talk to people about Linux IRL, I typically point them straight to Linux Mint, even though other distros exist and Mint doesn’t actually fit my own use cases. Once they’re comfortable in the Linux ecosystem, they can switch to a different distro or OS family if they feel the need to do so.

  • I like this, but I think that upvotes correspond to things people enjoy, which may or may not be of high quality. E.g., shitposting subs would probably be rated “high quality” when, like… posting shitty content is literally the point.

    Also, as stated, that means we have to sum over the entire time history of the community. We would probably want to limit the time window that gets summed over, subject to a maximum post count for high-volume subs (like the shitposting subs).

    IMO it’s a great suggestion, but I think it needs to be part of a weighted combination of factors; a toy version of what that could look like is sketched below.
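    Purely as illustration, here’s a toy sketch in Python of that weighted combination: a recent-window upvote average, capped in post count, blended with other (entirely hypothetical) signals. Every name, weight, and cutoff here is made up.

    ```python
    # Toy community-quality score: recent-window upvote average, a post-count
    # cap so high-volume subs can't dominate, and a weighted blend of factors.
    # Every constant below is an arbitrary placeholder.
    from dataclasses import dataclass

    @dataclass
    class Post:
        upvotes: int
        age_days: float  # days since the post was made

    def windowed_upvote_score(posts: list[Post],
                              window_days: float = 90.0,
                              max_posts: int = 1000) -> float:
        """Average upvotes over recent posts only, capped in count."""
        recent = [p for p in posts if p.age_days <= window_days][:max_posts]
        if not recent:
            return 0.0
        return sum(p.upvotes for p in recent) / len(recent)

    def community_quality(posts: list[Post],
                          mod_activity: float,
                          report_rate: float) -> float:
        """Hypothetical weighted combination; the weights are placeholders."""
        return (0.5 * windowed_upvote_score(posts)
                + 0.3 * mod_activity   # e.g., moderator responsiveness
                - 0.2 * report_rate)   # e.g., reports per post

    # Example: the oldest post falls outside the 90-day window.
    posts = [Post(upvotes=12, age_days=3), Post(upvotes=5, age_days=40),
             Post(upvotes=80, age_days=200)]
    print(community_quality(posts, mod_activity=0.8, report_rate=0.1))
    ```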