in_my_honest_opinion
Cleveland Rapid Response could use donations if you can.
https://rhizomehouse.org/mutualaid/ is a good list of people you can help.
Shit is going down in Minnesota but they’ve never been more organized than they are now. Only we keep us safe.
If you want to help you can donate to a few groups.
https://www.gofundme.com/f/help-equip-twin-cities-legal-observers-with-ppe
https://nlgmn.org/mass-defense/
https://www.wfmn.org/funds/immigrant-rapid-response/
A more comprehensive list is here: https://www.standwithminnesota.com/
A high-level overview of how to organize a rapid response, with links to more detailed resources: https://southerncoalition.org/resources/rapid-response-101/
Good general advice on organizing, and also a good resource for finding groups near you that are likely aligned: https://www.fiftyfifty.one/organizer-resources
Feel free to reach out for any other resources.
- 0 Posts
- 74 Comments
in_my_honest_opinion@piefed.social to Privacy@lemmy.world • I traced $2 billion in nonprofit grants and 45 states of lobbying records to figure out who's behind the age verification bills (English)
3 · 4 days ago
I am lol. I’ll drop the codeberg here when it finishes.
in_my_honest_opinion@piefed.social to Privacy@lemmy.world • I traced $2 billion in nonprofit grants and 45 states of lobbying records to figure out who's behind the age verification bills (English)
8 · 4 days ago
https://github.com/upper-up/meta-lobbying-and-other-findings/tree/main/data/processed
Feel free to write your own parsing script. The raw data is above.
in_my_honest_opinion@piefed.social to Selfhosted@lemmy.world • Onionphone - E2EE PTT Voice and Chat (English)
12 · 7 days ago
Sounds good. I’ll pull the latest build onto my GrapheneOS test mule.
I’ll target a secured DB as a vault for contacts. That’s a really good idea.
in_my_honest_opinion@piefed.social to Selfhosted@lemmy.world • Onionphone - E2EE PTT Voice and Chat (English)
6 · 7 days ago
How can I contribute?
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Apple introduces Macbook Neo - cheaper Macbooks starting at $599 (English)
81 · 13 days ago
Always buy refurbished laptops, including MacBooks.
in_my_honest_opinion@piefed.social to Selfhosted@lemmy.world • Ideon: I'm building a self-hosted project cockpit on an infinite canvas (v0.5 update) (English)
39 · 16 days ago
There are many good articles out there if you have the time. It boils down to stolen code, forced identification, and enshittification.
https://sfconservancy.org/GiveUpGitHub/
https://laoutaris.org/blog/codeberg/
https://blog.joergi.io/posts/2025-09-20-migrate-from-github-to-codeberg/
in_my_honest_opinion@piefed.social to Fediverse@lemmy.world • This is a federated test post from a nodeBB forum. (English)
3 · 19 days ago
Did you document the setup? I’m interested in hosting this.
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
3 · 19 days ago
You scoff, but this is already being done in China: they desolder good chips from bad cards and add them to a mule card.
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
4 · 19 days ago
Almost like an LLM wrote it…
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
2 · 19 days ago
I mean, what you’re proposing was the initial push of GPT-3. All the experts said these GPTs would only hallucinate more with more resources, and would never do anything more than repeat their training data as word salad posing as novelty. And on a very macro scale, they were correct.
The scaling problem: https://arxiv.org/abs/2001.08361
The scaling hype: https://gwern.net/scaling-hypothesis
Ultimately, hype won out.
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
1 · 19 days ago
> will never achieve AGI or anything like it

On this we absolutely agree. I’m targeting a more efficient interactive wiki, essentially: something you could package and run on local consumer hardware. Similar to https://codeberg.org/BobbyLLM/llama-conductor, but it would be fully transformer native, and there would only need to be one LLM for interaction with the end user. Everything else would be done in machine code behind the scenes.
I was unclear, I guess. I was talking about injecting other models, running their prediction pipeline for the specific topic, then dropping them out of the window to be replaced by another expert, with this functionality handled by a larger model that runs the context window. Not nested models, but interchangeable ones, selected by the vector of the tokens. So a QwQ RAG trained on Python talking to a Qwen3 quant4 RAG trained on Bash, wrapped in DeepSeek-R1 as the natural-language output, to answer the prompt “How do I best package a python app with uv on a linux server to run a backend for a …”
Currently this type of workflow is often handled with MCP servers from some sort of harness, and as I understand it those still use natural language because they are all separate models. My proposal takes the stagnation in the field and leverages it as interoperability.
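The interchangeable-expert idea can be sketched as a router that scores the prompt against each expert’s domain and swaps the winner into the pipeline. A minimal sketch, assuming toy token-overlap scoring and stub experts (the expert names and vocabularies here are illustrative placeholders, not real loaded models; a real router would compare embedding vectors from the model holding the context window):

```python
# Each "expert" is keyed by the vocabulary of its domain. These stubs
# stand in for real models (e.g. a Python-tuned QwQ, a Bash-tuned Qwen3).
EXPERTS = {
    "python-expert": {"python", "uv", "pip", "venv", "package", "import"},
    "bash-expert": {"bash", "shell", "systemd", "cron", "chmod", "server"},
}

def route(prompt: str) -> str:
    """Pick the expert whose domain vocabulary overlaps the prompt most.
    A production router would use vector similarity, not token sets."""
    tokens = set(prompt.lower().split())
    return max(EXPERTS, key=lambda name: len(EXPERTS[name] & tokens))

def answer(prompt: str) -> str:
    # The routed expert runs its prediction pipeline; a wrapper model
    # (the one holding the context window) renders the final output.
    expert = route(prompt)
    return f"[{expert}] handling: {prompt}"
```

The point of the sketch is only the control flow: experts enter and leave the pipeline per-topic, while a single wrapper handles natural language at the edge.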
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
1 · 19 days ago
Ah, I see. However, you do bring up another point: I really think we need a true collection of experts able to communicate without the need for natural language, and then a “translation” layer to output natural language or images to the user. The larger parameter counts would allow the injection of experts into the pipeline.
Thanks for the clarification, and also for the idea. I think one thing we can all agree on is that the field is expanding faster than any billionaire or company understands.
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
6 · 20 days ago
Sure, but giant-context models are still more prone to hallucination and to reinforcing confidence loops where they keep spitting out the same wrong result a different way.
in_my_honest_opinion@piefed.social to Technology@lemmy.world • Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece (English)
6 · 20 days ago
Fundamentally, no: linear progress requires exponential resources. The article below is about AGI, but transformer-based models will not benefit from just more grunt. We’re at the software stage of the problem now. But that doesn’t sign fat checks, so the big companies are incentivized to print money by developing more hardware.
https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
Also, the industry is running out of training data:
https://arxiv.org/html/2602.21462v1
What we need are more efficient models and better harnessing. Or a different approach: reinforcement learning applied to RNNs that use transformers has been showing promise.
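The “linear progress, exponential resources” point falls straight out of the power-law fits in the scaling-laws paper linked earlier: if loss scales as L ∝ C^(−α) in compute, each fixed multiplicative drop in loss costs an exponentially growing amount of compute. A back-of-the-envelope sketch (α ≈ 0.050 is the compute exponent reported in the Kaplan et al. paper; the rest is illustrative arithmetic, not a claim about any specific model):

```python
# Power-law scaling: L(C) proportional to C ** (-ALPHA).
# To cut loss by a factor k, compute must grow by k ** (1 / ALPHA).
ALPHA = 0.050  # compute exponent from https://arxiv.org/abs/2001.08361

def compute_multiplier(loss_factor: float) -> float:
    """Compute growth needed to reduce loss by a factor of `loss_factor`."""
    return loss_factor ** (1.0 / ALPHA)

# With ALPHA = 0.05, halving the loss costs roughly 2 ** 20,
# i.e. about a million times the compute.
```

That million-fold cost per halving is why “just more grunt” stops paying off, and why the remaining gains have to come from the software side.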
ZFS isn’t a backup, but it is a gateway drug
I see you’ve never worked in corporate IT
in_my_honest_opinion@piefed.social to Selfhosted@lemmy.world • Using Yattee, Invidious & Tailscale to give Google the finger (English)
2 · 21 days ago
https://github.com/mozilla/web-ext
If you’re so inclined: that’s Mozilla native, but it will port to most browsers. You can install it locally for dev and test before you sign and push to the official add-ons repo at https://addons.mozilla.org/en-US/firefox/extensions/
Mostly posting this as a note to myself and anyone else interested.

I use WSL at work; I pin max RAM and leave only one CPU for the host OS. It’s still a nightmare. This upcoming week I’m finally deploying Red Hat IdM so that I and others can use our smartcards and the ancient AD infra to get Linux workstations and jumpboxes. Microsoft did me a massive favor by raising our licensing pricing, so now it’s cheaper to replace Azure AD.
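For reference, the RAM/CPU pinning mentioned above lives in the per-user `%UserProfile%\.wslconfig` file. A sketch of the relevant `[wsl2]` keys, with example numbers assuming an 8-core box where one core is left to the host:

```ini
[wsl2]
memory=8GB     # cap WSL2 RAM so the host keeps the rest
processors=7   # on an 8-core machine, leave one core for the host OS
```

Changes only take effect after a `wsl --shutdown` and restart of the distro.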