Cleveland Rapid Response could use donations if you're able to give.
https://rhizomehouse.org/mutualaid/ is a good list of people you can help.

Shit is going down in Minnesota but they’ve never been more organized than they are now. Only we keep us safe.

If you want to help, you can donate to a few groups:

https://www.gofundme.com/f/help-equip-twin-cities-legal-observers-with-ppe

https://nlgmn.org/mass-defense/

https://www.wfmn.org/funds/immigrant-rapid-response/

A more comprehensive list is here: https://www.standwithminnesota.com/

A high-level overview of how to organize a rapid response, with links to more detailed resources: https://southerncoalition.org/resources/rapid-response-101/

Good general advice on organizing, and also a good way to find likely-aligned groups near you: https://www.fiftyfifty.one/organizer-resources

Feel free to reach out for any other resources.

  • I use WSL at work; I cap its max RAM and give it all but one CPU, leaving that one for the host OS, and it's still a nightmare. This coming week I'm finally deploying Red Hat IdM so that I and others can use our smartcards and the ancient AD infra to get Linux workstations and jumpboxes. Microsoft did me a massive favor by raising our licensing prices, so now it's cheaper to replace Azure AD.
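For anyone curious, that kind of resource pinning lives in `%UserProfile%\.wslconfig` on the Windows side. A minimal sketch, assuming an 8-core machine (the exact values here are examples, not the ones described above):

```ini
# %UserProfile%\.wslconfig — caps the WSL2 VM so the host keeps resources
[wsl2]
memory=8GB        # hard cap on RAM the WSL2 VM can claim
processors=7      # give WSL2 all but one core, leaving one for the host OS
```

WSL reads this file at VM start, so run `wsl --shutdown` after editing for the limits to take effect.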

  • will never achieve AGI or anything like it

    On this we absolutely agree. Essentially, I'm targeting a more efficient interactive wiki: something you could package and run on local consumer hardware. Similar to https://codeberg.org/BobbyLLM/llama-conductor, except it would be fully transformer-native, and only one LLM would be needed for interaction with the end user. Everything else would be done in machine code behind the scenes.

    I guess I was unclear: I was talking about injecting other models, running their prediction pipelines for a specific topic, and then dropping them out of the window to be replaced by another expert, with that swapping handled by a larger model that runs the context window. Not nested models, but interchangeable ones, chosen based on the vector of the tokens. So a QwQ RAG trained on Python talking to a Qwen3 quant-4 RAG trained on Bash, wrapped in DeepSeek-R1 as the natural-language output, to answer the prompt "How do I best package a python app with uv on a linux server to run a backend for a …"
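    To make the routing idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the expert names, the hand-written topic vectors, the `Router` class are all illustrative stand-ins, not a real implementation): the orchestrating model would map the prompt to a vector, pick the expert whose topic vector is closest, swap it into the single active slot, and drop it again when the topic shifts.

```python
# Illustrative sketch of vector-based expert swapping; all names are hypothetical.
from dataclasses import dataclass
import math


@dataclass
class Expert:
    name: str
    topic_vector: tuple  # stand-in for a learned topic embedding


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


class Router:
    """Holds one active expert at a time; swaps it based on the prompt vector."""

    def __init__(self, experts):
        self.experts = experts
        self.active = None

    def route(self, prompt_vector):
        best = max(self.experts,
                   key=lambda e: cosine(e.topic_vector, prompt_vector))
        if best is not self.active:
            # Swap: the previous expert is dropped from the window,
            # the new one takes over the active slot.
            self.active = best
        return self.active


experts = [
    Expert("python-rag", (1.0, 0.1, 0.0)),
    Expert("bash-rag", (0.1, 1.0, 0.0)),
]
router = Router(experts)
print(router.route((0.9, 0.2, 0.0)).name)  # python-like prompt -> "python-rag"
print(router.route((0.0, 0.8, 0.1)).name)  # bash-like prompt -> "bash-rag"
```

    In a real system the prompt vector would come from an embedding model and each expert would be a loaded specialist model rather than a named placeholder, but the swap-in/swap-out logic is the same shape.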

    Currently this type of workflow is often handled with MCP servers driven from some sort of harness, and as I understand it those still communicate in natural language, since they are all separate models. My proposal instead leverages the stagnation in the field, turning it into interoperability.