• 0 Posts
  • 169 Comments
Joined 5 months ago
Cake day: January 2nd, 2025

  • Interesting.

    It seems, again, that this won’t affect enterprise systems, because of things like user rights (users don’t run as admin) and GPO controlling the AV.

    Without admin rights, it’s not getting changed. With GPO in place, even an admin probably needs an additional confirmation.

    If it gets past both of those…

    For the average home user, this is why you don’t run as admin. That’s 98% of the reason you don’t see stuff like this on Linux: by default the initial user account doesn’t have root - you set up a root password during install, plus a separate, unprivileged user account.
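
    A minimal sketch of what that separation buys you, assuming Linux and Python 3 (the path below is just a stand-in for any root-owned setting): a process running as an ordinary user simply can’t write to system-owned files, so this class of tampering stops at the OS level.

    ```python
    # Hedged sketch: show whether the current user could tamper with a
    # root-owned file. /etc/resolv.conf is only an illustrative example.
    import os

    def can_modify(path: str) -> bool:
        """True if the current user may write to path."""
        return os.access(path, os.W_OK)

    if __name__ == "__main__":
        target = "/etc/resolv.conf"
        print(f"effective uid: {os.geteuid()}")
        if can_modify(target):
            print(f"{target} is writable - this process could tamper with it")
        else:
            print(f"{target} is NOT writable - blocked at the OS level")
    ```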


  • Start with one thing you want to do, the most important thing.

    Enumerate the requirements of that thing (machine to host it on, the kind of OS it requires, network connectivity, etc).

    You’re doing what I’ve always heard called “solutioning” - getting overwhelmed with potential solutions before clearly identifying the problem (i.e., the requirements).

    Solve that first thing, then move on to the next thing.

    Odds are you can get started with something much simpler than jumping feet-first into solutions like Proxmox (which has nothing to do with your stated goals; it’s a storage/redundancy/virtualization system). Forget about all that - if you eventually reach a point where you need those capabilities, you can deal with it then.

    I would start with redundant local data and a cloud backup. Three local drives with data synced or mirrored is much easier/cheaper to get going than spending time setting up a NAS that you don’t know you need…yet.

    Or, if you know you need a NAS, then start there and get it established and stable first. Then start your sailing efforts. Pretty much all NAS solutions today support some kind of virtualization/containerization. I don’t recommend Proxmox as your starting point.

    Edit: I’ve run different flavors of Linux on a laptop for this, with an external drive that got synced to a second external drive and to a third external on another laptop - something like the sketch below. That mostly protected me from local/drive/system failures, at least.
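
    Roughly, that sync amounted to something like this - a hedged sketch assuming Linux with rsync installed; the source and mount paths are hypothetical examples, and a cron job or systemd timer would run it on a schedule:

    ```python
    # One source directory mirrored to two backup drives (one-way sync).
    # Paths are hypothetical; adjust to your own drives.
    import subprocess

    SOURCE = "/home/me/data/"  # trailing slash: copy contents, not the dir itself
    MIRRORS = ["/mnt/backup-a/data/", "/mnt/backup-b/data/"]

    def mirror(src: str, dst: str) -> None:
        """One-way mirror with rsync; --delete makes dst match src exactly."""
        subprocess.run(["rsync", "-a", "--delete", "--stats", src, dst], check=True)

    if __name__ == "__main__":
        for target in MIRRORS:
            mirror(SOURCE, target)
    ```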





    USB isn’t good for RAID; it’s unstable.

    Do you currently have more than 8 or 12TB of data? Because you can buy drives that size today, no need for RAID under those capacities.

    I recently purchased an 8TB drive for ~$100 on Amazon. Yes, it’s used, but comes with a 3 year warranty. I’m fine with that warranty length, as drives don’t last forever, and I’ll be replacing drives due to growth anyway.

    Don’t overlook RAID 1 - mirroring. With large enough drives it’s a viable first step toward redundancy (though it’s really intended more for failover). Simply replicating your data locally across multiple drives, and backing it up offsite, already covers a lot.

    The big challenge with local redundancy is that it’s not backup: a bad change gets replicated and can wreck all your local copies. Backup, on the other hand, gives you multiple point-in-time copies of your data and incremental history (if configured that way) - see the sketch below.
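
    To make that distinction concrete, here’s a hedged sketch of dated snapshots built with rsync’s --link-dest (assuming Linux with rsync; the paths are hypothetical). Unchanged files are hard-linked against the previous snapshot, so yesterday’s copy survives today’s bad change without costing a full copy’s worth of space:

    ```python
    # Incremental, point-in-time snapshots: each run creates a dated directory,
    # hard-linking unchanged files against the previous snapshot.
    import datetime
    import os
    import subprocess

    SOURCE = "/home/me/data/"               # hypothetical source path
    BACKUP_ROOT = "/mnt/backup-a/snapshots" # hypothetical backup drive

    def snapshot() -> str:
        os.makedirs(BACKUP_ROOT, exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
        dest = os.path.join(BACKUP_ROOT, stamp)
        latest = os.path.join(BACKUP_ROOT, "latest")
        cmd = ["rsync", "-a", "--delete", SOURCE, dest]
        if os.path.isdir(latest):
            # Unchanged files become hard links into the previous snapshot.
            cmd.insert(2, f"--link-dest={latest}")
        subprocess.run(cmd, check=True)
        # Repoint "latest" at the snapshot we just made.
        tmp = latest + ".tmp"
        os.symlink(dest, tmp)
        os.replace(tmp, latest)
        return dest

    if __name__ == "__main__":
        print("created", snapshot())
    ```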


    Never had this happen, and I’ve carried laptops since the mid-90s that have been plugged in most of the time.

    Get to the office, plug in; get home, plug in and let it sit on the charger overnight with no use.

    I’ve seen a few swollen batteries, but that’s across the hundreds of laptops in my support circle. It’s very rare.

    Every laptop I’ve had in the last 5 years has battery protection built in anyway. I’m running 2 laptops from 2019 that have it.

    Though you do make a good point - it’s worth figuring out whether your laptop does this. And keep an eye on the batteries anyway (check battery health quarterly, say), and replace one if it drops significantly (I replace mine at 70% health).
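
    For the quarterly check, something like this works on Linux - a rough sketch; BAT0 and the energy_*/charge_* sysfs names vary by machine, so treat them as assumptions:

    ```python
    # Report battery health as a percentage of design capacity.
    from pathlib import Path

    BAT = Path("/sys/class/power_supply/BAT0")  # name varies by machine

    def read_int(name: str) -> int | None:
        p = BAT / name
        return int(p.read_text()) if p.exists() else None

    def health_percent() -> float | None:
        # Some batteries expose energy_* files, others charge_*.
        for full, design in (("energy_full", "energy_full_design"),
                             ("charge_full", "charge_full_design")):
            f, d = read_int(full), read_int(design)
            if f and d:
                return 100.0 * f / d
        return None

    if __name__ == "__main__":
        h = health_percent()
        if h is None:
            print("No battery info found under", BAT)
        else:
            print(f"Battery health: {h:.1f}% of design capacity")
            if h < 70:
                print("Below 70% - time to think about a replacement")
    ```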


    RAID isn’t backup, or even redundancy - it’s for creating large storage pools, and it’s at the mercy of the controller and all the hardware. In fact, the more disks you have, the more likely you are to be impacted by a failure.

    In a typical RAID 5, if one drive fails, the entire array is at risk until the drive can be replaced and resilvered. During resilvering (rebuilding the replacement drive with all the data and parity it should have), the entire array is at even greater risk because of the load on the remaining disks.

    With dual parity and a hot spare (less total data storage), you get a little more security, since the parity is doubled and the hot spare automatically gets resilvered in if a drive fails - but that’s not without similar risks during the process.

    You still need backup.

    Here’s a real-world example of RAID risks. I have a 5-bay NAS with five 1TB drives, which gives me roughly 4TB of usable space (1TB of parity). It runs software RAID using ZFS (a highly resilient file system that can build arrays from varying disk sizes and has some self-healing capability). I’ve had a drive go bad, and the replacement took 30 hours to resilver. During that time the entire array was “degraded”, meaning no parity was protecting the data while the parity was being rebuilt. If another drive had failed during that read/write-intensive period, I would have lost ALL the data (some rough numbers on that risk are sketched below).

    To protect against this, I have 2 other large drives that this data is replicated to. And then I use cloud storage for backup (storj.io).

    This is a modified version of the 3-2-1 method that works for my risk assessment.

    Without offsite backup, you’re always at risk of local issues - fire, flood, etc. Or even just a massive power spike (though that’s not much of a risk, especially if you use a UPS).

    I’m actually building a second NAS to have easier local redundancy, and because I have a bunch of drives sitting around. With TrueNAS or Unraid it’s pretty easy to repurpose old hardware. Power is always a concern, though, so I’m looking for an inexpensive motherboard with low power draw at idle.
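
    Back-of-the-envelope, the single-parity risk looks something like this. The 5% annual failure rate is a made-up illustrative number, and real rebuilds stress the remaining disks, so the true odds are higher than this simple model suggests:

    ```python
    # Usable space in a 5-drive single-parity pool, and a rough chance that a
    # second drive dies during a 30-hour rebuild (independent failures assumed).
    HOURS_PER_YEAR = 24 * 365

    def usable_tb(drives: int, size_tb: float, parity: int = 1) -> float:
        """Rough usable capacity: parity consumes 'parity' drives' worth of space."""
        return (drives - parity) * size_tb

    def p_second_failure(remaining: int, rebuild_hours: float, afr: float) -> float:
        """Chance at least one remaining drive fails during the rebuild window."""
        p_one = afr * rebuild_hours / HOURS_PER_YEAR
        return 1 - (1 - p_one) ** remaining

    print(f"Usable: ~{usable_tb(5, 1.0):.0f} TB of 5 TB raw")   # ~4 TB
    print(f"Second failure during 30 h rebuild: "
          f"{p_second_failure(4, 30, 0.05):.2%}")
    ```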



  • For people who can’t or don’t want to run a VPN app, Tailscale has the Funnel feature, which can… Funnel traffic into your Tailscale net.

    I’ve only used it for light stuff, so I’m not sure how well it will work for video.

    There are other mesh VPN solutions out there - I’ve used Hamachi for close to 20 years on Windows, and it just works. There’s a Linux client too, though I haven’t worked with it in years.

    Alternatively, you can set up a Raspberry Pi just for the Tailscale/WireGuard VPN, for, say, your parents’ or friends’ houses. It’s a cheap, simple solution, and it’ll handle DNS for the devices in the Tailscale mesh. This is something I’m doing for family/friends for unrelated/slightly related reasons (I’m reproducing the Backup to Friends feature that CrashPlan used to have, so all of us can have multiple backups in our own “cloud”), but they’ll get the side benefit of video, which won’t get backed up, just duplicated everywhere.
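
    Once the Pi is on the tailnet, a small script can confirm which machines are reachable - a hedged sketch that assumes the tailscale CLI is installed and logged in, and that the JSON field names below (“Peer”, “HostName”, “Online”) match your version:

    ```python
    # List the hostnames of peers currently online in the tailnet.
    import json
    import subprocess

    def online_peers() -> list[str]:
        out = subprocess.run(["tailscale", "status", "--json"],
                             capture_output=True, text=True, check=True).stdout
        status = json.loads(out)
        return [p.get("HostName", "?")
                for p in status.get("Peer", {}).values()
                if p.get("Online")]

    if __name__ == "__main__":
        print("Online tailnet peers:", ", ".join(online_peers()) or "none")
    ```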





  • Is it?

    Have you performed the analysis on Plex reviews and know exactly how many 5-star reviews were posted by employees?

    Because I don’t, and that’s a problem, as well as a Google ToS violation.

    I hope their app gets dropped from the store, at least long enough for them to have to go groveling to Google to get it reinstated - especially since Google likes to drop OSS devs for undisclosed reasons and make them jump through hoops to get back on.





  • I stopped trying to use Plex years ago (like 10) when that shit was just painful… AND they wanted to charge me for the luxury of that pain.

    I’m sure it’s gotten a lot better, but it left such a bad taste in my mouth at the time that I’ve gone without easy media watching instead and tried all sorts of things.

    Hopefully Jellyfin keeps improving. I’d rather donate to them every year than pay a sub to Plex.

    Glad it’s worked for you though, and I mean that. When Plex worked for me, it was pretty good.