As the title states, how would you set it up? I’ve got an HP EliteDesk G5; what are the strengths and weaknesses of the following:
- Proxmox with one VM running TrueNAS and another VM running Nextcloud
- TrueNAS on bare metal with Nextcloud running in docker
- Some other setup
I’d like to be able to easily expand and back up the storage available to Nextcloud, and I’d also like the ability to add additional VMs/containers/services as needed
If you go with TrueNAS, you’re stuck with TrueNAS/Docker. If you go with Proxmox, you can theoretically do…anything. But of course that comes with some added complexity.
Good point, I like the ability to choose between VMs and containers. If I had TrueNAS in one VM and Nextcloud in another, how would you link Nextcloud to TrueNAS? SMB share?
Honestly I haven’t used Proxmox, but I assume they can share storage without having to set it up like a network drive? If not, SMB would work.
Proxmox attaches VM disk images over virtual SCSI, and a disk image can only be attached to one VM at a time.
SMB would add quite a lot of overhead, and it doesn’t natively support Linux filesystem permissions. You’ll also run into issues with any older programs that rely on file locks to operate. NFS would be a much more appropriate choice. That said, AppArmor in container images will usually prevent you from mounting remote NFS shares without jumping through hoops (that are in your way for a reason). You’ll be limited to doing that with virtual machines only, not LXC/OpenVZ containers.
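The usual hoops look something like this on the Proxmox side; this is only a rough sketch, and the container ID, IP, and paths are placeholders:

```
# Allow a *privileged* LXC container to mount NFS itself -- this loosens the
# AppArmor confinement, which is exactly the tradeoff mentioned above.
pct set 101 --features "mount=nfs"

# Alternative that keeps the container confined: mount the share on the
# Proxmox host and bind-mount the path into the container.
mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/media
pct set 101 --mp0 /mnt/media,mp=/data
```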
Fun fact, it was literally the problems of sharing media storage between multiple workflows that got me to stop using virtual machines in proxmox and start building custom docker containers instead.
You can do the NFS mount in the VM and share it as a volume with the docker container.
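Something along these lines, assuming the TrueNAS VM exports /mnt/tank/nextcloud over NFS (addresses and paths are placeholders):

```
# Inside the Nextcloud VM: mount the NFS export from the TrueNAS VM
sudo apt install -y nfs-common
sudo mkdir -p /mnt/nextcloud-data
sudo mount -t nfs 192.168.1.10:/mnt/tank/nextcloud /mnt/nextcloud-data
# or put it in /etc/fstab so it survives reboots:
# 192.168.1.10:/mnt/tank/nextcloud  /mnt/nextcloud-data  nfs  defaults,_netdev  0  0

# Then hand the mounted path to the container as an ordinary bind mount
docker run -d --name nextcloud \
  -p 8080:80 \
  -v /mnt/nextcloud-data:/var/www/html/data \
  nextcloud
```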
AppArmor will complain and block the NFS mount unless you disable AppArmor for the container, and then in a lot of cases the container won’t be able to stop itself properly. At least that was my experience.
NFS
There are things proxmox definitely can’t do, but chances are even if you know what they are, they probably still don’t apply to your workflows.
Most things are a tradeoff between extensibility and convenience. The next layer down is what I do: Debian with containerd + qemu-kvm + custom containers/VMs, automated by hand in a bunch of bash functions. I found Proxmox’s upgrade process to be a little on the scuffed side, and I didn’t like the way it handled domain timeouts. It seemed kind of inexcusable how long it would take to shut down sometimes, which is a real problem in a power event with a UPS. I also didn’t like that updates to Proxmox core would clobber a lot of things under the hood that you might have configured by hand.
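On the bash-function side it’s nothing fancy — a thin wrapper per VM or container with the knobs I care about as arguments. The sketch below is made up for illustration; names, paths, and defaults are invented:

```
# Illustrative sketch only -- names, paths, and defaults are invented.
start_vm() {
  local name="$1" disk="$2" mem="${3:-4G}" cpus="${4:-2}"
  qemu-system-x86_64 \
    -name "$name" \
    -enable-kvm -cpu host \
    -m "$mem" -smp "$cpus" \
    -drive file="$disk",if=virtio,format=qcow2 \
    -nic user,model=virtio-net-pci \
    -display none -daemonize
}

# e.g. start_vm nextcloud /srv/vms/nextcloud.qcow2 8G 4
```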
The main thing is just to think about what you want to do with it, and whether you value the learning that comes with working under the hood at various tiers. My setup before this was proxmox 6.0, and I arguably was doing just as much on that before as I am now. All I really have to show for going a level deeper is a better understanding of how things actually function and a skillset to apply at work. I will say though, my backups are a lot smaller now that I’m only backing up scripts, dockerfiles, and specific persistent data. Knowing exactly how everything works lets you be a lot more agile with backup and recovery confidence.
I would clearly prefer Proxmox, which gives you the greatest possible freedom in terms of what else you want to do. Then a VM or LXC for each service.
I looked into Proxmox briefly but then figured that since 99% of my workload was going to be docker containers and I’d need just a single VM for them it made no sense to run it.
So that’s what I did. Ubuntu + Portainer and a shed load of stacks.
TrueNAS SCALE running a Helm package of Nextcloud.
K8S is the future.
As much as I dislike being locked into the “ecosystem” of TrueCharts, you’re absolutely right that it’s the future.
You don’t actually have to learn either of them to get the system working
They actually did a really good job of making it user-friendly
But then if you don’t understand how it works, how do you make off-site backups?
How do you extract data if one day you want to use another os?
What if you want to backup the database?
That’s the beauty of it: when you install the images, you select where the storage is on your drives. All you have to do is back up your array and you’ll have backups of the apps too!
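In ZFS terms, “back up your array” boils down to snapshots plus replication; roughly like this, with the pool and dataset names made up:

```
# Snapshot the dataset that holds the app storage (names are placeholders)
zfs snapshot tank/apps@nightly-2023-11-06

# Ship the snapshot to another machine for an off-site copy
zfs send tank/apps@nightly-2023-11-06 | ssh backup-host zfs recv backuppool/apps

# Later runs only need to send the delta between two snapshots
zfs send -i tank/apps@nightly-2023-11-05 tank/apps@nightly-2023-11-06 \
  | ssh backup-host zfs recv backuppool/apps
```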
It doesn’t seem to have a way to get shell access to the container. What if Nextcloud breaks and I need to type the php occ commands?
Really? I swear I’ve done it.
Even without shell access, you can run the container and hand it the command to execute, or run a separate container locally with the data mounted.
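For example, since SCALE runs its apps under k3s, something like this from the TrueNAS shell should get a command into the app’s container. The namespace and pod name are guesses and vary per install, as does the path inside the image:

```
# Find the Nextcloud pod (namespaces are usually named ix-<appname>)
k3s kubectl get pods -A | grep nextcloud

# Send the occ command into the container; occ normally has to run as the
# web-server user (often www-data), and /var/www/html assumes the stock image.
k3s kubectl exec -n ix-nextcloud <pod-name> -- \
  su -s /bin/sh www-data -c "cd /var/www/html && php occ maintenance:mode --on"

# An interactive shell works the same way if the image ships bash
k3s kubectl exec -it -n ix-nextcloud <pod-name> -- /bin/bash
```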
I tried, and I did not find a way to get shell access.
But I’m not a TrueNAS guru, not at all. I have like one week of experience, and this k8s stuff with no Docker support shocked me. It seems like Docker has been removed in the latest release that was published a week ago.
From the brief research I just did, this does seem like a good direction to take. However I’m doing a lot of learning right now and I’m trying to stick with just one or two technologies at a time and adding in Kubernetes and Helm is a little beyond me right now.
Then you’re similar to me. I set up a new TrueNAS SCALE server with the intention of replacing a Debian server that’s running dozens of Docker containers via Docker Compose. No, it can’t be done. The options are k8s and virtual machines. That’s it. I can’t even run borgmatic.
What I’m doing is sharing the storage via iSCSI to the Debian server (it’s like a virtual disk image) over 10 Gb fiber. But now I have two servers, and the TrueNAS one can never afford a second of downtime; if that one turns off, it’s like yanking drives from a running system.
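For reference, the client side of that iSCSI arrangement is roughly the following; the target IP and IQN are placeholders:

```
# On the Debian box: discover and log in to the TrueNAS iSCSI target
apt install -y open-iscsi
iscsiadm -m discovery -t sendtargets -p 10.0.0.2
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:apps -p 10.0.0.2 --login

# The LUN then shows up as a normal block device to partition, format and mount
lsblk
```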
Now, if I had time I could definitely learn k8s and rewrite all my Docker Compose YAML files, but I have no time, and it feels like a completely different concept.
TrueNAS SCALE ships with just 96 applications out of the box, and only a few of them are actually useful. There’s a way to add a second, unofficial repository (TrueCharts) that adds another 500 apps, but the list is weird: there are like five Minecraft servers but not a single standalone database. No MariaDB, MySQL, Mongo and so on.
In addition to that, the extra 400 apps that can be installed via TrueCharts come with ABSOLUTELY ZERO documentation. It doesn’t even explain the environment variables. See for yourself: https://truecharts.org/charts/stable/actualserver/
That’s assuming those applications even work. Most of them are broken and it’s really hard to push fixes. WG comes to mind: totally broken because someone decided to hardcode eth0 as the interface name, and modern systems use biosdevname.
Yeah, I tried some of them and then I just gave up. For example, the way diskover data is configured by default is to index and show stats of an empty directory with a test file. And the free version only allows indexing a single directory, so…
Change the default directory to your main one? The documentation consists of:
DiskOver App for TrueNAS SCALE
Yes, that’s it. Very useful.
What great documentation, ahaha.
You actually don’t need to learn either of them, they did a really good job of making the system user-friendly
Nobody should run k8s/k3s without understanding how they work lol, that’s a recipe for lost data.
How so?
As long as you set the app storage to the array, all you need to do is look after the backups on the array and it all works beautifully.
That’s something you should do anyways
Docker ftw
Personally I go with a low-resource box with lots of drives for a NAS, and a higher-compute but low-storage box for a hypervisor. That way the NAS storing all the bulk data uses as little power as it can and can just sit there doing the pretty much singular task of serving drives. Backup is all automatic via mirrored RAID and snapshots.
Would you use something other than TrueNAS, then?
I use XigmaNAS, which is kind of the forgotten grandfather of TrueNAS due to a fork/renaming years ago. I like that it’s designed more as an embedded image, so it feels cleaner and more purpose-built.
For the VMs I go with XCP-ng. It’s been too long to recall exactly why I switched away from Proxmox, but it had to do with the way resources got shared between the VMs and the host, where I wanted a more distinct split.
They all have their +/- and eventually if it gets taken far enough as a hobby you’ll end up finding little adjustments to make that are specific to your needs and setup that make sense only to you.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- LXC: Linux Containers
- NAS: Network-Attached Storage
- k8s: Kubernetes container management package
Maybe you could install Nextcloud in Docker on a separate VM (I use Debian) and then mount a TrueNAS network share in Docker.
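If you don’t want to manage the mount inside the VM yourself, Docker’s local volume driver can mount an NFS share directly; a rough sketch, with the address and export path as placeholders (a CIFS/SMB share works similarly with type=cifs):

```
# Let Docker mount the TrueNAS NFS export as a named volume
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/mnt/tank/nextcloud \
  nextcloud_data

docker run -d --name nextcloud -p 8080:80 \
  -v nextcloud_data:/var/www/html/data \
  nextcloud
```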
Trying to use TrueNAS for anything but a file share is not going to work well in terms of flexibility.
That’s where I’m at now. Same kind of issue as OP. Wanting more out of my bare metal!
Proxmox VMs shouldn’t have much of a performance penalty compared to bare metal. (Assuming you have virtualization and similar extensions enabled)
I have TrueNAS SCALE running inside of Proxmox, but I plan to replace it with a TurnKey system on top of an LXC instead.
For as convenient as TrueNAS is, it is not a replacement for Proxmox. Proxmox is designed for business, and it shows in comparison: the logical layout, the backup options, the storage flexibility, etc.
In comparison, TrueNAS feels more homelab hobby. For reference, I could see Proxmox on a business install with enterprise support. TrueNAS, I’m not so sure.
Well, there are many things to consider. TrueNAS’s ZFS is memory hungry, and it’s best used on its original BSD base. Also, if you need SMART data directly in your NAS, you’ll have to PCI-passthrough the disk controller when you’re on Proxmox. With that said, running either TrueNAS SCALE or TrueNAS CORE on Proxmox isn’t ideal. Also, running database storage over NFS has big disadvantages, so I would really advise against going the Proxmox + TrueNAS route.
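If someone does go the passthrough route anyway, on Proxmox it’s roughly the following once IOMMU is enabled in the BIOS and bootloader; the PCI address and VM ID are placeholders:

```
# On the Proxmox host: find the SATA/SAS controller's PCI address
lspci -nn | grep -i -e sata -e sas

# Pass the whole controller through to the TrueNAS VM (VM ID 100 is a placeholder)
qm set 100 --hostpci0 0000:01:00.0
```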
IMO, a mature NAS system is only useful as it is designed to be: a bare-metal system for your disk management. If you really want ZFS, then use TrueNAS SCALE. If you are a guru and are willing to set things up yourself and don’t care about RAID5/6, just use regular Linux + Docker/Podman + Btrfs.
If they really want just ZFS, Proxmox offers it.
It just doesn’t come with a built-in UI
Well, it actually has a UI for managing ZFS volumes in Proxmox, lol. Proxmox is very versatile, I’ll admit. I use it too, but only because I absolutely need the VM capability to run OPNsense and Debian on the same machine. If OP only needs a NAS with Docker, he may not need that power. Well, who am I to decide; this is selfhosted, so people can just try anything.
Yeah, fair, there is a UI, but it’s veeery basic, not at all comparable with TrueNAS
I’ve been running TrueNAS core for years. I used to have my applications in Jails on TrueNAS. If you just want to start out learning I think using SCALE and keeping your apps within TrueNAS is a good way to go.
I believe SCALE uses docker for its apps so that should make it easy to migrate your data in the future if you pick another platform.
None, because Proxmox is questionable open source with annoyances and a mangled system that fails often. TrueNAS SCALE is overkill and buggy; they can’t even get a simple WG container right. Install it all barebones on Debian 12: set up Samba for shares, FileBrowser for a web UI, ZFS or whatever filesystem with the appropriate tools, and if you need some kind of isolation use LXC/LXD, which are now both available from the Debian repositories without snaps.
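A rough sketch of that barebones route on Debian 12; package names are from the standard repos (ZFS lives in contrib, and FileBrowser is a separate binary from its own site), and the pool, disks, and share names are placeholders:

```
# ZFS, Samba and LXC straight from the Debian 12 repos
apt install -y zfs-dkms zfsutils-linux samba lxc

# One mirrored pool, one dataset (device names are placeholders)
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/shares

# Minimal Samba share on top of it
cat >> /etc/samba/smb.conf <<'EOF'
[shares]
   path = /tank/shares
   read only = no
EOF
systemctl restart smbd
```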