One thing I’ve learned over the years is that the way I did something while learning is usually not the best way of doing it, and I later go back and redo everything with the experience and knowledge I’ve gained. I’ve decided to ditch Google Photos, and I was lucky enough to snag a free rackmount 2017 server from work, with an i7 installed and two 6TB drives on the way. But now comes the hard part: deciding which software I’m going to end up learning on and, hopefully, living with. First and foremost I want a photo backup service, and I’ve debated between Immich and Xpenology. I also know that I want to run Pi-hole, and I’d really like to self-host my own website documenting my projects, even if no one will ever look at it.
If you had to start from the beginning, which OS, which container manager, and which containers would you build on? I’d love recommendations from those who walked, so that I can run.
Never buy into a platform. They tend to waste a lot of my time on version upgrades that have no point except to change the whole platform, and the things you like to use get deprecated. Or they never get to where you want to be. Your vision is not the maintainer’s vision. Learn to roll your own… everything.
Platform like what?
I already apply these rules myself, but these are the five major things I emphasize to everyone.
- Don’t overcomplicate things. You don’t need Proxmox on every machine “just in case”. Sometimes a system can be single-purpose; just using Debian is often good enough, and if you need a single VM later, you can do that on any distro. This goes for adding services, too. Docker makes it very easy to spin things up to play with, but you should also know when to put things down. Don’t get carried away; you’ll just make more work for yourself and end up slacking off and/or giving up.
- Don’t put all your eggs in one basket if you can avoid it. For instance, something like Home Assistant should run on its own system. If you rely heavily on your NAS, your NAS should be a discrete system. You will eventually break something and not have the time or energy to fix it immediately. Anything you truly rely on should be resilient enough that your tinkering doesn’t leave you high and dry.
- Be careful who you let in. First, anybody with access to your systems is a potential liability to your security, so you must choose your tenants carefully. Second, if others come to rely on your systems, that drastically reduces your window to tinker unless you have a dedicated test bench. Sharing your projects with others is fun and good experience, but it must be done cautiously and with properly set expectations. You don’t want to be on the receiving end of an angry phone call because you took Nextcloud down while playing around.
- Document when it’s fresh in your mind, not later. In fact, most of the time you should document it before you do it; if things don’t go according to plan, make minor adjustments. And update the docs when things change. What you think is redundant info today might save your ass tomorrow.
- Don’t rely on anything you don’t understand. If it works, and you don’t know how or why it works on at least a basic level, don’t simply accept it and move on. Figure it out. Don’t just copy and paste; don’t just buy a solution. If you don’t know it, you don’t control it.
Not much - it’s been a pretty organic learning journey.
Very much a crawl > walk > run thing. Can’t necessarily jump straight to the end.
I use docker compose for everything instead of docker run.
Definitely this. There’s actually a cool new Docker interface that the uptime-kuma dev is working on: you paste in your docker run command and it creates a compose file.
https://github.com/louislam/dockge https://youtu.be/E805XcbTzgY?si=r6uFI2pvbJg1cXBe
Same here, but I take it one step further: I do it all from the Stacks section of Portainer with my compose files.
I also migrated all my docker run containers to docker compose.
I was tired of digging through shell history for the exact docker run command for everything.
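For anyone still on docker run: the conversion is usually mechanical. A hypothetical one-liner like `docker run -d --name pihole -p 53:53/udp -p 80:80 -v ./etc-pihole:/etc/pihole pihole/pihole` (image, ports, and paths here are just placeholders, not from this thread) maps to a compose file roughly like:

```yaml
# docker-compose.yml — sketch of the same container declared declaratively;
# service name, ports, and volume paths are illustrative
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/udp"
      - "80:80/tcp"
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```

Then `docker compose up -d` brings it up, and the file itself becomes the documentation you’d otherwise be fishing out of shell history.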
I got a 4TB drive thinking it would be plenty for Plex. Oh boy, was I wrong, especially when TV shows started coming in.
A 12TB drive is on the way, and I’ll leave the 4TB one to Immich and Nextcloud alone.
I don’t have any issues with Ubuntu Server, and I like Portainer, so I don’t think I’d set things up differently. I should organize and back up the docker compose files, though.
Do you use Immich or Nextcloud Memories (the Nextcloud app) for backing up your main photo library?
I use Immich for that. I haven’t tried Nextcloud Memories, but I have no need to for now.
I moved to TrueNAS Scale from Core. My worst decision ever. Core was more stable, faster, and the jails actually worked as expected without some shit k3s cluster nonsense.
Well you just caused me to delay that upgrade for a few more months again!
Don’t listen to others. Just do what you want and have fun.
I have a bunch of crap deployed across two servers and a pile of mismatched hard drives.
If I could start over I’d plan much better and keep everything organized.
I’d run one server for all my Docker containers / apps.
Then the other server for storage / backup.
Documentation.
Document how my drives are set up (like, really… I don’t remember how they’re configured XD. I only know I haven’t run out of space yet, so everything is going to the correct mount.)
Keep a proper list of which process uses which port.
Use containers from the start.
That’s all I can think of atm.
I did start over recently - Dell R730 salvaged from work, 64 cores, 256GB RAM
Took the guts and moved it to a Machinist X99 motherboard in a Rosewill server case so I could put in silent fans and have room for 15 drives.
Proxmox hypervisor booting from 2x 2TB NVMe drives.
Bought 6x 16TB drives and an HBA to run them, then did HBA passthrough to a virtual machine running TrueNAS, which exposes everything as Samba shares.
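For reference, once IOMMU is enabled, that kind of HBA passthrough boils down to one line in the VM’s Proxmox config (the VM ID and PCI address below are invented for illustration; you’d find the real address with `lspci`):

```
# /etc/pve/qemu-server/100.conf (excerpt) — hand the whole HBA to the TrueNAS VM
# so it sees raw disks; 0000:03:00.0 is a hypothetical PCI address
hostpci0: 0000:03:00.0,pcie=1
```

The same thing can be set from the Proxmox web UI under the VM’s Hardware tab, which avoids editing the file by hand.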
Multiple other Ubuntu VMs run docker compose. One VM runs utility / *arr apps; a second VM runs Plex so that my media playback isn’t affected by the utility workloads. These VMs mount the TrueNAS Samba shares for file processing. That way they boot and run on NVMe, but the media lives on spinning iron.
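A sketch of how one of those VMs can mount the NAS share at boot, via `/etc/fstab` with cifs-utils installed (the hostname, share name, mount point, and credentials path are all assumptions, not from the post above):

```
# /etc/fstab — mount the TrueNAS media share over SMB at boot;
# /etc/smb-credentials holds username=/password= lines, readable by root only
//truenas.lan/media  /mnt/media  cifs  credentials=/etc/smb-credentials,uid=1000,gid=1000,_netdev  0  0
```

The `_netdev` option tells systemd to wait for the network before mounting, which matters when the NAS itself is another VM that may come up later.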
I also have an LXC running AdGuard Home for DNS-based ad block / malware protection for the entire house, plus a Raspberry Pi as a second DNS server.
There are also several Windows VMs used for work (I connect to customer environments, so I spin up a separate VM for each customer to keep their environments isolated).
I started with OMV and a NUC connected to a two-drive TerraMaster DAS. This was a mistake. I ended up building a box with room for 12 drives, running Debian with SnapRAID and mergerfs, and using Duplicati to keep all my YAML and config folders backed up.
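For anyone curious what a setup like that involves, it mostly comes down to two small config fragments; the disk names, paths, and option choices below are made up for illustration:

```
# /etc/snapraid.conf — one parity disk protecting two data disks
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab — mergerfs pools the data disks into a single mount point
/mnt/disk*  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs,dropcacheonclose=true  0  0
```

Parity is not real-time: you then run `snapraid sync` (usually on a cron/systemd timer) to bring the parity up to date, so it suits media collections more than constantly churning data.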
Probably wouldn’t use Apache2.
I want to use Caddy and reverse proxy from there, but I can’t bring myself to do it.
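If it helps lower the activation energy: a basic Caddy reverse proxy is usually just a couple of lines, and Caddy obtains and renews the TLS certificates itself. The domain and upstream port below are placeholders:

```
# Caddyfile — proxy a hostname to a local service; automatic HTTPS is the default
blog.example.com {
    reverse_proxy 127.0.0.1:8080
}
```

Each additional service is another site block like this one, which is a big part of why people migrate off hand-written Apache vhosts.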
What is wrong with a Synology NAS or TrueNAS? There are plenty of photo apps on or for such systems. I am falling in love with Nextcloud for many things reasonably integrated together. You can definitely self-host that and share it, or parts of it, with others and collaborate with them if you like.
Dockerize all the things, that and/or Kubernetes. I hear nice things about Proxmox in home labs but haven’t gotten around to messing with it. I’m happy with OpenMediaVault for my purposes, but I probably would’ve taken the time to research and leverage ZFS instead of ext4.
I’ve used Windows, OpenMediaVault, Synology, virtual machines, Rancher, and TrueNAS Scale, but Unraid is by far the best.
For me, here are the benefits:
Hardware compatibility is awesome. It works well with my Dell server and even picks up on thermal/BIOS status. It boots from a thumb drive, so your two disks will be dedicated to your data. It’s simple to back the config up to a zip file.
The software is awesome. It treats Docker containers like apps, so you just install one, configure storage/networking, and it just works.
Updates are simple. Update containers with new images with a single click.
Bonus: it works well with NUT (a UPS monitor), so my NAS shuts down automagically on a power outage. The UPS is connected to a different host :D
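For context, the client side of a NUT setup like that is only a couple of config lines; the UPS name, IP, and credentials below are invented, and older NUT versions spell the last keyword `slave` instead of `secondary`:

```
# /etc/nut/nut.conf — this box doesn't own the UPS, it only listens to one on another host
MODE=netclient

# /etc/nut/upsmon.conf — shut down cleanly when the remote UPS goes on-battery and low
MONITOR ups@192.168.1.50 1 upsmon secretpass secondary
```

The host the UPS is physically plugged into runs NUT in server mode and publishes the UPS over the network; every other machine just monitors it like this.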
Good luck!