I mean, basically the title. Currently all my services run directly on my Arch server, it has been working well enough for me, and I'm super comfortable working with it. A few months back I had a minor crash where the system became non-functional. I was able to recover the server to the point that my services could run, but I never got the graphical part of the server going again or Nextcloud running.
At this point I'm considering wiping the OS and starting clean to get everything working correctly again. What I'm wondering is: is it worth learning Docker and deploying all my services that way, or should I just continue the way I've been doing it for years now?
I'll be running the various Arr apps, Emby, Nextcloud, Qbit, Homepage?, and probably a few others I can't recall off the top of my head. Some of the services are accessed off-site, if that matters at all. I did briefly explore Docker in the past but got stuck, and my friend pushed me towards straight Arch. Now I'm considering giving it another shot, but I wanted to hear folks' input here on the pros and cons of either way.
Yeah, Docker is a pretty good option, worth trying out. Just don't randomly deploy community images from Docker Hub like a dumbass; tons of them were created by people who demonstrably don't know what the fuck they are doing. But if you stick to official images, and make your own when there aren't any official ones, you'll be fine.
Document. It doesn't matter what you use exactly, but document it. It will make recovery easier regardless of the underlying server/software.
I'd personally recommend putting your provisioning steps for each service into Ansible playbooks. That way you can spin them all up from zero any time, and distribute them across different hosts, in VMs or LXC containers, any way you like. See the sketch below.
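To make that concrete, here's a minimal sketch of what one such playbook could look like. Everything in it is an assumption for illustration: the host group, the qbit user, the config paths, and the qbittorrent-nox package/unit names all depend on the target distro.

```yaml
# playbook.yml -- minimal sketch; host group, user, paths, and the
# qbittorrent-nox package/unit names are illustrative assumptions
- hosts: mediaserver
  become: true
  tasks:
    - name: Ensure the service user exists
      user:
        name: qbit
        system: true

    - name: Install qBittorrent (headless) from the distro repos
      package:
        name: qbittorrent-nox
        state: present

    - name: Deploy the service config from a template kept alongside the playbook
      template:
        src: templates/qBittorrent.conf.j2
        dest: /home/qbit/.config/qBittorrent/qBittorrent.conf
        owner: qbit
        group: qbit
      notify: Restart qBittorrent

    - name: Enable and start the service
      systemd:
        name: qbittorrent-nox@qbit
        enabled: true
        state: started

  handlers:
    - name: Restart qBittorrent
      systemd:
        name: qbittorrent-nox@qbit
        state: restarted
```

Re-provisioning after a crash is then just ansible-playbook -i inventory playbook.yml against a fresh install, and the playbooks double as the documentation other commenters are asking for.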
While I really like Arch, I don't think it's a good distro to run apps directly on, due to its rolling-release nature. It's better to run apps directly on a distro like Debian or Alpine. Yes, running the apps in containers will bring more stability and reliability, but you're still risking botched updates to the host that could cause instability.
That being said, your immediate best move would be to use Docker. Your best move long-term is to move to a distribution that's not rolling-release.
I ran an Arch server for years. It's really not a big deal for home use.
Short answer: yes. I think you've already come to the same conclusion but are just intimidated by a new technology stack you'd have to learn from scratch. Well, don't be! It isn't hard, and it's definitely worth the effort!
I've run Arch for many years in many VMs acting as servers and have never had any more issues with Arch than with Ubuntu. Whatever system you choose, you need to keep backups or snapshots.
Same here. It's been my go-to for years.
Except I ran into an issue relatively recently where the newest kernel had a regression with the virtual DVD drive under the ESXi hypervisor, causing higher CPU load than typical.
So I took the time and switched all my shit to the LTS kernel, which I should have used from the get-go.
But other than that, which was solved easily by removing the DVD or switching kernels, I've had zero issues, and I've even had deployments where I updated a ~2-year-old Arch install and it went smoothly…
That's unfortunate. Most of the time I just use the LTS kernel. I too am just running servers accessed via SSH and the terminal.
Make an Arch container with everything?
I use Arch, btw.
I never used containers, K8s, etc. and built my server entirely from scratch on Debian.
This year I switched to a hypervisor and now use the Proxmox-supplied LXC containers.
Never without them again. The convenience of spinning a new one up, fiddling around without messing up the main system, and taking snapshots to roll back any mess if needed makes self-hosting so much easier.
No matter what software you use, I would say containers.
Definitely worth the effort. If you want the services you run to be stable, that is :)
BTW, using Arch on your server is probably not the best idea! Many people prefer release-based distros on their servers because they are much less likely to hit a dependency conflict. Also, while I love Arch, it's just not for servers that are required to stay stable and reliable.
So, let's get back to the question at hand: why Docker? It'll be WAY easier for you to control everything. Every image ships its own environment, with dependencies that don't interfere with other services' requirements. Updating your services will also be much easier and need much less attention; you won't be risking breaking stuff that's already running on your server.
The downside is that you won't build as much understanding of your system and everything that's running on it. But that can be solved with a separate PC or a VM for tinkering :)
Good luck!
I agree with others but have an alternative view: how about installing a hypervisor like Proxmox? Then you get the flexibility of running Docker, LXC containers, or even full VMs.
Personally, I run a mix of LXC containers and Docker. Why? I really like Docker, but the all-inclusive nature of the containers can make customizing settings difficult.
In contrast, LXCs are heavier than Docker containers, but they act like a full Linux machine, so you can use all of your past sysadmin knowledge and customize away. They are much lighter than a VM, which makes them a nice middle ground.
In summary: for simple, self-contained apps I use Docker, and for more complex apps I rely on LXC containers. With Proxmox you can easily use both, so it's the best of both worlds!
Use Docker on Arch. It's perfectly fine for one server. The need for release-based distributions really only arises when managing many servers where updates have to be unattended.
I'm going to assume everyone here saying "use Docker" is already fully conversant with it. As someone who happily hosts multiple services on multiple (extremely light) VMs, I'd say just leave Docker alone. I've spent most of today trying to get some Docker containers working (reliably, which is the part a lot of people miss). Yes, getting Docker up and running and containers working is simple, but if it all goes sideways tomorrow, what are you going to do? What's your backup plan? IME it's much harder to get a Docker stack back up and running with your own data than to simply restore a backup to a VM host. There are a couple of Docker-only things I want to use, and there's something to trip you up at every turn. It's another level of complexity you don't need. If you have a working environment now, why add Docker?
The only thing I'd say is maybe use a different distro to host everything on, but overall, "if it ain't broke, don't fix it".
Maybe I would go a small step further and use rootless Podman.
For me the main selling point of Docker is that you spin up a stable version of a container and it pulls in all of its dependencies, which are also known to work. Installing things directly on the OS, at least for me, is a war of "run service; lib XX not found; apt install XX; run service; can't write to /foo/bar", etc. etc.
In your case, I'd have a single Docker Compose file with all my services, and would just say
docker compose up
and pronto.
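For illustration, here's a minimal sketch of what such a Compose file could look like for a couple of the services OP mentioned. The image tags, ports, and host paths are assumptions to adapt, and, per the earlier comment about community images, vet whichever images you end up trusting:

```yaml
# docker-compose.yml -- minimal sketch; images, ports, and host
# paths are illustrative, not a vetted setup
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./qbittorrent/config:/config
      - ./downloads:/downloads
    ports:
      - "8080:8080"   # default web UI port
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./sonarr/config:/config
      - ./media/tv:/tv
      - ./downloads:/downloads
    ports:
      - "8989:8989"
    restart: unless-stopped

  emby:
    image: emby/embyserver:latest
    volumes:
      - ./emby/config:/config
      - ./media:/media
    ports:
      - "8096:8096"
    restart: unless-stopped
```

docker compose up -d then brings the whole stack up in the background, and the bind-mounted host directories plus the file itself are everything you need to back up or move to a new host.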