Hi,

I’ve written some microservices and am looking to deploy them. Since I’m not comfortable with cloud pricing (read: too expensive for me right now), I’m looking into ways of operating a very small server setup.

Let’s say I have five services: one is the database, three should run as jobs every minute, and one should be scaled based on load.

I’m aware that I’ve basically described tasks that k8s or Nomad would be very good at. The issue with them: while I will keep updating the services etc., I do not need a large cluster. I’m quite sure I can start with one pod/node and maybe add a second one if needed.

For this setup, k8s (or any of its flavors) is just overkill to learn and maintain. Nomad from HashiCorp looks really cool for this, but the recommendation is three servers, each with hefty specs (for quorum, leader/follower, replication, etc.), which is overkill when I plan to have one worker node in total :D

Nomad has a `-dev` option that runs server and client on the same node, but in production? I don’t know. The Nomad server also derives its identity from its IP and other details; when those change, the server instance is basically dead and loses its data. That’s why a quorum of three servers is recommended as the minimal prod setup.
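
From what I can tell, running a single Nomad agent as both server and client outside of `-dev` comes down to a config roughly like the one below; a minimal sketch with placeholder paths, no ACLs or TLS, and it obviously doesn’t address the identity/quorum concern:

```sh
# Minimal single-node sketch: one Nomad agent with both the server and
# client roles enabled. Paths and bind address are placeholders; ACLs and
# TLS are left out and would be needed for anything internet-facing.
cat > /etc/nomad.d/single-node.hcl <<'EOF'
data_dir  = "/opt/nomad/data"
bind_addr = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 1   # single-server "cluster", no real quorum
}

client {
  enabled = true
}
EOF

nomad agent -config=/etc/nomad.d/single-node.hcl
```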

Docker Compose is not ideal, because I would like to update single containers without tearing everything down.

Also, cron for my periodic tasks is not part of Docker or Docker Swarm, short of plugins, workarounds, or running `cron` inside a container and then meddling with `flock`, etc.

I’m aware that it doesn’t actually sound like I need an orchestrator, but monitoring all the jobs and restarting containers manually doesn’t sound optimal to me, and maybe there is something out there that helps.

Since the tech community knows more than I do, I would love to hear some other opinions or points of view.

Thanks!

  • NiftyLogic@alien.top

    Nomad is totally fine to run on low-spec machines. In my homelab, I have the following running Nomad + Consul:

    • VM with 1GB as arbiter
    • 2 MFF PCs with 16GB and i5-6500T

    Totally fine to run client and server on the same machine in a non-enterprise setup.

    One stand-alone machine should also work, you just lose the failover capabilities.

    • HosonZes@alien.top (OP)

      After spending a couple of days with k8s I really want to get back to nomad. k8s complexity is way too high.

      The one thing about Nomad: there is much less documentation and a smaller community around it, especially on how to secure a Nomad cluster in production. When you get stuck on internal details you can’t get out of, you might be in trouble.

      Have you had small production workloads running in public?

      • NiftyLogic@alien.top

        Same here, had a deeper look at MicroK8s and decided to go the Nomad route…

        Unfortunately, I’m just running a homelab setup, with two publicly exposed services, but nothing enterprise-like.

        Does that count as “in production”? If yes, what are your questions?

  • from-nibly@alien.top

    If you are just starting out and messing around, you can go a long way with a single-node k3s cluster (I prefer NixOS since it makes managing and replicating things REALLY repeatable, but it is its own rabbit hole).

    BUT if you need several 9s, you’re going to need more than just one server with k3s on it. You’re gonna want redundancy, monitoring, and processes:

    1. 3 nodes while only using the capacity of 2
    2. Shared volume infra like Ceph or a NAS
    3. Load-balancing firewall like OPNsense
    4. Multiplexed internet
    5. UPS for power issues
    6. Onsite backups + cloud backups
    7. kube-prometheus-stack (or its contents)
    8. KEDA (for autoscaling; a rough install sketch for 7 and 8 follows below)

    (Not a day 0 recommendation)
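
    For items 7 and 8, the usual route is their Helm charts; a rough sketch, where the release names and namespaces are just arbitrary picks:

    ```sh
    # Rough sketch: kube-prometheus-stack and KEDA via their official Helm charts.
    # Release names and namespaces here are arbitrary.
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo add kedacore https://kedacore.github.io/charts
    helm repo update

    helm install monitoring prometheus-community/kube-prometheus-stack \
      --namespace monitoring --create-namespace

    helm install keda kedacore/keda \
      --namespace keda --create-namespace
    ```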

    The reason Kubernetes is complex (and hard to learn) is that it kind of forces you to consider all kinds of reliability and scaling issues that you may not need for a while.

    If you only have one machine, it does feel like a bit much to NEED an autoscaler.

    You can create a vanilla cron job that runs a docker container command so you don’t have to “install” anything on your node.
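
    Something along these lines in the host’s crontab, where the image name, lock file and log path are made up and `flock -n` just skips a run if the previous one is still going:

    ```sh
    # Sketch of a host crontab entry (crontab -e): run a job container every
    # minute. "myorg/minute-job:latest" and the paths are placeholders.
    * * * * * flock -n /tmp/minute-job.lock docker run --rm myorg/minute-job:latest >> /var/log/minute-job.log 2>&1
    ```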

    You can use multiple docker compose files to manage stuff independently so you can upgrade stuff without affecting other things.
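
    On the Compose side, refreshing a single service without touching the rest looks roughly like this (the service name `api` is hypothetical):

    ```sh
    # Pull the new image and recreate only the "api" service (name is made up).
    # --no-deps keeps dependent/linked services from being restarted as well.
    docker compose pull api
    docker compose up -d --no-deps api
    ```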

    I know you say you want autoscaling, but what are you autoscaling against? Is something else scaling up at different intervals? A thing to question is whether your extra instances ever need to scale down. Autoscaling is a cost-saving measure, and if you have static infrastructure with no other load, why ever scale down? Do your cron jobs take so many resources that you have to scale down your microservices? If so, you’ve got way more to consider than just plain autoscaling, and maybe you need to scale your infrastructure, in which case you’re back to questioning whether or not you need to scale down.

    I’m questioning your requirements only because, if you are just trying to “get something done”, k8s and Nomad are going to be a distraction since you aren’t already familiar with them. If learning k8s or Nomad is also part of your goal, then awesome; I would definitely suggest k3s.

    • HosonZes@alien.top (OP)

      Autoscaling is a cost-saving measure, and if you have static infrastructure with no other load, why ever scale down? Do your cron jobs take so many resources that you have to scale down your microservices?

      Need to think about this.

      I am sure that several Docker files or Compose projects will cover some of the jobs.

      Main reason I am hesitant: I am pretty sure services will fail and need to be restarted. Docker can do this. I also know I will have to meddle with cron jobs, and I could build cron into the Docker container that runs the service, but that is working around a missing “job” feature in Docker, and it also feels wrong to mix the service and the infrastructure together (self-imposed pain :D)
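
      For the plain restart part, I mean Docker’s built-in restart policies, roughly like this (names are placeholders):

      ```sh
      # Start a service container with an automatic restart policy (names are placeholders):
      docker run -d --restart=unless-stopped --name my-service myorg/my-service:latest

      # Or retrofit the policy onto a container that is already running:
      docker update --restart=unless-stopped my-service
      ```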

      One of the continuous services is one where I don’t know in advance how it will perform. It processes many concurrent jobs, which is a good thing, but I suspect I will still have to replicate this service on the same machine to utilize resources better, add a small load balancer in front of it, etc. One service instance is concurrent but still limited to a certain degree. This part will need some monitoring but also tailoring, and for this I would prefer a proper solution, albeit one with a reduced learning curve.
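
      Roughly what I have in mind; Compose can at least run several replicas of one service on the same machine (the service name is a placeholder, and a small load balancer in front would still be a separate concern):

      ```sh
      # Run three replicas of the "worker" service from the compose file
      # (name is a placeholder; don't pin a container_name or a fixed host
      # port, otherwise the replicas will collide).
      docker compose up -d --scale worker=3
      ```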

      Monitoring is a thing, too: I would love to have a dashboard to see my tiny services in action, and vanilla Docker does not provide one. There will be error streams that I am going to log into an observability platform, but I still feel I need more than Docker provides out of the box.

      Also: I love new tech, but I do not expect my tiny one-server VPS SaaS to instantly blow up so that I have to autoscale it onto multiple federated cloud providers or a fleet of VPS machines. While I do not want to waste time learning a thing I cannot properly handle alone, I would also like to learn a technology that helps me get started right now but also provides a path for later growth. That’s why I hesitate to spend time with single Docker containers and low-level fiddling inside them (the cron part): I did not find evidence that it provides that potential (even with Docker Swarm).

      • from-nibly@alien.top

        To clarify, I was talking about using the host’s native cron, not the container’s cron, and simply executing the container at the given time.

        But if you want to learn something then go with k3s. It’s incredibly simple to get going.
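
        The single-node install really is about one command; a sketch, assuming a plain Linux host:

        ```sh
        # Single-node k3s install sketch; the script from get.k3s.io sets k3s up
        # as a systemd service with a bundled kubectl.
        curl -sfL https://get.k3s.io | sh -

        # Check that the node came up:
        sudo k3s kubectl get nodes
        ```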

  • subven1@alien.top

    How about some software for server management and app hosting like cloudron.io? It is a complete and easy solution for hosting your own (Docker-based) apps, or you can just install free apps from the built-in app store. You can use Cloudron’s base image to make use of addons (services) that are already built into Cloudron, like: graphite, mailserver, mongodb, mysql, nginx, postgresql, sftp, turn, redis, ldap, oidc, recvmail, scheduler (cron), sendmail and tls, or build an app on top of the LAMP app.

    Everything is automated, from OS updates and platform + app-based backups (with persistence if needed) to proxy setup and certificates. Besides the web UI, Cloudron also provides a RESTful API to manage apps, users, groups, domains and other resources. It also has its own build service and image registry, or you could host your own GitLab/Gitea with just one click.

    Instead of real orchestration, you could maybe use automation tools like n8n or Ctfreak to achieve what you need.

    Cloudron is free for up to 2 apps, so keep that in mind, but it runs well on a VPS with as little as 2GB RAM and 25GB of disk space.

    • HosonZes@alien.top (OP)

      …disk space. You could rent 2 small servers, one for development and one to…

      This is a new option, thanks for this. Did you use this in production? I know I will have custom, tailored images running, because each of my containers (besides the database) will be my own services, and Cloudron looks like it was rather designed to pick a ready-made solution; am I understanding that correctly?

      It also says it keeps the system up to date, which again is very high-level. Usually it is some sort of Terraform to provision or Ansible to configure the machine, and abstracting those details away makes it hard for me as a tech guy to understand what they are actually doing.

      • subven1@alien.top

        Did you use this in production?

        I have run 3 Cloudron servers for many years and administer another 4, some of which are just used inside a LAN.

        Cloudron looks like it was rather designed to pick a ready-made solution; am I understanding that correctly?

        Most users will just pick apps from the store, but others like myself use Cloudron to host their own services and custom app packages. It is actually pretty easy, and there is a lot of help plus templates in the Cloudron app packaging forum to get you started.

        It also says it keeps the system up to date, which again is very high-level

        Cloudron uses neither Ansible nor Terraform; it relies on scripts and cron jobs. It uses automatic Ubuntu security updates, a firewall and a bit of OS hardening to secure the platform. You can take a look at the sources if you are curious.

  • Arioch5@alien.top

    Check out k0s (https://k0sproject.io/). The k0sctl tool can manage and upgrade the environment very easily with one command. You still have to learn Kubernetes, but this with Argo CD is pretty low-maintenance.
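
    The workflow is roughly the following; the hosts and roles in the generated config are whatever you point it at:

    ```sh
    # Sketch of the k0sctl workflow: generate a config, edit the hosts/roles,
    # then apply. The same `apply` is reused for upgrades after bumping versions.
    k0sctl init > k0sctl.yaml
    k0sctl apply --config k0sctl.yaml
    k0sctl kubeconfig --config k0sctl.yaml > kubeconfig
    ```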

    • HosonZes@alien.top (OP)

      Do you know this project? From a first look, I do not see a difference from microk8s.

      What does it do differently?

      • Arioch5@alien.top

        Yeah, I know both microk8s and k0s. Today I use k0s; I used to use microk8s. My knowledge of it was very good, but FYI I’m about 2 years out of date.

        Microk8s was designed for development and arguably works ok for production. K0s is designed for production but built to have zero dependencies and be easy to operate.

        This shows up in:

        • Microk8s defaults to dqlite as its data store; k0s uses the much more performant and battle-tested etcd.
        • K0s does scale and load testing and advertises what to expect; microk8s doesn’t.
        • Microk8s uses snaps by Canonical; k0s is statically compiled Go, so it can run almost anywhere.

        Both are backed by reputable open source companies. Canonical (Ubuntu) for microk8s and Mirantis (very active in OpenStack and k8s ecosystem) for k0s.

        Both are fine; I find k0s more appropriate outside of dev environments, but choose what you like.