When it comes to app management, the Unix way is the way: individual tools that you can spin up or take down at will. When something needs maintenance, or it crashes, all your other services stay running.
As for the attack surface, that can be mitigated by using a VPN if you need access from outside your network. Then you’re back to an individual tool doing one job really well (and probably being audited for exactly that by outside parties).
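To make that concrete, here’s a minimal Docker Compose sketch of the pattern; the service names and images are placeholders, not recommendations:

```yaml
# Two independent services; each can be stopped, updated, or
# restarted without touching the other.
services:
  wiki:
    image: example/wiki:latest   # placeholder image
    restart: unless-stopped
    ports:
      - "8080:80"

  rss:
    image: example/rss:latest    # placeholder image
    restart: unless-stopped
    ports:
      - "8081:80"
```

With this layout, `docker compose stop wiki` (or pulling a new image and recreating just that one service) leaves `rss` untouched, which is exactly the availability property described above.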
Databases are a slightly different challenge, but you can think about them in a similar way when it comes to service availability: if all of your services share a single database instance, then whenever that instance goes down for maintenance or backups, all of your services go offline with it.
Similarly, there can be compatibility issues between a service and a specific database (and/or database version), which can necessitate a dedicated database for that service. But if each service (or a group of closely related services) uses its own database instance, maintenance of that stack is simplified.
I’m of the mind that, with the flexibility of containerized software stacks, there’s no real reason to run a single monolithic database anymore; certainly not for small, self-hosted applications that aren’t under heavy use.
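As a rough sketch of what that looks like in practice (the images, version tags, and the `DB_HOST` variable are illustrative; real apps document their own settings):

```yaml
# Each app gets its own dedicated database instance,
# pinned to the version that app actually supports.
services:
  app-a:
    image: example/app-a:latest    # placeholder image
    environment:
      DB_HOST: app-a-db            # hypothetical setting; each app defines its own
    depends_on:
      - app-a-db

  app-a-db:
    image: postgres:15             # pinned to the version app-a supports
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - app-a-db-data:/var/lib/postgresql/data

  app-b:
    image: example/app-b:latest    # placeholder image
    environment:
      DB_HOST: app-b-db
    depends_on:
      - app-b-db

  app-b-db:
    image: postgres:13             # a different app may need a different major version
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - app-b-db-data:/var/lib/postgresql/data

volumes:
  app-a-db-data:
  app-b-db-data:
```

Backing up or upgrading `app-a-db` only affects `app-a`, and `app-b` stays online, pinned to the database version it was actually tested against.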
The best place to start, in my opinion, is the layered model of networking. The modern internet is built on the TCP/IP model, which splits networking into four layers (link, internet, transport, and application); the Wikipedia article on the Internet protocol suite is a great introduction.
The links in that article under each layer’s section go into more detail if you want to go deeper.