Do you have the examples of this so I can take a look? Was it ports forwarded that were opened to all cloudflare ranges, or tunnels and a backend exploit?
That’s both a really honest answer and a good reason to use it depending on the person. Nice work.
Try to not run containers as root?
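One way to do that with Compose is the `user` key, which runs the container process as a non-root user. A minimal sketch, assuming a host UID/GID of 1000 and a hypothetical image name:

```yaml
services:
  app:
    image: myapp:latest        # hypothetical image
    user: "1000:1000"          # run as a non-root host UID:GID (assumed)
    volumes:
      # mount app data read-only where the app allows it
      - ./appdata:/data:ro
```

Note the image has to tolerate running as an arbitrary UID (writable paths, ports above 1024, etc.), so this won't work unmodified for every container.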
I admit there's a level of trust needed in Cloudflare, but I also have to trust the container maintainers and the hardware manufacturers. I use Cloudflare with O365 and JumpCloud as my auth sources and I've been thrilled. Different policies per subdomain; works great.
Honestly my load is so light I don't bother monitoring performance. Uptime Kuma for uptime; I used to use PRTG and UptimeRobot when I ran a heavier stack, before I switched to an all-Docker workload.
If I were running a business, I would absolutely do proper DB backups. In my case, most of my systems are pretty simple (a base config), so I can survive DB corruption. That said, I've done MongoDB and SQLite recoveries from the rsyncs with no issues. I do have backups of my FreshRSS feeds and config, which get backed up to GitHub as an artifact. Same with my WireGuard config.
I rsync my bind mounts to another partition that takes daily snapshots. Everything is in a pretty stable state (no heavy write-intensive DBs). My compose files are all in GitHub, so my data and configs are all backed up.
Mine are updated automatically, but I also have a robust data backup strategy. When something has gone wrong (and it has), I just had to change my compose file and restore the data.
Yeah, might be for the best.
Do you have any auth in Cloudflare? If so, that mitigates a lot of zero-days: first they have to get past Cloudflare, then hit a zero-day in your nginx.
I do agree, they should use the same address space for ingress and egress. I'd hope tunnels would be immune to that, but perhaps not.