How do you back up Docker containers?

Can someone tell me the best ways to back up Docker containers?

I use Proxmox, and backing up and rolling back LXC containers is so damn easy, but I have had no such easy experience with Docker containers.

I was running Docker in a Debian-based LXC before. I thought, why not switch to an Alpine-based LXC, but I ran into issues with systemd not being present in Alpine, so I am thinking of going back to a Debian-based host for Docker. But how should I export and import all of my containers, data and everything else in a few clicks or commands?

Not sure about the best.

I’m using the Backrest Docker container, which is a web UI for restic: GitHub - garethgeorge/backrest: Backrest is a web UI and orchestrator for restic backup.

I have been using the Backrest restore option whenever I end up messing up the configuration of my Docker containers.
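For anyone who hasn’t used it, this is roughly what restic does under the hood of Backrest. A minimal sketch; the repo path, password file and data directory here are made up, adjust them to your setup:

```shell
# Made-up locations -- point these at your NAS and your container data dir.
export RESTIC_REPOSITORY=/mnt/nas/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic init                                   # once, to create the repository
restic backup /srv/docker                     # incremental snapshot of container data
restic snapshots                              # list what you can roll back to
restic restore latest --target /srv/restore   # put things back after a mess-up
```

Backrest just drives these same operations on a schedule through its web UI.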

1 Like

Restic is the way to go.

Don’t use any gui.

2 Likes

Put them in a Debian VM (from the start) and back that up?

Migrating Docker between hosts is not so simple. I check my bash history to save the docker run commands to a text file, and then manually back up the directories that are mounted as volumes.

I haven't figured out docker compose yet.
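The manual approach above can be scripted into a dated tarball. A rough sketch; all paths, the image name and the file contents are invented for the example:

```shell
# Stand-ins for real bind-mounted data dirs and the saved run commands.
mkdir -p /tmp/demo/docker-data/app
echo 'state' > /tmp/demo/docker-data/app/settings.conf
echo 'docker run -d -v /tmp/demo/docker-data/app:/config myapp' > /tmp/demo/run-commands.txt

# Ideally stop the containers first so the data is consistent, then archive
# both the volume dirs and the text file needed to recreate the containers.
tar czf /tmp/demo/backup.tar.gz -C /tmp/demo docker-data run-commands.txt
tar tzf /tmp/demo/backup.tar.gz
```

Restoring on a new host is then just extracting the tarball and replaying the saved commands.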

1 Like

It's really irritating, to be honest. I'm not well versed with these things and take help from Gemini. Every time I've had to reinstall the OS on my Pi, the same Docker setup doesn't work and Gemini gives me different YAML files, so I had to copy the working ones and send them to my Gmail as text.

1 Like

I prefer to create the root data directory for each container on a disk volume that is backed up.

Compose or Portainer helps in visualising things better.

So you just back up your compose file in a private repo.

Env files and data files go on a local NAS or a Backblaze-like service.

Backing up the whole VM is kinda overkill, but if it works, it works.
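One way to lay this split out on disk (all names here are made up): compose files go in the private git repo, env files and data dirs are excluded from it and backed up separately:

```shell
# Hypothetical layout: one directory per stack.
mkdir -p /tmp/homelab/plex/data

# Keep secrets and bulky state out of the repo.
cat > /tmp/homelab/.gitignore <<'EOF'
*.env
*/data/
EOF

touch /tmp/homelab/plex/docker-compose.yml   # tracked in the private repo
touch /tmp/homelab/plex/plex.env             # secrets: NAS / Backblaze only
find /tmp/homelab                            # data/: restic or rsync target
```

With that split, losing the host only costs you a `git clone` plus a restore of the data dirs.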

2 Likes

I may need to relearn Docker; I just learned about docker run and built everything on top of that.

Did you teach yourself, or did you find something to help you learn?

1 Like

I was in a similar situation a few weeks ago and used rsync to copy volumes between containers. Almost all stacks were recovered.

Can share details by EOD

P.S. I was also running Portainer.

2 Likes

YouTube and Reddit.

Christian, TechnoTim, NetworkChuck.

And /r/selfhosted.

I think there are tonnes of other boilerplates available on GitHub, but I prefer to mix and match mostly and keep it dead simple without any automation.

1 Like

Yes, backing up the whole LXC or the whole host is way overkill. I don't have too many containers; I kinda prefer LXC just because it's easy to maintain, watch and back up.

I use docker compose. @rsaeon, docker run is messy; compose creates a stack, which is easier to maintain if you have a lot of containers of a similar kind, and it's also easier to change things. For example, if you want to change the ports of a container, you can just change the port in the YAML config and run "docker compose up -d".
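For anyone still on docker run, a minimal (made-up) compose file to illustrate; the left side of each `ports` entry is the host port, the right side is the container port:

```shell
# Writing the file out so it can be inspected; normally you'd just edit it.
mkdir -p /tmp/stack
cat > /tmp/stack/docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # change 8080 here, then run `docker compose up -d`
    volumes:
      - ./web-data:/usr/share/nginx/html
EOF
cat /tmp/stack/docker-compose.yml
```

After editing, `docker compose up -d` recreates only the containers whose config changed, which is what makes tweaks so painless.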

I kinda go a step further: I expose the Docker socket via TCP and have a VS Code server hosted. I point that code server at the socket and manage my compose files as well as my containers directly from it.

The problem comes if I decide to change hosts, or if I want to back up individual containers. At one point I was thinking of going the Debian or Ubuntu way and just running Docker for every app, but I couldn't imagine manually backing up a Plex volume just to retain the setup and progress of things. It's just stupid, so I stayed with PVE :slight_smile:

I am gonna change servers now, so I decided it's time to change my Docker host lol. It's gonna be a small pain, but I guess I'll be okay with a one-time rsync. Apart from that, my Docker containers are really barebones, no backup whatsoever; I'll have to go through setting things up again if I lose them.

Some irony: I kinda went straight to some complex Docker things like exposing the Docker socket and such, but never looked at what Portainer provides lol. All I use Portainer for is checking my containers and restarting them; apart from that it always looks intimidating to me.

1 Like

It looks like I'll need to sit down and learn docker compose for my homelab, at least. I use Docker networking with macvlan to give each container a unique IP so I can stick to using default ports; I'm hoping that'll translate over as well.
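For reference, this is roughly what that macvlan setup looks like on the CLI. A sketch only; the subnet, gateway, interface name, image and IPs are assumptions, match them to your LAN before running:

```shell
# One macvlan network bridged onto the physical NIC (names made up).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan

# Each container gets its own LAN IP, so default ports never collide.
docker run -d --name dns --network lan --ip 192.168.1.53 pihole/pihole
```

The same network can be referenced from compose as an external network (`networks: { lan: { external: true } }` with `ipv4_address` per service), so the unique-IP scheme should carry over when moving from docker run to compose.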

Backing up an entire VM isn’t so bad with snapshots.

My web scrapers used to depend heavily on spinning up thousands of containers across multiple hosts with docker run programmatically, and I had just copied that behaviour for the homelab.

1 Like

Say I have Plex installed as a Docker container, plus other containers that are big in size but not nearly as important as Plex. If I want multiple timed backups, I'll have to back up the whole system, and each backup will include the other containers too. Then if I want to revert to a certain backup of Plex, the changes made to the other containers will also get reverted.

It can be done if all you want is just one incremental backup, but the argument here is that it's not ideal.

Also, did you know that you can map ports so that the internal port stays as-is and only appears as something else on the host machine? Say I have DNS hosted in Docker on port 53. The mapping is host:container, so with "6969:53" clients use host:6969 but it stays port 53 inside the container.
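In compose syntax that remapping looks like this; the image and port numbers are just for illustration:

```shell
# Writing the fragment to a file so the mapping is easy to inspect.
cat > /tmp/dns-compose.yml <<'EOF'
services:
  dns:
    image: pihole/pihole
    ports:
      - "6969:53/udp"    # clients hit host:6969, container still sees port 53
      - "6969:53/tcp"
EOF
grep -A 2 ports /tmp/dns-compose.yml
```

Nothing inside the container changes, so the app's own config can keep assuming its default port.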

I personally back up my LXCs to a NAS.

But for your use case, I guess you could commit the compose, env and data directories to version control? Then you can just recreate the relevant container from previous config files.
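A sketch of that idea (the repo path and file contents are made up); every config change becomes a commit you can roll a single container back to:

```shell
# Hypothetical repo holding just the compose files.
mkdir -p /tmp/compose-repo && cd /tmp/compose-repo
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'services:\n  dns:\n    image: pihole/pihole\n' > docker-compose.yml
git add docker-compose.yml
git commit -qm 'dns: initial compose'

# Reverting one service is then a checkout of an old revision of its file,
# followed by `docker compose up -d` -- no full-system restore needed.
git log --oneline
```

This keeps per-container history cheap, while the bulky data dirs stay out of git and get backed up by other means.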

Another solution I can think of is using rsync or Syncthing.

2 Likes

Syncthing is a great idea. I'm gonna look into restic for now, and for the migration I'll use rsync (the only method I know right now).

If I still feel the need to go hardcore, I'll do Syncthing.

1 Like