Curious as to why we wipe docker during the uninstall? Can we add an option to *not* do this? This leaves the system in a broken state.

[root@metal ~]# container-storage-setup
INFO: Wipe Signatures is set to true. Any signatures on /dev/sdb will be wiped.
/dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
/dev/sdb: calling ioclt to re-read partition table: Success
INFO: Device node /dev/sdb1 exists.
/dev/sdb1: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
  Physical volume "/dev/sdb1" successfully created.
  Volume group "docker-vg" successfully created
ERROR: Docker has been previously configured for use with devicemapper graph driver. Not creating a new thin pool as existing docker metadata will fail to work with it. Manual cleanup is required before this will succeed.
INFO: Docker state can be reset by stopping docker and by removing /var/lib/docker directory. This will destroy existing docker images and containers and all the docker metadata.

My preference is for the uninstall to leave docker alone.

-Nick
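For anyone else hitting the ERROR above, the manual cleanup the INFO message describes looks roughly like this (a sketch only; the docker-storage path is the RHEL default and may differ on your distro, and this destroys all local images, containers, and docker metadata):

```shell
# Sketch of the manual reset container-storage-setup asks for.
# WARNING: wipes all local docker images, containers, and metadata.
systemctl stop docker
rm -rf /var/lib/docker              # remove docker state and devicemapper metadata
> /etc/sysconfig/docker-storage     # clear old storage options (RHEL default path; adjust as needed)
container-storage-setup             # re-run setup against the now-clean state
systemctl start docker
```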
Even 3.9.30 has the same "feature", which caused some issues during my POC (mostly not knowing how to wipe properly). I think it's best to leave docker storage outside the installer.
(In reply to Nicholas Nachefski from comment #0)
> Curious as to why we wipe docker during the uninstall? Can we add an option
> to *not* do this?

Our playbooks set it up and remove it during uninstall, which seems fair to me.

> This leaves the system in a broken state.

What's the broken state you're observing?
Not really. We (the tiger team) request a secondary raw device from customers for the docker-pool (Option A). As such, we configure container-storage-setup manually. What happens is that during the uninstall you get "Device or Resource Busy" or something like that, so I am unable to re-install after the uninstall without manually fixing the docker daemon (on all nodes). The desire is for me to be able to run deploy_cluster.yaml immediately after uninstall.yaml. That is *not* the case right now.
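For context, the Option A configuration is just the stock container-storage-setup config file pointed at the customer-supplied raw device (device name below is an example; a config fragment, not something to copy verbatim):

```shell
# /etc/sysconfig/docker-storage-setup  (device name is site-specific)
DEVS=/dev/sdb         # secondary raw device supplied by the customer
VG=docker-vg          # volume group that will hold the docker thin pool
WIPE_SIGNATURES=true  # matches the behaviour shown in comment #0
```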
I'm fine with addressing the need here, but this is relatively low priority for us so a pull request would speed things along.
Here's an error for you (during an uninstall):

failed: [node01.ocp.nicknach.net] (item= docker-vg) => {
    "changed": true,
    "cmd": ["vgremove", "-f", "docker-vg"],
    "delta": "0:00:00.013604",
    "end": "2018-06-25 08:22:42.642895",
    "failed": true,
    "item": " docker-vg",
    "msg": "non-zero return code",
    "rc": 5,
    "start": "2018-06-25 08:22:42.629291",
    "stderr": "  Logical volume docker-vg/docker-pool is used by another device.",
    "stderr_lines": ["  Logical volume docker-vg/docker-pool is used by another device."],
    "stdout": "",
    "stdout_lines": []
}

Also, have you considered what happens on an Atomic install? Atomic comes pre-configured with docker running. Are you telling me that the uninstall playbook will rip out something that is pre-configured from the OS deployment? That seems wrong.

-Nick
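FWIW, the "used by another device" failure usually means a device-mapper target still references the thin pool. A rough way to diagnose and clear it on the node before retrying the uninstall (a sketch; the dm names below are examples, check your own `dmsetup` output first):

```shell
# Find what still holds the thin pool (names below are examples).
lsblk /dev/mapper/docker--vg-docker--pool   # list any holders of the pool
dmsetup ls --tree                           # show device-mapper dependencies
# Remove the stale dm device that pins the pool (substitute the real
# holder name from the output above), then retry the removal:
dmsetup remove <stale-dm-device>
vgremove -f docker-vg
```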
Let's consolidate discussion in Bug 1569553.

*** This bug has been marked as a duplicate of bug 1569553 ***