Description of problem:
A cinder backup using NFS will be overwritten if the same container is used.

Version-Release number of selected component (if applicable):
openstack-cinder-7.0.2-2.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Attach the volume called rhel_7 to the instance. Create a file called first1.txt in the volume.
2. # cinder backup-create --name rhel_7_1 rhel_7 --force --container volumebackups
3. Create a file called second2.txt in the volume.
4. # cinder backup-create --name rhel_7_2 rhel_7 --force --container volumebackups
5. Create a file called third3.txt in the volume.
6. Detach the volume from the instance and restore the volume from rhel_7_1.
7. Attach the volume again. You will find both first1.txt and second2.txt in the volume, but rhel_7_1 should only contain first1.txt.

Actual results:
The volume backup is overwritten.

Expected results:
There should be separate directories for different backups.

Additional info:
In Horizon, if I don't specify the container, a default container is used and the backups are overwritten.

backup_driver = cinder.backup.drivers.nfs
backup_share = 10.72.32.49:/home/nfsshare
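For anyone reproducing this, the collision is also visible directly on the NFS share. Below is a minimal inspection sketch; the local mount point and the directory layout are assumptions based on the backup_share value above, not verified paths.

# Illustrative sketch only: list the container directory on the NFS share to
# see that both backups write into the same path. The mount point below is an
# assumption based on the backup_share value in this report.
import os

NFS_MOUNT = '/mnt/nfsshare'      # assumed local mount of 10.72.32.49:/home/nfsshare
CONTAINER = 'volumebackups'      # container name used for both backups

container_dir = os.path.join(NFS_MOUNT, CONTAINER)
for name in sorted(os.listdir(container_dir)):
    path = os.path.join(container_dir, name)
    print(name, os.path.getsize(path), os.path.getmtime(path))
# If the second backup overwrote the first, the chunk files here carry only a
# single set of timestamps instead of two distinct backups.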
I tested this and can confirm it is a problem. For now, container names specified on the command line for this configuration should be unique rather than a shared name like "volumebackups".

This should be fixed by either:
a) creating separate directories under the container name for each backup, or
b) at least failing new backups rather than overwriting data.
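In the meantime, a caller-side stopgap in the spirit of option (a) is to generate a unique container name per backup. A minimal sketch, assuming the same cinder CLI used in the reproduction steps is available; the uuid-based naming scheme is only an example, not the eventual fix.

# Workaround sketch: give every backup its own container name so the NFS/posix
# driver cannot overwrite an earlier backup. The naming scheme here is only an
# example, not the upstream fix.
import subprocess
import uuid

def backup_volume(volume, backup_name):
    container = 'volumebackups-%s' % uuid.uuid4()
    subprocess.check_call([
        'cinder', 'backup-create',
        '--name', backup_name,
        '--container', container,
        '--force',
        volume,
    ])
    return container

# Example: backup_volume('rhel_7', 'rhel_7_1')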
This problem is reproducible with the posix driver, which means that not only the NFS driver but probably also the GlusterFS driver can overwrite data in the same container.

It seems that the Swift driver creates a unique prefix within the container:

https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/swift.py#L324

Distinguishing containers by a unique, separate ID is absolutely required. Failing the backup operation would also be desirable, to avoid the possibility of an ID conflict.
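To make the Swift comparison concrete, the idea is roughly the following. This is a paraphrase of the unique-prefix approach, not the actual upstream code, and the exact format is an assumption.

# Paraphrased sketch of the "unique prefix" idea used by the Swift backup
# driver (not the actual upstream code): deriving the prefix from the volume
# id, a timestamp and the backup id means two backups can never collide.
from datetime import datetime

def generate_object_name_prefix(volume_id, backup_id, availability_zone='nova'):
    timestamp = datetime.utcnow().strftime('%Y%m%d%H%M%S')
    return 'volume_%s/%s/az_%s_backup_%s' % (
        volume_id, timestamp, availability_zone, backup_id)

# e.g. volume_3f.../20160711093000/az_nova_backup_7c...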
(In reply to Junko IKEDA from comment #3)
> This problem is reproducible with the posix driver, which means that not
> only the NFS driver but probably also the GlusterFS driver can overwrite
> data in the same container.
>
> It seems that the Swift driver creates a unique prefix within the container:
>
> https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/
> swift.py#L324
>
> Distinguishing containers by a unique, separate ID is absolutely required.
> Failing the backup operation would also be desirable, to avoid the
> possibility of an ID conflict.

Dear IKEDA-san,

Sorry for jumping in; I've also checked the upstream code.

In my understanding, the NFS driver only overrides the Posix driver's __init__ to supply backup_path, and the Posix driver in turn builds on the chunked driver; the Posix driver still does not generate a unique per-backup prefix. I've checked further, but so far I could not find any blueprint for this feature either.

If my current understanding is correct, the fix needs to land upstream first and then be backported downstream, so you may have to wait for the upstream fix before requesting a fix on this rhbz; we would recommend tracking the upstream changes first.

Kind Regards,

Reference:
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/chunkeddriver.py
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/posix.py#L74,#L78
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/nfs.py#L55,L57,L58
- https://specs.openstack.org/openstack/cinder-specs/specs/kilo/nfs-backup.html
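To make the inheritance chain in those references easier to follow, here is how I read it, as a heavily simplified sketch; the class and method bodies below are illustrative assumptions, not the real Cinder implementations.

# Heavily simplified sketch of the inheritance chain in the files referenced
# above, as I read it; it illustrates where the container/prefix handling
# lives, not the actual Cinder code.

class ChunkedBackupDriver(object):
    """Splits a volume into chunks and stores them under a container name."""

    def _create_container(self, backup):
        # The caller-supplied container name is used as-is.
        return backup.get('container') or 'backups'

    def _generate_object_name_prefix(self, backup):
        raise NotImplementedError()


class PosixBackupDriver(ChunkedBackupDriver):
    """Writes the chunks into a local directory."""

    def __init__(self, backup_path):
        self.backup_path = backup_path

    def _generate_object_name_prefix(self, backup):
        # Every backup in the same container shares one object name prefix,
        # which is why reusing a container name overwrites earlier backups.
        return 'backup'


class NFSBackupDriver(PosixBackupDriver):
    """Mounts the NFS share and hands the mount point to the posix driver."""

    def __init__(self, backup_share, mount_point):
        # The real driver mounts backup_share here; the resulting local path
        # becomes backup_path for the posix driver.
        super(NFSBackupDriver, self).__init__(backup_path=mount_point)
        self.backup_share = backup_share


driver = NFSBackupDriver('10.72.32.49:/home/nfsshare', '/mnt/nfsshare')
backup = {'container': 'volumebackups'}
print(driver._create_container(backup))             # 'volumebackups' every time
print(driver._generate_object_name_prefix(backup))  # 'backup' every time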
All,

I'm changing the summary based on my understanding in comment #4. Please feel free to revert it if I have misunderstood, and please share any concerns, upstream fixes, or anything else related to the fix if you have them.
(In reply to Masaki Furuta from comment #5)
> All,
>
> I'm changing the summary based on my understanding in comment #4. Please
> feel free to revert it if I have misunderstood, and please share any
> concerns, upstream fixes, or anything else related to the fix if you have
> them.

Hi Eric,

We got an additional concern from the customer, as follows:

"""
As for this problem, we did an additional test internally on our side and found that the code does not guarantee namespace uniqueness for the backup container name, so the name is not unique per tenant. This means we cannot prevent this command from overwriting backups belonging to other tenants (and other users). Furthermore, if this is true, we think this is a kind of vulnerability: a malicious user could overwrite an existing backup by creating backups under an arbitrary container name, destroying existing backup data.
"""

Could we get your input? I would really appreciate an answer to this customer's concern.
Moving to 10.0.z per discussion with eharney.
Hi Eric,

Is there any progress on this bug? Shall we backport it to OSP8?

Best Regards,
Chen
Hi,

Is there any progress on this bug? Will both OSP10 and OSP8 get the fix?

Best Regards,
Chen
The fix for this bug is still under development upstream. Can we advise the customer to not use the "--container" option for now? Is it a requirement here?
Hi Eric,

Thank you for your reply. As I said in the bug description:

- In Horizon, if I don't specify the container, a default container is used and the backups are overwritten.

This looks like a security issue to the customer, and you cannot ask every end user to enter a unique container name when backing up a volume.

Best Regards,
Chen
Hi,

Is there any progress on this bug?

Best Regards,
Chen
Patch is in progress and linked in the external trackers field of this bug.
Hi Paul,

Thank you for your reply. The Solution Architect informed me that "the customer requests a hotfix for OSP10, since they are currently testing OSP10." So it would be much appreciated if you could help us with the hotfix for OSP10 first and then OSP8 later.

Thank you very much!

Best Regards,
Chen
The fix was verified according to the steps described in the bug description; all 3 different backups restored successfully.
Tested and automated downstream
Verified with openstack-cinder-7.0.3-7.el7ost.noarch.

The reproduction steps produced the expected results: only the first1.txt file was found on the restored volume.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1743