| Summary: | Cinder backup using NFS backend will be overwritten if same container is used |
|---|---|
| Product: | Red Hat OpenStack |
| Reporter: | Chen <cchen> |
| Component: | openstack-cinder |
| Assignee: | Gorka Eguileor <geguileo> |
| Status: | CLOSED ERRATA |
| QA Contact: | Avi Avraham <aavraham> |
| Severity: | urgent |
| Priority: | urgent |
| Docs Contact: | |
| Version: | 8.0 (Liberty) |
| CC: | acanan, cchen, ccollett, egafford, eharney, geguileo, ikedajnk, kajinamit, kawamurayus, knakai, lkuchlan, lmiccini, mfuruta, pgrist, scohen, skinjo, srevivo, tbarron, tshefi, vcojot |
| Target Milestone: | zstream |
| Keywords: | Triaged, ZStream |
| Target Release: | 8.0 (Liberty) |
| Flags: | lkuchlan: automate_bug+ |
| Hardware: | All |
| OS: | Linux |
| Whiteboard: | |
| Fixed In Version: | openstack-cinder-7.0.3-7.el7ost |
| Doc Type: | Bug Fix |
| Doc Text: | Previously, backups on the NFS back end shared the same "backup" prefix. As a result, if multiple backups shared the same container, they would end up overwritten. With this update, the backup prefix is now generated from the backup UUID and the volume ID, so that backup data is no longer overwritten. |
| Story Points: | --- |
| Clone Of: | |
| : | 1456381 1456384 1456387 (view as bug list) |
| Environment: | |
| Last Closed: | 2017-07-12 13:16:54 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Bug Depends On: | 1456381, 1456384, 1456387 |
| Bug Blocks: | |
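The fix described in the Doc Text field (a backup prefix generated from the backup UUID and the volume ID) can be sketched as follows. This is a simplified illustration, not the actual Cinder code: the function name and the exact prefix layout here are hypothetical; only the idea of deriving the prefix from the two IDs comes from the bug record.

```python
import uuid


def generate_backup_prefix(volume_id: str, backup_id: str) -> str:
    # Hypothetical sketch: combine the volume ID and the backup UUID so
    # that every backup gets a distinct object-name prefix even inside a
    # shared container. Before the fix, the prefix was the constant
    # "backup", so a new backup's chunks replaced the previous one's.
    return f"backup-vol_{volume_id}-id_{backup_id}"


# Two backups of the same volume into the same container no longer collide:
vol = str(uuid.uuid4())
p1 = generate_backup_prefix(vol, str(uuid.uuid4()))
p2 = generate_backup_prefix(vol, str(uuid.uuid4()))
assert p1 != p2
```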
Description
Chen
2016-08-28 14:48:02 UTC
I tested this and can confirm it is a problem. For now, container names specified on the command line for this configuration should be unique rather than "volumebackups". This should be fixed by either: a) creating separate directories under the container name for each backup, or b) at least failing new backups rather than overwriting data.

This problem is reproducible with the POSIX driver, which means that not only the NFS driver but probably also the GlusterFS driver can overwrite data in the same container.

The Swift driver, by contrast, appears to create a unique prefix within the container:

https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/swift.py#L324

Distinguishing backups by a unique, separate ID is absolutely required. Failing the backup operation is also desirable, to avoid the possibility of an ID conflict.

(In reply to Junko IKEDA from comment #3)

Dear IKEDA-san,

Sorry for jumping in; I have also checked the upstream code. In my understanding, the NFS driver overrides the POSIX driver's __init__ with backup_path, and the POSIX driver in turn builds on the chunked driver, which still does not generate a unique prefix. I checked further, but as far as I can tell there is no blueprint for this feature upstream yet either. If my understanding is correct, the fix is needed in the upstream code first and will then be backported downstream, so you may have to wait for the upstream fix before requesting one in this Bugzilla; we recommend tracking the upstream changes first.
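The overwrite discussed above can be demonstrated with a small stand-alone simulation. This is not the actual chunked-driver code, just a sketch of its file layout: each backup writes its chunks as files named `<prefix>-<index>` inside the container directory, so when every backup uses the same static prefix, the second backup clobbers the first.

```python
import os
import tempfile


def write_backup(container_dir, prefix, chunks):
    # Simplified stand-in for the chunked backup driver: each chunk is a
    # file named "<prefix>-<index>" inside the container directory.
    os.makedirs(container_dir, exist_ok=True)
    for i, chunk in enumerate(chunks):
        with open(os.path.join(container_dir, f"{prefix}-{i:05d}"), "w") as f:
            f.write(chunk)


container = os.path.join(tempfile.mkdtemp(), "volumebackups")

# Pre-fix behavior: both backups use the static prefix "backup".
write_backup(container, "backup", ["first volume data"])
write_backup(container, "backup", ["second volume data"])

with open(os.path.join(container, "backup-00000")) as f:
    print(f.read())  # prints "second volume data" -- the first backup is gone
```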
Kind Regards

References:
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/chunkeddriver.py
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/posix.py#L74,#L78
- https://github.com/openstack/cinder/blob/stable/mitaka/cinder/backup/drivers/nfs.py#L55,L57,L58
- https://specs.openstack.org/openstack/cinder-specs/specs/kilo/nfs-backup.html

All,

I am changing the summary based on my understanding in comment #4. Please feel free to revert it if I had the wrong idea, and please share any concerns, the upstream fix, or anything else related to the fix.

(In reply to Masaki Furuta from comment #5)

Hi Eric,

We received an additional concern from the customer:

"""
We did additional testing internally and found that the code does not guarantee namespace uniqueness for the backup name, so the backup name is not unique per tenant. This means we cannot prevent this command from overwriting backups belonging to other tenants (and other users). Furthermore, if this is true, we think this is a kind of vulnerability: a malicious user could overwrite an existing backup by taking backups under colliding names, effectively deleting the existing backup data.
"""

Could we get your input? We would really appreciate an answer to this customer's concern.

Moving to 10.0.z per discussion with eharney.

Hi Eric,

Is there any progress on this bug? Shall we backport it to OSP8?

Best Regards,
Chen

Hi,

Is there any progress on this bug? Will both OSP10 and OSP8 get the fix?

Best Regards,
Chen

The fix for this bug is still under development upstream. Can we advise the customer not to use the "--container" option for now? Is it a requirement here?

Hi Eric,

Thank you for your reply.
As I said in the bug description:

- In Horizon, if I don't specify a container, a default container is used and volumes are overwritten. This looks like a security issue to the customer, and you cannot ask every end user to enter a unique container name when backing up a volume.

Best Regards,
Chen

Hi,

Is there any progress on this bug?

Best Regards,
Chen

The patch is in progress and linked in the external trackers field of this bug.

Hi Paul,

Thank you for your reply. The Solution Architect informed me that "the customer requests to have the hotfix for OSP10, since they currently test OSP10." It would be much appreciated if you could help us with the hotfix for OSP10 first and then OSP8 later. Thank you very much!

Best Regards,
Chen

The fix was verified according to the steps described in the bug description; all 3 different backups were restored successfully.

Tested and automated downstream.

Verified with openstack-cinder-7.0.3-7.el7ost.noarch. The reproduction steps produced the expected results: only the first1.txt file was found on the restored volume.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1743
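For comparison, the verified post-fix behavior ("all 3 different backups restored successfully") can be mimicked by giving each backup its own prefix. Again, this is a simplified sketch of the idea rather than Cinder itself; the prefix format is hypothetical.

```python
import os
import tempfile
import uuid

container = os.path.join(tempfile.mkdtemp(), "volumebackups")
os.makedirs(container, exist_ok=True)

# Post-fix behavior: each backup's chunks carry a unique per-backup prefix,
# so three backups into the same shared container coexist on disk.
for data in ("first backup", "second backup", "third backup"):
    prefix = f"backup-{uuid.uuid4()}"
    with open(os.path.join(container, f"{prefix}-00000"), "w") as f:
        f.write(data)

print(len(os.listdir(container)))  # prints 3 -- nothing was overwritten
```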