Bug 1738651
| Summary: | VDO should provide accurate feedback when trying to start or stop an already started or stopped volume | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Andy Walsh <awalsh> |
| Component: | vdo | Assignee: | Joe Shimkus <jshimkus> |
| Status: | CLOSED ERRATA | QA Contact: | Filip Suba <fsuba> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 8.0 | CC: | awalsh, bgurney, jshimkus, nkshirsa |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | 6.2.2.12 | Doc Type: | Bug Fix |
| Doc Text: | Cause: Incomplete message when starting an already running VDO instance.<br>Consequence: Implied success of any requested operation, or of expected modifications that only take place when actually starting an instance.<br>Fix: Expand the message to indicate that no modifications to the VDO instance took place. The message is logged at warning level; the command exits with a success status code.<br>Result:<br>`Starting VDO <instance-name>`<br>`VDO service <instance-name> already started; no changes made` | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-04-28 16:43:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Let me correct the initial report, which shows the counter for the VDO instance incrementing. That is not accurate; it is a typo from transcribing terminal output. Here is some output showing a series of starts and stops run right after one another:

```
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
[root@localhost ~]# vdo stop --name vdo0
Stopping VDO vdo0
```

I tested a case where a VDO volume is stopped after being in normal operating mode, then tried to run `vdo start --forceRebuild`:

```
# vdo start --name=vdo1 --forceRebuild; echo $?
Starting VDO vdo1
vdo: ERROR - Device vdo1 not read-only
vdo: ERROR - Could not set up device mapper for vdo1
vdo: ERROR - vdoforcerebuild: forceRebuild failed on '/dev/disk/by-id/nvme-DEVID_REDACTED': VDO Status: The device is not in read-only mode
```

This is good; it properly detects that a force rebuild does not need to be performed.

I think the problem is if VDO went read-only due to some issue with the storage. In that case, a force rebuild does not even show any errors if we run it with the device active but read-only. It appears to go through, but it does not actually do anything unless we stop the device first and then force the rebuild.

The vdo manager was checking whether the VDO instance was already running without regard to the possibility that the user had requested a forced rebuild. This resulted in the start completing successfully without actually performing the forced rebuild, providing a misleading indication to the user, at best. The vdo manager now modifies the existing "VDO already started" message to include a clause indicating that no changes were made, and logs the message at warning level. The pre-existing "VDO already stopped" message is also changed from info level to warning level to provide consistency across all start/stop scenarios. In all cases the actual start/stop exits with a success status.

Mass migration to Filip.

Verified with 6.2.2.117-13.el8:

```
# vdo start --name vdo0
Starting VDO vdo0
vdo: WARNING - VDO service vdo0 already started; no changes made
# vdo stop --name vdo0
Stopping VDO vdo0
vdo: WARNING - VDO service vdo0 already stopped
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1782
Description of problem:

Attempting to start a VDO volume via `vdo start` when it is already started should be an error condition indicating that the volume is already started. Instead, VDO Manager outputs the startup messages again, which leads to confusion about whether the settings are being changed or whether, say, --forceRebuild is being applied to the start operation. The same applies to the `vdo stop` process.

Version-Release number of selected component (if applicable):
vdo-6.2.1.134-11.el8

How reproducible:
100%

Steps to Reproduce:
1. Create a VDO volume: `vdo create --name vdo0 --device /dev/sda`
2. Verify that the VDO volume is started: `vdo status -n vdo0 | grep 'Device mapper status'` (this should return a line similar to 'Device mapper status: 0 29302728 vdo /dev/loop0 normal - online online 1313712 4718592')
3. Try to start the VDO volume again: `vdo start --name vdo0`

Similarly:
1. Create a VDO volume: `vdo create --name vdo0 --device /dev/sda`
2. Verify that the VDO volume is started: `vdo status -n vdo0 | grep 'Device mapper status'` (this should return a line similar to 'Device mapper status: 0 29302728 vdo /dev/loop0 normal - online online 1313712 4718592')
3. Stop the VDO volume: `vdo stop --name vdo0`
4. Try to stop the VDO volume again: `vdo stop --name vdo0`

Actual results:

```
[root@localhost ~]# vdo create --name vdo0 --device /dev/loop0
Creating VDO vdo0
Starting VDO vdo0
Starting compression on VDO vdo0
VDO instance 0 volume is ready at /dev/mapper/vdo0
[root@localhost ~]# vdo status --name vdo0 | grep 'Device mapper status'
Device mapper status: 0 29302728 vdo /dev/loop0 normal - online online 1313712 4718592
[root@localhost ~]# vdo start --name vdo0
Starting VDO vdo0
VDO instance 1 volume is ready at /dev/mapper/vdo0
```

Expected results:

```
[root@localhost ~]# vdo start --name vdo0
vdo: ERROR - VDO volume vdo0 already started
[root@localhost ~]# vdo stop --name vdo0
vdo: ERROR - VDO volume vdo0 not running
```

Additional info:

This issue was discovered on a support call that required the --forceRebuild operation. When we ran `vdo start --forceRebuild`, it appeared to work as needed from the command line, but it returned immediately, which raised suspicion. It turned out that the volume was already online and needed to be stopped before we could start with --forceRebuild.