Created attachment 948300 [details]
Engine and VDSM logs

Description of problem:
When creating an iSCSI storage domain, a warning stating "The operation might be unrecoverable and destructive!" appears on only one of the hosts. The UI displays "The following LUNs are already in use" and then lists the LUNs; the list grows as the storage domain is added and then removed.

Version-Release number of selected component (if applicable):
3.5 vt5

How reproducible:
100%

Steps to Reproduce:
1. Using a setup with 2 hosts that share the same set of iSCSI LUNs, attempt to add an iSCSI storage domain using all available LUNs (first with one host, then with the other, removing the storage domain in between).

Actual results:
In my case, one of the 2 hosts shows a warning that 5 of the 10 available LUNs are already in use, while the second host shows no such warning and creates the storage domain right away. After removing the storage domain and trying again, the number of LUNs reported as in use appears to increase (though not 100% consistently).

Additional information:
See attached logs
Hi Gilad,

The warning is displayed based on the 'status' (used/free/unusable) property of each LUN.

Can you please attach the output of getDeviceList for each host (after reproducing the described scenario)?
Is there any difference between the hosts (vdsm build/os version/etc)?
Created attachment 949000 [details] LUN listing from both hosts, host capabilities and sanlock.log
According to the logs [1], the same LUN appears with status 'free' on host vdsd, but with status 'used' on host vdsc. Since the status should be identical on both hosts, moving the component to vdsm for further investigation.

@Gilad - can you please check the described scenario with another set of hosts, to determine whether the issue is isolated to a specific host?

[1]
* vdsd_deviceList: "{'GUID': '360060160f4a030000fa06fb98edbe311', ... 'status': 'free', ...}"
* vdsc_deviceList: "{'GUID': '360060160f4a030000fa06fb98edbe311', ... 'status': 'used', ...}"
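For anyone triaging similar reports: a minimal sketch of the comparison described above, diffing the 'status' field of each LUN (keyed by GUID) across the two hosts' getDeviceList output. The dict structure mirrors the entries quoted from the logs; any fields beyond GUID and status are omitted, and the helper name is illustrative, not part of vdsm.

```python
def status_mismatches(devices_a, devices_b):
    """Return {GUID: (status_a, status_b)} for LUNs whose status differs
    between two hosts' getDeviceList output."""
    by_guid_b = {d['GUID']: d['status'] for d in devices_b}
    return {
        d['GUID']: (d['status'], by_guid_b[d['GUID']])
        for d in devices_a
        if d['GUID'] in by_guid_b and d['status'] != by_guid_b[d['GUID']]
    }

# Entries taken from the attached device listings (truncated to the
# relevant fields):
vdsd = [{'GUID': '360060160f4a030000fa06fb98edbe311', 'status': 'free'}]
vdsc = [{'GUID': '360060160f4a030000fa06fb98edbe311', 'status': 'used'}]

print(status_mismatches(vdsd, vdsc))
```

Any non-empty result indicates the inconsistency reported here, since both hosts see the same storage and should agree on each LUN's status.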
Created attachment 956673 [details] full set of logs from a new setup
Please note that this is fully reproducible on the original environment and also on a newly built vt9 environment (different hosts and engine). Please find the full set of logs in the new attachment.
Closing old bugs. If this issue is still relevant/important in current version, please re-open the bug.