Created attachment 977503 [details]
logs

Description of problem:

An import "candidate" NFS storage domain is an NFS export that already contains the files of a pre-existing oVirt storage domain; such a domain was typically a member of an old DC that is now defunct. When that candidate is imported from the oVirt UI, the flow consists of adding the domain and updating its pool-membership configuration, and it ends with a storage domain attached to the host's storage pool, in maintenance mode.

However, when the same operation is executed via REST, the flow does not reconfigure the newly attached NFS domain's storage pool. oVirt-webadmin reports the domain as "Unattached", while vdsm reports it as attached, but to a different storage pool:

root@purple-vds3 ~ # vdsClient -s 0 getStorageDomainInfo 164ff806-fc33-4721-80b8-e54ab5cd3eb3
	uuid = 164ff806-fc33-4721-80b8-e54ab5cd3eb3
	version = 3
	role = Regular
	remotePath = 10.35.160.108:/RHEV/ogofen/7
	type = NFS
	class = Data
	pool = ['a5e9ab91-6049-426f-9e1b-b0164be74399']   <--- this does not exist on the setup
	name = NFS_7

When attempting to remove this domain, an error window appears:
"Error while executing action Remove Storage Domain: Cannot format attached storage domain"

engine logs:

2015-01-07 20:11:26,961 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-2) [288f271c] Failed in FormatStorageDomainVDS method
2015-01-07 20:11:26,962 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-2) [288f271c] Command org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand return value StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=391, mMessage=Cannot format attached storage domain: (u'93b859c3-2d11-4b84-b782-a6225e2f9464',)]]
2015-01-07 20:11:26,962 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-2) [288f271c] HostName = purple-vds3.qa.lab.tlv.redhat.com
2015-01-07 20:11:26,962 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-2) [288f271c] Command FormatStorageDomainVDSCommand(HostName = purple-vds3.qa.lab.tlv.redhat.com, HostId = efaefec5-2998-497b-8f43-9fea2bd0b01c, storageDomainId=93b859c3-2d11-4b84-b782-a6225e2f9464) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to FormatStorageDomainVDS, error = Cannot format attached storage domain: (u'93b859c3-2d11-4b84-b782-a6225e2f9464',), code = 391
2015-01-07 20:11:26,962 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand] (ajp-/127.0.0.1:8702-2) [288f271c] FINISH, FormatStorageDomainVDSCommand, log id: 3c02d205
2015-01-07 20:11:26,963 ERROR [org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand] (ajp-/127.0.0.1:8702-2) [288f271c] Command org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to FormatStorageDomainVDS, error = Cannot format attached storage domain: (u'93b859c3-2d11-4b84-b782-a6225e2f9464',), code = 391 (Failed with error CannotFormatAttachedStorageDomain and code 391)
2015-01-07 20:11:26,973 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-2) [288f271c] Correlation ID: 7b4bc7fc, Job ID: b0e0cb9b-4e76-4432-9ceb-8683254f5e2f, Call Stack: null, Custom Event ID: -1, Message: Failed to remove Storage Domain NFS_4. (User: admin)

A user has to destroy this domain if he wants to remove it immediately.
(When executing the same operation from the UI:

root@purple-vds3 ~ # vdsClient -s 0 getStorageDomainInfo 164ff806-fc33-4721-80b8-e54ab5cd3eb3
	uuid = 164ff806-fc33-4721-80b8-e54ab5cd3eb3
	version = 3
	role = Regular
	remotePath = 10.35.160.108:/RHEV/ogofen/7
	type = NFS
	class = Data
	pool = ['00000002-0002-0002-0002-00000000019b']   <--- the right storagepool id
	name = NFS_7
)

Version-Release number of selected component (if applicable):
vt13.6

How reproducible:
100%

Steps to Reproduce:
1. Import an NFS storage domain via REST/py-sdk

Actual results:
The operation ends with a domain that is neither attached to nor detached from a DC, and removing this domain is blocked by CDA.

Expected results:
If we want to keep the current REST behavior:
* we need a UI indicator showing that this domain was recovered/imported
* the domain's "cross data status" should not be "Unattached"; that is simply not right
* a different error message should be displayed when attempting to remove the domain; the current one is incorrect and misleading.

If we change the REST behavior, the flow should not end without configuring the newly imported SD with the right storagepool id.

Additional info:
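For reference, a minimal sketch of the request body used in the reproduction step when importing over REST (POST to the oVirt 3.x /api/storagedomains collection). This is an illustration only: the payload is built locally so its shape can be inspected, nothing is sent to an engine, and the exact element set the engine expects may differ; the host id and NFS address/path are taken from the logs in this report.

```python
# Compose (not send) an oVirt 3.x-style storage-domain import payload.
# Values are taken from this bug report; no engine connection is made.
import xml.etree.ElementTree as ET

def build_import_body(host_id, address, path):
    """Build the XML body for importing a pre-existing NFS data domain."""
    sd = ET.Element("storage_domain")
    ET.SubElement(sd, "type").text = "data"
    host = ET.SubElement(sd, "host")
    host.set("id", host_id)
    storage = ET.SubElement(sd, "storage")
    ET.SubElement(storage, "type").text = "nfs"
    ET.SubElement(storage, "address").text = address
    ET.SubElement(storage, "path").text = path
    return ET.tostring(sd, encoding="unicode")

body = build_import_body("efaefec5-2998-497b-8f43-9fea2bd0b01c",
                         "10.35.160.108", "/RHEV/ogofen/7")
print(body)
```

Note that no format version is supplied: the domain already exists on the export, which is exactly why it arrives carrying stale pool metadata.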
The problem here is not in REST. The engine fails to format a Storage Domain whose metadata contains a Storage Pool id.

For now, the user can work around this in one of several ways:
1. Remove the Storage Domain without formatting it.
2. If the user wants to format the Storage Domain, attach it to a Data Center and then detach it; after that, the Storage Domain can be formatted.
3. Destroy the Storage Domain.

Generally, the user gets into this scenario by importing a Storage Domain (without attaching it to a specific Data Center) and then trying to remove and format it.

This is indeed a bug. The solution can be one of the following:
1. As suggested, make this an RFE and add a GUI indicator that the Storage Domain was imported and has not yet been attached to a Data Center.
2. Add a CDA message along these lines: "Cannot remove Storage Domain. The Storage Domain metadata indicates it is attached (probably an imported Storage Domain) and cannot be formatted. To remove the Storage Domain, either remove it without the format option, or attach it to an existing Data Center and detach it again."
3. Check whether the format limitation can be removed from VDSM.
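Workaround 2 above can be driven over the same REST API. The sketch below only composes the attach and detach requests as (method, URL, body) tuples, without sending anything; the engine base URL is a hypothetical placeholder, while the DC and domain ids are the ones appearing earlier in this report. The endpoint paths follow the oVirt 3.x /api/datacenters/{dc}/storagedomains convention.

```python
# Compose (not send) the attach/detach round-trip from workaround 2.
# BASE is a placeholder engine address; ids are from this bug report.
BASE = "https://engine.example.com/api"  # hypothetical

def attach_request(dc_id, sd_id):
    """POST an existing storage domain into a data center (attach)."""
    url = "%s/datacenters/%s/storagedomains" % (BASE, dc_id)
    body = '<storage_domain id="%s"/>' % sd_id
    return ("POST", url, body)

def detach_request(dc_id, sd_id):
    """DELETE the attachment, returning the domain to maintenance (detach)."""
    url = "%s/datacenters/%s/storagedomains/%s" % (BASE, dc_id, sd_id)
    return ("DELETE", url, None)

attach = attach_request("00000002-0002-0002-0002-00000000019b",
                        "164ff806-fc33-4721-80b8-e54ab5cd3eb3")
detach = detach_request("00000002-0002-0002-0002-00000000019b",
                        "164ff806-fc33-4721-80b8-e54ab5cd3eb3")
print(attach[1])
print(detach[1])
```

After the detach completes, the domain's metadata no longer references a pool, so the subsequent remove-with-format is expected to pass the VDSM check.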
This is amazing, I finally agree
I've suggested 3 alternatives in comment 1; I prefer to go with option number 2, adding an appropriate CDA message. Are there any objections?
Just to clarify - this is about a domain which has metadata claiming it's part of a storage pool, but on the engine side it isn't connected to any DC, right? If so, let's go with #2.
(In reply to Allon Mureinik from comment #4)
> Just to clarify - this is about a domain which has metadata claiming it's
> part of a storage pool, but in the engine side it isn't connected to any DC,
> right?

yes

> If so, let's go with #2.
Maor, please add doctext to explain this bz.
verified on 3.6 master
RHEV 3.6.0 has been released, setting status to CLOSED CURRENTRELEASE