Bug 920708
| Field | Value |
|---|---|
| Summary | [RESTAPI] Create Data Storage Domain request on non-empty mount results in attempt to import existing domain |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | ovirt-engine |
| Version | 3.2.0 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | unspecified |
| Reporter | Gadi Ickowicz <gickowic> |
| Assignee | Maor <mlipchuk> |
| QA Contact | Carlos Mestre González <cmestreg> |
| CC | acanan, amureini, bazulay, bsettle, cmestreg, iheim, juwu, lpeer, mlipchuk, nlevinki, oramraz, rbalakri, Rhev-m-bugs, rlandman, scohen, yeylon |
| Flags | amureini: needinfo+, scohen: Triaged+ |
| Target Milestone | --- |
| Target Release | 3.5.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | storage |
| Fixed In Version | ovirt-engine-3.5.0_beta2 |
| Doc Type | Bug Fix |
| Doc Text | Previously, creating a new storage domain would fail if the given path pointed to a pre-existing domain. With this update, importing existing domains and adding new domains are separated into two actions, and users can now create a new storage domain (NFS) on a mount that has existing storage domains. See the Technical Guide, XML Representation of a Storage Domain, for an example. Also see BZ#716511 for more information on this feature. |
| Story Points | --- |
| Last Closed | 2015-02-11 17:52:24 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | Storage |
| Cloudforms Team | --- |
| Bug Blocks | 1142923, 1156165 |
Description: Gadi Ickowicz, 2013-03-12 15:16:55 UTC
Please note that the command AddExistingNFSStorageDomainCommand mentioned in the bug's description was renamed to AddExistingFileStorageDomainCommand. Indeed, the flow in the UI is that importing an existing domain and adding a new domain are two different user actions, while in REST both are initiated by a single add-domain action, with the choice decided by the engine. The relevant REST code is in the addDomain method of BackendStorageDomainsResource. The logic should be aligned so that the UI and REST behave the same, by creating an option in REST to import an existing domain.

Maor, this should be handled as part of your work on importing a storage domain. From a REST API perspective, we should have a new API for importing, and a /create/ request should /not/ attempt to import (although this needs to be double-checked with the REST maintainers).

It should work now. Importing an existing storage domain:

```xml
<storage_domain>
    <type>data</type>
    <storage>
        <type>nfs</type>
        <address>xx.xx.xx.xx</address>
        <path>/export/images/rnd/{some_path}</path>
    </storage>
    <host id="3dc5ba65-2f7a-414c-ad2b-635cdb1afbc7"/>
</storage_domain>
```

Storage was added. Got the following result:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<storage_domain href="/ovirt-engine/api/storagedomains/4bc4bd4f-852b-458e-8975-d774bfc92cb9" id="4bc4bd4f-852b-458e-8975-d774bfc92cb9">
    <name>dsadsad222</name>
    ....
    <type>data</type>
    <status>
        <state>unknown</state>
    </status>
    <master>false</master>
    <storage>
        <address>xx.xx.xx.xx</address>
        <type>nfs</type>
        <path>/export/images/rnd/{some_path}</path>
    </storage>
    <available>15032385536</available>
    <used>587336777728</used>
    <committed>0</committed>
    <storage_format>v3</storage_format>
</storage_domain>
```

Moving to ON_QA for verification.

Hi Maor, this bug has been flagged for release notes. Please select the correct Doc Type and provide the doc text ASAP for this bug to make it into the 3.5 Beta Manager Release Notes. If this bug is not required for release notes, please set the require_release_note flag to -. Cheers, Julie

I checked both the UI and REST, and in both cases the adding operation fails.

Setup: 1 shared DC, 1 host.

1. Add the nfs_0 and nfs_1 domains.
2. Put nfs_0 in maintenance.
3. Destroy nfs_1.
4. Try to import the destroyed domain from either the UI (Import Domain) or REST (a create request without a name for the storage domain; a sketch of this call follows this comment).

Result: the domain is added but cannot be attached to the DC.

In my run ovirt.log shows:

```
START, ConnectStorageServerVDSCommand(HostName = 10.34.62.206, HostId = c6b2a203-3b1f-4cf6-91b2-2c001529452c, storagePoolId = 00000000-0000-0000-0000-000000000000
[...]
The meta data of the Storage Domain might still indicate that it is attached to a different Storage Pool.
```

and indeed vdsm.log shows:

```
Unknown pool id, pool not connected: (u'00000000-0000-0000-0000-000000000000',)
```

while the pool id is really 00000002-0002-0002-0002-00000000002b (checked on vt5).

Maor: can you comment on whether I'm understanding this correctly?
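As an illustration of the REST variant in step 4 above, here is a minimal sketch of the import-by-create call, assuming the Python requests library. The engine URL, credentials, and TLS handling are placeholders, not values from this bug; the XML body mirrors the import request shown earlier in the thread, and per the comments the absence of a <name> element is what makes the engine treat it as an import.

```python
# Minimal sketch of the "create without a name" import call.
# Assumptions (not from this bug): engine URL, credentials, TLS settings.
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder URL
AUTH = ("admin@internal", "password")                   # hypothetical credentials

# No <name> element: per this thread, the engine then imports the
# pre-existing domain found on the mount instead of creating a new one.
body = """
<storage_domain>
    <type>data</type>
    <storage>
        <type>nfs</type>
        <address>xx.xx.xx.xx</address>
        <path>/export/images/rnd/{some_path}</path>
    </storage>
    <host id="3dc5ba65-2f7a-414c-ad2b-635cdb1afbc7"/>
</storage_domain>
"""

resp = requests.post(
    ENGINE + "/storagedomains",          # endpoint seen in the result XML's href
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab-only shortcut; use the engine CA certificate in practice
)
print(resp.status_code, resp.text)
```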
Hi Carlos, there is a known issue with JSON regarding this. Can you please try to reproduce this with a host that is not using JSON?

(In reply to Carlos Mestre González from comment #7)
> Setup: 1 shared DC, 1 host.
>
> 1. Add the nfs_0 and nfs_1 domains.
> 2. Put nfs_0 in maintenance.
> 3. Destroy nfs_1.

--> Typo: that should have been "destroy nfs_0", since it is the domain in maintenance.

> 4. Try to import the destroyed domain from either the UI (Import Domain) or REST (a create request without a name for the storage domain).

(In reply to Maor from comment #8)
> Hi Carlos, there is a known issue with JSON regarding this. Can you please try to reproduce this with a host that is not using JSON?

I'm doing an XML call (?). Just to emphasize, the REST API call returns the proper response object and behaves exactly like the web UI: the storage domain is imported but cannot be attached to the existing DC. So I'm guessing this is a backend bug, not something related to REST.

Sorry Maor, I disabled JSON messaging and it works as it's supposed to. Verified in vt5.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html
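For contrast with the import sketch earlier in this thread, here is a hedged sketch of the plain create request that the Doc Text's fix allows to succeed on a non-empty mount. Per the comments, including a <name> element asks the engine to create a fresh domain rather than import the one already on the storage; the domain name, export path, and connection details below are hypothetical.

```python
# Sketch of the plain "create" request; same assumptions as the earlier
# snippet (placeholder engine URL and credentials, hypothetical values).
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder URL
AUTH = ("admin@internal", "password")                   # hypothetical credentials

# The <name> element is what distinguishes this from an import request.
body = """
<storage_domain>
    <name>nfs_new</name>  <!-- hypothetical name: requests a fresh create -->
    <type>data</type>
    <storage>
        <type>nfs</type>
        <address>xx.xx.xx.xx</address>
        <path>/export/images/rnd/new_path</path>
    </storage>
    <host id="3dc5ba65-2f7a-414c-ad2b-635cdb1afbc7"/>
</storage_domain>
"""

resp = requests.post(
    ENGINE + "/storagedomains",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab-only shortcut; use the engine CA certificate in practice
)
print(resp.status_code, resp.text)
```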