Bug 835543 - PRD33 - RFE: Allow to edit file (nfs/posix/local) domain connections (incl. advanced options)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.1.0
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Alissa
QA Contact: Elad
Whiteboard: storage
Keywords: FutureFeature, Improvement
Duplicates: 838897
Depends On: 950055
Blocks: 592278 1019470
Reported: 2012-06-26 08:29 EDT by Haim
Modified: 2016-02-10 15:26 EST (History)
17 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Previously, it was not possible to alter the parameters of connections to storage domains (NFS, POSIX, and local) in Red Hat Enterprise Virtualization. This update provides administrators with greater management capabilities that allow them to alter connections to storage domains after they have been created. These capabilities are available both in the Administration Portal and via the REST API.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-21 12:11:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 12372 None None None Never
oVirt gerrit 13640 None None None Never
oVirt gerrit 15540 None None None Never

Description Haim 2012-06-26 08:29:25 EDT
Description of problem:

Once a new connection to an NFS storage server is attempted (validateStorageConnection), an entry is permanently added to the storage_server_connections table and its parameters cannot be changed afterwards, such as the NFS connection version (even though it was changed in the GUI).

my case:
- new domain dialog
- provide valid:/mount/point and set NFS Version to auto-negotiate
- attempt to create this storage domain
  * the attempt fails (not related to this issue; it's a configuration fault on the storage server side)
- make another attempt to create this storage domain, this time setting the NFS version to V3
  * the attempt fails again, but this time it should have worked (the server accepts only V3 requests, yet the backend doesn't send V3 since it is unable to update the entry in the database)

example of command with wrong mount opts (protocol_version not present):

Thread-27::INFO::2012-06-26 17:28:19,799::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '10.35.64.205:/myVol/sd', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '1ce488c0-b1d2-4652-bde2-1c6ffac82af1', 'port': ''}], options=None)

example of command with desired mount opts (protocol_version exists):

Thread-692::INFO::2012-06-26 17:46:54,040::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '10.35.64.205:/myVol/sd1', 'iqn': '', 'portal': '', 'user': '', 'protocol_version': '3', 'password': '******', 'id': '756f3b27-8a97-433c-a225-ca109e4742a6'}], options=None)

engine=# select * from storage_server_connections;
                  id                  |        connection         | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans 
--------------------------------------+---------------------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
 1ce488c0-b1d2-4652-bde2-1c6ffac82af1 | 10.35.64.205:/myVol/sd  |           |          |     |      |        |            1 |               |          |             |           |            
 756f3b27-8a97-433c-a225-ca109e4742a6 | 10.35.64.205:/myVol/sd1 |           |          |     |      |        |            1 |               |          |           3 |           |            
(2 rows)
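The stale-row behavior described above can be modeled with a minimal sketch (hypothetical code, not the actual engine implementation): connection rows are effectively keyed by their connection string, so a repeated attempt with different parameters (e.g. protocol_version) finds the existing row and never updates it.

```python
# Hypothetical minimal model of the pre-fix storage_server_connections
# handling: rows are looked up by connection string, and an existing row's
# parameters are silently ignored on a repeated connection attempt.

class ConnectionTable:
    def __init__(self):
        self._rows = {}  # connection string -> params dict

    def get_or_create(self, connection, **params):
        # Pre-fix behavior: if a row already exists for this connection
        # string, the newly supplied params are discarded.
        if connection not in self._rows:
            self._rows[connection] = dict(params)
        return self._rows[connection]

table = ConnectionTable()

# First attempt: NFS version auto-negotiate (no protocol_version sent).
first = table.get_or_create("10.35.64.205:/myVol/sd")

# Second attempt: the user switches the dialog to V3, but the stored row wins.
second = table.get_or_create("10.35.64.205:/myVol/sd", protocol_version="3")

print(second.get("protocol_version"))  # -> None, the V3 setting was lost
```

This matches the psql output above: a row's nfs_version stays whatever it was at creation time, which is why the reporter only got V3 by using a different export path (a new row) rather than by editing the existing one.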
Comment 1 Haim 2012-06-26 08:31:49 EDT
(In reply to comment #0)
> Description of problem:
> 
> once new connection is attempted to NFS storage server
> (validateStorageConnection), entry is permanently to
> storage_server_connection table and one cannot change its params, such as:
> NFS connection version (although it was changed from GUI).
> 
> my case:
> - new domain dialogue
> - provide valid:/mount/point and set NFS Version to auto-negotiate
> - make an attempt to create this storage domain
>   * attempt fails (not related to this issue, its a configuration fault on
> storage server side)
> - make another attempt to create this storage domain, now set the NFS server
> to V3
>   * attempt fails again, but now, it should have been working (server
> accepts only V3 requests, though backend doesn't send V3 since its not able
> to update the entry in data-base)
> 

s/(protocol_version exists)/(protocol_version not exists)/g

> example of command with wrong mount opts (protocol_version exists):
> 
> Thread-27::INFO::2012-06-26
> 17:28:19,799::logUtils::37::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=1,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
> '10.35.64.205:/myVol/sd', 'iqn': '', 'portal': '', 'user': '', 'password':
> '******', 'id': '1ce488c0-b1d2-4652-bde2-1c6ffac82af1', 'port': ''}],
> options=None)
>
Comment 2 Yair Zaslavsky 2012-06-26 08:40:44 EDT
This is a general connection-management problem, not just advanced NFS options.
Comment 3 Itamar Heim 2012-07-23 08:12:24 EDT
ayal - can we even support this without forcing the domain to be in maintenance, so it will be refreshed on all hosts?
Comment 4 Ayal Baron 2012-07-24 18:23:24 EDT
*** Bug 838897 has been marked as a duplicate of this bug. ***
Comment 10 Yaniv Kaul 2013-03-04 03:08:20 EST
The title change is extremely misleading; the user scenario is: someone enters the wrong domain connection details, fails to connect, and then cannot fix the connection and successfully connect.
Comment 13 Aharon Canan 2013-08-21 14:56:01 EDT
moving to verified following TCMS run -
https://tcms.engineering.redhat.com/run/76018
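As the Doc Text notes, the new edit capability is also exposed via the REST API. A hypothetical sketch of building such an update request follows; the resource path (/api/storageconnections/{id}) and XML element names are assumptions based on oVirt/RHEV API conventions, so check the official 3.3 REST API documentation before relying on them.

```python
# Hypothetical sketch: build a PUT request that changes a storage
# connection's NFS version via the REST API added in 3.3. The path and
# XML shape are assumptions, not confirmed by this bug report.

from xml.etree import ElementTree as ET

def build_update_request(base_url, conn_id, nfs_version):
    """Return (url, xml_body) for updating one storage connection."""
    url = f"{base_url}/api/storageconnections/{conn_id}"
    root = ET.Element("storage_connection")
    ET.SubElement(root, "nfs_version").text = nfs_version
    body = ET.tostring(root, encoding="unicode")
    return url, body

url, body = build_update_request(
    "https://rhevm.example.com",
    "1ce488c0-b1d2-4652-bde2-1c6ffac82af1",  # id from the psql output above
    "v3",
)
print(url)
print(body)
```

The resulting body would be sent with Content-Type application/xml and the usual API authentication; the connection id is the same one stored in storage_server_connections.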
Comment 15 Charlie 2013-11-27 19:20:29 EST
This bug is currently attached to errata RHEA-2013:15231. If this change is not to be documented in the text for this errata please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes 

Thanks in advance.
Comment 18 errata-xmlrpc 2014-01-21 12:11:23 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html
