Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 835543

Summary: PRD33 - RFE: Allow to edit file (nfs/posix/local) domain connections (incl. advanced options)
Product: Red Hat Enterprise Virtualization Manager
Reporter: Haim <hateya>
Component: RFEs
Assignee: Alissa <abonas>
Status: CLOSED ERRATA
QA Contact: Elad <ebenahar>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 3.1.0
CC: abaron, abonas, aburden, acanan, acathrow, adahms, amureini, iheim, jkt, lnatapov, lpeer, mlipchuk, Rhev-m-bugs, scohen, sputhenp, yeylon, zdover
Target Milestone: ---
Keywords: FutureFeature, Improvement
Target Release: 3.3.0
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Previously, it was not possible to alter the parameters of connections to storage domains (NFS, POSIX, and local) in Red Hat Enterprise Virtualization. This update provides administrators with greater management capabilities that allow them to alter connections to storage domains after they have been created. These capabilities are available both in the Administration Portal and via the REST API.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-21 17:11:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 950055
Bug Blocks: 592278, 1019470

Description Haim 2012-06-26 12:29:25 UTC
Description of problem:

Once a new connection to an NFS storage server is attempted (validateStorageConnection), an entry is permanently added to the storage_server_connections table and its parameters cannot be changed afterwards, such as the NFS connection version (even though it was changed from the GUI).

my case:
- new domain dialogue
- provide valid:/mount/point and set NFS Version to auto-negotiate
- attempt to create this storage domain
  * the attempt fails (not related to this issue; it's a configuration fault on the storage server side)
- make another attempt to create this storage domain, this time setting the NFS version to V3
  * the attempt fails again, but this time it should have worked (the server accepts only V3 requests, but the backend doesn't send V3 since it is not able to update the entry in the database)

example of command with wrong mount opts (protocol_version exists):

Thread-27::INFO::2012-06-26 17:28:19,799::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '10.35.64.205:/myVol/sd', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '1ce488c0-b1d2-4652-bde2-1c6ffac82af1', 'port': ''}], options=None)

example of command with desired mount opts (protocol_version exists):

Thread-692::INFO::2012-06-26 17:46:54,040::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '10.35.64.205:/myVol/sd1', 'iqn': '', 'portal': '', 'user': '', 'protocol_version': '3', 'password': '******', 'id': '756f3b27-8a97-433c-a225-ca109e4742a6'}], options=None)
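The material difference between the two connectStorageServer calls above is the presence of the protocol_version key in the connection dict; a minimal illustration (dicts copied from the logs, passwords masked as logged):

```python
# Connection dicts from the two connectStorageServer calls above.
first = {'connection': '10.35.64.205:/myVol/sd', 'iqn': '', 'portal': '',
         'user': '', 'password': '******',
         'id': '1ce488c0-b1d2-4652-bde2-1c6ffac82af1', 'port': ''}
second = {'port': '', 'connection': '10.35.64.205:/myVol/sd1', 'iqn': '',
          'portal': '', 'user': '', 'protocol_version': '3',
          'password': '******',
          'id': '756f3b27-8a97-433c-a225-ca109e4742a6'}

# Keys present only in the second (desired) call:
print(set(second) - set(first))  # → {'protocol_version'}
```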

engine=# select * from storage_server_connections;
                  id                  |        connection         | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans 
--------------------------------------+---------------------------+-----------+----------+-----+------+--------+--------------+---------------+----------+-------------+-----------+-------------
 1ce488c0-b1d2-4652-bde2-1c6ffac82af1 | 10.35.64.205:/myVol/sd  |           |          |     |      |        |            1 |               |          |             |           |            
 756f3b27-8a97-433c-a225-ca109e4742a6 | 10.35.64.205:/myVol/sd1 |           |          |     |      |        |            1 |               |          |           3 |           |            
(2 rows)
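The RFE delivered in 3.3 exposes exactly these stored parameters for editing, via the Administration Portal and the REST API. A hypothetical sketch of such a REST update, assuming a /api/storageconnections resource and an nfs_version element as in the later oVirt-era API (names are assumptions, not taken from this bug):

```python
# Hypothetical sketch: building the XML body for updating a storage
# connection's NFS version via the REST API (resource and element names
# are assumed, not confirmed by this bug report).
import xml.etree.ElementTree as ET

def build_connection_update(nfs_version):
    """Build an XML body for PUT /api/storageconnections/<id>."""
    root = ET.Element("storage_connection")
    ET.SubElement(root, "nfs_version").text = nfs_version
    return ET.tostring(root, encoding="unicode")

body = build_connection_update("v3")
print(body)  # → <storage_connection><nfs_version>v3</nfs_version></storage_connection>

# The request itself would look roughly like (host and credentials are
# placeholders):
#   PUT https://rhevm.example.com/api/storageconnections/756f3b27-8a97-433c-a225-ca109e4742a6
#   Content-Type: application/xml, authenticated as an administrator.
```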

Comment 1 Haim 2012-06-26 12:31:49 UTC
(In reply to comment #0)
> Description of problem:
> 
> once new connection is attempted to NFS storage server
> (validateStorageConnection), entry is permanently to
> storage_server_connection table and one cannot change its params, such as:
> NFS connection version (although it was changed from GUI).
> 
> my case:
> - new domain dialogue
> - provide valid:/mount/point and set NFS Version to auto-negotiate
> - make an attempt to create this storage domain
>   * attempt fails (not related to this issue, its a configuration fault on
> storage server side)
> - make another attempt to create this storage domain, now set the NFS server
> to V3
>   * attempt fails again, but now, it should have been working (server
> accepts only V3 requests, though backend doesn't send V3 since its not able
> to update the entry in data-base)
> 

s/(protocol_version exists)/(protocol_version not exists)/g

> example of command with wrong mount opts (protocol_version exists):
> 
> Thread-27::INFO::2012-06-26
> 17:28:19,799::logUtils::37::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=1,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
> '10.35.64.205:/myVol/sd', 'iqn': '', 'portal': '', 'user': '', 'password':
> '******', 'id': '1ce488c0-b1d2-4652-bde2-1c6ffac82af1', 'port': ''}],
> options=None)
>

Comment 2 Yair Zaslavsky 2012-06-26 12:40:44 UTC
This is a general connection management problem, not just advanced nfs options.

Comment 3 Itamar Heim 2012-07-23 12:12:24 UTC
ayal - can we even support this without forcing the domain to be in maintenance, so it will be refreshed on all hosts?

Comment 4 Ayal Baron 2012-07-24 22:23:24 UTC
*** Bug 838897 has been marked as a duplicate of this bug. ***

Comment 10 Yaniv Kaul 2013-03-04 08:08:20 UTC
The title change is extremely misleading. The user scenario is: someone enters the wrong domain connection details, fails to connect, and now he can't fix the connection and connect successfully.

Comment 13 Aharon Canan 2013-08-21 18:56:01 UTC
Moving to VERIFIED following TCMS run:
https://tcms.engineering.redhat.com/run/76018

Comment 15 Charlie 2013-11-28 00:20:29 UTC
This bug is currently attached to errata RHEA-2013:15231. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes 

Thanks in advance.

Comment 18 errata-xmlrpc 2014-01-21 17:11:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html