Bug 881378 - [RHEV-RHS] Mounting volumes as "Export/NFS" fails with "Internal error, Storage Connection doesn't exist."
Summary: [RHEV-RHS] Mounting volumes as "Export/NFS" fails with "Internal error, Storage Connection doesn't exist."
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Bug Updates Notification Mailing List
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-11-28 19:49 UTC by Gowrishankar Rajaiyan
Modified: 2023-09-14 01:39 UTC
CC: 11 users

Fixed In Version: glusterfs-3.4.0.49rhs
Doc Type: Bug Fix
Doc Text:
Previously, mounting a Red Hat Storage volume as an "Export/NFS" domain using Red Hat Enterprise Virtualization Manager failed. With this fix, the iptables rules are set properly and NFS mount operations on Red Hat Storage volumes work as expected.
Clone Of:
Environment: virt rhev integration
Last Closed: 2014-02-25 07:23:08 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC

Description Gowrishankar Rajaiyan 2012-11-28 19:49:50 UTC
Description of problem:
Mounting a gluster volume as "Export/NFS" using RHEVM fails with the message "Error: Cannot add Storage. Internal error, Storage Connection doesn't exist." On further investigation I found "-A INPUT -p udp -m udp --dport 111   -j ACCEPT" in /etc/sysconfig/iptables; the rule opens port 111 for UDP only, while the mount's portmap query goes over TCP (prot=6 in the log below). Replacing udp with tcp and restarting the firewall resolves the issue.
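
A quick way to confirm the diagnosis from the hypervisor (a sketch; rpcinfo -u and -t probe an RPC program over UDP and TCP respectively, and the hostname is the one used below) is to query the portmapper, program 100000, on both transports:

# over UDP (udp/111 is open, so this should get an answer)
rpcinfo -u rhs-gp-srv6.lab.eng.blr.redhat.com 100000
# over TCP (expected to fail with "No route to host" until the tcp/111 rule is added)
rpcinfo -t rhs-gp-srv6.lab.eng.blr.redhat.com 100000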

Version-Release number of selected component (if applicable):
glusterfs-server-3.3.0rhsvirt1-8.el6rhs.x86_64
RHS-2.0-20121110.0-RHS-x86_64-DVD1.iso

How reproducible:
Always

Steps to Reproduce:
1. Create a DC.
2. Create a VDSM cluster and add hypervisor hosts.
3. Create a new storage domain.
4. Select "Domain Function / Storage Type" as Export/NFS.
5. Provide gluster volume as export path. (e.g.: FQDNofRHSnode:/volumename)
6. Click OK.
  
Actual results: Mount fails.
- Error on RHEVM: "Error: Cannot add Storage. Internal error, Storage Connection doesn't exist."

- Error in vdsm.log:
<snip>
Thread-5555::ERROR::2012-11-28 16:28:53,112::hsm::2167::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2164, in disconnectStorageServer
  File "/usr/share/vdsm/storage/storageServer.py", line 286, in disconnect
  File "/usr/share/vdsm/storage/storageServer.py", line 199, in disconnect
  File "/usr/share/vdsm/storage/mount.py", line 226, in umount
  File "/usr/share/vdsm/storage/mount.py", line 214, in _runcmd
MountError: (1, ';umount: /rhev/data-center/mnt/rhs-gp-srv6.lab.eng.blr.redhat.com:_Replicate: not found\n')
</snip>

- Error when mounted manually:
# mount -v -t nfs  rhs-gp-srv6.lab.eng.blr.redhat.com:/Replicate /tmp/shanks/ -o vers=3
mount.nfs: timeout set for Wed Nov 28 16:40:37 2012
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host



Expected results: NFS mount of a gluster volume should be successful.


Additional info:
[root@rhs-gp-srv5 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:111     <<<<<<<
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:2049 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108 
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@rhs-gp-srv5 ~]# 



Workaround steps: 
1. In /etc/sysconfig/iptables, replace "-A INPUT -p udp -m udp --dport 111   -j ACCEPT" with "-A INPUT -p tcp -m tcp --dport 111   -j ACCEPT" (a scripted version of this edit is sketched after the verification output below).
2. service iptables restart
3. iptables -L -n
[root@rhs-gp-srv6 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111      <<<<<<<
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108 
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@rhs-gp-srv6 ~]# 


[root@rhs-gp-srv1 ~]# mount -v -t nfs  rhs-gp-srv5.lab.eng.blr.redhat.com:/Replicate /tmp/shanks/ -o vers=3
mount.nfs: timeout set for Wed Nov 28 15:51:41 2012
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.9'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.70.36.9 prog 100003 vers 3 prot TCP port 38467
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Unable to receive - No route to host
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 10.70.36.9 prog 100005 vers 3 prot TCP port 38465
rhs-gp-srv5.lab.eng.blr.redhat.com:/Replicate on /tmp/shanks type nfs (rw,vers=3)
[root@rhs-gp-srv1 ~]#
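
For reference, step 1 of the workaround can be scripted. A minimal sketch, assuming the UDP rule appears in /etc/sysconfig/iptables exactly as quoted above (verify the whitespace on your system first):

cp /etc/sysconfig/iptables /etc/sysconfig/iptables.bak   # keep a backup of the ruleset
sed -i 's/-p udp -m udp --dport 111/-p tcp -m tcp --dport 111/' /etc/sysconfig/iptables
service iptables restart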

Comment 2 Amar Tumballi 2012-12-03 06:45:11 UTC
Is it the host machine that has these iptables rules? (shanks: needinfo)
I am not aware of any firewalls on RHS images.

If it's the host, we may need an update to RHEV-H to get it fixed. Vijay, assigning the bug to you; please reassign it to the appropriate component/project.

Comment 3 Anush Shetty 2012-12-03 10:57:52 UTC
Amar, both the RHS nodes and the hypervisor machines have iptables enabled by default.

Comment 4 Vijay Bellur 2012-12-11 06:29:54 UTC
Removing 2.0+ tag as this involves NFS.

Comment 5 Garik Khachikyan 2013-02-13 16:59:00 UTC
I tried with:
---
chown vdsm:kvm /exports/isos
chmod g+s /exports/isos
---

and it actually fixed that error for me.

Comment 6 Brad Hubbard 2013-04-16 21:02:47 UTC
Setting severity and a release flag.

Comment 7 Scott Haines 2013-09-23 19:47:18 UTC
Targeting for 2.1.z (Big Bend) U1.

Comment 8 Amar Tumballi 2013-11-26 08:44:53 UTC
Can we run a round of tests again? It's been more than 10 months since we last touched this.

Comment 9 Gowrishankar Rajaiyan 2013-12-10 16:18:06 UTC
The issue is about opening TCP port 111 during bootstrapping.

[root@rhss1 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:8080 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:111 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:2049 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38469 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:49152:49251 
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@rhss1 ~]# 


I see that now. Please move this to ON_QA for formal verification.
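
For reference, the persistent rule that bootstrapping now writes to /etc/sysconfig/iptables should look like the following (its position within the INPUT chain may vary):

-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT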

Comment 10 Vivek Agarwal 2013-12-13 06:03:55 UTC
Per the last comment, moving this to ON_QA.

Comment 11 Pavithra 2014-01-13 10:28:52 UTC
Shanks,

I've edited the doc text input Pranith gave for errata. Can you please verify the doc text for technical accuracy?

Comment 12 SATHEESARAN 2014-01-15 05:28:33 UTC
Tested with glusterfs-3.4.0.57rhs-1.el6rhs

Verified with the following steps:
1. Created a distributed-replicate volume with 2x2 bricks.
2. Optimized the volume for virt store, i.e.:
gluster volume set <vol-name> group virt
gluster volume set <vol-name> storage.owner-uid 36
gluster volume set <vol-name> storage.owner-gid 36

3. Started the volume.
4. Created an Export/NFS domain with the gluster volume created above.

I could create an Export/NFS domain with the gluster volume, then attach and activate it.
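
As an additional sanity check (a sketch reusing the hostname and volume name from the original report; substitute your own), the manual NFS mount can be repeated from a client:

# with tcp port 111 open, the portmap query and the mount should now succeed
mount -v -t nfs rhs-gp-srv5.lab.eng.blr.redhat.com:/Replicate /mnt -o vers=3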

Comment 14 errata-xmlrpc 2014-02-25 07:23:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

Comment 15 Red Hat Bugzilla 2023-09-14 01:39:08 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

