Description of problem:
Mounting a gluster volume as "Export/NFS" using RHEVM fails with the message
"Error: Cannot add Storage. Internal error, Storage Connection doesn't exist.".
On further investigation I found that we have
"-A INPUT -p udp -m udp --dport 111 -j ACCEPT" in /etc/sysconfig/iptables.
Replacing UDP with TCP and restarting the firewall solves this.

Version-Release number of selected component (if applicable):
glusterfs-server-3.3.0rhsvirt1-8.el6rhs.x86_64
RHS-2.0-20121110.0-RHS-x86_64-DVD1.iso

How reproducible:
Always

Steps to Reproduce:
1. Create a DC.
2. Create a VDSM cluster and add hypervisor hosts.
3. Create a new storage domain.
4. Select "Domain Function / Storage Type" as Export/NFS.
5. Provide a gluster volume as the export path (e.g. FQDNofRHSnode:/volumename).
6. Click OK.

Actual results:
Mount fails.

- Error on RHEVM:
"Error: Cannot add Storage. Internal error, Storage Connection doesn't exist."

- Error in vdsm.log:
<snip>
Thread-5555::ERROR::2012-11-28 16:28:53,112::hsm::2167::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2164, in disconnectStorageServer
  File "/usr/share/vdsm/storage/storageServer.py", line 286, in disconnect
  File "/usr/share/vdsm/storage/storageServer.py", line 199, in disconnect
  File "/usr/share/vdsm/storage/mount.py", line 226, in umount
  File "/usr/share/vdsm/storage/mount.py", line 214, in _runcmd
MountError: (1, ';umount: /rhev/data-center/mnt/rhs-gp-srv6.lab.eng.blr.redhat.com:_Replicate: not found\n')
</snip>

- Error when mounted manually:
# mount -v -t nfs rhs-gp-srv6.lab.eng.blr.redhat.com:/Replicate /tmp/shanks/ -o vers=3
mount.nfs: timeout set for Wed Nov 28 16:40:37 2012
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - No route to host

Expected results:
NFS mount of a gluster volume should succeed.

Additional info:
[root@rhs-gp-srv5 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:111     <<<<<<<
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:2049
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@rhs-gp-srv5 ~]#

Workaround steps:
1. In /etc/sysconfig/iptables, replace
   "-A INPUT -p udp -m udp --dport 111 -j ACCEPT"
   with
   "-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT"
2. service iptables restart
3. iptables -L -n

[root@rhs-gp-srv6 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111     <<<<<<<
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@rhs-gp-srv6 ~]#

After the workaround the manual mount succeeds:

[root@rhs-gp-srv1 ~]# mount -v -t nfs rhs-gp-srv5.lab.eng.blr.redhat.com:/Replicate /tmp/shanks/ -o vers=3
mount.nfs: timeout set for Wed Nov 28 15:51:41 2012
mount.nfs: trying text-based options 'vers=3,addr=10.70.36.9'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.70.36.9 prog 100003 vers 3 prot TCP port 38467
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Unable to receive - No route to host
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 10.70.36.9 prog 100005 vers 3 prot TCP port 38465
rhs-gp-srv5.lab.eng.blr.redhat.com:/Replicate on /tmp/shanks type nfs (rw,vers=3)
[root@rhs-gp-srv1 ~]#
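The workaround above can be scripted roughly as follows. This is a sketch, not part of the original report: it assumes the UDP rule appears in /etc/sysconfig/iptables exactly as quoted, a RHEL 6-era init-script firewall, and root privileges; the RULES variable is an illustrative addition so the edit can be dry-run against a copy of the file.

```shell
# Sketch of the workaround: accept TCP (not UDP) on port 111 so the
# rpcbind/portmapper query made by mount.nfs over TCP can get through.
# RULES defaults to the live config; point it at a copy for a dry run.
RULES="${RULES:-/etc/sysconfig/iptables}"

# Swap the UDP-only rule for a TCP rule (exact string from the report).
sed -i 's/-A INPUT -p udp -m udp --dport 111 -j ACCEPT/-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT/' "$RULES"

# Reload the firewall so the new rule takes effect (RHEL 6 style).
service iptables restart
```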
Is it the Host machine where we have these iptables rules? (shanks: needinfo) I am not aware of any firewalls on RHS images. If it is the Host, we may need an update to RHEV-H to get it fixed. Vijay, assigning the bug to you; please reassign it to the appropriate component/project.
Amar, both the RHS nodes and the hypervisor machines have iptables enabled by default.
Removing 2.0+ tag as this involves NFS.
I tried the following:
---
chown vdsm:kvm /exports/isos
chmod g+s /exports/isos
---
and it actually fixed that error for me.
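The permission fix in the comment above can be sketched like this. Hedged notes: /exports/isos is just the example path from that comment, and the uid/gid values in the comments reflect the usual vdsm:kvm mapping (36:36) on RHEV hosts; adjust for your environment and run as root.

```shell
# Sketch of the export-directory permission fix (example path; run as root).
EXPORT_DIR="${EXPORT_DIR:-/exports/isos}"

chown vdsm:kvm "$EXPORT_DIR"   # vdsm user / kvm group (uid/gid 36 on RHEV hosts)
chmod g+s "$EXPORT_DIR"        # setgid bit: new files inherit the kvm group
```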
Setting severity and a release flag.
Targeting for 2.1.z (Big Bend) U1.
Can we run a round of tests again? It's been more than 10 months since we last touched this.
The issue was about opening TCP port 111 during bootstrapping. That rule is in place now:

[root@rhss1 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:8080
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:2049
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38469
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:49152:49251
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@rhss1 ~]#

I see that now. Please move this to ON_QA for formal verification.
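A quick spot check that the bootstrap opened TCP port 111 can be done by grepping the live ruleset. This is a hedged sketch (requires root on the node; it only looks for a "tcp dpt:111" match in the INPUT chain, which is how the rule renders in `iptables -L -n` output above):

```shell
# Confirm the INPUT chain accepts TCP on port 111 (rpcbind/portmapper).
if iptables -L INPUT -n | grep -q 'tcp dpt:111'; then
    echo "TCP 111 is open"
else
    echo "TCP 111 is NOT open"
fi
```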
Per the last comment, moving this to ON_QA.
Shanks, I've edited the doc text input Pranith gave for errata. Can you please verify the doc text for technical accuracy?
Tested with glusterfs-3.4.0.57rhs-1.el6rhs.

Verified with the following steps:
1. Created a distribute-replicate volume with 2x2 bricks.
2. Optimized the volume for virt store, i.e.:
   gluster volume set <vol-name> group virt
   gluster volume set <vol-name> storage.owner-uid 36
   gluster volume set <vol-name> storage.owner-gid 36
3. Started the volume.
4. Created an Export/NFS domain with the above gluster volume.

I could create an Export/NFS domain with the gluster volume, then attach and activate it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days