Bug 881378
| Summary: | [RHEV-RHS] Mounting volumes as "Export/NFS" fails with "Internal error, Storage Connection doesn't exist." | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Gowrishankar Rajaiyan <grajaiya> |
| Component: | glusterd | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | 2.0 | CC: | ashetty, bhubbard, gkhachik, grajaiya, pkarampu, psriniva, rhs-bugs, sasundar, vagarwal, vbellur, vraman |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | glusterfs-3.4.0.49rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, mounting a Red Hat Storage volume as an "Export/NFS" domain using Red Hat Enterprise Virtualization Manager failed. With this fix, the iptables rules are set properly and Red Hat Storage volume NFS mount operations work as expected. | Story Points: | --- |
| Clone Of: | | Environment: | virt rhev integration |
| Last Closed: | 2014-02-25 07:23:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Gowrishankar Rajaiyan
2012-11-28 19:49:50 UTC
Is it the Host machine where we have iptables rules? (shanks: needinfo) I am not aware of any firewalls on RHS images. If it's the Host, we may need an update to RHEV-H to get it fixed. Vijay, assigning the bug to you; please assign it to the appropriate component/project.

Amar, both the RHS nodes and the hypervisor machines have iptables enabled by default.

Removing the 2.0+ tag as this involves NFS.

I tried with:

```
chown vdsm:kvm /exports/isos
chmod g+s /exports/isos
```

and it actually helped me fix that error.

Setting severity and a release flag. Targeting for 2.1.z (Big Bend) U1.

Can we run a round of tests again? It has been more than 10 months since we last touched this.

The issue is that TCP port 111 needs to be opened during bootstrapping.

```
[root@rhss1 ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:54321
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:161
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:24007
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:8080
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38465
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38466
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38467
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:2049
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38469
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:39543
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:55863
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:38468
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:963
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:965
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:4379
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:139
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:24009:24108
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpts:49152:49251
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@rhss1 ~]#
```

I see that now. Please move this to ON_QA for formal verification.

Per the last comment, moving this to ON_QA.

Shanks, I've edited the doc text input Pranith gave for the errata. Can you please verify the doc text for technical accuracy?

Tested with glusterfs-3.4.0.57rhs-1.el6rhs. Verified with the following steps:

1. Created a distribute-replicate volume with 2x2 bricks.
2. Optimized the volume for the virt store:

   ```
   gluster volume set <vol-name> group virt
   gluster volume set <vol-name> storage.owner-uid 36
   gluster volume set <vol-name> storage.owner-gid 36
   ```

3. Started the volume.
4. Created an EXPORT/NFS domain with the gluster volume created above.

I could create an EXPORT/NFS domain with the gluster volume, then attach and activate it.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
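The fix hinges on the rule ordering shown in the `iptables -L -n` listing above: NFS mounts only work if an ACCEPT rule for TCP port 111 (rpcbind) is traversed before the INPUT chain's terminal REJECT. As an illustration only (not part of the original bug report), a small check like the following sketch could parse that listing and confirm the condition; the function name and the trimmed sample rules are hypothetical:

```python
def port_open_before_reject(iptables_output, port):
    """Return True if the INPUT chain ACCEPTs tcp dpt:<port> before
    hitting a terminal REJECT rule (i.e. the port is reachable)."""
    in_input = False
    for line in iptables_output.splitlines():
        if line.startswith("Chain INPUT"):
            in_input = True
            continue
        if line.startswith("Chain "):   # start of a different chain
            in_input = False
            continue
        if not in_input:
            continue
        fields = line.split()
        if not fields:
            continue
        # Exact token match so dpt:111 does not also match dpt:1111.
        if fields[0] == "ACCEPT" and fields[1] == "tcp" and f"dpt:{port}" in fields:
            return True
        if fields[0] == "REJECT":       # rules after this are never reached
            return False
    return False


# Trimmed-down example of the listing from this bug:
SAMPLE = """Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:111
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
"""

print(port_open_before_reject(SAMPLE, 111))   # True: ACCEPT precedes REJECT
print(port_open_before_reject(SAMPLE, 2049))  # False: REJECT is hit first
```

In the actual fix (glusterfs-3.4.0.49rhs, per "Fixed In Version" above), the port is opened by the bootstrap-time iptables configuration itself; the check here only illustrates why the earlier rule set caused the mount failure.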