Bug 2176297
| Summary: | Unable to bind Ingress to port 2049 when NFS service uses that port | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Goutham Pacha Ravi <gouthamr> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | low | Docs Contact: | Akash Raj <akraj> |
| Priority: | low | | |
| Version: | 6.0 | CC: | adking, akraj, cephqe-warriors, lutzmex, mobisht, tserlin, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 7.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-18.2.1-2.el9cp | Doc Type: | Enhancement |
| Doc Text: | .The haproxy daemon binds to its frontend port only on the VIP created by the accompanying keepalived<br>With this enhancement, the Cephadm-deployed haproxy daemon binds its frontend port only to the VIP created by the accompanying keepalived, rather than to 0.0.0.0. This allows other services, such as an NFS daemon, to bind to port 2049 on other IPs on the same node (see the sketch after this table). | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-06-13 14:20:12 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2267614, 2298578, 2298579 | | |
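The behavior described in the Doc Text can be checked directly on the host running the ingress daemon: the haproxy frontend rendered by cephadm should bind to the VIP and frontend port rather than to 0.0.0.0. The one-line check below is only an illustrative sketch; it assumes the VIP 10.8.128.91 and port 2049 from the verification log further down, and the config path is an assumption based on cephadm's usual /var/lib/ceph/<fsid>/<daemon-name>/ layout, so it may differ per deployment.

[root@argo016 ~]# grep -A2 '^frontend' /var/lib/ceph/*/haproxy.nfs.cephfs.*/haproxy/haproxy.cfg   # expect "bind 10.8.128.91:2049" rather than "bind *:2049"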
Description
Goutham Pacha Ravi
2023-03-07 22:34:40 UTC
Should be fixed by https://github.com/ceph/ceph/pull/53008

Verified this BZ with:
# ceph --version
ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)
# rpm -qa | grep nfs
libnfsidmap-2.5.4-20.el9.x86_64
nfs-utils-2.5.4-20.el9.x86_64
nfs-ganesha-selinux-5.7-1.el9cp.noarch
nfs-ganesha-5.7-1.el9cp.x86_64
nfs-ganesha-rgw-5.7-1.el9cp.x86_64
nfs-ganesha-ceph-5.7-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.7-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.7-1.el9cp.x86_64
[ceph: root@argo016 /]# ceph nfs cluster create cephfs --placement=argo016 --ingress --virtual-ip 10.8.128.91/24 --port 2049
[ceph: root@argo016 /]# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo016",
        "ip": "10.8.128.216",
        "port": 12049
      }
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.91"
  }
}
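For reference, the single "ceph nfs cluster create ... --ingress" command above corresponds roughly to applying an ingress service specification on top of the nfs.cephfs service. The spec below is an illustrative sketch only: the field values are copied from the ceph nfs cluster info output above, the file name ingress-nfs-cephfs.yaml is made up for the example, and the spec cephadm actually generates may differ.

[ceph: root@argo016 /]# cat ingress-nfs-cephfs.yaml
service_type: ingress
service_id: nfs.cephfs
placement:
  hosts:
    - argo016
spec:
  backend_service: nfs.cephfs
  frontend_port: 2049
  monitor_port: 9049
  virtual_ip: 10.8.128.91/24
[ceph: root@argo016 /]# ceph orch apply -i ingress-nfs-cephfs.yaml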
[ceph: root@argo016 /]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 17s ago 6w count:1
ceph-exporter 5/5 3m ago 6w *
crash 5/5 3m ago 6w *
grafana ?:3000 1/1 17s ago 6w count:1
ingress.nfs.cephfs 10.8.128.91:2049,9049 2/2 17s ago 31s argo016
mds.cephfs 2/2 3m ago 6w label:mds
mgr 3/3 3m ago 6w argo016;argo018;argo019
mon 5/5 3m ago 6w argo016;argo018;argo019;argo020;argo021
nfs.cephfs ?:12049 1/1 17s ago 31s argo016
node-exporter ?:9100 5/5 3m ago 6w *
node-proxy 0/5 - 3w *
osd.all-available-devices 12 3m ago 6w *
prometheus ?:9095 1/1 17s ago 6w count:1
rgw.rgw.1 ?:80 1/1 3m ago 6w label:rgw
[ceph: root@argo016 /]# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.aiwqmz argo016 *:2049,9049 running (45s) 35s ago 45s 13.8M - 2.4.22-f8e3218 5de324a87c1c 54b64b8381dd
keepalived.nfs.cephfs.argo016.napevt argo016 running (43s) 35s ago 43s 1765k - 2.2.8 2910e3c7c546 f312d03e4a7e
nfs.cephfs.0.0.argo016.eafnta argo016 *:12049 running (46s) 35s ago 46s 49.7M - 5.7 c757eefdb83e b038abfa4eda
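Note that ceph orch ls and ceph orch ps report the frontend ports but not the address haproxy actually binds to. A socket-level check on argo016 (illustrative only; it was not part of the recorded verification) would look like this, where ss -tlnp lists listening TCP sockets with their owning processes:

[root@argo016 ~]# ss -tlnp | grep ':2049'   # haproxy should be listening on 10.8.128.91:2049 only, leaving port 2049 free on the node's other IPs; nfs.cephfs itself listens on 12049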
[ceph: root@argo016 /]# ceph -s
  cluster:
    id:     6146f1e0-bfb2-11ee-94e2-ac1f6b0a1844
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum argo016,argo020,argo019,argo018,argo021 (age 19h)
    mgr: argo018.zjjbrp(active, since 19h), standbys: argo019.etueju, argo016.vmjvvi
    mds: 1/1 daemons up, 1 standby
    osd: 12 osds: 12 up (since 19h), 12 in (since 5w)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 241 pgs
    objects: 21.79k objects, 20 GiB
    usage:   63 GiB used, 18 TiB / 18 TiB avail
    pgs:     241 active+clean

  io:
    client: 85 B/s rd, 0 op/s rd, 0 op/s wr
[ceph: root@argo016 /]# ceph nfs export create cephfs cephfs /ganesha1 cephfs --path=/
{
  "bind": "/ganesha1",
  "cluster": "cephfs",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/"
}
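The export can also be inspected after creation with the NFS module's query commands. The two commands below are illustrative and were not part of the recorded verification.

[ceph: root@argo016 /]# ceph nfs export ls cephfs
[ceph: root@argo016 /]# ceph nfs export info cephfs /ganesha1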
Client side mount
================
[root@argo021 mnt]# mkdir ganesha
[root@argo021 mnt]# mount -t nfs -o vers=4.1 10.8.128.91:/ganesha1 /mnt/ganesha/
[root@argo021 mnt]# cd /mnt/ganesha/
[root@argo021 ganesha]# touch f1
[root@argo021 ganesha]# cd ..
[root@argo021 mnt]# umount /mnt/ganesha
[root@argo021 mnt]# mount -t nfs -o vers=3 10.8.128.91:/ganesha1 /mnt/ganesha/
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.
[root@argo021 mnt]# cd /mnt/ganesha/
Moving this BZ to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925