This bug was initially created as a copy of Bug #2139484.

I am copying this bug because the original bug was closed as WONT_FIX; the issue is low-priority right now, with the OpenStack team deciding to test a deployment with NFS daemons listening on a non-standard port.

Description of problem:

When Ceph NFS is consumed via the ingress service, binding the ingress to port 2049 would allow clients to mount exports directly without having to know/specify a different port. However, we're unable to do so - "TCP Port(s) '2049,9049' required for haproxy already in use":

# ceph nfs cluster create cephfs --placement=devstack.localdomain --ingress --virtual-ip 192.168.24.75/24 --port 2049
NFS Cluster Created Successfully

# ceph orch ps
NAME                             HOST                  PORTS    STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
crash.devstack                   devstack.localdomain           running (2h)  4m ago     2h   7201k    -        17.2.3   0912465dcea5  b79b84f7c463
mgr.devstack.jykurw              devstack.localdomain           running (2h)  4m ago     2h   432M     -        17.2.3   0912465dcea5  e70c1bd70f43
mgr.devstack.localdomain.yworig  devstack.localdomain  *:9283   running (2h)  4m ago     2h   539M     -        17.2.3   0912465dcea5  4b0232569f76
mon.devstack.localdomain         devstack.localdomain           running (2h)  4m ago     2h   440M     2048M    17.2.3   0912465dcea5  1e53d1de366a
nfs.cephfs.0.0.devstack.kfvcel   devstack.localdomain  *:12049  running (4m)  4m ago     4m   9353k    -        4.0      0912465dcea5  dab8ed3fd5bb
osd.0                            devstack.localdomain           running (2h)  4m ago     2h   101M     4096M    17.2.3   0912465dcea5  88a226cee3e7

# ceph -W cephadm --watch-debug
  cluster:
    id:     15b994ed-4341-4522-94e9-56e75279659a
    health: HEALTH_WARN
            Failed to place 2 daemon(s)

  data:
    pools:   2 pools, 9 pgs
    objects: 5 objects, 449 KiB
    usage:   21 MiB used, 10 GiB / 10 GiB avail
    pgs:     9 active+clean

  io:
    client:   767 B/s rd, 511 B/s wr, 0 op/s rd, 0 op/s wr

Failure logs:

2022-09-20 07:42:53,430 7faa31849740 INFO Deploy daemon haproxy.nfs.cephfs.devstack.ultjer ...
2022-09-20 07:42:53,751 7faa31849740 DEBUG stat: 0 0
2022-09-20 07:42:53,906 7faa31849740 INFO Verifying port 2049 ...
2022-09-20 07:42:53,907 7faa31849740 WARNING Cannot bind to IP 0.0.0.0 port 2049: [Errno 98] Address already in use
2022-09-20 07:42:53,907 7faa31849740 INFO Verifying port 9049 ...
2022-09-20 07:42:53,907 7faa31849740 ERROR ERROR: TCP Port(s) '2049,9049' required for haproxy already in use

netstat shows the following:

LISTEN 0  64   0.0.0.0:2049  0.0.0.0:*
LISTEN 0  128  *:12049       *:*        users:(("ganesha.nfsd",pid=611540,fd=35))
LISTEN 0  64   [::]:2049     [::]:*

1. A process is bound on 2049, and it's not haproxy.
2. ganesha, which is bound on $port + [1], is bound on '*', which is a limitation of the "ceph nfs cluster" CLI.
3. The problem occurs even when using a spec instead of the CLI:

service_type: ingress
service_id: nfs.cephfs
placement:
  count: 1
spec:
  backend_service: nfs.cephfs
  frontend_port: 2049
  monitor_port: 8000
  virtual_ip: 192.168.24.75/24

Version-Release number of selected component (if applicable):
RHCS 6.x+
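For anyone reproducing this, the two observations above (a non-haproxy process already owning 2049, and ganesha listening on the wildcard address) can be confirmed with standard socket-inspection tools. This is a generic diagnostic sketch, not output from the affected host, and assumes ss and lsof are installed; the PIDs and process names will differ per deployment:

# show the listener on 2049 together with the owning process
ss -tlnp 'sport = :2049'

# alternative view with lsof
lsof -nP -iTCP:2049 -sTCP:LISTEN

# confirm the ganesha backend is bound to '*' (all addresses) on 12049
ss -tlnp 'sport = :12049'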
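For completeness, the ingress spec quoted in the description would normally be applied with the orchestrator rather than the "ceph nfs cluster" CLI; the file name below is only illustrative:

# save the spec to a file, e.g. ingress-nfs.yaml, then apply it:
ceph orch apply -i ingress-nfs.yaml

# check how cephadm scheduled the ingress and its haproxy daemon:
ceph orch ls ingress
ceph orch ps --daemon-type haproxy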
should be fixed by https://github.com/ceph/ceph/pull/53008
Verified this BZ with:

# ceph --version
ceph version 18.2.1-67.el9cp (e63e407e02b2616a7b4504a4f7c5a76f89aad3ce) reef (stable)

# rpm -qa | grep nfs
libnfsidmap-2.5.4-20.el9.x86_64
nfs-utils-2.5.4-20.el9.x86_64
nfs-ganesha-selinux-5.7-1.el9cp.noarch
nfs-ganesha-5.7-1.el9cp.x86_64
nfs-ganesha-rgw-5.7-1.el9cp.x86_64
nfs-ganesha-ceph-5.7-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.7-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.7-1.el9cp.x86_64

[ceph: root@argo016 /]# ceph nfs cluster create cephfs --placement=argo016 --ingress --virtual-ip 10.8.128.91/24 --port 2049

[ceph: root@argo016 /]# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo016",
        "ip": "10.8.128.216",
        "port": 12049
      }
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.91"
  }
}

[ceph: root@argo016 /]# ceph orch ls
NAME                       PORTS                  RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094                1/1  17s ago    6w   count:1
ceph-exporter                                          5/5  3m ago     6w   *
crash                                                  5/5  3m ago     6w   *
grafana                    ?:3000                      1/1  17s ago    6w   count:1
ingress.nfs.cephfs         10.8.128.91:2049,9049       2/2  17s ago    31s  argo016
mds.cephfs                                             2/2  3m ago     6w   label:mds
mgr                                                    3/3  3m ago     6w   argo016;argo018;argo019
mon                                                    5/5  3m ago     6w   argo016;argo018;argo019;argo020;argo021
nfs.cephfs                 ?:12049                     1/1  17s ago    31s  argo016
node-exporter              ?:9100                      5/5  3m ago     6w   *
node-proxy                                             0/5  -          3w   *
osd.all-available-devices                               12  3m ago     6w   *
prometheus                 ?:9095                      1/1  17s ago    6w   count:1
rgw.rgw.1                  ?:80                        1/1  3m ago     6w   label:rgw

[ceph: root@argo016 /]# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.aiwqmz     argo016  *:2049,9049  running (45s)  35s ago  45s  13.8M  -  2.4.22-f8e3218  5de324a87c1c  54b64b8381dd
keepalived.nfs.cephfs.argo016.napevt  argo016               running (43s)  35s ago  43s  1765k  -  2.2.8           2910e3c7c546  f312d03e4a7e
nfs.cephfs.0.0.argo016.eafnta         argo016  *:12049      running (46s)  35s ago  46s  49.7M  -  5.7             c757eefdb83e  b038abfa4eda

[ceph: root@argo016 /]# ceph -s
  cluster:
    id:     6146f1e0-bfb2-11ee-94e2-ac1f6b0a1844
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum argo016,argo020,argo019,argo018,argo021 (age 19h)
    mgr: argo018.zjjbrp(active, since 19h), standbys: argo019.etueju, argo016.vmjvvi
    mds: 1/1 daemons up, 1 standby
    osd: 12 osds: 12 up (since 19h), 12 in (since 5w)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 241 pgs
    objects: 21.79k objects, 20 GiB
    usage:   63 GiB used, 18 TiB / 18 TiB avail
    pgs:     241 active+clean

  io:
    client:   85 B/s rd, 0 op/s rd, 0 op/s wr

[ceph: root@argo016 /]# ceph nfs export create cephfs cephfs /ganesha1 cephfs --path=/
{
  "bind": "/ganesha1",
  "cluster": "cephfs",
  "fs": "cephfs",
  "mode": "RW",
  "path": "/"
}

Client side mount
=================
[root@argo021 mnt]# mkdir ganesha
[root@argo021 mnt]# mount -t nfs -o vers=4.1 10.8.128.91:/ganesha1 /mnt/ganesha/
[root@argo021 mnt]# cd /mnt/ganesha/
[root@argo021 ganesha]# touch f1
[root@argo021 ganesha]# cd ..
[root@argo021 mnt]# umount /mnt/ganesha
[root@argo021 mnt]# mount -t nfs -o vers=3 10.8.128.91:/ganesha1 /mnt/ganesha/
Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.
[root@argo021 mnt]# cd /mnt/ganesha/

Moving this BZ to verified state.
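As an additional client-side sanity check that traffic really goes through the ingress VIP on the standard port (rather than directly to the ganesha backend on 12049), something like the following could be run. This is only a sketch reusing the VIP and pseudo-path from the run above; the test file name is illustrative:

# mount via the VIP on the default NFS port (no port= option needed)
mount -t nfs -o vers=4.1 10.8.128.91:/ganesha1 /mnt/ganesha

# confirm the client's NFS connection targets port 2049 on the VIP
ss -tn dst 10.8.128.91:2049

# basic write sanity check through the mount
dd if=/dev/zero of=/mnt/ganesha/io_test bs=1M count=10 conv=fsync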
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925