Bug 2097490
| Summary: | [RFE] Support HAProxy's PROXY protocol | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Goutham Pacha Ravi <gouthamr> |
| Component: | NFS-Ganesha | Assignee: | Frank Filz <ffilz> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | medium | Docs Contact: | Rivka Pollack <rpollack> |
| Priority: | unspecified | ||
| Version: | 5.2 | CC: | akraj, amk, cephqe-warriors, ffilz, gfidente, gouthamr, hyelloji, kdreyer, kkeithle, mbenjamin, mkasturi, msaini, rpollack, sostapov, tchandra, tserlin, vdas, vimartin |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | 7.0 | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | nfs-ganesha-5.1-1.el9cp | Doc Type: | Enhancement |
| Doc Text: | .Support HAProxy's PROXY protocol With this enhancement, NFS daemons deployed behind an HAProxy-based ingress accept HAProxy's PROXY protocol header, which preserves the original client address across the load balancer. This allows load balancing while still enforcing client-based access restrictions. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-12-13 15:18:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2024129, 2176300, 2237662 | ||
|
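For context on what this enhancement carries over the wire: HAProxy's PROXY protocol prepends a short header to the TCP connection so the backend (here, the nfs-ganesha daemon) learns the real client address instead of the load balancer's. A minimal sketch of building and parsing a version 1 (text-form) header; the addresses used are made-up examples, not from any cluster in this report:

```python
# Build and parse an HAProxy PROXY protocol v1 header.
# Wire format: "PROXY <TCP4|TCP6> <src-ip> <dst-ip> <src-port> <dst-port>\r\n"

def build_proxy_v1(src_ip, dst_ip, src_port, dst_port, family="TCP4"):
    """Return the text header the proxy sends before the client's data."""
    return f"PROXY {family} {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

def parse_proxy_v1(header):
    """Extract the original client (src) and destination (dst) endpoints."""
    if not header.endswith("\r\n"):
        raise ValueError("missing CRLF terminator")
    parts = header[:-2].split(" ")
    if len(parts) != 6 or parts[0] != "PROXY":
        raise ValueError("not a PROXY v1 header")
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return {"family": family,
            "src": (src_ip, int(src_port)),
            "dst": (dst_ip, int(dst_port))}

# Example: a client at 192.0.2.10 reaching the NFS service on port 2049.
hdr = build_proxy_v1("192.0.2.10", "10.8.128.233", 51234, 2049)
info = parse_proxy_v1(hdr)
```

Because the backend sees `info["src"]`, per-client export restrictions keep working even though every TCP connection arrives from the HAProxy host.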
Description
Goutham Pacha Ravi
2022-06-15 19:28:03 UTC
From the perspective of the upstream effort for this, I will conduct most of the design through the GitHub issue and a Google Doc I have started. We can use this bug for more internal/downstream considerations.

The patch is posted upstream and will be in nfs-ganesha-5.

Hi,
When trying the "--ingress-mode" argument while creating a CephFS NFS cluster, I see the error below:
[root@ceph-amk-fs-tools-dcnfas-node7 ~]# ceph nfs cluster create cephfs --ingress --virtual-ip=10.0.209.126/30 --ingress-mode=haproxy-protocol
Invalid command: Unexpected argument '--ingress-mode=haproxy-protocol'
nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--port <int>] : Create an NFS Cluster
Error EINVAL: invalid command
[root@ceph-amk-fs-tools-dcnfas-node7 ~]# ceph versions
{
"mon": {
"ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
},
"mgr": {
"ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 2
},
"osd": {
"ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 12
},
"mds": {
"ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
},
"overall": {
"ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 20
}
}
[root@ceph-amk-fs-tools-dcnfas-node7 ~]#
Regards,
Amarnath
Missed the 6.1 z1 window; retargeting it to 6.1 z2.

Verified this BZ with:
# rpm -qa | grep nfs
libnfsidmap-2.5.4-18.el9.x86_64
nfs-utils-2.5.4-18.el9.x86_64
nfs-ganesha-selinux-5.5-1.el9cp.noarch
nfs-ganesha-5.5-1.el9cp.x86_64
nfs-ganesha-rgw-5.5-1.el9cp.x86_64
nfs-ganesha-ceph-5.5-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.5-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.5-1.el9cp.x86_64
# ceph --version
ceph version 18.2.0-43.el9cp (1aeeec9f1ff5ae66acacb620ef975527114c8f6e) reef (stable)
1. Ingress Mode - haproxy-protocol
# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=haproxy-protocol
# ceph nfs cluster info cephfs
{
"cephfs": {
"backend": [
{
"hostname": "argo016",
"ip": "10.8.128.216",
"port": 12049
},
{
"hostname": "argo018",
"ip": "10.8.128.218",
"port": 12049
},
{
"hostname": "argo019",
"ip": "10.8.128.219",
"port": 12049
}
],
"monitor_port": 9049,
"port": 2049,
"virtual_ip": "10.8.128.233"
}
}
# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.jhjtwm argo016 *:2049,9049 running (111s) - 111s 14.4M - <unknown> bda92490ac6c b523fe94a30d
haproxy.nfs.cephfs.argo018.jsdlco argo018 *:2049,9049 running (112s) - 112s 14.4M - <unknown> bda92490ac6c f862893a64c9
haproxy.nfs.cephfs.argo019.luzpnw argo019 *:2049,9049 running (109s) - 109s 16.4M - <unknown> bda92490ac6c efac9d62edcb
keepalived.nfs.cephfs.argo016.qgrowd argo016 running (105s) - 105s 1770k - 2.2.4 b79b516c07ed 589fd86c4dd3
keepalived.nfs.cephfs.argo018.knkctj argo018 running (108s) - 108s 1757k - 2.2.4 b79b516c07ed b64ee1f64868
keepalived.nfs.cephfs.argo019.dsygzd argo019 running (106s) - 106s 1765k - 2.2.4 b79b516c07ed ae51978eb0b4
nfs.cephfs.0.0.argo018.vtjlmz argo018 *:12049 running (2m) - 2m 47.3M - 5.5 1f29ade573da faa320d9052c
nfs.cephfs.1.0.argo019.qomocx argo019 *:12049 running (119s) - 119s 49.5M - 5.5 1f29ade573da c9c8c928df0e
nfs.cephfs.2.0.argo016.jfjcee argo016 *:12049 running (114s) - 114s 51.4M - 5.5 1f29ade573da 758eba5eac42
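The `ceph nfs cluster info` JSON above is easy to consume programmatically, for example to enumerate the backend daemons sitting behind the ingress virtual IP. A small sketch; the JSON literal is copied from the output above:

```python
import json

# JSON copied from the `ceph nfs cluster info cephfs` output above.
cluster_info = json.loads("""
{
  "cephfs": {
    "backend": [
      {"hostname": "argo016", "ip": "10.8.128.216", "port": 12049},
      {"hostname": "argo018", "ip": "10.8.128.218", "port": 12049},
      {"hostname": "argo019", "ip": "10.8.128.219", "port": 12049}
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.233"
  }
}
""")

info = cluster_info["cephfs"]
# Clients mount via virtual_ip:port; HAProxy forwards to these backends.
backends = [f'{b["ip"]}:{b["port"]}' for b in info["backend"]]
```

Note the split visible in the `ceph orch ps` output as well: HAProxy listens on the public port 2049 while each nfs-ganesha backend listens on 12049.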
2. Ingress Mode - haproxy-standard
# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=haproxy-standard
# ceph nfs cluster info cephfs
{
"cephfs": {
"backend": [
{
"hostname": "argo016",
"ip": "10.8.128.216",
"port": 12049
},
{
"hostname": "argo018",
"ip": "10.8.128.218",
"port": 12049
},
{
"hostname": "argo019",
"ip": "10.8.128.219",
"port": 12049
}
],
"monitor_port": 9049,
"port": 2049,
"virtual_ip": "10.8.128.233"
}
}
# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.grlovq argo016 *:2049,9049 running (18s) - 18s 15.8M - <unknown> bda92490ac6c b7389fdacd52
haproxy.nfs.cephfs.argo018.npuehw argo018 *:2049,9049 running (19s) - 19s 13.8M - <unknown> bda92490ac6c 42ea2d02d6f5
haproxy.nfs.cephfs.argo019.pelnma argo019 *:2049,9049 running (16s) - 16s 15.8M - <unknown> bda92490ac6c c9707087b3bf
keepalived.nfs.cephfs.argo016.vcclop argo016 running (12s) - 12s 1770k - 2.2.4 b79b516c07ed 084af7083aa0
keepalived.nfs.cephfs.argo018.ucdbke argo018 running (15s) - 15s 1765k - 2.2.4 b79b516c07ed e05996589274
keepalived.nfs.cephfs.argo019.lszqzf argo019 running (13s) - 13s 1770k - 2.2.4 b79b516c07ed 766bfc0034b3
nfs.cephfs.0.0.argo018.mxcsei argo018 *:12049 running (30s) - 30s 51.2M - 5.5 1f29ade573da b89f099285f8
nfs.cephfs.1.0.argo019.zxnexe argo019 *:12049 running (25s) - 25s 51.1M - 5.5 1f29ade573da 8fd8f6df882b
nfs.cephfs.2.0.argo016.gmxoir argo016 *:12049 running (21s) - 21s 48.9M - 5.5 1f29ade573da e9998cf5d133
3. Ingress Mode - keepalive-only
# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=keepalive-only
# ceph nfs cluster info cephfs
{
"cephfs": {
"backend": [
{
"hostname": "argo018",
"ip": "10.8.128.218",
"port": 2049
}
],
"port": 9049,
"virtual_ip": "10.8.128.233"
}
}
# ceph orch ps | grep nfs
keepalived.nfs.cephfs.argo018.seteei argo018 *:9049 running (116s) - 116s 3976k - 2.2.4 b79b516c07ed 0295a4883ca3
nfs.cephfs.0.0.argo018.divtdk argo018 *:2049 running (117s) - 117s 70.2M - 5.5 1f29ade573da 25668ce87805
4. Ingress Mode - Default
# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=default
# ceph nfs cluster info cephfs
{
"cephfs": {
"backend": [
{
"hostname": "argo016",
"ip": "10.8.128.216",
"port": 12049
},
{
"hostname": "argo018",
"ip": "10.8.128.218",
"port": 12049
},
{
"hostname": "argo019",
"ip": "10.8.128.219",
"port": 12049
}
],
"monitor_port": 9049,
"port": 2049,
"virtual_ip": "10.8.128.233"
}
}
# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.xgumba argo016 *:2049,9049 running (26s) - 25s 15.8M - <unknown> bda92490ac6c 96fa269db7e9
haproxy.nfs.cephfs.argo018.clhbvl argo018 *:2049,9049 running (27s) - 27s 15.9M - <unknown> bda92490ac6c 98534dc35353
haproxy.nfs.cephfs.argo019.hlkxyk argo019 *:2049,9049 running (24s) - 24s 15.8M - <unknown> bda92490ac6c edaa2bf020af
keepalived.nfs.cephfs.argo016.umllks argo016 running (20s) - 20s 1774k - 2.2.4 b79b516c07ed cfb9aad257c0
keepalived.nfs.cephfs.argo018.gxzxxh argo018 running (23s) - 23s 1761k - 2.2.4 b79b516c07ed 5688750a38f0
keepalived.nfs.cephfs.argo019.rjkcvb argo019 running (21s) - 21s 1765k - 2.2.4 b79b516c07ed 1ce1a44088c6
nfs.cephfs.0.0.argo018.nuyova argo018 *:12049 running (38s) - 38s 49.1M - 5.5 1f29ade573da ad43576affad
nfs.cephfs.1.0.argo019.ngokad argo019 *:12049 running (33s) - 33s 49.0M - 5.5 1f29ade573da 0f1817fcd2ef
nfs.cephfs.2.0.argo016.adppvo argo016 *:12049 running (29s) - 29s 49.0M - 5.5 1f29ade573da d8c2cd2c3842
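Comparing the four runs above: haproxy-protocol, haproxy-standard, and default all deploy haproxy (listening on 2049) and keepalived in front of nfs daemons on 12049, while keepalive-only runs a single nfs daemon on 2049 directly, with keepalived only managing the virtual IP. A sketch capturing that observation (the per-mode layout is summarized from the `ceph orch ps` listings above):

```python
# Daemon layout observed per ingress mode in the verification runs above.
ingress_modes = {
    "haproxy-protocol": {"haproxy": True,  "keepalived": True, "nfs_port": 12049},
    "haproxy-standard": {"haproxy": True,  "keepalived": True, "nfs_port": 12049},
    "keepalive-only":   {"haproxy": False, "keepalived": True, "nfs_port": 2049},
    "default":          {"haproxy": True,  "keepalived": True, "nfs_port": 12049},
}

# Only haproxy-protocol sends the PROXY header to the backends, so it is
# the mode that preserves real client addresses for access restrictions.
proxy_header_modes = {"haproxy-protocol"}
```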
Yes, this needs to be in 7.0. Doc type and doc text look like they have already been provided.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780