Bug 2097490 - [RFE] Support HAProxy's PROXY protocol
Summary: [RFE] Support HAProxy's PROXY protocol
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NFS-Ganesha
Version: 5.2
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: Frank Filz
QA Contact: Manisha Saini
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2024129 2176300 2237662
 
Reported: 2022-06-15 19:28 UTC by Goutham Pacha Ravi
Modified: 2023-12-13 15:19 UTC (History)
CC List: 18 users

Fixed In Version: nfs-ganesha-5.1-1.el9cp
Doc Type: Enhancement
Doc Text:
.Support for HAProxy's PROXY protocol
With this enhancement, NFS-Ganesha supports HAProxy's PROXY protocol, so it can be deployed behind HAProxy load-balancing servers. This allows load balancing while also enabling client IP restrictions on access.
Clone Of:
Environment:
Last Closed: 2023-12-13 15:18:56 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github nfs-ganesha ntirpc issues 252 0 None open [RFE] Support HAProxy's PROXY protocol 2022-06-15 19:28:02 UTC
Red Hat Issue Tracker RHCEPH-4549 0 None None None 2022-06-15 19:46:18 UTC
Red Hat Product Errata RHBA-2023:7780 0 None None None 2023-12-13 15:19:05 UTC

Description Goutham Pacha Ravi 2022-06-15 19:28:03 UTC
Description of problem:

To protect against node and service failures, it is desirable to deploy NFS-Ganesha in active/active highly available configurations. The Ceph community has designed an architecture that puts one or more nfs-ganesha servers behind an ingress service. The ingress service comprises haproxy and keepalived [1].
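
For reference, a cephadm ingress service of the kind described in [1] is typically specified along these lines; the service id, placement count, ports and virtual IP below are illustrative assumptions, not values from this report:

  service_type: ingress
  service_id: nfs.cephfs
  placement:
    count: 2
  spec:
    backend_service: nfs.cephfs
    frontend_port: 2049
    monitor_port: 9049
    virtual_ip: 10.0.0.100/24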

While the solution works well, it is less secure because users cannot enforce client IP restrictions in this architecture. HAProxy terminates client connections, so NFS-Ganesha sees all traffic as originating from the HAProxy node(s), which invalidates any IP addresses specified in the CLIENT block of NFS-Ganesha export configs.
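
For illustration, such restrictions are normally expressed in a CLIENT block of an export definition like the sketch below; the export ID, paths and address range are hypothetical:

  EXPORT {
      Export_Id = 100;
      Path = "/volumes/group1/share1";
      Pseudo = "/share1";
      Protocols = 4;
      FSAL { Name = CEPH; }
      CLIENT {
          # only these addresses should be granted read/write access
          Clients = 192.168.10.0/24;
          Access_Type = RW;
      }
  }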

It would be feasible to use the IP address of the HAProxy node in the CLIENT block, but this would allow anyone with access to the HAProxy node to access the exported filesystem.

One solution could be to support the PROXY protocol [2][3], where packets originating from HAProxy carry an additional header containing the source client's IP address. NFS-Ganesha could parse this IP address and treat it as the real source of the request when enforcing client restrictions.
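
As a concrete illustration, a PROXY protocol v1 header is a single text line prepended to the TCP stream, e.g. (hypothetical addresses):

  PROXY TCP4 192.168.10.25 10.8.128.233 51234 2049\r\n

HAProxy emits such a header when the backend server line carries the send-proxy option (or send-proxy-v2 for the binary v2 format), for example in a hypothetical backend snippet:

  backend nfs_ganesha
      mode tcp
      server ganesha1 10.8.128.216:12049 send-proxy-v2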

Open source projects that would benefit from this solution include Ceph, OpenStack Manila and Rook, among a host of others.

[1] https://docs.ceph.com/en/latest/cephadm/services/nfs/
[2] https://www.haproxy.com/blog/haproxy/proxy-protocol/
[3] http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

Comment 1 Frank Filz 2022-06-16 18:09:36 UTC
From the perspective of the upstream effort for this, I will conduct most of the design through the github issue and a Google Doc I have started.

We can use this for more internal/downstream considerations.

Comment 5 Kaleb KEITHLEY 2023-01-24 22:22:16 UTC
The patch is posted upstream; it will be in nfs-ganesha-5.

Comment 15 Amarnath 2023-06-06 10:22:35 UTC
Hi,

When trying the "--ingress-mode" argument while creating a CephFS NFS cluster, I see the error below:

[root@ceph-amk-fs-tools-dcnfas-node7 ~]# ceph nfs cluster create cephfs --ingress --virtual-ip=10.0.209.126/30 --ingress-mode=haproxy-protocol
Invalid command: Unexpected argument '--ingress-mode=haproxy-protocol'
nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--port <int>] :  Create an NFS Cluster
Error EINVAL: invalid command
[root@ceph-amk-fs-tools-dcnfas-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-70.el9cp (fe62dcdbb2c6e05782a3e2b67d025b84ff5047cc) quincy (stable)": 20
    }
}
[root@ceph-amk-fs-tools-dcnfas-node7 ~]#

Regards,
Amarnath

Comment 20 Scott Ostapovicz 2023-07-12 13:34:10 UTC
Missed the 6.1 z1 window, retargeting it to 6.1 z2.

Comment 30 Manisha Saini 2023-09-25 19:33:46 UTC
Verified this BZ with

# rpm -qa | grep nfs
libnfsidmap-2.5.4-18.el9.x86_64
nfs-utils-2.5.4-18.el9.x86_64
nfs-ganesha-selinux-5.5-1.el9cp.noarch
nfs-ganesha-5.5-1.el9cp.x86_64
nfs-ganesha-rgw-5.5-1.el9cp.x86_64
nfs-ganesha-ceph-5.5-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.5-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.5-1.el9cp.x86_64

# ceph --version
ceph version 18.2.0-43.el9cp (1aeeec9f1ff5ae66acacb620ef975527114c8f6e) reef (stable)

1. Ingress Mode - haproxy-protocol

# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=haproxy-protocol

# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo016",
        "ip": "10.8.128.216",
        "port": 12049
      },
      {
        "hostname": "argo018",
        "ip": "10.8.128.218",
        "port": 12049
      },
      {
        "hostname": "argo019",
        "ip": "10.8.128.219",
        "port": 12049
      }
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.233"
  }
}

# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.jhjtwm     argo016  *:2049,9049       running (111s)          -  111s    14.4M        -  <unknown>        bda92490ac6c  b523fe94a30d
haproxy.nfs.cephfs.argo018.jsdlco     argo018  *:2049,9049       running (112s)          -  112s    14.4M        -  <unknown>        bda92490ac6c  f862893a64c9
haproxy.nfs.cephfs.argo019.luzpnw     argo019  *:2049,9049       running (109s)          -  109s    16.4M        -  <unknown>        bda92490ac6c  efac9d62edcb
keepalived.nfs.cephfs.argo016.qgrowd  argo016                    running (105s)          -  105s    1770k        -  2.2.4            b79b516c07ed  589fd86c4dd3
keepalived.nfs.cephfs.argo018.knkctj  argo018                    running (108s)          -  108s    1757k        -  2.2.4            b79b516c07ed  b64ee1f64868
keepalived.nfs.cephfs.argo019.dsygzd  argo019                    running (106s)          -  106s    1765k        -  2.2.4            b79b516c07ed  ae51978eb0b4
nfs.cephfs.0.0.argo018.vtjlmz         argo018  *:12049           running (2m)            -    2m    47.3M        -  5.5              1f29ade573da  faa320d9052c
nfs.cephfs.1.0.argo019.qomocx         argo019  *:12049           running (119s)          -  119s    49.5M        -  5.5              1f29ade573da  c9c8c928df0e
nfs.cephfs.2.0.argo016.jfjcee         argo016  *:12049           running (114s)          -  114s    51.4M        -  5.5              1f29ade573da  758eba5eac42

2. Ingress Mode - haproxy-standard

# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=haproxy-standard

# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo016",
        "ip": "10.8.128.216",
        "port": 12049
      },
      {
        "hostname": "argo018",
        "ip": "10.8.128.218",
        "port": 12049
      },
      {
        "hostname": "argo019",
        "ip": "10.8.128.219",
        "port": 12049
      }
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.233"
  }
}

# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.grlovq     argo016  *:2049,9049       running (18s)          -  18s    15.8M        -  <unknown>        bda92490ac6c  b7389fdacd52
haproxy.nfs.cephfs.argo018.npuehw     argo018  *:2049,9049       running (19s)          -  19s    13.8M        -  <unknown>        bda92490ac6c  42ea2d02d6f5
haproxy.nfs.cephfs.argo019.pelnma     argo019  *:2049,9049       running (16s)          -  16s    15.8M        -  <unknown>        bda92490ac6c  c9707087b3bf
keepalived.nfs.cephfs.argo016.vcclop  argo016                    running (12s)          -  12s    1770k        -  2.2.4            b79b516c07ed  084af7083aa0
keepalived.nfs.cephfs.argo018.ucdbke  argo018                    running (15s)          -  15s    1765k        -  2.2.4            b79b516c07ed  e05996589274
keepalived.nfs.cephfs.argo019.lszqzf  argo019                    running (13s)          -  13s    1770k        -  2.2.4            b79b516c07ed  766bfc0034b3
nfs.cephfs.0.0.argo018.mxcsei         argo018  *:12049           running (30s)          -  30s    51.2M        -  5.5              1f29ade573da  b89f099285f8
nfs.cephfs.1.0.argo019.zxnexe         argo019  *:12049           running (25s)          -  25s    51.1M        -  5.5              1f29ade573da  8fd8f6df882b
nfs.cephfs.2.0.argo016.gmxoir         argo016  *:12049           running (21s)          -  21s    48.9M        -  5.5              1f29ade573da  e9998cf5d133


3. Ingress Mode - keepalive-only

# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=keepalive-only

# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo018",
        "ip": "10.8.128.218",
        "port": 2049
      }
    ],
    "port": 9049,
    "virtual_ip": "10.8.128.233"
  }
}

# ceph orch ps | grep nfs
keepalived.nfs.cephfs.argo018.seteei  argo018  *:9049            running (116s)          -  116s    3976k        -  2.2.4            b79b516c07ed  0295a4883ca3
nfs.cephfs.0.0.argo018.divtdk         argo018  *:2049            running (117s)          -  117s    70.2M        -  5.5              1f29ade573da  25668ce87805


4. Ingress Mode - Default

# ceph nfs cluster create cephfs argo016,argo018,argo019 --ingress --virtual-ip=10.8.128.233/21 --ingress-mode=default

# ceph nfs cluster info cephfs
{
  "cephfs": {
    "backend": [
      {
        "hostname": "argo016",
        "ip": "10.8.128.216",
        "port": 12049
      },
      {
        "hostname": "argo018",
        "ip": "10.8.128.218",
        "port": 12049
      },
      {
        "hostname": "argo019",
        "ip": "10.8.128.219",
        "port": 12049
      }
    ],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.128.233"
  }
}

# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo016.xgumba     argo016  *:2049,9049       running (26s)          -  25s    15.8M        -  <unknown>        bda92490ac6c  96fa269db7e9
haproxy.nfs.cephfs.argo018.clhbvl     argo018  *:2049,9049       running (27s)          -  27s    15.9M        -  <unknown>        bda92490ac6c  98534dc35353
haproxy.nfs.cephfs.argo019.hlkxyk     argo019  *:2049,9049       running (24s)          -  24s    15.8M        -  <unknown>        bda92490ac6c  edaa2bf020af
keepalived.nfs.cephfs.argo016.umllks  argo016                    running (20s)          -  20s    1774k        -  2.2.4            b79b516c07ed  cfb9aad257c0
keepalived.nfs.cephfs.argo018.gxzxxh  argo018                    running (23s)          -  23s    1761k        -  2.2.4            b79b516c07ed  5688750a38f0
keepalived.nfs.cephfs.argo019.rjkcvb  argo019                    running (21s)          -  21s    1765k        -  2.2.4            b79b516c07ed  1ce1a44088c6
nfs.cephfs.0.0.argo018.nuyova         argo018  *:12049           running (38s)          -  38s    49.1M        -  5.5              1f29ade573da  ad43576affad
nfs.cephfs.1.0.argo019.ngokad         argo019  *:12049           running (33s)          -  33s    49.0M        -  5.5              1f29ade573da  0f1817fcd2ef
nfs.cephfs.2.0.argo016.adppvo         argo016  *:12049           running (29s)          -  29s    49.0M        -  5.5              1f29ade573da  d8c2cd2c3842
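
As a follow-up check (not part of the output above), one could mount through the virtual IP from a test client and confirm that export-level client restrictions are evaluated against the real client address; the pseudo path here is a hypothetical example:

# mount -t nfs -o vers=4.1 10.8.128.233:/share1 /mnt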

Comment 32 Frank Filz 2023-10-27 23:46:17 UTC
Yes, this needs to be in 7.0. Doc type and doc text look like they have already been provided.

Comment 34 errata-xmlrpc 2023-12-13 15:18:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

