Bug 2244099 - CVE-2023-38408 vulnerabilities found in ODF 4.12.8 containers
Summary: CVE-2023-38408 vulnerabilities found in ODF 4.12.8 containers
Keywords:
Status: CLOSED DUPLICATE of bug 2244765
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: build
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Boris Ranto
QA Contact: Petr Balogh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-10-13 19:26 UTC by George Law
Modified: 2023-12-05 19:33 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-03 09:24:25 UTC
Embargoed:


Attachments: None

Description George Law 2023-10-13 19:26:43 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

A previous Slack discussion (IBM Slack #cae-team-collaboration channel) regarding CVE-2023-38408 indicated that this vulnerability should be fixed in the next ODF update (4.12.8 was released on 9/27/2023).

Based on what was said in Slack (the screenshot has since been deleted), this is the information relayed to the customer:

~~~
Engineering described their build process for when a new ODF version is released.

Before the new ODF code is laid down, the underlying ubi8 container image is first updated (e.g., dnf update); the new ODF code is then placed on top of the updated image.

Thus, the ODF 4.12.8 containers should contain the openssh fixes for CVE-2023-38408.
~~~
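
For illustration only, the ordering described above roughly corresponds to a build like the following sketch (the image name, content path, and tooling here are placeholders, not the actual ODF build pipeline):

~~~
# Illustrative sketch only -- placeholder names, not the real ODF build process.
ctr=$(buildah from registry.access.redhat.com/ubi8/ubi)
buildah run "$ctr" -- dnf -y update              # 1. refresh the base so installed packages pick up CVE fixes
buildah copy "$ctr" ./odf-content /usr/local/    # 2. then layer the new ODF code on top (hypothetical path)
buildah commit "$ctr" odf-component:candidate
~~~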

After updating to OCP 4.12.35 and ODF 4.12.8, the customer finds that they are still vulnerable to CVE-2023-38408.


Version of all relevant components (if applicable):
These are my versions from cluster bot, but they match the customer's versions in case TS014156544:

~~~
$ oc get clusterversion;oc get csv; 
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.35   True        False         9m35s   Cluster version is 4.12.35
NAME                                    DISPLAY                       VERSION        REPLACES                                PHASE
mcg-operator.v4.12.8-rhodf              NooBaa Operator               4.12.8-rhodf   mcg-operator.v4.12.7-rhodf              Succeeded
ocs-operator.v4.12.8-rhodf              OpenShift Container Storage   4.12.8-rhodf   ocs-operator.v4.12.7-rhodf              Succeeded
odf-csi-addons-operator.v4.12.8-rhodf   CSI Addons                    4.12.8-rhodf   odf-csi-addons-operator.v4.12.7-rhodf   Succeeded
odf-operator.v4.12.8-rhodf              OpenShift Data Foundation     4.12.8-rhodf   odf-operator.v4.12.7-rhodf              Succeeded

~~~

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

The customer is in danger of falling out of compliance due to open violations with CVSS >= 9.
 
Is there any workaround available to the best of your knowledge?

Not known

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Is this issue reproducible?

yes

Can this issue be reproduced from the UI?
yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Using cluster bot, launch a 4.12.35 cluster: "launch 4.12.35 aws,xlarge"
2. Install the ODF operator and then the storage cluster.
3. Once all the ODF-related pods come up and the cluster is healthy, verify the openssh versions on the nodes and inside the pods (a consolidated check is sketched below):
    a. 'oc debug node/X -- rpm -qa | grep openssh'
    b. 'oc rsh <rook-ceph-pod> rpm -qa | grep openssh'
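
For step 3, a consolidated check along these lines can be used (a sketch, assuming the default openshift-storage namespace and comparing against the RHSA-2023:4419 openssh build):

~~~
# Sketch: report the openssh build in every rook-ceph pod and flag anything
# that does not match the RHSA-2023:4419 build. Assumes namespace openshift-storage.
FIXED="openssh-8.0p1-19.el8_8"
for pod in $(oc -n openshift-storage get pods -o name | grep rook-ceph | grep -v prepare); do
  ver=$(oc -n openshift-storage rsh "$pod" rpm -q openssh 2>/dev/null)
  echo "$pod: ${ver:-<no openssh package / pod not running>}"
  case "$ver" in
    "$FIXED"*) ;;                      # already at the fixed build
    *) echo "  -> not at $FIXED" ;;
  esac
done
~~~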


Actual results:

Node level:
~~~ 
$ oc debug node/ip-10-0-204-217.us-west-1.compute.internal --  rpm -qa |grep openssh
Temporary namespace openshift-debug-l268q is created for debugging node...
Starting pod/ip-10-0-204-217us-west-1computeinternal-debug ...
To use host binaries, run `chroot /host`
openssh-clients-8.0p1-15.el8_6.x86_64
openssh-8.0p1-15.el8_6.x86_64
~~~

Pod level :
~~~
$  for i in $(oc get pods -o wide |grep ^rook-ceph|grep -v prepare|awk '{print $1}'); do echo $i; oc rsh $i rpm -qa |grep openssh; done
rook-ceph-crashcollector-384c8773f444b2dab41b30c101699d0e-smdtl
Defaulted container "ceph-crash" out of: ceph-crash, make-container-crash-dir (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-crashcollector-7f10271c7c2b06241b5f9ea34c05a67a-5l7rj
Defaulted container "ceph-crash" out of: ceph-crash, make-container-crash-dir (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-crashcollector-baf349f64060b968d8d4ba90e0aba295-rm2qg
Defaulted container "ceph-crash" out of: ceph-crash, make-container-crash-dir (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-54dfc595t79c8
Defaulted container "mds" out of: mds, log-collector, chown-container-data-dir (init)
Error from server (BadRequest): pod rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-54dfc595t79c8 does not have a host assigned
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7f976f56ltrd9
Defaulted container "mds" out of: mds, log-collector, chown-container-data-dir (init)
Error from server (BadRequest): pod rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7f976f56ltrd9 does not have a host assigned
rook-ceph-mgr-a-c4886d9c7-zgpj5
Defaulted container "mgr" out of: mgr, log-collector, chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-mon-a-58cc8b955f-vd4lk
Defaulted container "mon" out of: mon, log-collector, chown-container-data-dir (init), init-mon-fs (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-mon-b-5577dcfbdf-fnrb6
Defaulted container "mon" out of: mon, log-collector, chown-container-data-dir (init), init-mon-fs (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-mon-c-549ffb6575-ndlwm
Defaulted container "mon" out of: mon, log-collector, chown-container-data-dir (init), init-mon-fs (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-operator-56f69c4547-mt7p5
openssh-8.0p1-19.el8_8.x86_64
openssh-clients-8.0p1-19.el8_8.x86_64
rook-ceph-osd-0-79d58b8b67-sszv6
Defaulted container "osd" out of: osd, log-collector, blkdevmapper (init), activate (init), expand-bluefs (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-osd-1-5497c6b679-8ppn5
Defaulted container "osd" out of: osd, log-collector, blkdevmapper (init), activate (init), expand-bluefs (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64
rook-ceph-osd-2-68c656988f-nllc4
Defaulted container "osd" out of: osd, log-collector, blkdevmapper (init), activate (init), expand-bluefs (init), chown-container-data-dir (init)
openssh-8.0p1-17.el8_7.x86_64
openssh-clients-8.0p1-17.el8_7.x86_64

~~~


Expected results:

Expected to see the fixed versions (e.g., openssh-8.0p1-19.el8_8.x86_64.rpm) listed in the errata: https://access.redhat.com/errata/RHSA-2023:4419
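
As a sanity check beyond comparing NVRs by eye, the package changelog can be inspected inside a pod (oc rsh) or a node debug shell; Red Hat errata builds normally reference the CVE id there:

~~~
# Run inside a rook-ceph pod or node debug shell; absence of the CVE id in the
# changelog suggests the build does not carry the fix.
rpm -q --changelog openssh | grep -i 'CVE-2023-38408' \
  || echo "CVE-2023-38408 not mentioned in the openssh changelog -- likely unfixed"
~~~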

Additional info:

Comment 15 Boris Ranto 2023-11-03 09:24:25 UTC
This is just about updating the rhceph image, so I'm closing it as a duplicate of the 4.12.z BZ that updates the rhceph image.

*** This bug has been marked as a duplicate of bug 2244765 ***

