Description of problem (please be detailed as possible and provide log snippets):

When you run the must-gather command with the help flag (-h or --help), you expect to see the command's parameters explained. There is no need to run the pre-install.sh script in that case:
https://github.com/red-hat-storage/odf-must-gather/blob/main/collection-scripts/gather#L131

I created a PR to fix it and tested it with a private image:
https://github.com/red-hat-storage/odf-must-gather/pull/119

Official/dev image output:

$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.15 -- /usr/bin/gather --help
[must-gather ] OUT Using must-gather plug-in image: quay.io/rhceph-dev/ocs-must-gather:latest-4.15
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: f61ece64-4f45-4a47-84de-664e52699404
ClusterVersion: Stable at "4.15.0-0.nightly-2024-02-16-235514"
ClusterOperators: All healthy and stable
[must-gather ] OUT namespace/openshift-must-gather-9vswq created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-dd4c6 created
[must-gather ] OUT pod for plug-in image quay.io/rhceph-dev/ocs-must-gather:latest-4.15 created
[must-gather-wqfg7] POD 2024-02-21T14:49:26.414179251Z collection started at: 02:49:26 PM
[must-gather-wqfg7] POD 2024-02-21T14:49:27.543933438Z checking for existing must-gather resource
[must-gather-wqfg7] POD 2024-02-21T14:49:27.731376476Z deleting existing must-gather resource
[must-gather-wqfg7] POD 2024-02-21T14:49:27.842523619Z pod "must-gather-f9r82-helper" deleted
[must-gather-wqfg7] POD 2024-02-21T14:49:58.988980023Z creating helper pod
[must-gather-wqfg7] POD 2024-02-21T14:50:02.969590209Z pod/must-gather-wqfg7-helper created
[must-gather-wqfg7] POD 2024-02-21T14:50:02.987557402Z debugging node compute-0
[must-gather-wqfg7] POD 2024-02-21T14:50:02.990396932Z debugging node compute-1
[must-gather-wqfg7] POD 2024-02-21T14:50:02.990396932Z debugging node compute-2
[must-gather-wqfg7] POD 2024-02-21T14:50:03.370440263Z Starting pod/compute-2-debug ...
[must-gather-wqfg7] POD 2024-02-21T14:50:03.380873825Z To use host binaries, run `chroot /host`
[must-gather-wqfg7] POD 2024-02-21T14:50:03.383539785Z pod/must-gather-wqfg7-helper labeled
[must-gather-wqfg7] POD 2024-02-21T14:50:03.406772235Z Starting pod/compute-0-debug ...
[must-gather-wqfg7] POD 2024-02-21T14:50:03.406772235Z To use host binaries, run `chroot /host`
[must-gather-wqfg7] POD 2024-02-21T14:50:03.427746383Z waiting for 139 140 141 142 to terminate
[must-gather-wqfg7] POD 2024-02-21T14:50:03.451455208Z Starting pod/compute-1-debug ...
[must-gather-wqfg7] POD 2024-02-21T14:50:03.452883213Z To use host binaries, run `chroot /host`
[must-gather-wqfg7] POD 2024-02-21T14:51:03.550235663Z pod/must-gather-wqfg7-helper condition met
[must-gather-wqfg7] POD 2024-02-21T14:51:03.973637847Z pod/compute-2-debug condition met
[must-gather-wqfg7] POD 2024-02-21T14:51:04.057370249Z pod/compute-0-debug condition met
[must-gather-wqfg7] POD 2024-02-21T14:51:04.132884944Z pod/compute-1-debug condition met
[must-gather-wqfg7] POD 2024-02-21T14:51:04.452424237Z pod/compute-2-debug labeled
[must-gather-wqfg7] POD 2024-02-21T14:51:04.555627817Z pod/compute-0-debug labeled
[must-gather-wqfg7] POD 2024-02-21T14:51:04.595275460Z pod/compute-1-debug labeled
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z Usage: /usr/bin/gather [OPTIONS]
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z Options:
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -d, --dr Collect DR logs
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -p, --provider Collect logs for provider/consumer cluster
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -n, --nooba Collect nooba logs
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -c, --ceph Collect ceph commands and pod logs
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -cl, --ceph-logs Collect ceph daemon, kernel, journal logs and crash reports
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -ns, --namespaced Collect namespaced resources
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -cs, --clusterscoped Collect clusterscoped resources
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z -h, --help Print this help message
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z Description:
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z ODF must-gather can run in modular mode and can collect JUST
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z the resources you require to collect. You can use the args
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z listed above to achieve that. If no arg is supplied the script
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z will run in FULL collection mode and will gather all the resources
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z from your cluster. This might take longer on some environments.
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z Note: Provide each arg separately and do not chain them like:
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z $ /usr/bin/gather -dpnc # Wrong
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z $ /usr/bin/gather -d -p -n -c # Correct
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z Examples:
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z $ /usr/bin/gather -d -n --ceph # Collect DR, nooba and ceph logs only.
[must-gather-wqfg7] POD 2024-02-21T14:51:04.605349365Z $ /usr/bin/gather -h # Print help
[must-gather-wqfg7] OUT waiting for gather to complete
[must-gather-wqfg7] OUT downloading gather output
[must-gather-wqfg7] OUT receiving incremental file list
[must-gather-wqfg7] OUT ./
[must-gather-wqfg7] OUT gather-debug.log
[must-gather-wqfg7] OUT
[must-gather-wqfg7] OUT sent 46 bytes received 248 bytes 84.00 bytes/sec
[must-gather-wqfg7] OUT total size is 178 speedup is 0.61
[must-gather ] OUT namespace/openshift-must-gather-9vswq deleted
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-dd4c6 deleted
Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: f61ece64-4f45-4a47-84de-664e52699404
ClusterVersion: Stable at "4.15.0-0.nightly-2024-02-16-235514"
ClusterOperators: All healthy and stable

************************************************************************************

My private image:

$ oc adm must-gather --image=quay.io/oviner/ocs-must-gather:mg-help -- /usr/bin/gather --help
[must-gather ] OUT Using must-gather plug-in image: quay.io/oviner/ocs-must-gather:mg-help
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: f61ece64-4f45-4a47-84de-664e52699404
ClusterVersion: Stable at "4.15.0-0.nightly-2024-02-16-235514"
ClusterOperators: All healthy and stable
[must-gather ] OUT namespace/openshift-must-gather-g6rlm created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-ds7xd created
[must-gather ] OUT pod for plug-in image quay.io/oviner/ocs-must-gather:mg-help created
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z Usage: /usr/bin/gather [OPTIONS]
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z Options:
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -o, --odf Collect ODF logs
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -d, --dr Collect DR logs
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -pc, --provider Collect openshift-storage-client logs from a provider/consumer cluster
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -n, --nooba Collect nooba logs
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -c, --ceph Collect ceph commands and pod logs
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -cl, --ceph-logs Collect ceph daemon, kernel, journal logs and crash reports
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -ns, --namespaced Collect namespaced resources
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -cs, --clusterscoped Collect clusterscoped resources
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z -h, --help Print this help message
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z Description:
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z ODF must-gather can run in modular mode and can collect JUST
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z the resources you require to collect. You can use the args
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z listed above to achieve that. If no arg is supplied the script
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z will run in FULL collection mode and will gather all the resources
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z from your cluster. This might take longer on some environments.
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z Note: Provide each arg separately and do not chain them like:
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z $ /usr/bin/gather -dpnc # Wrong
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z $ /usr/bin/gather -d -p -n -c # Correct
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z Examples:
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z $ /usr/bin/gather -d -n --ceph # Collect DR, nooba and ceph logs only.
[must-gather-f86l6] POD 2024-02-21T15:13:48.902117277Z $ /usr/bin/gather -h # Print help
[must-gather-f86l6] OUT waiting for gather to complete
[must-gather-f86l6] OUT downloading gather output
[must-gather-f86l6] OUT receiving incremental file list
[must-gather-f86l6] OUT ./
[must-gather-f86l6] OUT
[must-gather-f86l6] OUT sent 27 bytes received 40 bytes 26.80 bytes/sec
[must-gather-f86l6] OUT total size is 0 speedup is 0.00
[must-gather ] OUT namespace/openshift-must-gather-g6rlm deleted
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-ds7xd deleted
Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: f61ece64-4f45-4a47-84de-664e52699404
ClusterVersion: Stable at "4.15.0-0.nightly-2024-02-16-235514"
ClusterOperators: All healthy and stable

Version of all relevant components (if applicable):
ODF 4.15

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue reproduce from the UI?
If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
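For context, the fix described above amounts to short-circuiting on the help flag before any pre-install work begins. The following is only an illustrative sketch of that idea; the function names (`display_usage`, the placeholder `main`) are hypothetical, and the actual change is the one in PR #119 against collection-scripts/gather:

```shell
#!/usr/bin/env bash
# Illustrative sketch only: print the usage text and exit before
# pre-install.sh runs whenever -h/--help is among the arguments.
# Names are assumptions, not the real script's internals.

display_usage() {
    echo "Usage: /usr/bin/gather [OPTIONS]"
}

main() {
    # Scan all args for the help flag first, before doing any setup work.
    for arg in "$@"; do
        case "$arg" in
            -h|--help)
                display_usage
                return 0   # skip pre-install.sh and all collection entirely
                ;;
        esac
    done
    # Placeholder for the real behavior: pre-install.sh + gathering.
    echo "running pre-install.sh and full collection"
}

main "$@"
```

With this shape, `--help` never reaches the node-debug/helper-pod setup that the official image's log above shows (creating helper pods, debugging compute nodes) just to print usage text.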
Bug fixed: we don't call the pre-install script for the help flag.

OCP Version: 4.16.0-0.nightly-2024-04-26-145258
ODF Version: odf-operator.v4.16.0-89.stable

$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.16 -- /usr/bin/gather --help
[must-gather ] OUT Using must-gather plug-in image: quay.io/rhceph-dev/ocs-must-gather:latest-4.16
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: 6a8e0ddb-e4df-45c4-abb2-5eb930ef1d74
ClientVersion: 4.15.9
ClusterVersion: Stable at "4.16.0-0.nightly-2024-04-26-145258"
ClusterOperators: All healthy and stable
[must-gather ] OUT namespace/openshift-must-gather-997zv created
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-hz5ph created
[must-gather ] OUT pod for plug-in image quay.io/rhceph-dev/ocs-must-gather:latest-4.16 created
[must-gather-sqsjv] POD 2024-04-30T11:15:54.199070834Z volume percentage checker started.....
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z Usage: /usr/bin/gather [OPTIONS]
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z Options:
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -o, --odf Collect ODF logs
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -d, --dr Collect DR logs
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -pc, --provider Collect openshift-storage-client logs from a provider/consumer cluster
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -n, --noobaa Collect noobaa logs
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -c, --ceph Collect ceph commands and pod logs
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -cl, --ceph-logs Collect ceph daemon, kernel, journal logs and crash reports
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -ns, --namespaced Collect namespaced resources
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -cs, --clusterscoped Collect clusterscoped resources
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z -h, --help Print this help message
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z Description:
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z ODF must-gather can run in modular mode and can collect JUST
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z the resources you require to collect. You can use the args
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z listed above to achieve that. If no arg is supplied the script
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z will run in FULL collection mode and will gather all the resources
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z from your cluster. This might take longer on some environments.
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z Note: Provide each arg separately and do not chain them like:
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z $ oc adm must-gather --image=<odf-must-gather-image> -- /usr/bin/gather -dpnc # Wrong
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z $ oc adm must-gather --image=<odf-must-gather-image> -- /usr/bin/gather -d -p -n -c # Correct
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z Examples:
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z $ oc adm must-gather --image=<odf-must-gather-image> -- /usr/bin/gather -d -n --ceph # Collect DR, noobaa and ceph logs only.
[must-gather-sqsjv] POD 2024-04-30T11:15:54.207417525Z $ oc adm must-gather --image=<odf-must-gather-image> -- /usr/bin/gather -h # Print help
[must-gather-sqsjv] POD 2024-04-30T11:15:54.218475528Z volume usage percentage 0
[must-gather-sqsjv] OUT waiting for gather to complete
[must-gather-sqsjv] OUT downloading gather output
[must-gather-sqsjv] OUT receiving incremental file list
[must-gather-sqsjv] OUT ./
[must-gather-sqsjv] OUT
[must-gather-sqsjv] OUT sent 27 bytes received 42 bytes 27.60 bytes/sec
[must-gather-sqsjv] OUT total size is 0 speedup is 0.00
[must-gather ] OUT namespace/openshift-must-gather-997zv deleted
[must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-hz5ph deleted
Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: 6a8e0ddb-e4df-45c4-abb2-5eb930ef1d74
ClientVersion: 4.15.9
ClusterVersion: Stable at "4.16.0-0.nightly-2024-04-26-145258"
ClusterOperators: All healthy and stable
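The "provide each arg separately" note in the help text above is consistent with a script that matches each argument as a whole word rather than parsing combined short options. A minimal sketch of that matching style (an assumption about the parsing approach, with a hypothetical `parse_flags` helper, not the actual gather script):

```shell
#!/usr/bin/env bash
# Sketch: whole-word flag matching. A chained token like -dpnc matches
# none of the known flags, while -d -n -c each match individually.
# parse_flags is a hypothetical helper, not part of odf-must-gather.

parse_flags() {
    for arg in "$@"; do
        case "$arg" in
            -d|--dr)      echo "dr" ;;
            -n|--noobaa)  echo "noobaa" ;;
            -c|--ceph)    echo "ceph" ;;
            *)            echo "unknown: $arg" ;;
        esac
    done
}

parse_flags -d -n -c   # each flag recognized on its own
parse_flags -dnc       # the chained token is a single unrecognized arg
```

Under this style of matching, `-dpnc` is one unknown string, which is why the help text flags it as wrong.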
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:4591