Bug 1866805
| Summary: | The aide-pod reinit does not work as expected | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | xiyuan |
| Component: | File Integrity Operator | Assignee: | Jakub Hrozek <jhrozek> |
| Status: | CLOSED ERRATA | QA Contact: | xiyuan |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.6 | CC: | jhrozek, josorior, mrogers, nkinder, pdhamdhe |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 16:25:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Updating Priority/Severity to high/high, as reinit is a really important function for FIO.

Hi Matt and Jakub, this is more about the question of when a DB re-init (or a database clean-up otherwise) is triggered. With the current behavior, the DB re-init WILL NOT happen in the following scenarios:

1. the FileIntegrity is deleted and recreated (with the same name or a different name);
2. the AIDE config changes from empty to non-empty;
3. the AIDE config changes from non-empty to empty.

However, when the AIDE config changes from a non-empty aideconfig1 to a non-empty aideconfig2, the DB re-init does happen. If all of the scenarios above are working as expected, can we document that somewhere so a user will know it is intended behavior?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196
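For reference, a minimal sketch of how a database re-initialization can be requested manually, assuming the `file-integrity.openshift.io/re-init` annotation documented for the File Integrity Operator is available in the FIO build under test:

```
# Sketch: manually request an AIDE database re-initialization.
# Assumes the file-integrity.openshift.io/re-init annotation documented for
# the File Integrity Operator; verify against the FIO version in use.
oc -n openshift-file-integrity annotate \
    fileintegrities/example-fileintegrity \
    file-integrity.openshift.io/re-init=

# The operator should then rebuild the AIDE database on the nodes, so
# previously reported changes are cleared on the next scan.
```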
Description of problem:

The aide-pod reinit does not work as expected.

Version-Release number of selected component (if applicable):

4.6.0-0.nightly-2020-08-05-203353

How reproducible:

always

Steps to Reproduce:

1. Install the file-integrity-operator:

```
$ git clone https://github.com/openshift/file-integrity-operator.git
$ oc login -u kubeadmin -p <pw>
$ oc create -f file-integrity-operator/deploy/ns.yaml
$ oc project openshift-file-integrity
$ for l in `ls -1 file-integrity-operator/deploy/crds/*crd.yaml`; do oc create -f $l; done
$ oc create -f file-integrity-operator/deploy/
```

2. Create a FileIntegrity without a configmap:

```
$ oc apply -f - <<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: openshift-file-integrity
spec:
  # Change to debug: true to enable more verbose logging from the logcollector
  # container in the aide pods
  debug: false
  config: {}
EOF
```

3. Add a folder on one of the nodes:

```
$ oc debug no/ip-10-0-48-120.us-east-2.compute.internal
Starting pod/ip-10-0-48-120us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.48.120
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# mkdir /root/test
sh-4.4# ls -ltr /root
lrwxrwxrwx. 3 root root 12 Aug  3 03:46 /root -> var/roothome
sh-4.4# ls -ltr /root/
total 0
drwxr-xr-x. 2 root root 6 Aug  6 11:57 test2020
drwxr-xr-x. 2 root root 6 Aug  6 13:00 test
sh-4.4# exit
exit
sh-4.2# exit
exit
```

4. Check the AIDE scan logs on the node:

```
$ oc logs pod/aide-ds-example-fileintegrity-sgxq6 -c aide
...
Added files:
---------------------------------------------------
added: /hostroot/etc/kubernetes/aide.latest-result.log
added: /hostroot/etc/selinux/targeted/active/file_contexts.local
added: /hostroot/etc/selinux/targeted/contexts/files/file_contexts.local.bin
added: /hostroot/etc/selinux/targeted/semanage.read.LOCK
added: /hostroot/etc/selinux/targeted/semanage.trans.LOCK
added: /hostroot/root/.bash_history
added: /hostroot/root/test
added: /hostroot/root/test2020
```

5. Delete the FileIntegrity created in step 2:

```
$ oc delete fileintegrity example-fileintegrity
```

6. Create a new FileIntegrity again:

```
$ oc apply -f - <<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: openshift-file-integrity
spec:
  # Change to debug: true to enable more verbose logging from the logcollector
  # container in the aide pods
  debug: false
  config: {}
EOF
```

Actual results:

The AIDE scan pod on the node still logs the folder `test` added in step 3:

```
$ oc logs aide-ds-example-fileintegrity-xxn7c -c aide
running AIDE check..
WARNING:Input and output database urls are the same.
AIDE 0.15.1 found differences between database and filesystem!!
Start timestamp: 2020-08-06 13:05:58

Summary:
  Total number of files:  34048
  Added files:            8
  Removed files:          1
  Changed files:          1

---------------------------------------------------
Added files:
---------------------------------------------------
added: /hostroot/etc/kubernetes/aide.latest-result.log
added: /hostroot/etc/selinux/targeted/active/file_contexts.local
added: /hostroot/etc/selinux/targeted/contexts/files/file_contexts.local.bin
added: /hostroot/etc/selinux/targeted/semanage.read.LOCK
added: /hostroot/etc/selinux/targeted/semanage.trans.LOCK
added: /hostroot/root/.bash_history
added: /hostroot/root/test
added: /hostroot/root/test2020
```

Expected results:

After the reinit, the scan log should no longer report the folder created in step 3.
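A minimal sketch for checking on the node whether the database was actually rebuilt after recreating the FileIntegrity. The database path `/etc/kubernetes/aide.db.gz` is an assumption inferred from the `/hostroot/etc/kubernetes` paths in the scan output above, and the FileIntegrityNodeStatus check assumes that CRD is present in this FIO build; both should be verified before relying on them:

```
# Sketch, assuming the operator keeps the AIDE database under /etc/kubernetes
# on the host (path inferred from the scan output above, not confirmed).
$ oc debug no/ip-10-0-48-120.us-east-2.compute.internal
sh-4.2# chroot /host
# If the modification time predates the re-creation of the FileIntegrity,
# the database was not re-initialized and old changes keep being reported.
sh-4.4# ls -l /etc/kubernetes/aide.db.gz

# Alternatively, compare the per-node results reported by the operator
# (assumes the FileIntegrityNodeStatus CRD is installed):
$ oc get fileintegritynodestatuses -n openshift-file-integrity
```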