Description of problem:
- Editing settings on the default SCCs does not produce any warning, so some customers changed settings and then experienced critical issues during an update.
- Although the documentation warns about this (in a very small section that is hard to find), we would like a better way to notify customers.
Version-Release number of selected component (if applicable):
- OCP 3.6 (confirmed with OCP 3.9 as well)
How reproducible: 100%
Steps to Reproduce:
1. Add nfs to the volumes list in the restricted SCC:
# oc edit scc restricted
// nfs was added to restricted
# oc get scc restricted
NAME ... VOLUMES
restricted ... [configMap downwardAPI emptyDir nfs persistentVolumeClaim projected secret]
NOTE: We know that changing values in a default SCC is bad practice. But there is no warning message, and users cannot predict that their additive changes will be dropped by the update.
2. Update the cluster (e.g. 3.5 to 3.6).
- During the update, "oc adm policy reconcile-sccs --confirm --additive-only=true" is executed by the playbook.
- The SCC is reconciled, and all pods using nfs volumes stop running.
- We would like to request some form of warning when customers edit the default SCCs. (This caused a critical outage.) For example:
a. Make the default SCCs read-only.
b. Emit a warning message when a user edits a default SCC.
c. Have OpenShift diagnostics report modified default SCCs.
- https://github.com/openshift/origin/pull/19610 is the proposed patch for c), as it is the easiest approach at the moment.
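A check in the spirit of option c) could look like the sketch below. This is a local illustration, not the PR's code; the `defaults` list and the `current` sample are assumptions hard-coded for demonstration (on a real cluster, `current` would come from `oc get scc restricted -o jsonpath='{.volumes[*]}'`).

```shell
#!/bin/sh
# Hypothetical pre-upgrade check: flag volume plugins that were added to a
# default SCC and would be lost when the playbook reconciles SCCs.
# The lists below are sample data for illustration only.
defaults="configMap downwardAPI emptyDir persistentVolumeClaim projected secret"
current="configMap downwardAPI emptyDir nfs persistentVolumeClaim projected secret"

extra=""
for v in $current; do
  case " $defaults " in
    *" $v "*) ;;                     # shipped default, nothing to report
    *) extra="$extra $v" ;;          # customer addition, dropped on reconcile
  esac
done
if [ -n "$extra" ]; then
  echo "WARNING: restricted carries non-default volumes:$extra"
fi
```

Run before an upgrade, a warning like this would at least tell the customer that the nfs addition is at risk.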
In order to preserve customized SCCs during upgrades, do not edit settings on the default SCCs other than priority, users, groups, labels, and annotations.
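One way to follow that guidance is to copy the default SCC under a new name and customize the copy, which reconcile leaves alone. The sketch below is illustrative; the name "restricted-nfs" and the field values are our assumptions, not taken from the BZ or the product docs.

```shell
# Hypothetical workaround: create a *copy* of the restricted SCC with nfs
# added, instead of editing the default one. Names and values are
# illustrative assumptions for an OCP 3.x-style SCC.
cat > restricted-nfs.yaml <<'EOF'
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted-nfs
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- nfs
- persistentVolumeClaim
- projected
- secret
EOF
# On a real cluster (OCP 3.x commands):
#   oc create -f restricted-nfs.yaml
#   oc adm policy add-scc-to-group restricted-nfs system:authenticated
```

As far as we understand, reconcile only manages the shipped default SCCs, so a custom SCC like this should survive upgrades.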
Isn't this just a matter of providing clearer documentation?
Updating our docs is absolutely necessary. But on top of the docs update, do you have any ideas for preventive measures?
I have opened two tickets:
bz#1577830 ... [DOCS] SCC section should clearly state that updating default SCCs can cause critical problems
bz#1578217 ... oc adm diagnostics should support clusterscc option
If there are no other ideas for preventing this issue, please close this ticket (bz#1575450) and continue with the two above.
I think the two new tickets you opened are indeed a better way to address this issue.