Bug 1772993
| Summary: | rbd block devices attached to a host are visible in unprivileged container pods | ||||||
|---|---|---|---|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Martin Bukatovic <mbukatov> | ||||
| Component: | Node | Assignee: | Peter Hunt <pehunt> | ||||
| Node sub component: | CRI-O | QA Contact: | MinLi <minmli> | ||||
| Status: | CLOSED ERRATA | Docs Contact: | |||||
| Severity: | medium | ||||||
| Priority: | low | CC: | aos-bugs, dwalsh, eparis, jokerman, mpatel, nagrawal, pehunt, rphillips, tsweeney | ||||
| Version: | 4.2.0 | ||||||
| Target Milestone: | --- | ||||||
| Target Release: | 4.8.0 | ||||||
| Hardware: | Unspecified | ||||||
| OS: | Unspecified | ||||||
| Whiteboard: | |||||||
| Fixed In Version: | | Doc Type: | Bug Fix | ||||
| Doc Text: |
Cause:
`/sys/dev` was being mounted into non-privileged containers
Consequence:
Non-privileged containers could see block devices with `lsblk`
Fix:
Mask `/sys/dev` for non-privileged containers
Result:
Non-privileged containers can no longer list host block devices with `lsblk`
|
| Story Points: | --- | ||||
| Clone Of: | Environment: | ||||||
| Last Closed: | 2021-07-27 22:32:19 UTC | Type: | Bug | ||||
| Regression: | --- | Mount Type: | --- | ||||
| Documentation: | --- | CRM: | |||||
| Verified Versions: | Category: | --- | |||||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |||||
| Cloudforms Team: | --- | Target Upstream Version: | |||||
| Embargoed: | |||||||
| Attachments: | |
Description
Martin Bukatovic
2019-11-15 18:41:59 UTC
I think the problem has been explained in bug #1769037: CRI-O bind-mounts the host's /sys into the container, and lsblk then displays whatever it finds there. I think this belongs to Container runtimes.

PR with fix: https://github.com/cri-o/cri-o/pull/4072

@Martin Bukatovic Can you tell me how you install OCP/OCS in detail? Via a Jenkins job? An image site? Or some other install method, like installing a repo via the command line? I can't find the OCS operator in the latest OCP 4.6 OperatorHub; perhaps OCS 4.6 is not released yet.

(In reply to MinLi from comment #6)
> Can you tell me how you install OCP/OCS in detail? like jenkins job? image
> site? Or other install method like installing repo via command-line?
> I can't find OCS operator in latest OCP 4.6 operatorHub, perhaps OCS4.6 not
> released yet.

I'm asking about OCS operator availability in OCP 4.6 on the OCS eng list:
http://post-office.corp.redhat.com/archives/rhocs-eng/2020-August/msg00218.html
Based on the answer, I can help you with using OCS CI builds for 4.5 or 4.6.

Hi Martin Bukatovic, do you know what the OCS 4.5 CI build image tag is? As described in https://ocs-ci.readthedocs.io/en/latest/docs/deployment_without_ocs.html#enabling-catalog-source-with-development-builds-of-ocs, I need to replace the tag `latest` with an available version in catalog-source.yaml.

(In reply to MinLi from comment #8)
> do you know what is the OCS 4.5 CI build image tag? As in the
> https://ocs-ci.readthedocs.io/en/latest/docs/deployment_without_ocs.
> html#enabling-catalog-source-with-development-builds-of-ocs described, I
> need to replace tag latest with an available version in catalog-source.yaml

To get the latest OCS 4.5 CI build, use the `latest-4.5` tag.

The bug reproduces on OCP 4.6 with OCS-CI 4.5:
```
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.6.0-0.nightly-2020-09-01-070508 True False 164m Cluster version is 4.6.0-0.nightly-2020-09-01-070508
$ oc get csv --all-namespaces
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
openshift-storage ocs-operator.v4.5.0-543.ci OpenShift Container Storage 4.5.0-543.ci Succeeded
$ oc rsh fio-8lksh
sh-4.2$ df -h
Filesystem Size Used Avail Use% Mounted on
overlay 120G 11G 110G 9% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
tmpfs 16G 51M 16G 1% /etc/passwd
/dev/rbd1 9.8G 6.4G 3.5G 65% /target
/dev/mapper/coreos-luks-root-nocrypt 120G 11G 110G 9% /etc/fio
tmpfs 16G 28K 16G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 16G 0 16G 0% /proc/acpi
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
sh-4.2$ lsblk
lsblk: dm-0: failed to get device path
lsblk: dm-0: failed to get device path
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 252:0 0 50G 0 disk
xvda 202:0 0 120G 0 disk
|-xvda4 202:4 0 119.5G 0 part
|-xvda2 202:2 0 127M 0 part
|-xvda3 202:3 0 1M 0 part
`-xvda1 202:1 0 384M 0 part
xvdcs 202:24576 0 2T 0 disk
xvdbg 202:14848 0 10G 0 disk
rbd1 252:16 0 10G 0 disk /target
loop0 7:0 0 2T 0 loop
sh-4.2$ ls -l /dev/rbd1
ls: cannot access /dev/rbd1: No such file or directory
sh-4.2$ ls -l /dev/rbd0
ls: cannot access /dev/rbd0: No such file or directory
```
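The transcript above shows why `/sys` (not `/dev`) is the leak: the rbd device nodes are absent from `/dev`, yet lsblk still lists them, because lsblk builds its device list by walking sysfs. A rough Python sketch of that enumeration (this is not lsblk's actual code, just the directory walk it depends on; the helper name is made up for illustration):

```python
import os

def list_block_devices(sysfs_dev_block="/sys/dev/block"):
    """Enumerate block devices the way lsblk (loosely) does.

    Each entry under /sys/dev/block is a MAJ:MIN symlink into
    /sys/devices; the link target's basename is the kernel device
    name (e.g. rbd0, xvda1). No access to /dev is needed at all.
    """
    if not os.path.isdir(sysfs_dev_block):
        # This is what a container sees once /sys/dev is masked:
        # the directory is gone, so nothing can be enumerated.
        raise FileNotFoundError(
            "failed to access sysfs directory: %s" % sysfs_dev_block)
    names = []
    for entry in sorted(os.listdir(sysfs_dev_block)):
        target = os.path.realpath(os.path.join(sysfs_dev_block, entry))
        names.append(os.path.basename(target))
    return names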
I was unable to reproduce this with crictl:
```container_config.json
{
"metadata": {
"name": "container1",
"attempt": 1
},
"image": {
"image": "registry.fedoraproject.org/fedora-minimal:latest"
},
"command": [
"lsblk"
],
"args": [],
"working_dir": "/",
"envs": [
{
"key": "PATH",
"value": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
},
{
"key": "TERM",
"value": "xterm"
},
{
"key": "TESTDIR",
"value": "test/dir1"
},
{
"key": "TESTFILE",
"value": "test/file1"
}
],
"labels": {
"type": "small",
"batch": "no"
},
"annotations": {
"owner": "dragon",
"daemon": "crio"
},
"log_path": "",
"stdin": false,
"stdin_once": false,
"tty": false,
"linux": {
"resources": {
"cpu_period": 10000,
"cpu_quota": 20000,
"cpu_shares": 512,
"oom_score_adj": 30,
"memory_limit_in_bytes": 268435456
},
"security_context": {
"run_as_user":{
"value": 0
},
"namespace_options": {
"pid": 1
},
"readonly_rootfs": false,
"selinux_options": {
"user": "system_u",
"role": "system_r",
"type": "svirt_lxc_net_t",
"level": "s0:c4,c5"
},
"capabilities": {
"add_capabilities": [
"setuid",
"setgid"
],
"drop_capabilities": [
]
}
}
}
}
```
```sandbox_config.json
{
"metadata": {
"name": "podsandbox1",
"uid": "redhat-test-crio",
"namespace": "redhat.test.crio",
"attempt": 1
},
"hostname": "crictl_host",
"log_directory": "",
"dns_config": {
"searches": [
"8.8.8.8"
]
},
"port_mappings": [],
"resources": {
"cpu": {
"limits": 3,
"requests": 2
},
"memory": {
"limits": 50000000,
"requests": 2000000
}
},
"labels": {
"group": "test"
},
"annotations": {
"owner": "hmeng",
"security.alpha.kubernetes.io/seccomp/pod": "unconfined"
},
"linux": {
"cgroup_parent": "pod_123-456.slice",
"security_context": {
"namespace_options": {
"network": 0,
"pid": 1,
"ipc": 0
},
"selinux_options": {
"user": "system_u",
"role": "system_r",
"type": "svirt_lxc_net_t",
"level": "s0:c4,c5"
}
}
}
}
```
and running:
$ sudo crictl run container_config.json sandbox_config.json
10f976a96da86ee57b4bcecc8472990b281082153c832ecf878b2bd50d2bddb2
yields:
$ sudo crictl logs 10f976a96da86ee57b4bcecc8472990b281082153c832ecf878b2bd50d2bddb2
lsblk: failed to access sysfs directory: /sys/dev/block: No such file or directory
note the lack of "privileged" in both `linux` objects. Adding `privileged: true` to both json files makes this work.
Are we sure the container in question is not privileged?
Moving back to modified, if it still doesn't work on 4.8 I'll need the pod manifest of the failing pod :)
verified with 4.8.0-0.nightly-2021-06-16-020345 $ oc rsh fio-fclhn bash bash-4.2$ df -h Filesystem Size Used Avail Use% Mounted on overlay 120G 12G 109G 10% / tmpfs 64M 0 64M 0% /dev tmpfs 16G 0 16G 0% /sys/fs/cgroup shm 64M 0 64M 0% /dev/shm tmpfs 16G 56M 16G 1% /etc/passwd /dev/rbd0 9.8G 8.8G 1.1G 90% /target /dev/nvme0n1p4 120G 12G 109G 10% /etc/fio tmpfs 16G 20K 16G 1% /run/secrets/kubernetes.io/serviceaccount tmpfs 16G 0 16G 0% /proc/acpi tmpfs 16G 0 16G 0% /proc/scsi tmpfs 16G 0 16G 0% /sys/firmware bash-4.2$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 512G 0 loop nvme0n1 259:0 0 120G 0 disk |-nvme0n1p3 259:3 0 384M 0 part |-nvme0n1p1 259:1 0 1M 0 part |-nvme0n1p4 259:4 0 119.5G 0 part /dev/termination-log `-nvme0n1p2 259:2 0 127M 0 part rbd0 252:0 0 10G 0 disk /target nvme2n1 259:6 0 512G 0 disk nvme1n1 259:5 0 10G 0 disk There is only one rbd device rbd0 visible. Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438 |