Bug 1333435 - [PPC][rhevm-3.6.6-0.1] Host cannot see other block disks created from the other hosts as SPM(seems lvm related)
Summary: [PPC][rhevm-3.6.6-0.1] Host cannot see other block disks created from the ot...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.0
Hardware: ppc64le
OS: Linux
Priority: medium
Severity: urgent
Target Milestone: pre-dev-freeze
Target Release: 7.3
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: RHEV4.0PPC 1372340
 
Reported: 2016-05-05 13:32 UTC by Carlos Mestre González
Modified: 2023-09-14 03:22 UTC (History)
16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1372340 (view as bug list)
Environment:
Last Closed: 2016-11-04 02:28:16 UTC
Target Upstream Version:
Embargoed:


Attachments
Collection of logs (1.58 MB, application/x-gzip)
2016-05-05 14:44 UTC, Carlos Mestre González
no flags Details
messages and audit.log for both hosts (102.65 KB, application/x-gzip)
2016-05-09 11:41 UTC, Carlos Mestre González
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2283 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2016-11-03 13:36:25 UTC

Description Carlos Mestre González 2016-05-05 13:32:40 UTC
Description of problem:

**This is actually build rhevm-3.6.6-0.1 on PPC**

We ran our tier1 suite on our environment, and afterwards we found some issues with the hosts: the hosts cannot see the disks created on the storage domain, even though those hosts are in status UP, as are the data center and the storage domain. There's no issue creating file disks, but block disks are affected.

I have 3 hosts UP with multiple block (iscsi) and file (nfs) domains. Putting the first host as SPM and creating a disk works, but I cannot see the disk from the second host (under /rhev/data-center/{id}/{storagedomain_id}/). The same happens if I try to create a vm with an iscsi disk and run it: if I run it from the host that was the SPM it works, but from the other hosts it fails.

I can see the iscsi sessions and the pvs from the domain, and everything looks good on both hosts.

Version-Release number of selected component (if applicable):
rhevm-3.6.6-0.1
qemu-kvm-rhev-2.3.0-31.el7_2.12.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.12.ppc64le
lvm2-2.02.130-5.el7_2.2.ppc64le
lvm2-libs-2.02.130-5.el7_2.2.ppc64le
vdsm-4.17.27-0.el7ev.noarch

How reproducible:
Happens all the time, though we only have one environment with this build, so we have only tried it there.

Steps to Reproduce:
1. Have an env with at least two hosts and a block domain
2. Take host_1 and create a vm with a block disk

Actual results:
host_2 does not see the disk in the domain and cannot start the vm (permission denied/file not found)

Expected results:
The disk is visible from host_2, since both hosts are in the same data center, and the vm can be run from it.

Since this issue appeared after we ran a few tests, we don't actually know whether the state we see in the environment right now is linked to some operation we executed.

Comment 2 Allon Mureinik 2016-05-05 14:08:12 UTC
For the manual testing - without running pvscan --cache (or better, preparing the image) you aren't expected to see them. Have you done so?

For running a VM usecase - please share the logs. Without them it's a guessing game.

Tentatively targeting 3.6.7 until we understand what the real issue is here, if any.

Comment 3 Yaniv Kaul 2016-05-05 14:21:35 UTC
Missing logs: engine, VDSM and host logs (/var/log/messages and friends). 
Please re-open if/when you have the logs.

Comment 4 Carlos Mestre González 2016-05-05 14:44:52 UTC
Created attachment 1154238 [details]
Collection of logs

Logs, engine.log, vdsm/messages for host_mixed_1 and host_mixed_2

Created a vm (a6f3eb6e-fa1e-435d-80ee-59869d612447) with host_mixed_1 as SPM. Then I tried to start the vm on host_mixed_2, and it fails with:

2016-05-05 17:34:07,566 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-73) [368600fb] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM test_vm is down with error. Exit message: internal error: process exited while connecting to monitor: 2016-05-05T14:33:54.488639Z qemu-kvm: -drive file=/rhev/data-center/b7cd8e98-7e10-466b-8a10-ca49e23a2764/f68ba4d3-8d31-430c-b3a6-b574bf06db01/images/dab8b336-0746-4b3c-bf93-f19d7f860b62/d7608c53-c733-451b-aacd-2e86ed65b333,if=none,id=drive-virtio-disk1,format=qcow2,serial=dab8b336-0746-4b3c-bf93-f19d7f860b62,cache=none,werror=stop,rerror=stop,aio=native: Could not open '/rhev/data-center/b7cd8e98-7e10-466b-8a10-ca49e23a2764/f68ba4d3-8d31-430c-b3a6-b574bf06db01/images/dab8b336-0746-4b3c-bf93-f19d7f860b62/d7608c53-c733-451b-aacd-2e86ed65b333': Permission denied

Full engine.log and others in the tarball.

After this I ran pvscan --cache, but the image was still not under that location (and you can see how qemu fails to locate it).

Comment 5 Allon Mureinik 2016-05-05 15:08:55 UTC
Thanks Carlos!
A permission denied error is often indicative of selinux issues. Can you please try with selinux disabled (not as a solution, just to help pinpoint the issue)?

Comment 6 Carlos Mestre González 2016-05-05 15:31:06 UTC
(In reply to Allon Mureinik from comment #5)
> Thanks Carlos!
> A permission denied error is often indicative of selinux issues. Can you
> please try with selinux disabled (not as a solution, just to help pinpoint
> the issue)?

Switched selinux to 'permissive' on the problematic host (host_mixed_2), and the flow in my previous comment works.

Remember that nfs disks have been working normally.

Allon, do you need new logs?

Comment 7 Yaniv Kaul 2016-05-06 06:55:16 UTC
(In reply to Carlos Mestre González from comment #6)
> (In reply to Allon Mureinik from comment #5)
> > Thanks Carlos!
> > A permission denied error is often indicative of selinux issues. Can you
> > please try with selinux disabled (not as a solution, just to help pinpoint
> > the issue)?
> 
> switched selinux to 'permissive' on the problematic host (host_mixed_2), and
> the flow in my previous comment works.

Can you get the selinux relevant logs now? Can you identify the denial?
(Let me know if you don't know how to debug selinux issues and we'll assist).

> 
> Remember that nfs disks have been working normally.
> 
> Allon, do you need new logs?

Comment 8 Carlos Mestre González 2016-05-06 14:35:06 UTC
Hi Yaniv,

I don't know how to debug the issue, could you point me to the documentation/how to do it? Thanks

Comment 9 Yaniv Kaul 2016-05-06 14:38:24 UTC
(In reply to Carlos Mestre González from comment #8)
> Hi Yaniv,
> 
> I don't know how to debug the issue, could you point me to the
> documentation/how to do it? Thanks

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/chap-Security-Enhanced_Linux-Troubleshooting.html#sect-Security-Enhanced_Linux-Troubleshooting-What_Happens_when_Access_is_Denied
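As a rough illustration of the triage that guide describes, the sketch below pulls the interesting fields out of a single AVC record; the sample line is copied from this bug's later audit.log excerpts, and the field-extraction pipeline here is just one way to do it (ausearch/audit2allow are the usual tools).

```shell
# One AVC record, taken verbatim from this bug's audit.log excerpts.
avc='type=AVC msg=audit(1462792590.711:103767): avc:  denied  { read } for  pid=2146 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c681,c872 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file'

# Extract the denied permission, the source/target contexts, and the target class.
perm=$(echo "$avc" | sed -n 's/.*denied  { \([^}]*\) }.*/\1/p' | tr -d ' ')
scon=$(echo "$avc" | grep -o 'scontext=[^ ]*' | cut -d= -f2)
tcon=$(echo "$avc" | grep -o 'tcontext=[^ ]*' | cut -d= -f2)
tclass=$(echo "$avc" | grep -o 'tclass=[^ ]*' | cut -d= -f2)

echo "denied: $perm on $tclass"
echo "source: $scon"
echo "target: $tcon"
```

In this bug the target context turning out to be unlabeled_t is the significant part: it means the policy had no label to apply, not that the wrong label was applied.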

Comment 10 Ilanit Stein 2016-05-08 14:16:57 UTC
I ran the tier 1 storage test again, with the only difference that selinux on all hosts is disabled.
The test cases that were previously failing to run a VM with 'Permission denied', as reported in the bug description, now pass.

Comment 11 Ilanit Stein 2016-05-08 14:34:25 UTC
Further to comment #9,
checking the audit.log & /var/log/messages logs for the tier 1 run on which the bug was reported:

* host_mixed_1 
 - audit.log contains these "denied" messages:
 type=AVC msg=audit(1462286235.861:63010): avc:  denied  { getattr } for   pid=19260 comm="dhclient-script" path="/etc/locale.conf" dev="dm-7" ino=135292414  scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:getty_etc_t:s0  tclass=file

 type=AVC msg=audit(1462286235.871:63011): avc:  denied  { getattr } for   pid=19260 comm="dhclient-script" path="/etc/locale.conf" dev="dm-7" ino=135292414  scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:getty_etc_t:s0 tclass=file

 - /var/log/messages does not contain "SELinux is preventing".
 
* host_mixed_2
 - audit.log does not contain "denied" messages.
 - /var/log/messages does not contain "SELinux is preventing".

Comment 12 Yaniv Kaul 2016-05-08 14:52:22 UTC
(In reply to Ilanit Stein from comment #10)
> I ran tier 1 storage test again, with the only difference that selinux on
> all hosts is disabled.
> The test cases, that were failing previously, to run a VM, on Permission
> denied, as reported in the bug description, now pass.

- selinux should be in permissive mode, not disabled; otherwise you'll miss the denial logs.
- once you get all the data, please move the bug to selinux.
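The distinction matters because a disabled host logs no AVC records at all. A minimal sketch of checking the mode follows; the fallback branch is only there so the snippet also runs on machines without SELinux tooling, and the setenforce line is shown as a comment because it needs root.

```shell
# Report the current SELinux mode; getenforce prints Enforcing, Permissive,
# or Disabled on SELinux systems.
if command -v getenforce >/dev/null 2>&1; then
  mode=$(getenforce)
else
  mode="unavailable"   # no SELinux userspace on this machine
fi
echo "runtime mode: $mode"

# To keep collecting denials while still permitting access (what comment 12
# asks for), switch to Permissive at runtime:
#   setenforce 0
# Setting SELINUX=disabled in /etc/selinux/config, by contrast, suppresses
# AVC logging entirely and requires a reboot to undo.
```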

Comment 13 Ilanit Stein 2016-05-08 15:00:04 UTC
Correction to comment #10: all hosts are in "Permissive" selinux mode, and not Disabled.

Comment 14 Milos Malik 2016-05-09 07:08:25 UTC
The /etc/locale.conf file should be labeled locale_t,

# matchpathcon /etc/locale.conf 
/etc/locale.conf	system_u:object_r:locale_t:s0
#

but it is labeled getty_etc_t for an unknown reason. The following command should correct it:

# restorecon -Rv /etc

Comment 15 Miroslav Grepl 2016-05-09 07:32:38 UTC
Also, if you fix the labeling, can you still reproduce it?

Comment 16 Carlos Mestre González 2016-05-09 11:32:13 UTC
I reproduced my findings with selinux in enforcing mode on all the hosts:

scenario:
1. SPM host host_mixed_1 creates a vm from a clone on an iscsi domain (remember this only happens on iscsi)
2. Then I tried to run the vm on host_mixed_2, and it fails with the denial.

Miroslav, I checked all the hosts and all have the proper labels:

# matchpathcon /etc/locale.conf
/etc/locale.conf	system_u:object_r:locale_t:s0

As you said, the getty_etc_t label is shown in the log, but that machine is the one creating the vm, not running it. The issue we have is the qemu-kvm denial on host_mixed_2.

I could only find "denied" entries in audit.log for both hosts (/var/log/messages doesn't show "SELinux is preventing"):

host_mixed_2 audit.log (the one with the denial):

type=AVC msg=audit(1462792590.711:103767): avc:  denied  { read } for  pid=2146 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c681,c872 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462792590.711:103768): avc:  denied  { read } for  pid=2146 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c681,c872 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462792590.711:103769): avc:  denied  { read } for  pid=2146 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c681,c872 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462792590.711:103770): avc:  denied  { read } for  pid=2146 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c681,c872 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file

host_mixed_1 audit log:
type=AVC msg=audit(1462771024.122:142411): avc:  denied  { getattr } for  pid=12112 comm="dhclient-script" path="/etc/locale.conf" dev="dm-7" ino=135292414 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:getty_etc_t:s0 tclass=file
type=AVC msg=audit(1462771024.122:142412): avc:  denied  { read } for  pid=12112 comm="dhclient-script" name="locale.conf" dev="dm-7" ino=135292414 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:getty_etc_t:s0 tclass=file
type=AVC msg=audit(1462771024.122:142412): avc:  denied  { open } for  pid=12112 comm="dhclient-script" path="/etc/locale.conf" dev="dm-7" ino=135292414 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:getty_etc_t:s0 tclass=file

I'll provide messages and the full audit.log just in case.

Comment 17 Carlos Mestre González 2016-05-09 11:41:05 UTC
Created attachment 1155255 [details]
messages and audit.log for both hosts

Comment 18 Carlos Mestre González 2016-05-09 11:44:39 UTC
Miroslav,

I executed it regardless:

restorecon -Rv /etc

on all hosts. I no longer see the locale.conf issue on host_mixed_1, but the issue remains on host_mixed_2:

# grep "denied" /var/log/audit/audit.log
type=AVC msg=audit(1462793836.236:104011): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793836.236:104012): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793836.236:104013): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793836.236:104014): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67337198 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793992.109:104303): avc:  denied  { read } for  pid=6494 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67336163 scontext=system_u:system_r:svirt_t:s0:c405,c493 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793992.109:104304): avc:  denied  { read } for  pid=6494 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67336163 scontext=system_u:system_r:svirt_t:s0:c405,c493 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793992.109:104305): avc:  denied  { read } for  pid=6494 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67336163 scontext=system_u:system_r:svirt_t:s0:c405,c493 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
type=AVC msg=audit(1462793992.109:104306): avc:  denied  { read } for  pid=6494 comm="qemu-kvm" name="4c332113-291a-4669-86d1-d4c17eb3770d" dev="dm-6" ino=67336163 scontext=system_u:system_r:svirt_t:s0:c405,c493 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
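Since the same AVC record repeats many times in these excerpts, a small deduplication pass makes such logs much easier to scan. This is only an illustrative sketch: summarize_avc is a made-up helper, and the sample records piped into it are shortened copies of the lines above.

```shell
# Count identical denials by (comm, tclass, tcontext) to condense the log.
summarize_avc() {
  grep 'avc:  denied' |
    sed -n 's/.*comm="\([^"]*\)".*tcontext=\([^ ]*\) tclass=\([^ ]*\).*/\1 \3 \2/p' |
    sort | uniq -c
}

# Two identical sample records (trimmed from the output above).
summary=$(printf '%s\n' \
  'type=AVC msg=audit(1.1:1): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="x" dev="dm-6" ino=1 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file' \
  'type=AVC msg=audit(1.1:2): avc:  denied  { read } for  pid=5522 comm="qemu-kvm" name="x" dev="dm-6" ino=1 scontext=system_u:system_r:svirt_t:s0:c831,c876 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file' \
  | summarize_avc)
echo "$summary"
```

Condensed like this, the excerpts above reduce to a single line per denial type, all pointing at qemu-kvm reading an unlabeled_t symlink.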

Comment 22 Lukas Vrabec 2016-07-14 11:19:37 UTC
Please see comment #20.

Thank you.

Comment 24 Carlos Mestre González 2016-07-19 10:59:54 UTC
Hi Lukas,

I re-tested the scenario in the description with selinux enforcing, and it seems to work now.

packages tested with:

libselinux-python-2.2.2-6.el7.ppc64le
selinux-policy-targeted-3.13.1-60.el7_2.7.noarch
libselinux-2.2.2-6.el7.ppc64le
selinux-policy-3.13.1-60.el7_2.7.noarch
libselinux-utils-2.2.2-6.el7.ppc64le
libselinux-ruby-2.2.2-6.el7.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.18.ppc64le

So was this fixed by an update in the new packages, or was it a misconfiguration on our side?

Comment 27 Carlos Mestre González 2016-08-11 11:45:02 UTC
I'm reassigning this: after a fresh install the issue appears again with the same packages as before (weird, since I stated in comment #24 that it worked again):

libselinux-python-2.2.2-6.el7.ppc64le
selinux-policy-targeted-3.13.1-60.el7_2.7.noarch
libselinux-2.2.2-6.el7.ppc64le
selinux-policy-3.13.1-60.el7_2.7.noarch
libselinux-utils-2.2.2-6.el7.ppc64le
libselinux-ruby-2.2.2-6.el7.ppc64le
vdsm-4.17.33-1.el7ev.noarch

and even after I ran restorecon -Rv /, the issue still persists:

[...]
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/e7d7b7d2-5fa0-4df9-a252-cabe2ac45df5
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/13a9b3af-5d31-4d6f-b2d0-ea275e9e4c9e
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/0432c9cd-0db1-43c6-8d66-422876c18bec
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/37d8554d-c5ac-45a1-b1bf-45ea896f2b1f
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/a2f6e756-4fec-490c-a25f-014a2b770112
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/0898423d-a894-4453-966e-6837ee42776d
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/042d4074-1786-4c80-8a66-754bdfde53cb
restorecon:  Warning no default label for /rhev/data-center/fc482313-d4ce-40a3-994f-fd59b2dc3399/mastersd

Lukas, is this supposed to work on that selinux version?

Comment 29 Carlos Mestre González 2016-09-01 09:51:49 UTC
Hi,

Just tested on our systems with 7.3, with:

selinux-policy-3.13.1-96.el7.noarch
libselinux-2.5-6.el7.ppc64le
libselinux-python-2.5-6.el7.ppc64le
libselinux-ruby-2.5-6.el7.ppc64le
selinux-policy-targeted-3.13.1-96.el7.noarch
libselinux-utils-2.5-6.el7.ppc64le

and it is fixed.

Is this going to be backported to 7.2? I'm asking since this bug was submitted for rhevm 3.6, and AFAIK we only support RHEL 7.2 for hosts.

Lukas, maybe you know?

Comment 31 Karel Srot 2016-09-08 07:11:08 UTC
Hi Lukas,
could you please point me to the relevant fixes? It is not clear to me from the above what has changed in the policy.

Comment 34 Carlos Mestre González 2016-10-18 10:53:17 UTC
# ls -Z /rhev/data-center
drwxr-xr-x. vdsm kvm system_u:object_r:mnt_t:s0       32ec4dd0-3944-4e12-9a9a-d047a1b88d23
drwxr-xr-x. vdsm kvm system_u:object_r:mnt_t:s0       c13b8dad-5a8f-470e-a7c8-a173f6ccda23
drwxr-xr-x. vdsm kvm system_u:object_r:mnt_t:s0       mnt


selinux-policy-3.13.1-102.el7.noarch
selinux-policy-targeted-3.13.1-102.el7.noarch
libselinux-2.5-6.el7.ppc64le
libselinux-python-2.5-6.el7.ppc64le
libselinux-ruby-2.5-6.el7.ppc64le
libselinux-utils-2.5-6.el7.ppc64le

3.10.0-327.30.1.el7.ppc64le

Comment 35 Miroslav Grepl 2016-10-20 07:43:08 UTC
It looks fine. Are you still getting SELinux issues? If so, could you re-test it and run

# ausearch -m avc -ts recent

? Thank you.

Comment 37 errata-xmlrpc 2016-11-04 02:28:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html

Comment 38 Carlos Mestre González 2016-11-29 09:51:39 UTC
Hi Miroslav,

I'm getting issues *again* with the scenario; from the output you requested:

----
time->Tue Nov 29 04:39:48 2016
type=SYSCALL msg=audit(1480412388.568:3967302): arch=c0000015 syscall=5 success=yes exit=18 a0=10030eb09a0 a1=80800 a2=0 a3=923243bc0aef500 items=0 ppid=1 pid=47310 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c513,c695 key=(null)
type=AVC msg=audit(1480412388.568:3967302): avc:  denied  { read } for  pid=47310 comm="qemu-kvm" name="2eb7ff83-a951-4417-a965-874d1aa9386f" dev="dm-0" ino=201468701 scontext=system_u:system_r:svirt_t:s0:c513,c695 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
----
time->Tue Nov 29 04:47:20 2016
type=SYSCALL msg=audit(1480412840.369:3967757): arch=c0000015 syscall=5 success=no exit=-13 a0=1002fa109a0 a1=80800 a2=0 a3=923243bc0aef500 items=0 ppid=1 pid=49762 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c270,c835 key=(null)
type=AVC msg=audit(1480412840.369:3967757): avc:  denied  { read } for  pid=49762 comm="qemu-kvm" name="7f8aeb01-86ad-4de1-a38c-c718534d432c" dev="dm-0" ino=59968 scontext=system_u:system_r:svirt_t:s0:c270,c835 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
----
time->Tue Nov 29 04:47:20 2016
type=SYSCALL msg=audit(1480412840.369:3967758): arch=c0000015 syscall=106 success=no exit=-13 a0=1002fa109a0 a1=3fffebe553d8 a2=3fffebe553d8 a3=923243bc0aef500 items=0 ppid=1 pid=49762 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c270,c835 key=(null)
type=AVC msg=audit(1480412840.369:3967758): avc:  denied  { read } for  pid=49762 comm="qemu-kvm" name="7f8aeb01-86ad-4de1-a38c-c718534d432c" dev="dm-0" ino=59968 scontext=system_u:system_r:svirt_t:s0:c270,c835 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file
----
time->Tue Nov 29 04:47:20 2016
type=SYSCALL msg=audit(1480412840.369:3967759): arch=c0000015 syscall=5 success=no exit=-13 a0=1002fa109a0 a1=a0002 a2=0 a3=1002f84 items=0 ppid=1 pid=49762 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c270,c835 key=(null)
type=AVC msg=audit(1480412840.369:3967759): avc:  denied  { read } for  pid=49762 comm="qemu-kvm" name="7f8aeb01-86ad-4de1-a38c-c718534d432c" dev="dm-0" ino=59968 scontext=system_u:system_r:svirt_t:s0:c270,c835 tcontext=system_u:object_r:unlabeled_t:s0 tclass=lnk_file


kernel: 3.10.0-327.28.2.el7.ppc64le
selinux: 
selinux-policy-3.13.1-102.el7_3.4.noarch
libselinux-2.5-6.el7.ppc64le
libselinux-python-2.5-6.el7.ppc64le
selinux-policy-targeted-3.13.1-102.el7_3.4.noarch

I think I should open a new bug with this info.

Comment 39 Milos Malik 2016-11-29 10:09:37 UTC
We should find out if there are any file context equivalences established.

# semanage fcontext -l -C

If they are not established, then unlabeled_t files are expected, because the policy does not know how to label them:

# matchpathcon /rhev/data-center/*
/rhev/data-center/*	<<none>>
# matchpathcon /rhev/data-center/7f8aeb01-86ad-4de1-a38c-c718534d432c
/rhev/data-center/7f8aeb01-86ad-4de1-a38c-c718534d432c	<<none>>
#

Comment 40 Carlos Mestre González 2016-11-29 10:32:25 UTC
# semanage fcontext -l -C
SELinux fcontext                                   type               Context

/var/log/core(/.*)?                                all files          system_u

#  matchpathcon /rhev/data-center/*
/rhev/data-center/2b7bf6d5-ae33-45fd-bdaa-be03e9ec0f45	<<none>>
/rhev/data-center/355b65b9-8b5a-404f-b21c-8d8d2a0b184c	<<none>>
/rhev/data-center/mnt	<<none>>

Comment 41 Red Hat Bugzilla 2023-09-14 03:22:05 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

