Bug 1146689 - Revert to default selinux attributes to all modified files
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: otopi
Classification: oVirt
Component: Core
Version: 1.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: 1.3.0
Assignee: Alon Bar-Lev
QA Contact: Petr Beňas
URL:
Whiteboard: infra
Duplicates: 1154365
Depends On:
Blocks: rhevh-7.0 rhev35betablocker rhev35rcblocker rhev35gablocker
 
Reported: 2014-09-25 18:44 UTC by Fabian Deutsch
Modified: 2016-02-10 19:37 UTC
CC: 16 users

Fixed In Version: otopi-1.3.0-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-17 12:25:14 UTC
oVirt Team: Infra
Embargoed:


Attachments
rhevm side error (27.04 KB, image/png), 2014-09-25 18:44 UTC, Fabian Deutsch
node side error (29.86 KB, image/png), 2014-09-25 18:44 UTC, Fabian Deutsch
engine side logs (255.09 KB, application/x-xz), 2014-09-29 08:19 UTC, Fabian Deutsch
node side logs (195.65 KB, application/x-xz), 2014-09-29 08:21 UTC, Fabian Deutsch


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 33427 0 master MERGED core: filetransaction: restore default selinux default attributes Never

Description Fabian Deutsch 2014-09-25 18:44:00 UTC
Created attachment 941234 [details]
rhevm side error

Description of problem:
Host deploy fails

Version-Release number of selected component (if applicable):
vdsm-4.16.5-2.el7

How reproducible:
always

Steps to Reproduce:
1. Install rhevm35
2. Install latest rhevh build
3. Register Node to Engine
4. Approve node

Actual results:
approval failed

Expected results:
approval succeeds

Additional info:
see attached screenshots

Comment 1 Fabian Deutsch 2014-09-25 18:44:23 UTC
Created attachment 941235 [details]
node side error

Comment 3 Fabian Deutsch 2014-09-25 18:45:47 UTC
otopi tries to use systemctl enable network.service, but chkconfig network on must be used.

Comment 4 Alon Bar-Lev 2014-09-25 19:07:19 UTC
(In reply to Fabian Deutsch from comment #3)
> otopi tries to use systemctl enable network.service, but chkconfig network
> on must be used.

How come chkconfig is to be used when systemd is available? And why do we see this only on the node? (or not...?)

Comment 5 Fabian Deutsch 2014-09-25 21:34:26 UTC
I've got no idea. That is just what I saw in the logs.

systemd is actually the part suggesting to use chkconfig to enable the network service.

Comment 6 Alon Bar-Lev 2014-09-25 21:39:53 UTC
Please attach the host-deploy log, and please avoid attaching pictures... they are not useful.
Thanks!

Comment 7 Fabian Deutsch 2014-09-26 13:03:02 UTC
/var/log/vdsm-reg/vdsm-reg.log is empty

Comment 8 Fabian Deutsch 2014-09-26 13:52:25 UTC
I can currently not provide the logs.

Comment 9 Alon Bar-Lev 2014-09-26 15:14:52 UTC
(In reply to Fabian Deutsch from comment #8)
> I can currently not provide the logs.

The host-deploy logs are on the engine.
How can a bug be urgent if there is no way to perform problem determination?

BTW: the chkconfig guess is invalid; as you can see from the output you provided (first line), chkconfig is in fact executed.

Comment 10 Oved Ourfali 2014-09-28 05:33:49 UTC
Reducing priority until logs are supplied.
Fabian - please attach logs ASAP.

Comment 11 Alon Bar-Lev 2014-09-28 07:27:01 UTC
2014-09-28 07:22:42 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'start', 'iptables.service') stderr:
Job for iptables.service failed. See 'systemctl status iptables.service' and 'journalctl -xn' for details.

2014-09-28 07:22:42 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-k8FgBRhsmX/pythonlib/otopi/context.py", line 142, in _executeMethod
    method['method']()
  File "/tmp/ovirt-k8FgBRhsmX/otopi-plugins/otopi/network/iptables.py", line 118, in _closeup
    self.services.state('iptables', True)
  File "/tmp/ovirt-k8FgBRhsmX/otopi-plugins/otopi/services/systemd.py", line 138, in state
    'start' if state else 'stop'
  File "/tmp/ovirt-k8FgBRhsmX/otopi-plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
    raiseOnError=raiseOnError
  File "/tmp/ovirt-k8FgBRhsmX/pythonlib/otopi/plugin.py", line 871, in execute
    command=args[0],
RuntimeError: Command '/bin/systemctl' failed to execute

# systemctl status iptables
iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
   Active: failed (Result: exit-code) since Sun 2014-09-28 07:22:42 UTC; 2min 17s ago
  Process: 4688 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=1/FAILURE)
 Main PID: 4688 (code=exited, status=1/FAILURE)

Sep 28 07:22:42 alonbl5.tlv.redhat.com iptables.init[4688]: iptables: Applying firewall rules: Can't open /etc/sysconfig/iptables: Permission denied
Sep 28 07:22:42 alonbl5.tlv.redhat.com iptables.init[4688]: [FAILED]
Sep 28 07:22:42 alonbl5.tlv.redhat.com systemd[1]: iptables.service: main process exited, code=exited, status=1/FAILURE
Sep 28 07:22:42 alonbl5.tlv.redhat.com systemd[1]: Failed to start IPv4 firewall with iptables.
Sep 28 07:22:42 alonbl5.tlv.redhat.com systemd[1]: Unit iptables.service entered failed state.

# ls -la /etc/sysconfig/iptables
-rw-------. 1 root root 770 Sep 28 07:22 /etc/sysconfig/iptables
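
The file above is root-owned, yet the iptables init script (also running as root) cannot read it, which points at the file's SELinux label rather than its permission bits. A minimal sketch for inspecting that label, assuming Linux; `selinux_context` is an illustrative helper, not part of otopi:

```python
import os


def selinux_context(path):
    """Return the SELinux context stored in the file's security xattr,
    or None when no label is present (SELinux disabled, non-Linux, or
    the file does not exist)."""
    getx = getattr(os, "getxattr", None)  # extended-attribute API, Linux-only
    if getx is None:
        return None
    try:
        raw = getx(path, "security.selinux")
    except OSError:
        return None
    # The kernel returns the label NUL-terminated.
    return raw.rstrip(b"\x00").decode()
```

If the label differs from the policy default (which `matchpathcon <path>` from libselinux-utils would print), running `restorecon <path>` resets it.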

Comment 12 Alon Bar-Lev 2014-09-28 08:51:13 UTC
This is due to selinux policy, but to be on the safe side I will restore all files to their default attributes when they are written.
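
The fix referenced by the gerrit link above changes otopi's file transaction along these lines; a simplified sketch of the pattern, assuming the `restorecon` binary is available on the host (the function name and shape are illustrative, not otopi's actual API):

```python
import os
import shutil
import subprocess
import tempfile


def write_file_restore_context(path, content):
    """Write *path* atomically, then restore its default SELinux context.

    Writing through a temporary file can leave the target labelled with
    the temporary file's context; restorecon resets it to the policy
    default for *path*. This is a no-op where restorecon is absent.
    """
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.rename(tmp, path)  # atomic replace on the same filesystem
    except Exception:
        os.unlink(tmp)
        raise
    if shutil.which("restorecon"):
        subprocess.run(["restorecon", path], check=False)
```

With this pattern, a config file such as /etc/sysconfig/iptables ends up with the label the policy expects, instead of the label inherited from the temporary file.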

Comment 13 Fabian Deutsch 2014-09-28 16:13:54 UTC
(In reply to Alon Bar-Lev from comment #9)
> (In reply to Fabian Deutsch from comment #8)
> > I can currently not provide the logs.
> 
> The host-deploy logs are on the engine.
> How can a bug be urgent if there is no way to perform problem determination?

Hey Alon, just to clarify my comment: because I initially failed to save the log files, I tried to reproduce the issue, but failed to do so. That was the context in which I replied that I could not provide the logs. I apologize if that made it harder to find the solution.

Comment 14 Alon Bar-Lev 2014-09-28 16:16:54 UTC
(In reply to Fabian Deutsch from comment #13)
> (In reply to Alon Bar-Lev from comment #9)
> > (In reply to Fabian Deutsch from comment #8)
> > > I can currently not provide the logs.
> > 
> > The host-deploy logs are on the engine.
> > How can a bug be urgent if there is no way to perform problem determination?
> 
> Hey Alon, just to clarify my comment: because I initially failed to save the
> log files, I tried to reproduce the issue, but failed to do so. That was the
> context in which I replied that I could not provide the logs. I apologize if
> that made it harder to find the solution.

Logs are located on the engine at /var/log/ovirt-engine/host-deploy.
Now we do not know what we solved... we just solved yet another issue. vdsm is not up because of missing packages; Yaniv will be in touch. There are probably more issues.

Comment 15 Fabian Deutsch 2014-09-29 08:19:38 UTC
Created attachment 942226 [details]
engine side logs

Engine side logs, after the failed approval.

Manually running systemctl enable network.service on RHEV-H also yields the error seen in the logs.

Comment 16 Fabian Deutsch 2014-09-29 08:21:29 UTC
Created attachment 942227 [details]
node side logs

For completeness, logs on the node side after the approval.

Comment 17 Alon Bar-Lev 2014-09-29 08:23:15 UTC
(In reply to Fabian Deutsch from comment #15)
> Created attachment 942226 [details]
> engine side logs
> 
> Engine side logs, after the failed approval.
> 
> Manually running systemctl enable network.service on RHEV-H also yields the
> error seen in the logs.

This is a different issue from the one I fixed in this bug.

Please investigate; it is in your domain. Open a new bug if required.

2014-09-29 08:15:05 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-4h26xNEhw0/pythonlib/otopi/context.py", line 142, in _executeMethod
    method['method']()
  File "/tmp/ovirt-4h26xNEhw0/otopi-plugins/ovirt-host-deploy/node/persist.py", line 62, in _closeup
    raise RuntimeError("Cannot execute persist task!")
RuntimeError: Cannot execute persist task!
2014-09-29 08:15:05 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Closing up': Cannot execute persist task!

Comment 18 Fabian Deutsch 2014-09-29 08:29:38 UTC
I reported the error which can be seen in line 1768 and onwards - the problem you also saw is likely another bug, but at least these logs contain the problem I reported in this bug.

2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd systemd.status:102 check service network status
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:785 execute: ('/bin/systemctl', 'status', 'network.service'), executable='None', cwd='None', env=None
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:803 execute-result: ('/bin/systemctl', 'status', 'network.service'), rc=0
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:861 execute-output: ('/bin/systemctl', 'status', 'network.service') stdout:
network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: active (exited) since Mon 2014-09-29 08:13:38 UTC; 1min 16s ago

Sep 29 08:13:35 localhost systemd[1]: Starting LSB: Bring up/down networking...
Sep 29 08:13:35 localhost network[3321]: Bringing up loopback interface:  [  OK  ]
Sep 29 08:13:35 localhost network[3321]: Bringing up interface ens3:
Sep 29 08:13:35 localhost dhclient[3446]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 8 (xid=0x4bbd2791)
Sep 29 08:13:35 localhost dhclient[3446]: DHCPREQUEST on ens3 to 255.255.255.255 port 67 (xid=0x4bbd2791)
Sep 29 08:13:35 localhost dhclient[3446]: DHCPOFFER from 192.168.122.1
Sep 29 08:13:36 localhost dhclient[3446]: DHCPACK from 192.168.122.1 (xid=0x4bbd2791)
Sep 29 08:13:38 localhost network[3321]: Determining IP information for ens3... done.
Sep 29 08:13:38 localhost network[3321]: [  OK  ]
Sep 29 08:13:38 localhost systemd[1]: Started LSB: Bring up/down networking.

2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'status', 'network.service') stderr:


2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd systemd.startup:111 set service network startup to True
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:785 execute: ('/bin/systemctl', 'show', '-p', 'Id', 'network.service'), executable='None', cwd='None', env=None
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:803 execute-result: ('/bin/systemctl', 'show', '-p', 'Id', 'network.service'), rc=0
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:861 execute-output: ('/bin/systemctl', 'show', '-p', 'Id', 'network.service') stdout:
Id=network.service

2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'show', '-p', 'Id', 'network.service') stderr:


2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:785 execute: ('/bin/systemctl', 'enable', u'network.service'), executable='None', cwd='None', env=None
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:803 execute-result: ('/bin/systemctl', 'enable', u'network.service'), rc=0
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:861 execute-output: ('/bin/systemctl', 'enable', u'network.service') stdout:


2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'enable', u'network.service') stderr:
network.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig network on
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
   .wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
   a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
   D-Bus, udev, scripted systemctl call, ...).

2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd systemd.state:134 starting service vdsmd
2014-09-29 08:14:55 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:785 execute: ('/bin/systemctl', 'start', 'vdsmd.service'), executable='None', cwd='None', env=None
2014-09-29 08:15:04 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:803 execute-result: ('/bin/systemctl', 'start', 'vdsmd.service'), rc=0
2014-09-29 08:15:04 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:861 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stdout:


2014-09-29 08:15:04 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:866 execute-output: ('/bin/systemctl', 'start', 'vdsmd.service') stderr:

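
In the log above, otopi first resolves the real unit name with `systemctl show -p Id` before enabling it; systemd then redirects the non-native LSB unit to /sbin/chkconfig itself, so the rc=0 result is correct despite the noisy stderr. A sketch of that parsing step (a hypothetical helper, not otopi's exact code):

```python
def parse_unit_id(show_output):
    """Extract the unit name from `systemctl show -p Id <unit>` output,
    e.g. "Id=network.service" -> "network.service"."""
    for line in show_output.splitlines():
        if line.startswith("Id="):
            return line[len("Id="):].strip() or None
    return None
```

Resolving the Id first means alias names are normalized before the enable call, and the chkconfig redirect is left to systemd itself.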
Comment 19 Alon Bar-Lev 2014-09-29 08:32:16 UTC
(In reply to Fabian Deutsch from comment #18)
> I reported the error which can be seen in line 1768 and onwards - the
> problem you also saw is likely another bug, but at least these logs contain
> the problem I reported in this bug.

This is not an error; read comment #9.

Comment 20 Alon Bar-Lev 2014-09-29 08:32:48 UTC
(In reply to Alon Bar-Lev from comment #17)
> (In reply to Fabian Deutsch from comment #15)
> > Created attachment 942226 [details]
> > engine side logs
> > 
> > Engine side logs, after the failed approval.
> > 
> > Manually running systemctl enable network.service on RHEV-H also yields the
> > error seen in the logs.
> 
> Different issue than the one I fixed in this bug.
> 
> Please investigate, it is in your domain, open a new bug if required.
> 
> 2014-09-29 08:15:05 DEBUG otopi.context context._executeMethod:152 method
> exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-4h26xNEhw0/pythonlib/otopi/context.py", line 142, in
> _executeMethod
>     method['method']()
>   File
> "/tmp/ovirt-4h26xNEhw0/otopi-plugins/ovirt-host-deploy/node/persist.py",
> line 62, in _closeup
>     raise RuntimeError("Cannot execute persist task!")
> RuntimeError: Cannot execute persist task!
> 2014-09-29 08:15:05 ERROR otopi.context context._executeMethod:161 Failed to
> execute stage 'Closing up': Cannot execute persist task!

This is due to a bad commit for bug #1128033; fixing it anyway.

Comment 21 Fabian Deutsch 2014-09-29 08:37:00 UTC
(In reply to Alon Bar-Lev from comment #19)
> (In reply to Fabian Deutsch from comment #18)
> > I reported the error which can be seen in line 1768 and onwards - the
> > problem you also saw is likely another bug, but at least these logs contain
> > the problem I reported in this bug.
> 
> This is not an error; read comment #9.

Sorry, missed that. Thanks.

Comment 23 Petr Beňas 2014-10-15 13:55:07 UTC
Successfully added Red Hat Enterprise Virtualization Hypervisor release 7.0 (20140904.0.el7ev) to rhevm-3.5.0-0.14.beta.el6ev.noarch.

Comment 24 Sandro Bonazzola 2014-10-17 12:25:14 UTC
oVirt 3.5 has been released and should include the fix for this issue.

Comment 25 Alon Bar-Lev 2014-10-19 11:19:36 UTC
*** Bug 1154365 has been marked as a duplicate of this bug. ***

