Bug 1367240 - [Hyper-V][RHEL7.3]error message Job sys-devices-virtual-misc-vmbus\x21hv_kvp.device/start failed with result 'timeout'
Summary: [Hyper-V][RHEL7.3]error message Job sys-devices-virtual-misc-vmbus\x21hv_kvp....
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: hyperv-daemons
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Vitaly Kuznetsov
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 1371427 (view as bug list)
Depends On:
Blocks: 1274381 1304407 1364088
 
Reported: 2016-08-16 02:37 UTC by yaoal
Modified: 2016-11-04 03:51 UTC
CC List: 16 users

Fixed In Version: hyperv-daemons-0-0.29.20160216git.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 03:51:40 UTC
Target Upstream Version:
Embargoed:


Attachments
message log (592.78 KB, text/plain)
2016-08-16 02:40 UTC, yaoal
rc3 message file (217.65 KB, text/plain)
2016-11-03 12:29 UTC, yaoal


Links
Red Hat Product Errata RHBA-2016:2330 (normal, SHIPPED_LIVE): hyperv-daemons bug fix update, last updated 2016-11-03 13:44:26 UTC

Description yaoal 2016-08-16 02:37:11 UTC
Description of problem:
When RHEL 7.3 is started, there are some failure entries in the messages log file.

Version-Release number of selected component (if applicable):
NA

How reproducible:
Reboot the system

Steps to Reproduce:
1. Do a full install of RHEL 7.3 Alpha1 on an X3650M5 in UEFI mode.
2. Boot into the OS and check the logs; there are several systemd failure entries in the messages log file.

Actual results:
Error logs are found, such as:
[root@localhost log]# cat messages | grep -i fail
Aug  9 01:19:00 localhost systemd: initial-setup.service: main process exited, code=exited, status=1/FAILURE
Aug  9 01:19:00 localhost systemd: Failed to start Initial Setup configuration program.
Aug  9 01:19:00 localhost systemd: initial-setup.service failed.
Aug 11 00:09:05 localhost systemd: pcscd.service: main process exited, code=exited, status=1/FAILURE
Aug 11 00:09:05 localhost systemd: Unit pcscd.service entered failed state.
Aug 11 00:09:05 localhost systemd: pcscd.service failed.
Aug 11 00:14:07 localhost systemd: Dependency failed for Hyper-V VSS daemon.
Aug 11 00:14:07 localhost systemd: Job hypervvssd.service/start failed with result 'dependency'.
Aug 11 00:14:07 localhost systemd: Job sys-devices-virtual-misc-vmbus\x21hv_vss.device/start failed with result 'timeout'.
Aug 11 00:14:07 localhost systemd: Dependency failed for Hyper-V FCOPY daemon.
Aug 11 00:14:07 localhost systemd: Job hypervfcopyd.service/start failed with result 'dependency'.
Aug 11 00:14:07 localhost systemd: Job sys-devices-virtual-misc-vmbus\x21hv_fcopy.device/start failed with result 'timeout'.
Aug 11 00:14:07 localhost systemd: Dependency failed for Hyper-V KVP daemon.
Aug 11 00:14:07 localhost systemd: Job hypervkvpd.service/start failed with result 'dependency'.
Aug 11 00:14:07 localhost systemd: Job sys-devices-virtual-misc-vmbus\x21hv_kvp.device/start failed with result 'timeout'.

Expected results:
There should be no error logs.

Additional info:
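A rough way to confirm why those device units time out, as a sketch (assuming the host is bare metal rather than a Hyper-V guest, which matches the X3650M5 hardware above; the unit and service names are taken from the log):

# Prints "microsoft" on a Hyper-V guest and "none" on bare metal:
systemd-detect-virt
# List the vmbus device units the Hyper-V daemons wait on:
systemctl list-units 'sys-devices-virtual-misc-vmbus*'
# Show why the KVP daemon did not start (it depends on the device unit above):
systemctl status hypervkvpd.service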

Comment 1 yaoal 2016-08-16 02:40:46 UTC
Created attachment 1191041 [details]
message log

Comment 3 Lukáš Nykrýn 2016-08-16 07:31:07 UTC
This is basically the same as https://bugzilla.redhat.com/show_bug.cgi?id=1331577

Comment 8 yaoal 2016-08-19 06:09:02 UTC
Hi Vitaly Kuznetsov:
  So will this issue be resolved in a newer RHEL 7.3 build?
  Do you have any plan for it?
  Thanks

Comment 9 Vitaly Kuznetsov 2016-08-19 08:49:22 UTC
The bug is ON_QA, so yes, the fix is supposed to make it to 7.3.

Comment 12 Vitaly Kuznetsov 2016-09-02 11:03:26 UTC
*** Bug 1371427 has been marked as a duplicate of this bug. ***

Comment 16 yaoal 2016-11-03 12:28:11 UTC
Hi:
   For RC3, we checked the issue; there are still some errors, such as the following:

Nov  3 06:16:33 localhost systemd: NetworkManager-wait-online.service: main process exited, code=exited, status=1/FAILURE
Nov  3 06:16:33 localhost systemd: Failed to start Network Manager Wait Online.
Nov  3 06:16:33 localhost systemd: Unit NetworkManager-wait-online.service entered failed state.
Nov  3 06:16:33 localhost systemd: NetworkManager-wait-online.service failed.

Nov  3 06:17:31 localhost systemd: initial-setup.service: main process exited, code=exited, status=1/FAILURE
Nov  3 06:17:31 localhost systemd: Failed to start Initial Setup configuration program.
Nov  3 06:17:31 localhost systemd: Unit initial-setup.service entered failed state.
Nov  3 06:17:31 localhost systemd: initial-setup.service failed.

Comment 17 yaoal 2016-11-03 12:29:58 UTC
Created attachment 1216949 [details]
rc3 message file

Comment 18 Vitaly Kuznetsov 2016-11-03 12:43:07 UTC
(In reply to yaoal from comment #16)
> Hi:
>    For RC3, we checked the issue; there are still some errors, such as the
> following:
> 
> Nov  3 06:16:33 localhost systemd: NetworkManager-wait-online.service: main
> process exited, code=exited, status=1/FAILURE
> Nov  3 06:16:33 localhost systemd: Failed to start Network Manager Wait
> Online.
> Nov  3 06:16:33 localhost systemd: Unit NetworkManager-wait-online.service
> entered failed state.
> Nov  3 06:16:33 localhost systemd: NetworkManager-wait-online.service failed.
> 
> Nov  3 06:17:31 localhost systemd: initial-setup.service: main process
> exited, code=exited, status=1/FAILURE
> Nov  3 06:17:31 localhost systemd: Failed to start Initial Setup
> configuration program.
> Nov  3 06:17:31 localhost systemd: Unit initial-setup.service entered failed
> state.
> Nov  3 06:17:31 localhost systemd: initial-setup.service failed.

This is a different issue. First of all, it is NetworkManager-wait-online.service that is failing here, not 'hyperv*.service/start'. From your logs I can see a possible reason: the eno2 interface is configured to get its settings via DHCP but cannot, so either it is disconnected or there is some issue with your network configuration (e.g. no IP address is being handed out via DHCP).

Anyway, I don't see how this issue is related to hyperv-daemons. If you think there is a bug, please open a new one (probably against NetworkManager to start with).
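
A quick way to check the interface state described above, as a sketch (the interface name eno2 comes from this comment; the commands are standard NetworkManager/systemd tooling):

# Show overall device state; look for eno2 stuck in "connecting" or "disconnected":
nmcli device status
# Show whether eno2 actually obtained an IPv4 address via DHCP:
nmcli -f GENERAL,IP4 device show eno2
# See why NetworkManager-wait-online.service failed (it times out when a
# configured interface never comes online):
systemctl status NetworkManager-wait-online.service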

Comment 19 errata-xmlrpc 2016-11-04 03:51:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2330.html
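
To verify the erratum on an affected system, as a sketch (the fixed build string comes from the "Fixed In Version" field above):

# Check the installed build against hyperv-daemons-0-0.29.20160216git.el7:
rpm -q hyperv-daemons
# Update to the erratum build if an older version is installed:
yum update hyperv-daemons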

