Bug 1551403 - FCoE is not initiated on boot with lldp enabled
Summary: FCoE is not initiated on boot with lldp enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Services
Version: 4.19.41
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.3.0
Assignee: Eyal Shenitzky
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks: 1636254
 
Reported: 2018-03-05 04:52 UTC by Mark Keir
Modified: 2019-03-13 16:37 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1636254 (view as bug list)
Environment:
Last Closed: 2019-03-13 16:37:47 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.3+
ylavi: exception+


Attachments
rhevh12 journalctl (348.33 KB, application/zip)
2018-04-30 07:57 UTC, Eyal Shenitzky
no flags Details
validation that disabling lldp on the interface resolves the initial problem (79.71 KB, text/plain)
2019-01-17 15:48 UTC, Dominik Holler
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2750771 0 None None Using bonding on software served FCOE interfaces in a multipath SAN boot configuration may experience I/O errors and can... 2019-02-16 21:58:04 UTC
oVirt gerrit 95987 0 'None' MERGED net: Add config option to enable/disable LLDP 2021-02-12 19:54:56 UTC

Description Mark Keir 2018-03-05 04:52:59 UTC
Description of problem:

On a fresh boot, FCoE storage is not found. Running "service network restart" after login is required before the bond device is created and multipath can see the FC storage.

Version-Release number of selected component (if applicable):

[root@rhevh-13 ~]# rpm -q vdsm
vdsm-4.19.45-1.el7ev.x86_64
[root@rhevh-13 ~]# uname -a
Linux rhevh-13.infra.prod.eng.bos.redhat.com 3.10.0-693.17.1.el7.x86_64 #1 SMP Sun Jan 14 10:36:03 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:

Set up FCoE via process in https://gitlab.infra.prod.eng.rdu2.redhat.com/ansible-roles/rhvm-server/tree/master/rhvh-fcoe (derived from https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/fcoe-config)

Steps to Reproduce:
1. Configure network interface config files, fcoe files, restart services
2. Reboot host, then run "fcoeadm -i": gives no result
3. Restart networking ("service network restart"), then re-run "fcoeadm -i": interfaces are listed

Actual results:

[root@rhevh-13 ~]# fcoeadm -i
(no output)

Expected results:

[root@rhevh-13 ~]# fcoeadm -i
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB72A40

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.10.3 over p2p2_4.1002-fco
        OS Device Name:    host16
        Node Name:         0x200018fb7b731036
        Port Name:         0x200118fb7b731036
        Fabric Name:        0x100050eb1a292a94
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x010d01
        State:             Online
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB72A40

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.10.3 over p2p1_4.1002-fco
        OS Device Name:    host15
        Node Name:         0x200018fb7b731033
        Port Name:         0x200118fb7b731033
        Fabric Name:        0x100050eb1a292694
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x010d01
        State:             Online

Additional info:

[root@rhevh-13 ~]# more /etc/sysconfig/network-scripts/ifcfg-p2p1_4 /etc/sysconfig/network-scripts/ifcfg-p2p2_4
::::::::::::::
/etc/sysconfig/network-scripts/ifcfg-p2p1_4
::::::::::::::
DEVICE="p2p1_4"
ONBOOT=yes
MTU=1500
HWADDR="18:fb:7b:73:10:31"
NM_CONTROLLED=no
PEERDNS=no
::::::::::::::
/etc/sysconfig/network-scripts/ifcfg-p2p2_4
::::::::::::::
DEVICE="p2p2_4"
ONBOOT=yes
MTU=1500
HWADDR="18:fb:7b:73:10:34"
NM_CONTROLLED=no
PEERDNS=no
[root@rhevh-13 ~]# more /etc/fcoe/cfg-p2p1_4 /etc/fcoe/cfg-p2p2_4
::::::::::::::
/etc/fcoe/cfg-p2p1_4
::::::::::::::
## Type:       yes/no
## Default:    no
# Enable/Disable FCoE service at the Ethernet port
# Normally set to "yes"
FCOE_ENABLE="yes"

## Type:       yes/no
## Default:    no
# Indicate if DCB service is required at the Ethernet port
# Normally set to "yes"
DCB_REQUIRED="no"

## Type:	yes/no
## Default:	no
# Indicate if VLAN discovery should be handled by fcoemon
# Normally set to "yes"
AUTO_VLAN="yes"

## Type:	fabric/vn2vn
## Default:	fabric
# Indicate the mode of the FCoE operation, either fabric or vn2vn
# Normally set to "fabric"
MODE="fabric"

## Type:	yes/no
## Default:	no
# Indicate whether to run a FIP responder for VLAN discovery in vn2vn mode
#FIP_RESP="yes"
::::::::::::::
/etc/fcoe/cfg-p2p2_4
::::::::::::::
## Type:       yes/no
## Default:    no
# Enable/Disable FCoE service at the Ethernet port
# Normally set to "yes"
FCOE_ENABLE="yes"

## Type:       yes/no
## Default:    no
# Indicate if DCB service is required at the Ethernet port
# Normally set to "yes"
DCB_REQUIRED="no"

## Type:	yes/no
## Default:	no
# Indicate if VLAN discovery should be handled by fcoemon
# Normally set to "yes"
AUTO_VLAN="yes"

## Type:	fabric/vn2vn
## Default:	fabric
# Indicate the mode of the FCoE operation, either fabric or vn2vn
# Normally set to "fabric"
MODE="fabric"

## Type:	yes/no
## Default:	no
# Indicate whether to run a FIP responder for VLAN discovery in vn2vn mode
#FIP_RESP="yes"

[root@rhevh-13 ~]# for i in $(systemctl --all --full | grep active | awk '{print $1}'); do time=$(systemctl show -p ActiveEnterTimestampMonotonic -- $i | awk -F= '{print $2}') ; echo "$time $i" ;  done |sort -n | more
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 ●
0 auth-rpcgss-module.service
0 cockpit.service
0 dracut-shutdown.service
0 emergency.service
0 emergency.target
0 final.target
0 graphical.target
0 imgbase-generate-iqn.service
0 ip6tables.service
0 iscsi.service
0 iscsi-shutdown.service
0 iscsiuio.service
0 ksm.service
0 ksmtuned.service
0 libvirt-guests.service
0 lvm2-activation-early.service
0 lvm2-activation-net.service
0 lvm2-activation.service
0 lvm2-lvmpolld.service
0 momd.service
0 network-pre.target
0 nfs-config.service
0 nfs-idmapd.service
0 nfs-mountd.service
0 nfs-server.service
0 nfs-utils.service
0 nss-user-lookup.target
0 ntpdate.service
0 ntpd.service
0 openvswitch.service
0 ovsdb-server.service
0 ovs-vswitchd.service
0 plymouth-quit.service
0 plymouth-quit-wait.service
0 plymouth-read-write.service
0 proc-sys-fs-binfmt_misc.mount
0 rc-local.service
0 rescue.service
0 rescue.target
0 rhel-autorelabel-mark.service
0 rhel-autorelabel.service
0 rhel-configure.service
0 rhel-loadmodules.service
0 rpc-gssd.service
0 rpc-statd-notify.service
0 selinux-policy-migrate-local-changes
0 shutdown.target
0 sshd-keygen.service
0 sshd.socket
0 sys-fs-fuse-connections.mount
0 syslog.socket
0 systemd-ask-password-console.path
0 systemd-ask-password-console.service
0 systemd-ask-password-plymouth.service
0 systemd-ask-password-wall.service
0 systemd-binfmt.service
0 systemd-firstboot.service
0 systemd-hwdb-update.service
0 systemd-initctl.service
0 systemd-journal-catalog-update.service
0 systemd-machine-id-commit.service
0 systemd-readahead-done.service
0 systemd-reboot.service
0 systemd-shutdownd.service
0 systemd-tmpfiles-clean.service
0 systemd-update-done.service
0 systemd-update-utmp-runlevel.service
0 umount.target
0 var-lib-nfs-rpc_pipefs.mount
0 virt-guest-shutdown.target
0 virtlockd.service
3241326 -.mount
3245159 -.slice
3245344 systemd-journald.socket
3245691 system.slice
3254468 systemd-vconsole-setup.service
4020278 sys-kernel-config.mount
4231655 plymouth-start.service
4232465 systemd-ask-password-plymouth.path
7251341 systemd-fsck-root.service
9244559 lvm2-lvmpolld.socket
9247020 dm-event.socket
9305601 proc-sys-fs-binfmt_misc.automount
9355638 systemd-initctl.socket
9356198 system-systemd\x2dfsck.slice
9359816 systemd-udevd-control.socket
9360334 user.slice
9360916 systemd-shutdownd.socket
9361524 system-getty.slice
9361621 slices.target
9361720 time-sync.target
9362321 system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice
9363456 systemd-udevd-kernel.socket
9367294 systemd-ask-password-wall.path
9367355 paths.target
9378711 dev-hugepages.mount
9378973 dev-mqueue.mount
9379207 sys-kernel-debug.mount
9380183 systemd-readahead-replay.service
9381252 systemd-readahead-collect.service
9386848 kmod-static-nodes.service
9422421 systemd-journald.service
9431136 systemd-remount-fs.service
9451646 systemd-tmpfiles-setup-dev.service
9465463 systemd-modules-load.service
9478375 systemd-sysctl.service
9479387 proc-fs-nfsd.mount
9671519 systemd-udev-trigger.service
9717783 rhel-readonly.service
9739550 multipathd.service
9941438 systemd-udevd.service
9967061 sys-module-configfs.device
10187041 dev-ttyS2.device
10187045 sys-devices-platform-serial8250-tty-ttyS2.device
10187392 dev-ttyS3.device
10187393 sys-devices-platform-serial8250-tty-ttyS3.device
10187728 dev-ttyS0.device
10187729 sys-devices-pnp0-00:02-tty-ttyS0.device
10188063 dev-ttyS1.device
10188065 sys-devices-pnp0-00:03-tty-ttyS1.device
10916859 sys-subsystem-net-devices-p2p2_3.device
10916864 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.5-net-p2p2_3.device
10925489 sys-subsystem-net-devices-p2p2_4.device
10925491 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.7-net-p2p2_4.device
10949233 sys-subsystem-net-devices-em1.device
10949235 sys-devices-pci0000:00-0000:00:01.0-0000:01:00.0-net-em1.device
10963057 sys-subsystem-net-devices-p2p1_1.device
10963058 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.0-net-p2p1_1.device
11015621 sys-subsystem-net-devices-p2p2_2.device
11015623 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.3-net-p2p2_2.device
11019417 sys-subsystem-net-devices-p2p1_4.device
11019418 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.6-net-p2p1_4.device
11028546 dev-block-253:2.device
11028548 dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dsfKhoy\x2djNSW\x2d7ZAg\x2diWzn\x2dkKvg\x2dcY1r\x2dNTVT8k.device
11028548 dev-mapper-3614187705e4ddc001e3cf35c0fe6bd31p2.device
11028549 dev-disk-by\x2did-dm\x2dname\x2d3614187705e4ddc001e3cf35c0fe6bd31p2.device
11028549 dev-disk-by\x2did-dm\x2duuid\x2dpart2\x2dmpath\x2d3614187705e4ddc001e3cf35c0fe6bd31.device
11028550 dev-dm\x2d2.device
11028550 sys-devices-virtual-block-dm\x2d2.device
11028944 system-lvm2\x2dpvscan.slice
11047832 sys-subsystem-net-devices-p2p1_3.device
11047834 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.4-net-p2p1_3.device
11054024 lvm2-pvscan@253:2.service
11078411 sys-subsystem-net-devices-em2.device
11078413 sys-devices-pci0000:00-0000:00:01.0-0000:01:00.1-net-em2.device
11111298 sys-subsystem-net-devices-p2p2_1.device
11111300 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.1-net-p2p2_1.device
11141060 sys-subsystem-net-devices-p2p1_2.device
11141062 sys-devices-pci0000:00-0000:00:03.0-0000:03:00.2-net-p2p1_2.device
11144712 dev-rhvh_rhevh\x2d13-rhvh\x2d4.1\x2d0.20180126.0\x2b1.device
11144713 dev-disk-by\x2duuid-c0c7387d\x2d9d41\x2d42e1\x2dbfbf\x2d8e58ac824db6.device
11144714 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2drhvh\x2d\x2d4.1\x2d\x2d0.20180126.0\x2b1.device
11144714 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldCY8QILAEabET70ig6PGpGtmPdaOYx4sB.device
11144715 dev-dm\x2d7.device
11144715 sys-devices-virtual-block-dm\x2d7.device
11144716 dev-mapper-rhvh_rhevh\x2d\x2d13\x2drhvh\x2d\x2d4.1\x2d\x2d0.20180126.0\x2b1.device
11153427 dev-rhvh_rhevh\x2d13-swap.device
11153429 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dswap.device
11153488 dev-disk-by\x2duuid-1bc26186\x2d52d9\x2d488c\x2d8ae1\x2d4372a06a9159.device
11153489 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dswap.device
11153489 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldpIR4ophPviw1QZlFr3NYAY2K7dP6dQfU.device
11153490 dev-dm\x2d3.device
11153490 sys-devices-virtual-block-dm\x2d3.device
11168922 dev-mapper-3614187705e4ddc001e3cf35c0fe6bd31p1.device
11168924 dev-disk-by\x2duuid-e428c107\x2d0b19\x2d4ad9\x2dbfaf\x2de3684b15e9ee.device
11256275 dev-disk-by\x2did-dm\x2duuid\x2dpart1\x2dmpath\x2d3614187705e4ddc001e3cf35c0fe6bd31.device
11256276 dev-disk-by\x2did-dm\x2dname\x2d3614187705e4ddc001e3cf35c0fe6bd31p1.device
11256277 dev-dm\x2d1.device
11256277 sys-devices-virtual-block-dm\x2d1.device
11258096 dev-rhvh_rhevh\x2d13-swap.swap
11258195 dev-disk-by\x2duuid-1bc26186\x2d52d9\x2d488c\x2d8ae1\x2d4372a06a9159.swap
11258274 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldpIR4ophPviw1QZlFr3NYAY2K7dP6dQfU.swap
11258353 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dswap.swap
11258431 dev-dm\x2d3.swap
11258509 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dswap.swap
11260065 swap.target
11291690 dev-mapper-3614187705e4ddc001e3cf35c0fe6bd31.device
11291693 dev-disk-by\x2did-dm\x2duuid\x2dmpath\x2d3614187705e4ddc001e3cf35c0fe6bd31.device
11291694 dev-disk-by\x2did-dm\x2dname\x2d3614187705e4ddc001e3cf35c0fe6bd31.device
11291695 dev-dm\x2d0.device
11291696 sys-devices-virtual-block-dm\x2d0.device
11436488 systemd-udev-settle.service
11476031 dm-event.service
11561468 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dpool00.device
11561472 dev-dm\x2d8.device
11561473 sys-devices-virtual-block-dm\x2d8.device
11733193 dev-rhvh_rhevh\x2d13-var.device
11733196 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar.device
11733330 dev-disk-by\x2duuid-f2a582b4\x2d5141\x2d4c63\x2d894d\x2d793c631bed50.device
11733331 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9Oldpd4SnRLmgqrBhzLm5qhASSteNdlDkSCr.device
11733332 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dvar.device
11733333 dev-dm\x2d11.device
11733334 sys-devices-virtual-block-dm\x2d11.device
11739636 dev-rhvh_rhevh\x2d13-var_log.device
11739639 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar_log.device
11739730 dev-disk-by\x2duuid-7e617471\x2d359d\x2d4bd4\x2db673\x2d4f1d55697bab.device
11739731 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldnqBoT2YhgDdnOtVmdTRV2sINLASZB98M.device
11739732 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dvar_log.device
11739733 dev-dm\x2d10.device
11739734 sys-devices-virtual-block-dm\x2d10.device
11757490 dev-rhvh_rhevh\x2d13-var_log_audit.device
11757493 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar_log_audit.device
11757595 dev-disk-by\x2duuid-ae67404b\x2d3ebd\x2d4f2f\x2da8fe\x2dc4fd02207a76.device
11757596 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldqvX3eknst9kXMkjQf8hiVNfeUppf3KvI.device
11757597 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dvar_log_audit.device
11757598 dev-dm\x2d9.device
11757598 sys-devices-virtual-block-dm\x2d9.device
11758996 dev-rhvh_rhevh\x2d13-tmp.device
11758998 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dtmp.device
11759076 dev-disk-by\x2duuid-2fb7dcc9\x2d30bf\x2d4c30\x2d926f\x2d9ddf8e64516d.device
11759077 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9Old58glOnUiT7Y2N87uug0fQc7bZW1BjHjl.device
11759078 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dtmp.device
11759079 dev-dm\x2d12.device
11759079 sys-devices-virtual-block-dm\x2d12.device
11786395 dev-rhvh_rhevh\x2d13-home.device
11786398 dev-mapper-rhvh_rhevh\x2d\x2d13\x2dhome.device
11786500 dev-disk-by\x2duuid-d18a23ce\x2d5590\x2d4eb8\x2db002\x2d4a8ee644db7e.device
11786501 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dyEJiqYUpCoIFH6MG1SG5ippJaMAc9OldRjyRbJnXA1eFgiUImxtjZWEcwPmBdBPP.device
11786502 dev-disk-by\x2did-dm\x2dname\x2drhvh_rhevh\x2d\x2d13\x2dhome.device
11786503 dev-dm\x2d13.device
11786503 sys-devices-virtual-block-dm\x2d13.device
11803283 cryptsetup.target
11881353 lvm2-monitor.service
11881668 local-fs-pre.target
11921888 systemd-fsck@dev-mapper-rhvh_rhevh\x2d\x2d13\x2dhome.service
11933493 systemd-fsck@dev-mapper-rhvh_rhevh\x2d\x2d13\x2dtmp.service
11947786 home.mount
11949430 systemd-fsck@dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar.service
11950232 systemd-fsck@dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar_log_audit.service
11950942 systemd-fsck@dev-mapper-rhvh_rhevh\x2d\x2d13\x2dvar_log.service
11964687 tmp.mount
11975421 var.mount
12004745 var-log.mount
12031527 systemd-fsck@dev-disk-by\x2duuid-e428c107\x2d0b19\x2d4ad9\x2dbfaf\x2de3684b15e9ee.service
12032192 systemd-random-seed.service
12048925 var-log-audit.mount
12054536 boot.mount
12056288 systemd-journal-flush.service
12058113 local-fs.target
12117591 rhel-import-state.service
12174920 systemd-tmpfiles-setup.service
12366930 auditd.service
12387327 systemd-update-utmp.service
12387698 sysinit.target
12388394 iscsiuio.socket
12389123 iscsid.socket
12389218 systemd-tmpfiles-clean.timer
12389241 timers.target
12390883 virtlockd.socket
12391424 lldpad.socket
12392399 virtlogd.socket
12396082 dbus.socket
12397664 rpcbind.socket
12406576 cockpit.socket
12406814 sockets.target
12406944 basic.target
12410781 abrtd.service
12430420 irqbalance.service
12437647 dbus.service
12480716 sanlock.service
12483879 rpcbind.service
12488077 rhel-dmesg.service
12491069 gssproxy.service
12494536 rsyslog.service
12498331 wdmd.service
12501284 systemd-logind.service
12502338 nfs-client.target
12555293 iptables.service
12592553 ovirt-imageio-daemon.service
12612843 NetworkManager.service
12754613 imgbase-clean-grub.service
12757115 imgbase-copy-bootfiles.service
12759216 chronyd.service
15280115 NetworkManager-wait-online.service
15334429 polkit.service
15977416 vdsm-network-init.service
16194084 imgbase-config-vdsm.service
17493578 sys-subsystem-net-devices-bond0.device
17493581 sys-devices-virtual-net-bond0.device
29421312 network.target
29431141 supervdsmd.service
29433784 ovirt-vmconsole-host-sshd.service
29438375 goferd.service
29465435 iscsid.service
29604796 sshd.service
29622306 blk-availability.service
29675606 remote-fs-pre.target
29675662 remote-fs.target
29689605 systemd-user-sessions.service
29691113 crond.service
29834383 getty
29834792 getty.target
30015863 libvirtd.service
30651356 postfix.service
31457883 vdsm-network.service
33508344 glusterd.service
33510851 network-online.target
33567025 rhnsd.service
34191380 sys-subsystem-net-devices-\x3bvdsmdummy\x3b.device
34191384 sys-devices-virtual-net-\x3bvdsmdummy\x3b.device
34289210 vdsmd.service
34290747 multi-user.target
34291622 systemd-readahead-done.timer
39820706 mom-vdsm.service
106164771 kdump.service
266477710 run-user-0.mount
266488543 user-0.slice
312040097 sys-subsystem-net-devices-p2p1_4.1002\x2dfco.device
312040102 sys-devices-virtual-net-p2p1_4.1002\x2dfco.device
312933884 sys-subsystem-net-devices-p2p2_4.1002\x2dfco.device
312933890 sys-devices-virtual-net-p2p2_4.1002\x2dfco.device
322252749 sys-subsystem-net-devices-ovirtmgmt.device
322252755 sys-devices-virtual-net-ovirtmgmt.device
328656906 dev-block-253:14.device
328656911 dev-mapper-360060160af2037003f89da16e302e811.device
328656912 dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dVCy9hD\x2dpgB9\x2dzPvv\x2dHtEa\x2dqw6g\x2dGi6z\x2dmuyrDr.device
328656913 dev-disk-by\x2did-dm\x2duuid\x2dmpath\x2d360060160af2037003f89da16e302e811.device
328656914 dev-disk-by\x2did-dm\x2dname\x2d360060160af2037003f89da16e302e811.device
328656916 dev-dm\x2d14.device
328656917 sys-devices-virtual-block-dm\x2d14.device
328670948 dev-block-253:15.device
328670951 dev-mapper-360060160af203700807d9da74405e811.device
328670953 dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dMeERNB\x2dxzN4\x2dzU7I\x2dKqkh\x2dPrwp\x2dHbOr\x2dF2wzFu.device
328670954 dev-disk-by\x2did-dm\x2duuid\x2dmpath\x2d360060160af203700807d9da74405e811.device
328670955 dev-disk-by\x2did-dm\x2dname\x2d360060160af203700807d9da74405e811.device
328670956 dev-dm\x2d15.device
328670957 sys-devices-virtual-block-dm\x2d15.device
328708693 lvm2-pvscan@253:14.service
328710726 lvm2-pvscan@253:15.service
328723975 dev-mapper-360060160af2037007cbe1881261de811.device
328723978 dev-disk-by\x2did-dm\x2duuid\x2dmpath\x2d360060160af2037007cbe1881261de811.device
328723979 dev-disk-by\x2did-dm\x2dname\x2d360060160af2037007cbe1881261de811.device
328723980 dev-dm\x2d16.device
328723981 sys-devices-virtual-block-dm\x2d16.device
328740063 dev-block-253:17.device
328740066 dev-mapper-360060160af2037007cbe1881261de811p1.device
328740067 dev-disk-by\x2did-dm\x2duuid\x2dpart1\x2dmpath\x2d360060160af2037007cbe1881261de811.device
328740067 dev-disk-by\x2did-lvm\x2dpv\x2duuid\x2dY9TUL9\x2dCx6D\x2dxunj\x2dxKXU\x2doqYl\x2dUDvM\x2dPP8FN7.device
328740068 dev-disk-by\x2did-dm\x2dname\x2d360060160af2037007cbe1881261de811p1.device
328740069 dev-dm\x2d17.device
328740070 sys-devices-virtual-block-dm\x2d17.device
328757312 lvm2-pvscan@253:17.service
336524087 network.service
3289980017 nss-lookup.target
3290087942 rpc-statd.service
3290346248 rhev-data\x2dcenter-mnt-vnx\x2dbos\x2dpdi\x2dnfs01.storage.prod.eng.bos.redhat.com:_rhv__ISO.mount
3290646752 rhev-data\x2dcenter-mnt-vnx\x2dbos\x2dpdi\x2dnfs01.storage.prod.eng.bos.redhat.com:_exports.mount
3292069419 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dmetadata.device
3292069425 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpC3kjrTG29NuNfQrbvO2YEV4eiejSoNFu.device
3292069427 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dmetadata.device
3292069428 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-metadata.device
3292069430 dev-dm\x2d18.device
3292069431 sys-devices-virtual-block-dm\x2d18.device
3292413345 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dids.device
3292413351 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpxRqS6VXmJBONcYa5RDexBdX5BVwagMDK.device
3292413353 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dids.device
3292413354 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-ids.device
3292413355 dev-dm\x2d19.device
3292413356 sys-devices-virtual-block-dm\x2d19.device
3292624620 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dleases.device
3292624626 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEp2lcI50OLpm5Ih8zwC6nWx8ndsGrXT8lZ.device
3292624627 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dleases.device
3292624629 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-leases.device
3292624630 dev-dm\x2d20.device
3292624631 sys-devices-virtual-block-dm\x2d20.device
3292990195 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2doutbox.device
3292990201 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpWagdUc1KjqcNkqDTOp5OHkF30Ge3xdxj.device
3292990202 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2doutbox.device
3292990203 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-outbox.device
3292990204 dev-dm\x2d21.device
3292990205 sys-devices-virtual-block-dm\x2d21.device
3292993067 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dinbox.device
3292993070 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEp2cguiT8S9PyU8CGEtaLCQauKfNSXa1js.device
3292993071 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dinbox.device
3292993072 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-inbox.device
3292993073 dev-dm\x2d23.device
3292993074 sys-devices-virtual-block-dm\x2d23.device
3292995229 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dxleases.device
3292995232 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpSfBGXhSgpIw37np6eENq3eYEypq3Apmq.device
3292995233 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dxleases.device
3292995234 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-xleases.device
3292995235 dev-dm\x2d22.device
3292995236 sys-devices-virtual-block-dm\x2d22.device
3293001604 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dmaster.device
3293001608 dev-disk-by\x2duuid-1976a61c\x2d041b\x2d4725\x2d8b87\x2d99a0864c14e1.device
3293001609 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpSh7fXxb8GUKzeoLeNV27gUHUEMY9u5GU.device
3293001610 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dmaster.device
3293001611 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-master.device
3293001612 dev-dm\x2d24.device
3293001613 sys-devices-virtual-block-dm\x2d24.device
3293389333 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dmetadata.device
3293389338 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-metadata.device
3293389339 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiyPAVkCnY6jiDa2qmWbP50o4MdCjDrCXYy.device
3293389340 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dmetadata.device
3293389341 dev-dm\x2d25.device
3293389342 sys-devices-virtual-block-dm\x2d25.device
3293666774 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dids.device
3293666779 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-ids.device
3293666780 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dids.device
3293666780 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiyE5UohHZ2d9pLxKzVPSsZoc31i2qKLozB.device
3293666781 dev-dm\x2d26.device
3293666782 sys-devices-virtual-block-dm\x2d26.device
3293834085 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dleases.device
3293834091 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-leases.device
3293834092 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiyrwWdWZD0PcVV2QegI2mYbJk9I1HWSS7o.device
3293834093 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dleases.device
3293834095 dev-dm\x2d27.device
3293834096 sys-devices-virtual-block-dm\x2d27.device
3294137398 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2doutbox.device
3294137405 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-outbox.device
3294137408 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiydCjXgptUFdjv5MzmzoOhQV6NnokYB8jp.device
3294137410 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2doutbox.device
3294137412 dev-dm\x2d28.device
3294137413 sys-devices-virtual-block-dm\x2d28.device
3294139757 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dinbox.device
3294139760 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-inbox.device
3294139761 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiyNVQW8CIvIpCMEWGoMu8wICovP7Zwy5Ca.device
3294139762 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dinbox.device
3294139763 dev-dm\x2d29.device
3294139764 sys-devices-virtual-block-dm\x2d29.device
3294141757 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dmaster.device
3294141759 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-master.device
3294141760 dev-disk-by\x2duuid-94d87ca6\x2dae34\x2d4d1b\x2d8e72\x2dac6fe09fc770.device
3294141761 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiy7vohqnEI5BSrUmFkYBJm0gLnBMQj6OAb.device
3294141762 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dmaster.device
3294141762 dev-dm\x2d30.device
3294141763 sys-devices-virtual-block-dm\x2d30.device
3294143486 dev-mapper-e6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dxleases.device
3294143488 dev-e6b01dae\x2dc127\x2d4bd7\x2da013\x2dff7440b03f82-xleases.device
3294143489 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dixAnFlj1J3Eo84VAbccpbhcMLjefphiyYVGM8Nj7GL628LL3poIKpxerBNlmlI1c.device
3294143490 dev-disk-by\x2did-dm\x2dname\x2de6b01dae\x2d\x2dc127\x2d\x2d4bd7\x2d\x2da013\x2d\x2dff7440b03f82\x2dxleases.device
3294143491 dev-dm\x2d31.device
3294143491 sys-devices-virtual-block-dm\x2d31.device
3359167861 lldpad.service
3359288435 fcoe.service
3361499205 vdsm.slice
3361499941 vdsm-dhclient.slice
3361587284 sys-subsystem-net-devices-bond0.170.device
3361587288 sys-devices-virtual-net-bond0.170.device
3361619072 sys-subsystem-net-devices-brew.device
3361619079 sys-devices-virtual-net-brew.device
3363207712 sys-subsystem-net-devices-bond0.167.device
3363207717 sys-devices-virtual-net-bond0.167.device
3363247965 sys-subsystem-net-devices-host.device
3363247968 sys-devices-virtual-net-host.device
3422273988 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2db3f107de\x2d\x2d76bb\x2d\x2d4040\x2d\x2d8331\x2d\x2d4cbfb2961b14.device
3422273994 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEp2ei6VuZSJZ13MYRZu1vYC44teSyLQqrS.device
3422273995 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2db3f107de\x2d\x2d76bb\x2d\x2d4040\x2d\x2d8331\x2d\x2d4cbfb2961b14.device
3422273996 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-b3f107de\x2d76bb\x2d4040\x2d8331\x2d4cbfb2961b14.device
3422273997 dev-dm\x2d32.device
3422273998 sys-devices-virtual-block-dm\x2d32.device
3424077836 virtlogd.service
3424159842 sys-subsystem-net-devices-vnet0.device
3424159848 sys-devices-virtual-net-vnet0.device
3425207819 machine.slice
3425346153 systemd-machined.service
3425359558 machine-qemu\x2d1\x2dfreshmaker\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3483689359 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2db575ee14\x2d\x2dc901\x2d\x2d4d71\x2d\x2d81a5\x2d\x2d887b34e2ffc4.device
3483689365 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEp1yxhF7Dl5iYX06suGbdl2FtHki89EWKL.device
3483689366 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2db575ee14\x2d\x2dc901\x2d\x2d4d71\x2d\x2d81a5\x2d\x2d887b34e2ffc4.device
3483689367 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-b575ee14\x2dc901\x2d4d71\x2d81a5\x2d887b34e2ffc4.device
3483689368 dev-dm\x2d33.device
3483689369 sys-devices-virtual-block-dm\x2d33.device
3485907863 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2df2e1ecc1\x2d\x2ddb4a\x2d\x2d497c\x2d\x2da7ae\x2d\x2d9ccf8aef2e3c.device
3485907869 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpsT3OBDanT2Be9OiyfHuRXrr2VLduFxw6.device
3485907870 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2df2e1ecc1\x2d\x2ddb4a\x2d\x2d497c\x2d\x2da7ae\x2d\x2d9ccf8aef2e3c.device
3485907871 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-f2e1ecc1\x2ddb4a\x2d497c\x2da7ae\x2d9ccf8aef2e3c.device
3485907872 dev-dm\x2d34.device
3485907873 sys-devices-virtual-block-dm\x2d34.device
3488375806 sys-subsystem-net-devices-vnet1.device
3488375811 sys-devices-virtual-net-vnet1.device
3489589671 machine-qemu\x2d2\x2drpmdiff\x2dworker\x2d06\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3604297693 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2deeae34ba\x2d\x2d38e6\x2d\x2d4fff\x2d\x2d8c82\x2d\x2d3ca2f35d4909.device
3604297703 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEptNDUXATME1sdrR3i4a1chErRnf0dSAMe.device
3604297704 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2deeae34ba\x2d\x2d38e6\x2d\x2d4fff\x2d\x2d8c82\x2d\x2d3ca2f35d4909.device
3604297705 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-eeae34ba\x2d38e6\x2d4fff\x2d8c82\x2d3ca2f35d4909.device
3604297706 dev-dm\x2d35.device
3604297707 sys-devices-virtual-block-dm\x2d35.device
3605034307 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d82dd8174\x2d\x2dc89a\x2d\x2d4c2b\x2d\x2d9c00\x2d\x2dd3b6cc215bd7.device
3605034313 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpIZVyBd6urEflYBU5MbeeUQ2zyYemQUS8.device
3605034314 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d82dd8174\x2d\x2dc89a\x2d\x2d4c2b\x2d\x2d9c00\x2d\x2dd3b6cc215bd7.device
3605034315 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-82dd8174\x2dc89a\x2d4c2b\x2d9c00\x2dd3b6cc215bd7.device
3605034317 dev-dm\x2d36.device
3605034318 sys-devices-virtual-block-dm\x2d36.device
3605808364 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dec634bc7\x2d\x2dc2f5\x2d\x2d4a7d\x2d\x2db6de\x2d\x2d11c63a5cf403.device
3605808370 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEptVO3ReVIbVlkJ4LOXpMdc6Xba9dF3xMc.device
3605808371 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dec634bc7\x2d\x2dc2f5\x2d\x2d4a7d\x2d\x2db6de\x2d\x2d11c63a5cf403.device
3605808372 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-ec634bc7\x2dc2f5\x2d4a7d\x2db6de\x2d11c63a5cf403.device
3605808374 dev-dm\x2d37.device
3605808375 sys-devices-virtual-block-dm\x2d37.device
3607098740 sys-subsystem-net-devices-vnet2.device
3607098746 sys-devices-virtual-net-vnet2.device
3608355499 machine-qemu\x2d3\x2drcm\x2datomic\x2d02\x2dbrew\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3665800475 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d4d0dad70\x2d\x2d67dd\x2d\x2d4edd\x2d\x2d808d\x2d\x2d5169ec6fbda2.device
3665800481 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpQmpDrHKX3catWxsx8PxNKjNXAfkyKyIj.device
3665800482 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d4d0dad70\x2d\x2d67dd\x2d\x2d4edd\x2d\x2d808d\x2d\x2d5169ec6fbda2.device
3665800484 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-4d0dad70\x2d67dd\x2d4edd\x2d808d\x2d5169ec6fbda2.device
3665800485 dev-dm\x2d38.device
3665800486 sys-devices-virtual-block-dm\x2d38.device
3667029597 sys-subsystem-net-devices-vnet3.device
3667029603 sys-devices-virtual-net-vnet3.device
3668096925 machine-qemu\x2d4\x2dmbs\x2dfrontend\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3726069326 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d23e53c37\x2d\x2d64d7\x2d\x2d4282\x2d\x2da3b8\x2d\x2d0bc358371c2f.device
3726069332 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpRaxzX8ZIBW0lzPuSq0BQDfQXTxXKZi0C.device
3726069334 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d23e53c37\x2d\x2d64d7\x2d\x2d4282\x2d\x2da3b8\x2d\x2d0bc358371c2f.device
3726069335 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-23e53c37\x2d64d7\x2d4282\x2da3b8\x2d0bc358371c2f.device
3726069336 dev-dm\x2d39.device
3726069338 sys-devices-virtual-block-dm\x2d39.device
3726942711 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d6ac567df\x2d\x2d1be3\x2d\x2d4a88\x2d\x2db554\x2d\x2dc6f4028c80f0.device
3726942717 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpCLR60e41YPTYZNxLBqzV76WQ9gbAd6nc.device
3726942719 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2d6ac567df\x2d\x2d1be3\x2d\x2d4a88\x2d\x2db554\x2d\x2dc6f4028c80f0.device
3726942720 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-6ac567df\x2d1be3\x2d4a88\x2db554\x2dc6f4028c80f0.device
3726942721 dev-dm\x2d40.device
3726942722 sys-devices-virtual-block-dm\x2d40.device
3728213149 sys-subsystem-net-devices-vnet4.device
3728213155 sys-devices-virtual-net-vnet4.device
3729413844 machine-qemu\x2d5\x2drpmdiff\x2dworker\x2d05\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3846703181 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dcdd67eb6\x2d\x2da473\x2d\x2d4796\x2d\x2d9f5f\x2d\x2d7a8f98683fa5.device
3846703187 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEprjLwr820oXXs6mn8Bv2iDrMvNfd9M3Xk.device
3846703188 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dcdd67eb6\x2d\x2da473\x2d\x2d4796\x2d\x2d9f5f\x2d\x2d7a8f98683fa5.device
3846703189 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-cdd67eb6\x2da473\x2d4796\x2d9f5f\x2d7a8f98683fa5.device
3846703190 dev-dm\x2d41.device
3846703192 sys-devices-virtual-block-dm\x2d41.device
3847608445 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dea00288f\x2d\x2d5cc6\x2d\x2d49d2\x2d\x2d9f2b\x2d\x2d604daa49d222.device
3847608451 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEp8oQ5HOu3VsOeEKejKQqZnDWl0xBEOmmv.device
3847608452 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dea00288f\x2d\x2d5cc6\x2d\x2d49d2\x2d\x2d9f2b\x2d\x2d604daa49d222.device
3847608453 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-ea00288f\x2d5cc6\x2d49d2\x2d9f2b\x2d604daa49d222.device
3847608455 dev-dm\x2d42.device
3847608456 sys-devices-virtual-block-dm\x2d42.device
3848873363 sys-subsystem-net-devices-vnet5.device
3848873369 sys-devices-virtual-net-vnet5.device
3850163727 machine-qemu\x2d6\x2derrata\x2dweb\x2d01\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope
3917932246 session-10.scope
4268632888 dev-mapper-04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dae847a8f\x2d\x2db691\x2d\x2d4c4b\x2d\x2da486\x2d\x2d32be4970a198.device
4268632893 dev-disk-by\x2did-dm\x2duuid\x2dLVM\x2dTc7r4fadsow1jWZLkkd1tvk7kLpG1bEpaLPerzI5LDvz0bRs91FIR2os35NrixlR.device
4268632894 dev-disk-by\x2did-dm\x2dname\x2d04b01381\x2d\x2d6715\x2d\x2d459e\x2d\x2db941\x2d\x2d37d337eab9c7\x2dae847a8f\x2d\x2db691\x2d\x2d4c4b\x2d\x2da486\x2d\x2d32be4970a198.device
4268632895 dev-04b01381\x2d6715\x2d459e\x2db941\x2d37d337eab9c7-ae847a8f\x2db691\x2d4c4b\x2da486\x2d32be4970a198.device
4268632896 dev-dm\x2d43.device
4268632897 sys-devices-virtual-block-dm\x2d43.device
4269912139 sys-subsystem-net-devices-vnet6.device
4269912145 sys-devices-virtual-net-vnet6.device
4271229987 machine-qemu\x2d7\x2dpnc\x2dkeycloak\x2d02\x2dhost\x2dprod\x2deng\x2dbos\x2dredhat\x2dcom.scope

Comment 1 Allon Mureinik 2018-03-15 12:26:30 UTC
Do you have the fcoe_before_network_setup hook installed?
Can you verify it's executable?
Can you please attach vdsm's logs?

Comment 2 Mark Keir 2018-03-16 03:39:32 UTC
[root@rhevh-13 ~]# find /usr/share/vdsm -name fcoe_before_network_setup.py
[root@rhevh-13 ~]# yum search fcoe
Loaded plugins: imgbased-persist, package_upload, product-id, search-disabled-repos, subscription-manager
============================================================================================================= N/S matched: fcoe ==============================================================================================================
vdsm-hook-fcoe.noarch : Hook to enable FCoE support
fcoe-utils.x86_64 : Fibre Channel over Ethernet utilities

  Name and summary matches only, use "search all" for everything.
[root@rhevh-13 ~]# rpm -q vdsm-hook-fcoe
vdsm-hook-fcoe-4.19.45-1.el7ev.noarch


If the reference is to https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/fcoe/fcoe_before_network_setup.py, no I don't find that on the system.

Comment 3 Germano Veit Michel 2018-03-16 03:42:54 UTC
Hey Mark,

It should go here:
/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe

Comment 4 Mark Keir 2018-03-16 04:02:31 UTC
[root@rhevh-13 ~]# ls -la /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe
-rwxr-xr-x. 1 root root 6555 Jan 16 11:10 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe
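The installed/executable checks from comments 1-4 can be folded into one helper; a minimal sketch (`hook_is_runnable` is an illustrative name, only the hook path comes from this thread):

```python
import os

# Path reported in comments 3 and 4 of this bug; hook_is_runnable itself
# is an illustrative helper, not part of vdsm.
FCOE_HOOK = "/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe"

def hook_is_runnable(path):
    """True if the hook file exists and carries an execute bit we can use."""
    return os.path.isfile(path) and os.access(path, os.X_OK)
```

On the host above this would line up with the `-rwxr-xr-x` listing from the `ls -la` check.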

Comment 5 Eyal Shenitzky 2018-03-19 09:32:56 UTC
Hi Mark, 

Can you please provide all the relevant logs?

Comment 6 Mark Keir 2018-03-27 01:55:22 UTC
The related log from the time of the first system registration is found at https://drive.google.com/file/d/1_2o_Gztq6TFEgHX4oDOQvXddREfd3aaI/view?usp=sharing

BR
Mark

PS.  Apols for delay, was on PTO

Comment 7 Eyal Shenitzky 2018-04-16 13:41:02 UTC
(In reply to Mark Keir from comment #6)
> The related log from the time of the first system registration is found at
> https://drive.google.com/file/d/1_2o_Gztq6TFEgHX4oDOQvXddREfd3aaI/
> view?usp=sharing
> 
> BR
> Mark
> 
> PS.  Apols for delay, was on PTO

Can I use your env to investigate this issue?

Can you please supply the env details?

Comment 8 Mark Keir 2018-04-17 03:50:06 UTC
What form of access would you require?

This is our production cluster in BOS, home to Errata, Beaker, Brew, Gerrit etc.

https://rhvm.infra.prod.eng.bos.redhat.com

I am about to rebuild another system and go through the setup process.  Is there any additional data capture I can do to trap better information for this issue?

BR
Mark

Comment 9 Mark Keir 2018-04-18 06:29:09 UTC
This is an additional set of information: https://drive.google.com/open?id=1DD2ozokNqDprGXiL4OoDWX0npt3YYYbX

If you download this and run it in a browser, you can see a recording from just after imaging and initial fcoe setup, through registration to RHVM, with a reboot sequence and check afterwards.

Comment 10 Eyal Shenitzky 2018-04-18 08:26:14 UTC
(In reply to Mark Keir from comment #8)
> What form of access would you require?
> 
> This is our production cluster in BOS, home to Errata, Beaker, Brew, Gerrit
> etc.
> 
> https://rhvm.infra.prod.eng.bos.redhat.com
> 
> I am about to rebuild another system and go through the setup process.  Is
> there any additional data capture I can do to trap better information for
> this issue?
> 
> BR
> Mark

Thanks, 
I need admin access to the environment, and also access to the
engine machine and the VDSM machine.

Comment 11 Mark Keir 2018-04-19 02:18:34 UTC
I will contact you directly in email for details around access credentials and conditions of use.

Comment 12 Dan Kenigsberg 2018-04-30 07:00:45 UTC
(In reply to Mark Keir from comment #0)


Germano, please note that Mark is not using RHV's fcoe hook. He is configuring his /etc/fcoe/* with his own Ansible playbook:

> Set up FCoE via process in
> https://gitlab.infra.prod.eng.rdu2.redhat.com/ansible-roles/rhvm-server/tree/
> master/rhvh-fcoe (derived from
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/
> html/storage_administration_guide/fcoe-config)

I don't believe that this is related to your bug, Mark, but I must comment that I see you are writing cfg files there. That should not be done if the interfaces are to be managed by RHV (as RHV is going to rewrite them).

Does your Ansible playbook work fine when applied to non-RHV hosts? Do you see failures to start networking at boot time?

Comment 13 Eyal Shenitzky 2018-04-30 07:57:51 UTC
Created attachment 1428701 [details]
rhevh12 journalctl

Comment 14 Mark Keir 2018-05-01 06:29:53 UTC
(In reply to Dan Kenigsberg from comment #12)
> (In reply to Mark Keir from comment #0)
> 
> 
> Germano, please note that Mark is not using RHV's fcoe hook. He is
> configuring his /etc/fcoe/* with his own Ansible playbook:
> 
> > Set up FCoE via process in
> > https://gitlab.infra.prod.eng.rdu2.redhat.com/ansible-roles/rhvm-server/tree/
> > master/rhvh-fcoe (derived from
> > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/
> > html/storage_administration_guide/fcoe-config)
> 
> I don't believe that this is related to your bug, Mark, but I must comment
> that I see there that you are writing cfg files. That should not be done if
> the interfaces are to be managed by RHV (as RHV is going to rewrite them).
> 
> Does your Ansible playbook work fine when applied to non-RHV hosts? do you
> see failures to start networking during boot time?

I don't believe we have any standard RHEL7 systems using FCoE.  I will ask.

This is chicken and egg, Dan - we have to get the FCoE connection to storage up in order to register and activate the machines, as they don't see the storage domains otherwise.  We have no choice except to create this setup manually.
It's the same setup, on the same hosts, that ran on RHEVH 3.x as documented in https://docs.engineering.redhat.com/pages/viewpage.action?pageId=42939508 and worked for several years.

Comment 15 Eyal Shenitzky 2018-06-03 08:39:58 UTC
Hey Mark,

Are the 'lldpad' and 'fcoe' services active after restarting the host?

Comment 16 Mark Keir 2018-06-13 04:40:38 UTC
Eyal,
You have full access to the RHVH host.  Are you unable to test yourself?

If you no longer need access to conduct independent investigations, please advise so and I will take the production resources back.

Mark

Comment 17 Mark Keir 2018-08-01 04:58:46 UTC
The lldpad and fcoe services are started after reboot.  Below is from a system upgraded to RHVH4.2.5 today.


[root@rhevh-16 ~]# rpm -qa 'redhat-virtualization-host-image*'
redhat-virtualization-host-image-update-4.2-20180724.0.el7_5.noarch
redhat-virtualization-host-image-update-placeholder-4.2-5.0.el7.noarch
[root@rhevh-16 ~]# hostnamectl status
   Static hostname: rhevh-16.infra.prod.eng.bos.redhat.com
         Icon name: computer-server
           Chassis: server
        Machine ID: ae2a206dbb4f45a0834f61141455fd63
           Boot ID: 5cbcba1406754b018e8398347f815019
  Operating System: Red Hat Virtualization Host 4.2.5 (el7.5)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:beta:hypervisor
            Kernel: Linux 3.10.0-862.9.1.el7.x86_64
      Architecture: x86-64
[root@rhevh-16 ~]# systemctl status lldpad fcoe
● lldpad.service - Link Layer Discovery Protocol Agent Daemon.
   Loaded: loaded (/usr/lib/systemd/system/lldpad.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-08-01 04:49:57 UTC; 4min 47s ago
 Main PID: 3272 (lldpad)
    Tasks: 1
   Memory: 372.0K
   CGroup: /system.slice/lldpad.service
           └─3272 /usr/sbin/lldpad -t

Aug 01 04:49:57 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Started Link Layer Discovery Protocol Agent Daemon..
Aug 01 04:49:57 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Starting Link Layer Discovery Protocol Agent Daemon....
Aug 01 04:52:12 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available
Aug 01 04:52:13 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available
Aug 01 04:52:13 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available
Aug 01 04:52:18 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available
Aug 01 04:52:19 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available
Aug 01 04:52:19 rhevh-16.infra.prod.eng.bos.redhat.com lldpad[3272]: recvfrom(Event interface): No buffer space available

● fcoe.service - Open-FCoE Inititator.
   Loaded: loaded (/usr/lib/systemd/system/fcoe.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-08-01 04:49:57 UTC; 4min 47s ago
  Process: 3369 ExecStart=/usr/sbin/fcoemon $FCOEMON_OPTS (code=exited, status=0/SUCCESS)
  Process: 3357 ExecStartPre=/sbin/modprobe -qa $SUPPORTED_DRIVERS (code=exited, status=0/SUCCESS)
 Main PID: 3380 (fcoemon)
    Tasks: 1
   Memory: 144.0K
   CGroup: /system.slice/fcoe.service
           └─3380 /usr/sbin/fcoemon --syslog

Aug 01 04:49:57 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Starting Open-FCoE Inititator....
Aug 01 04:49:57 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Started Open-FCoE Inititator..
Aug 01 04:52:14 rhevh-16.infra.prod.eng.bos.redhat.com fcoemon[3380]: fip_send_vlan_request: error 100 Network is down
Aug 01 04:52:14 rhevh-16.infra.prod.eng.bos.redhat.com fcoemon[3380]: fip_send_vlan_request: sendmsg error
Aug 01 04:52:14 rhevh-16.infra.prod.eng.bos.redhat.com fcoemon[3380]: fip_send_vlan_request: error 100 Network is down
Aug 01 04:52:14 rhevh-16.infra.prod.eng.bos.redhat.com fcoemon[3380]: fip_send_vlan_request: sendmsg error
[root@rhevh-16 ~]# fcoeadm -i
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB71040

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.11.8 over p2p1_4.1002-fco
        OS Device Name:    host15
        Node Name:         0x200018fb7b731258
        Port Name:         0x200118fb7b731258
        Fabric Name:        0x100050eb1a292694
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x011001
        State:             Online
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB71040

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.11.8 over p2p2_4.1002-fco
        OS Device Name:    host16
        Node Name:         0x200018fb7b73125b
        Port Name:         0x200118fb7b73125b
        Fabric Name:        0x100050eb1a292a94
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x011001
        State:             Online
[root@rhevh-16 ~]# systemctl reboot
Connection to rhevh-16.infra.prod.eng.bos.redhat.com closed by remote host.
Connection to rhevh-16.infra.prod.eng.bos.redhat.com closed.
[mkeir@mkeir ~]$ ssh root.prod.eng.bos.redhat.com
root.prod.eng.bos.redhat.com's password: 
Last login: Wed Aug  1 04:51:12 2018 from dhcp-40-233.bne.redhat.com

  node status: OK
  See `nodectl check` for more information

Admin Console: https://10.19.220.21:9090/

[root@rhevh-16 ~]# hostnamectl status
   Static hostname: rhevh-16.infra.prod.eng.bos.redhat.com
         Icon name: computer-server
           Chassis: server
        Machine ID: ae2a206dbb4f45a0834f61141455fd63
           Boot ID: 5725b22ebd8d4274a7c30b67f1aefbb6
  Operating System: Red Hat Virtualization Host 4.2.5 (el7.5)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:beta:hypervisor
            Kernel: Linux 3.10.0-862.9.1.el7.x86_64
      Architecture: x86-64
[root@rhevh-16 ~]# systemctl status lldpad fcoe
● lldpad.service - Link Layer Discovery Protocol Agent Daemon.
   Loaded: loaded (/usr/lib/systemd/system/lldpad.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-08-01 05:00:01 UTC; 39s ago
 Main PID: 3783 (lldpad)
    Tasks: 1
   Memory: 348.0K
   CGroup: /system.slice/lldpad.service
           └─3783 /usr/sbin/lldpad -t

Aug 01 05:00:01 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Started Link Layer Discovery Protocol Agent Daemon..
Aug 01 05:00:01 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Starting Link Layer Discovery Protocol Agent Daemon....

● fcoe.service - Open-FCoE Inititator.
   Loaded: loaded (/usr/lib/systemd/system/fcoe.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-08-01 05:00:01 UTC; 39s ago
  Process: 3851 ExecStart=/usr/sbin/fcoemon $FCOEMON_OPTS (code=exited, status=0/SUCCESS)
  Process: 3849 ExecStartPre=/sbin/modprobe -qa $SUPPORTED_DRIVERS (code=exited, status=0/SUCCESS)
 Main PID: 3853 (fcoemon)
    Tasks: 1
   Memory: 128.0K
   CGroup: /system.slice/fcoe.service
           └─3853 /usr/sbin/fcoemon --syslog

Aug 01 05:00:01 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Starting Open-FCoE Inititator....
Aug 01 05:00:01 rhevh-16.infra.prod.eng.bos.redhat.com systemd[1]: Started Open-FCoE Inititator..
[root@rhevh-16 ~]# fcoeadm -i
fcoeadm: No action was taken
Try 'fcoeadm --help' for more information.
[root@rhevh-16 ~]# systemctl restart network
[root@rhevh-16 ~]# fcoeadm -i
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB71040

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.11.8 over p2p1_4.1002-fco
        OS Device Name:    host15
        Node Name:         0x200018fb7b731258
        Port Name:         0x200118fb7b731258
        Fabric Name:        0x100050eb1a292694
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x011001
        State:             Online
    Description:      NetXtreme II BCM57810 10 Gigabit Ethernet Multi Function
    Revision:         10
    Manufacturer:     Broadcom Limited
    Serial Number:    000E1EB71040

    Driver:           bnx2x 1.712.30-0
    Number of Ports:  1

        Symbolic Name:     bnx2fc (QLogic BCM57810) v2.11.8 over p2p2_4.1002-fco
        OS Device Name:    host16
        Node Name:         0x200018fb7b73125b
        Port Name:         0x200118fb7b73125b
        Fabric Name:        0x100050eb1a292a94
        Speed:             10 Gbit
        Supported Speed:   1 Gbit, 10 Gbit
        MaxFrameSize:      2048 bytes
        FC-ID (Port ID):   0x011001
        State:             Online

Comment 18 Eyal Shenitzky 2018-08-23 05:37:43 UTC
There is no FCoE environment to test and investigate the flow.
Returning the status to NEW until there is a setup.

Comment 19 Dominik Holler 2018-11-30 08:24:46 UTC
There is a workaround for the similar bug 1636254, which indicates that bug 1623904 is related. This would mean that this bug can be reproduced by creating a bond on Broadcom Limited NetXtreme II BCM57800 or BCM57810 with lldpad adminStatus=rx, maybe even without FCoE.
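The reproduction condition described above (a bnx2x-driven Broadcom NIC plus lldpad receiving LLDP) can be expressed as a small predicate. This is a hypothetical sketch; the helper name and the driver set are assumptions, based on the bnx2fc kernel documentation quoted in comment 24:

```python
# Drivers whose NICs implement a DCBX/LLDP client on-chip (per the bnx2fc
# kernel docs); a software LLDP agent like lldpad must not also process
# LLDP frames on these interfaces. Driver set is illustrative.
ONCHIP_LLDP_DRIVERS = {"bnx2x"}  # BCM57800 / BCM57810 family

def needs_software_lldp_disabled(driver, admin_status):
    """True if lldpad's rx on this interface would conflict with the NIC firmware."""
    return driver in ONCHIP_LLDP_DRIVERS and admin_status in ("rx", "rxtx")
```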

Comment 20 Dominik Holler 2019-01-17 15:48:06 UTC
Created attachment 1521301 [details]
validation that disabling lldp on the interface resolves the initial problem

Comment 21 Dominik Holler 2019-01-17 15:48:40 UTC
Created attachment 1521302 [details]
validation that disabling lldp on the interface resolves the initial problem

Comment 22 Dominik Holler 2019-01-17 15:59:03 UTC
Elad, from my point of view attachment 1521301 [details] shows the validation for this bug. Do you agree?

Comment 23 Elad 2019-01-20 08:19:13 UTC
I suppose that disabling lldp has its implications. As I'm not an expert, I suggest checking what the effect of disabling it is, and validating this against different switch vendors.
Anyway, please change the scope of this bug to 'FCoE is not initiated on boot with lldp enabled'.

Comment 24 Dominik Holler 2019-01-21 08:08:35 UTC
(In reply to Elad from comment #23)
> I suppose that disabling lldp has its implications. As I'm not an expert, I

This is a valuable thought. Do you already have an idea what implications
disabling lldp could have?

According to https://www.kernel.org/doc/Documentation/scsi/bnx2fc.txt:

> ** Broadcom FCoE capable devices implement a DCBX/LLDP client on-chip. Only one
> LLDP client is allowed per interface. For proper operation all host software
> based DCBX/LLDP clients (e.g. lldpad) must be disabled. To disable lldpad on a
> given interface, run the following command:
>
> lldptool set-lldp -i <interface_name> adminStatus=disabled

manually disabling LLDP receive in lldpad is the correct way.
So LLDP would not be disabled on the interface, because it is handled in
hardware instead of software.

> suggest to check what's the effect of disabling it and to validate this
> against different switch vendors.


Which effects would you check, and how would you validate them?

> Anyway, please change the scope of this bug to be 'FCoE is not initiated on
> boot with lldp enabled'.

Done.
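For each FCoE interface, the lldptool command quoted above can be assembled programmatically; a hedged sketch (`lldp_disable_cmd` is a hypothetical helper, the argv mirrors the command quoted from the bnx2fc docs):

```python
def lldp_disable_cmd(interface):
    """Build the lldptool argv that disables software LLDP on one interface."""
    return ["lldptool", "set-lldp", "-i", interface, "adminStatus=disabled"]

# Could be run via subprocess for every interface configured under
# /etc/fcoe/, e.g.:
#   subprocess.run(lldp_disable_cmd("p2p1"), check=True)
```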

Comment 25 Elad 2019-01-21 11:20:30 UTC
> Which effects would you check, and how would you validate them?
Nothing that I'm aware of from the storage side

Comment 30 Avihai 2019-03-08 07:53:13 UTC
Verification done.
 ovirt-engine 4.3.2-0.1

The scenario is taken from prior comment #29:

[root@green-vdse ~]# lldptool set-lldp -i eno6 adminStatus=disabled
adminStatus = disabled
[root@green-vdse ~]# lldptool get-lldp -i eno6 adminStatus
adminStatus=disabled
[root@green-vdse ~]# systemctl restart vdsmd

# Checking that adminStatus=disabled remains across vdsmd restart and reboot.

[root@green-vdse ~]# lldptool get-lldp -i eno6 adminStatus
adminStatus=disabled
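The verification above amounts to reading back one key=value pair; a sketch of the parsing step (`parse_admin_status` is hypothetical, the output format is taken from the transcript):

```python
def parse_admin_status(output):
    """Extract the adminStatus value from `lldptool get-lldp -i <if> adminStatus` output."""
    for line in output.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() == "adminStatus":
            return value.strip()
    return None

# The persistence check passes when the value is still "disabled" after a
# vdsmd restart or a reboot.
```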

Comment 31 Sandro Bonazzola 2019-03-13 16:37:47 UTC
This bugzilla is included in oVirt 4.3.0 release, published on February 4th 2019.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.


Note You need to log in before you can comment on or make changes to this bug.