Bug 1601843 - [NVMe Device Assignment] Could not get NVMe device in guest after hotplug a NVMe device assigned from host
Summary: [NVMe Device Assignment] Could not get NVMe device in guest after hotplug a NVMe device assigned from host
Keywords:
Status: CLOSED DUPLICATE of bug 1592654
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Alex Williamson
QA Contact: CongLi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-17 10:32 UTC by CongLi
Modified: 2018-07-23 18:34 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-23 18:34:08 UTC
Target Upstream Version:
Embargoed:



Description CongLi 2018-07-17 10:32:42 UTC
Description of problem:
After hot-plugging an NVMe device assigned from the host into a RHEL 7.6 guest, the NVMe device cannot be seen via 'lsblk' in the guest.

Version-Release number of selected component (if applicable):
host:
qemu-kvm-rhev-2.12.0-7.el7.x86_64
guest:
kernel-3.10.0-915.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Boot up a RHEL.7.6 guest.

2. Hot-plug an NVMe device assigned from the host via QMP (a connection sketch follows after these steps):
{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"06:00.0","id":"nvme"}}

3. Check the NVMe device in guest.
3.1 dmesg in guest --> works as expected
[  124.371347] pci 0000:00:03.0: [8086:0953] type 00 class 0x010802
[  124.371488] pci 0000:00:03.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[  124.371792] pci 0000:00:03.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
[  124.372674] pci 0000:00:03.0: BAR 6: assigned [mem 0xc0000000-0xc000ffff pref]
[  124.373588] pci 0000:00:03.0: BAR 0: assigned [mem 0x240000000-0x240003fff 64bit]
[  124.428495] nvme nvme0: pci function 0000:00:03.0
[  124.429175] nvme 0000:00:03.0: enabling device (0000 -> 0002)
[  124.470968] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[  124.472112] nvme 0000:00:03.0: irq 31 for MSI/MSI-X

3.2 lspci in guest --> works as expected
00:03.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01)

3.3 lsblk --> does not work as expected, no nvme device info (also no nvme device under /dev)
# lsblk 
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda             8:0    0  20G  0 disk 
├─sda1          8:1    0   1G  0 part /boot
└─sda2          8:2    0  19G  0 part 
  ├─rhel-root 253:0    0  17G  0 lvm  /
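
For reference, here is a minimal sketch of one way to drive the QMP hotplug from step 2 over the TCP monitor socket. It assumes the guest was started with "-qmp tcp:localhost:4444,server,nowait" (as in the QEMU command line under Additional info) and that 06:00.0 has already been unbound from the host nvme driver and bound to vfio-pci; the use of nc and the sleep delays are illustrative only and were not part of the original reproduction.

# QMP requires the capabilities negotiation before any other command
{
  sleep 1
  echo '{"execute":"qmp_capabilities"}'
  sleep 1
  echo '{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"06:00.0","id":"nvme"}}'
  sleep 1
} | nc localhost 4444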


Actual results:
As in step 3, there is no NVMe device info from lsblk (and no NVMe device node under /dev) after hot-plugging the NVMe device assigned from the host.

Expected results:
The NVMe device info should be visible via lsblk in the guest after hotplug, for example:
# lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                         8:0    0    20G  0 disk 
├─sda1                      8:1    0     1G  0 part /boot
└─sda2                      8:2    0    19G  0 part 
  ├─rhel-root             253:0    0    17G  0 lvm  /
  └─rhel-swap             253:1    0     2G  0 lvm  [SWAP]
nvme0n1                   259:0    0 372.6G  0 disk 
├─nvme0n1p1               259:1    0     1G  0 part 
└─nvme0n1p2               259:2    0 371.6G  0 part 


Additional info:
1. The NVMe device info does become available if the guest is rebooted after the hotplug (see the re-probe sketch after the QEMU command line below).

2. NVMe device info on the host:
06:00.0 Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01) (prog-if 02 [NVM Express])

3. Host info (/proc/cpuinfo excerpt):
processor	: 39
vendor_id	: GenuineIntel
cpu family	: 6
model		: 79
model name	: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
stepping	: 1
microcode	: 0xb000021
cpu MHz		: 2399.938
cache size	: 25600 KB
physical id	: 1
siblings	: 20
core id		: 12
cpu cores	: 10
apicid		: 57
initial apicid	: 57
fpu		: yes
fpu_exception	: yes
cpuid level	: 20
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bogomips	: 4410.94
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:

4. QEMU command line:
/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine pc  \
    -nodefaults  \
    -vga cirrus  \
    -device virtio-net-pci,mac=9a:50:51:52:53:54,id=idQaGkcH,vectors=4,netdev=id1nw46d,bus=pci.0,addr=0x5  \
    -netdev tap,id=id1nw46d,vhost=on \
    -m 8192  \
    -smp 8,cores=4,threads=1,sockets=2  \
    -cpu host,+kvm_pv_unhalt \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,strict=off,order=cdn,once=c \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:localhost:4444,server,nowait \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel76-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \
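
As a diagnostic aid for additional info item 1, the following is a hedged sketch (not part of the original report) of how one could make the guest re-probe the hot-plugged controller without a full reboot. The 0000:00:03.0 address is taken from the guest dmesg output above; whether this has the same effect as the reboot is an assumption.

# Inside the guest: check whether the nvme driver is bound to the device
ls -l /sys/bus/pci/devices/0000:00:03.0/driver

# Remove the device from the guest PCI tree and rescan the bus so the
# kernel probes it again
echo 1 > /sys/bus/pci/devices/0000:00:03.0/remove
echo 1 > /sys/bus/pci/rescan

# The block device should appear if the controller initialized correctly
lsblk | grep nvme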

Comment 2 Alex Williamson 2018-07-23 18:34:08 UTC
This issue appears to be the same as bug 1592654. Adding a device-specific delay after FLR resolves both issues.

*** This bug has been marked as a duplicate of bug 1592654 ***
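
For anyone checking whether their own assigned device hits the same FLR timing issue, a hedged sketch of how to confirm on the host that the function advertises Function Level Reset and is bound to vfio-pci; the 06:00.0 address is the one from this report, and the exact lspci output format may vary by version.

# "FLReset+" in the DevCap line means the function supports FLR
lspci -s 06:00.0 -vv | grep -i flreset

# Confirm the device is currently bound to vfio-pci for assignment
ls -l /sys/bus/pci/devices/0000:06:00.0/driver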

