Bug 874627

Summary: kvm: vcpu0 unhandled wrmsr, rdmsr
Product: Fedora
Component: kernel
Version: 26
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Keywords: Reopened
Type: Bug
Reporter: Albert Strasheim <fullung>
Assignee: Kernel Maintainer List <kernel-maint>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: andrey, awilliam, bugzilla, crobinso, eduardo, Frodox, fullung, gansalmon, gaspard.dhautefeuille, gczarcinski, huangyx3, hulyom, ilmostro7, itamar, jiyan, jonathan, jpokorny, kcoar, kernel-maint, knoel, loberman, loganjerry, madhu.chinakonda, marcin.haba, mtosatti, nalmond, philipp, pmarciniak, Pupkur, robinlee.sysu, sergei.litvinenko, simon.bilodeau, sl1pkn07, stuartjames, tsuroerusu, vcojot, yalzhang, yves.lecuyer.linfedora, zbyszek
Doc Type: Bug Fix
Bug Blocks: 1466895, 1470216
Last Closed: 2017-07-26 15:53:08 UTC

Description Albert Strasheim 2012-11-08 14:53:25 UTC
Description of problem:

[195317.093358] kvm [4670]: vcpu0 unhandled rdmsr: 0x345
[195317.096347] kvm [4670]: vcpu0 unhandled wrmsr: 0x40 data 0

etc.

Version-Release number of selected component (if applicable):

kernel-3.6.3-1.fc17.x86_64

How reproducible:

always

Steps to Reproduce:

virt-install \
	--name fedora16 \
	--connect qemu:///session \
	--ram=2048 \
	--cpu=host \
	--location=/home/fedora/16/x86_64 \
	--extra-args="ks=file:/ks.cfg ksdevice=eth0" \
	--initrd-inject=ks.cfg \
	--os-type=linux \
	--os-variant=fedora16 \
	--disk path=disk.raw,size=4,sparse,cache=writeback,format=raw,bus=virtio

Additional info:

running as a normal user in the libvirt group, if that matters

command that gets run by virt-install:

qemu-kvm -S -M pc-0.15 -cpu core2duo,+lahf_lm,+rdtscp,+avx,+osxsave,+xsave,+aes,+tsc-deadline,+popcnt,+x2apic,+sse4.2,+sse4.1,+pdcm,+xtpr,+cx16,+tm2,+est,+smx,+vmx,+ds_cpl,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name fedora16 -uuid c784c044-fc82-ba84-98e4-4cb1834f4eb4 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/home/alberts/.libvirt/qemu/lib/fedora16.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-reboot -no-shutdown -kernel /home/alberts/.virtinst/boot/virtinst-vmlinuz.Izel_b -initrd /home/alberts/.virtinst/boot/virtinst-initrd.img.CFHtSu -append  ks=file:/ks.cfg ksdevice=eth0 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/home/alberts/next/os/new/disk.raw,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev user,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:75:66,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz

[195317.093358] kvm [4670]: vcpu0 unhandled rdmsr: 0x345
[195317.096347] kvm [4670]: vcpu0 unhandled wrmsr: 0x40 data 0
[195317.096350] kvm [4670]: vcpu0 unhandled wrmsr: 0x60 data 0
[195317.096351] kvm [4670]: vcpu0 unhandled wrmsr: 0x41 data 0
[195317.096353] kvm [4670]: vcpu0 unhandled wrmsr: 0x61 data 0
[195317.096354] kvm [4670]: vcpu0 unhandled wrmsr: 0x42 data 0
[195317.096356] kvm [4670]: vcpu0 unhandled wrmsr: 0x62 data 0
[195317.096357] kvm [4670]: vcpu0 unhandled wrmsr: 0x43 data 0
[195317.096359] kvm [4670]: vcpu0 unhandled wrmsr: 0x63 data 0
[196036.491454] kvm [5311]: vcpu0 unhandled rdmsr: 0x345
[196036.494377] kvm [5311]: vcpu0 unhandled wrmsr: 0x40 data 0
[196036.494380] kvm [5311]: vcpu0 unhandled wrmsr: 0x60 data 0
[196036.494382] kvm [5311]: vcpu0 unhandled wrmsr: 0x41 data 0
[196036.494384] kvm [5311]: vcpu0 unhandled wrmsr: 0x61 data 0
[196036.494385] kvm [5311]: vcpu0 unhandled wrmsr: 0x42 data 0
[196036.494387] kvm [5311]: vcpu0 unhandled wrmsr: 0x62 data 0
[196036.494388] kvm [5311]: vcpu0 unhandled wrmsr: 0x43 data 0
[196036.494390] kvm [5311]: vcpu0 unhandled wrmsr: 0x63 data 0
[196807.233832] kvm [6018]: vcpu0 unhandled rdmsr: 0x345
[196807.236789] kvm [6018]: vcpu0 unhandled wrmsr: 0x40 data 0
[196807.236792] kvm [6018]: vcpu0 unhandled wrmsr: 0x60 data 0
[196807.236794] kvm [6018]: vcpu0 unhandled wrmsr: 0x41 data 0
[196807.236795] kvm [6018]: vcpu0 unhandled wrmsr: 0x61 data 0
[196807.236797] kvm [6018]: vcpu0 unhandled wrmsr: 0x42 data 0
[196807.236798] kvm [6018]: vcpu0 unhandled wrmsr: 0x62 data 0
[196807.236800] kvm [6018]: vcpu0 unhandled wrmsr: 0x43 data 0
[196807.236801] kvm [6018]: vcpu0 unhandled wrmsr: 0x63 data 0
[196924.759844] kvm [6151]: vcpu0 unhandled rdmsr: 0x345
[196924.762831] kvm [6151]: vcpu0 unhandled wrmsr: 0x40 data 0
[196924.762834] kvm [6151]: vcpu0 unhandled wrmsr: 0x60 data 0
[196924.762836] kvm [6151]: vcpu0 unhandled wrmsr: 0x41 data 0
[196924.762838] kvm [6151]: vcpu0 unhandled wrmsr: 0x61 data 0
[196924.762839] kvm [6151]: vcpu0 unhandled wrmsr: 0x42 data 0
[196924.762841] kvm [6151]: vcpu0 unhandled wrmsr: 0x62 data 0
[196924.762842] kvm [6151]: vcpu0 unhandled wrmsr: 0x43 data 0
[196924.762844] kvm [6151]: vcpu0 unhandled wrmsr: 0x63 data 0
[196995.481847] kvm [6299]: vcpu0 unhandled rdmsr: 0x345
[196995.484812] kvm [6299]: vcpu0 unhandled wrmsr: 0x40 data 0
[196995.484815] kvm [6299]: vcpu0 unhandled wrmsr: 0x60 data 0
[196995.484816] kvm [6299]: vcpu0 unhandled wrmsr: 0x41 data 0
[196995.484818] kvm [6299]: vcpu0 unhandled wrmsr: 0x61 data 0
[196995.484819] kvm [6299]: vcpu0 unhandled wrmsr: 0x42 data 0
[196995.484821] kvm [6299]: vcpu0 unhandled wrmsr: 0x62 data 0
[196995.484822] kvm [6299]: vcpu0 unhandled wrmsr: 0x43 data 0
[196995.484824] kvm [6299]: vcpu0 unhandled wrmsr: 0x63 data 0
[197498.647025] kvm [6435]: vcpu0 unhandled rdmsr: 0x345
[197498.650002] kvm [6435]: vcpu0 unhandled wrmsr: 0x40 data 0
[197498.650005] kvm [6435]: vcpu0 unhandled wrmsr: 0x60 data 0
[197498.650007] kvm [6435]: vcpu0 unhandled wrmsr: 0x41 data 0
[197498.650008] kvm [6435]: vcpu0 unhandled wrmsr: 0x61 data 0
[197498.650010] kvm [6435]: vcpu0 unhandled wrmsr: 0x42 data 0
[197498.650011] kvm [6435]: vcpu0 unhandled wrmsr: 0x62 data 0
[197498.650013] kvm [6435]: vcpu0 unhandled wrmsr: 0x43 data 0
[197498.650015] kvm [6435]: vcpu0 unhandled wrmsr: 0x63 data 0
[197827.823986] kvm [6754]: vcpu0 unhandled rdmsr: 0x345
[197827.826943] kvm [6754]: vcpu0 unhandled wrmsr: 0x40 data 0
[197827.826946] kvm [6754]: vcpu0 unhandled wrmsr: 0x60 data 0
[197827.826948] kvm [6754]: vcpu0 unhandled wrmsr: 0x41 data 0
[197827.826950] kvm [6754]: vcpu0 unhandled wrmsr: 0x61 data 0
[197827.826951] kvm [6754]: vcpu0 unhandled wrmsr: 0x42 data 0
[197827.826953] kvm [6754]: vcpu0 unhandled wrmsr: 0x62 data 0
[197827.826955] kvm [6754]: vcpu0 unhandled wrmsr: 0x43 data 0
[197827.826956] kvm [6754]: vcpu0 unhandled wrmsr: 0x63 data 0
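When triaging floods like the one above, it helps to collapse the log into a per-MSR tally. A small sketch (the `summarize_msrs` helper is invented here, not part of any tool; in practice pipe `dmesg` output into it instead of the canned sample):

```shell
#!/bin/sh
# Tally "unhandled rdmsr/wrmsr" lines by MSR number, so a long dmesg
# dump collapses into a short per-register summary, most frequent first.
summarize_msrs() {
  grep -oE 'unhandled (rd|wr)msr: 0x[0-9a-f]+' |
    awk '{ tally[$2 " " $3]++ } END { for (k in tally) print tally[k], k }' |
    sort -rn
}

# Canned sample taken from this report; in practice: dmesg | summarize_msrs
summarize_msrs <<'EOF'
[195317.093358] kvm [4670]: vcpu0 unhandled rdmsr: 0x345
[195317.096347] kvm [4670]: vcpu0 unhandled wrmsr: 0x40 data 0
[195317.096350] kvm [4670]: vcpu0 unhandled wrmsr: 0x60 data 0
[196036.491454] kvm [5311]: vcpu0 unhandled rdmsr: 0x345
EOF
```

On the sample above this prints `2 rdmsr: 0x345` first, then the two single-hit wrmsr entries.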

Comment 1 Josh Boyer 2013-01-07 21:23:03 UTC
Are you still seeing this with the 3.6.10 or newer kernel updates?

Comment 2 Jerry James 2013-01-07 21:28:53 UTC
I am.  I booted into kernel-3.6.11-1.fc17.x86_64 this morning, and see this in /var/log/messages:

Jan  7 11:17:56 diannao kernel: [10381.859360] kvm [12932]: vcpu0 unhandled rdmsr: 0x345
Jan  7 11:17:56 diannao kernel: [10381.859400] kvm_set_msr_common: 6 callbacks suppressed
Jan  7 11:17:56 diannao kernel: [10381.859402] kvm [12932]: vcpu0 unhandled wrmsr: 0x40 data 0
Jan  7 11:17:56 diannao kernel: [10381.859405] kvm [12932]: vcpu0 unhandled wrmsr: 0x60 data 0
Jan  7 11:17:56 diannao kernel: [10381.859408] kvm [12932]: vcpu0 unhandled wrmsr: 0x41 data 0
Jan  7 11:17:56 diannao kernel: [10381.859410] kvm [12932]: vcpu0 unhandled wrmsr: 0x61 data 0
Jan  7 11:17:56 diannao kernel: [10381.859413] kvm [12932]: vcpu0 unhandled wrmsr: 0x42 data 0
Jan  7 11:17:56 diannao kernel: [10381.859415] kvm [12932]: vcpu0 unhandled wrmsr: 0x62 data 0
Jan  7 11:17:56 diannao kernel: [10381.859417] kvm [12932]: vcpu0 unhandled wrmsr: 0x43 data 0
Jan  7 11:17:56 diannao kernel: [10381.859420] kvm [12932]: vcpu0 unhandled wrmsr: 0x63 data 0
Jan  7 11:17:56 diannao kernel: [10381.871643] kvm [12932]: vcpu1 unhandled wrmsr: 0x40 data 0
Jan  7 11:17:56 diannao kernel: [10381.871650] kvm [12932]: vcpu1 unhandled wrmsr: 0x60 data 0

That was when I launched a Fedora 18 VM.

Comment 3 Andrey Korolyov 2013-01-07 22:25:09 UTC
The problem always happens when the VM tries to access debug MSRs that are not accessible from within a virtualized guest. To reproduce, set an exact CPU model (Nehalem, Westmere, or anything modern enough) and launch the VM; the messages should appear shortly, even if all requested features are available on the host node. I was able to reproduce crashes of the host node using such a guest with an ``mbw'' process launched inside, on a Sandy Bridge-EP host with various guest CPU models; other workloads may do the same. A possible workaround is to use safer models that do not contain the vendor string in vendor_id, so the guest kernel will not try to access those registers. And if such a crash can be reproduced, it is clearly a DoS vector if the attacker has a privileged account on the guest.

Comment 4 Marcelo Tosatti 2013-01-17 18:17:45 UTC
(In reply to comment #3)
> The problem always happens when the VM tries to access debug MSRs that
> are not accessible from within a virtualized guest. To reproduce, set
> an exact CPU model (Nehalem, Westmere, or anything modern enough) and
> launch the VM; the messages should appear shortly, even if all requested
> features are available on the host node. I was able to reproduce crashes
> of the host node using such a guest with an ``mbw'' process launched
> inside, on a Sandy Bridge-EP host with various guest CPU models; other
> workloads may do the same. A possible workaround is to use safer models
> that do not contain the vendor string in vendor_id, so the guest kernel
> will not try to access those registers. And if such a crash can be
> reproduced, it is clearly a DoS vector if the attacker has a privileged
> account on the guest.

Andrey,

Can you please provide additional details on this bug?

Comment 5 Marcelo Tosatti 2013-01-17 18:57:20 UTC
Assigning to Gleb (guest accessing perf monitor MSRs).

Comment 6 Gleb Natapov 2013-01-17 19:21:57 UTC
(In reply to comment #3)
> The problem always happens when the VM tries to access debug MSRs that
> are not accessible from within a virtualized guest. To reproduce, set
> an exact CPU model (Nehalem, Westmere, or anything modern enough) and
> launch the VM; the messages should appear shortly, even if all requested
> features are available on the host node. I was able to reproduce crashes
> of the host node using such a guest with an ``mbw'' process launched
> inside, on a Sandy Bridge-EP host with various guest CPU models; other
> workloads may do the same. A possible workaround is to use safer models
> that do not contain the vendor string in vendor_id, so the guest kernel
> will not try to access those registers. And if such a crash can be
> reproduced, it is clearly a DoS vector if the attacker has a privileged
> account on the guest.

What crash? I do not see any crash. Support for those MSRs is not planned.

Comment 7 Andrey Korolyov 2013-01-17 20:06:01 UTC
Gleb,

Can you suggest a way to collect additional data for this report? The only observable symptom on the host is a soft lockup appearing under heavy load on a guest with such a CPU model, followed shortly by a complete freeze, with soft-lockup messages on the physical console.

Comment 8 Marcelo Tosatti 2013-01-17 20:26:50 UTC
(In reply to comment #7)
> Gleb,
> 
> Can you suggest a way to collect additional data for this report? The
> only observable symptom on the host is a soft lockup appearing under
> heavy load on a guest with such a CPU model, followed shortly by a
> complete freeze, with soft-lockup messages on the physical console.

Andrey, please grab the softlockup messages (netconsole might help). Also please clone the bug or create a new one (to avoid confusion).

Gleb, the original report contains ignored wrmsr cases for perf msrs. Are these expected and harmless?

Comment 9 Gleb Natapov 2013-01-17 20:50:28 UTC
(In reply to comment #8)
> (In reply to comment #7)
> > Gleb,
> > 
> > Can you suggest a way to collect additional data for this report? The
> > only observable symptom on the host is a soft lockup appearing under
> > heavy load on a guest with such a CPU model, followed shortly by a
> > complete freeze, with soft-lockup messages on the physical console.
> 
> Andrey, please grab the softlockup messages (netconsole might help). Also
> please clone the bug or create a new one (to avoid confusion).
> 
> Gleb, the original report contains ignored wrmsr cases for perf msrs. Are
> these expected and harmless?
Yes. Some PMU events require programming additional MSRs (besides the PMU counters), and we do not implement them. The messages are harmless (well, KVM injects #GPs, but the Linux guest ignores them). The soft lockups are certainly unrelated.
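For readers wondering what these registers actually are: cross-referencing the numbers in the original report against Intel's published MSR lists (an outside assumption, not something stated by the developers in this bug) suggests perf and last-branch-record MSRs of the emulated Core 2 model, which fits the PMU explanation above. A lookup sketch using the assumed names:

```shell
#!/bin/sh
# Best-guess names for the MSR numbers in the original report, based on
# Intel SDM register lists for Core-era CPUs (assumption, not confirmed
# anywhere in this bug).
msr_name() {
  case "$1" in
    0x345)               echo "IA32_PERF_CAPABILITIES (perfmon capabilities)" ;;
    0x40|0x41|0x42|0x43) echo "MSR_LASTBRANCH_n_FROM_IP (last branch record)" ;;
    0x60|0x61|0x62|0x63) echo "MSR_LASTBRANCH_n_TO_IP (last branch record)" ;;
    *)                   echo "unknown" ;;
  esac
}

msr_name 0x345   # the rdmsr the guest retries on every boot
msr_name 0x40
```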

Comment 10 Josh Boyer 2013-05-28 14:39:12 UTC
This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 2 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running and any data that might have been requested previously.

Comment 11 Zbigniew Jędrzejewski-Szmek 2013-07-09 18:35:31 UTC
(In reply to Josh Boyer from comment #10)
> This bug is being closed with INSUFFICIENT_DATA as there has not been a
> response in 2 weeks. If you are still experiencing this issue, please reopen
> and attach the relevant data from the latest kernel you are running and any
> data that might have been requested previously.
Actually, the NEEDINFO was for a side bug mentioned in the comment. The original issue is still here:

Happens with (on the host):
kernel-3.9.6-301.fc19.x86_64
qemu-kvm-1.4.2-4.fc19.x86_64

Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled rdmsr: 0x345
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x680 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x6c0 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x681 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x6c1 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x682 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x6c2 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x683 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x6c3 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x684 data 0
Jul 09 11:31:59 bupkis kernel: kvm [1896]: vcpu0 unhandled wrmsr: 0x6c4 data 0

Host is "Intel(R) Xeon(R) CPU E5-2620", and the guest was configured with "copy host CPU configuration" in virt-manager, so has "model: sandybridge", and the guest sees "Intel Xeon E312xx (Sandy Bridge)".

> Yes. Some PMU events require programing aditional MSRs (besides PMU counters), we do not implement them. The messages are harmless (well KVM injects #GPs bug Linux guest ignores them). Sofltlockups are certanly unrelated.

If the messages are harmless, their severity should be reduced (to DEBUG?). They are currently WARNINGs.

Comment 12 Zbigniew Jędrzejewski-Szmek 2013-07-09 18:37:33 UTC
(In reply to Zbigniew Jędrzejewski-Szmek from comment #11)
> If the messages are harmless, their severity should be reduced (to DEBUG?).
> They are currently WARNINGs.
Actually errors (PRIORITY=3).

Comment 13 Gleb Natapov 2013-07-10 08:03:44 UTC
(In reply to Zbigniew Jędrzejewski-Szmek from comment #12)
> (In reply to Zbigniew Jędrzejewski-Szmek from comment #11)
> > If the messages are harmless, their severity should be reduced (to DEBUG?).
> > They are currently WARNINGs.
> Actually errors (PRIORITY=3).
Logging them as errors is probably too harsh; warning is more appropriate, but I still want them to be visible in dmesg. They are harmless for the host, and most of them are harmless for guests too, but if a guest fails in a strange way, seeing which non-emulated MSRs it tried to use may provide a quick clue about the issue. So when I said they were harmless, I took into account which MSRs were actually accessed.

Comment 14 Zbigniew Jędrzejewski-Szmek 2013-07-10 12:39:44 UTC
(In reply to Gleb Natapov from comment #13)
> (In reply to Zbigniew Jędrzejewski-Szmek from comment #12)
> > (In reply to Zbigniew Jędrzejewski-Szmek from comment #11)
> > > If the messages are harmless, their severity should be reduced (to DEBUG?).
> > > They are currently WARNINGs.
> > Actually errors (PRIORITY=3).
> Logging them as errors is probably too harsh; warning is more
> appropriate, but I still want them to be visible in dmesg. They are
> harmless for the host, and most of them are harmless for guests too,
> but if a guest fails in a strange way, seeing which non-emulated MSRs
> it tried to use may provide a quick clue about the issue. So when I
> said they were harmless, I took into account which MSRs were actually
> accessed.

I'd think that even a warning is too much -- after all, a "warning" means "something is wrong". It is reasonable to assume that many admins will try to fix their systems to eliminate, or at least understand, all errors and warnings. Something that is completely OK (as I understand from the explanations above) and can occur repeatedly should not be logged with elevated priority. After all, the user can enable debug messages if necessary.

Comment 15 Gene Czarcinski 2014-05-03 14:52:42 UTC
This is still a problem with Fedora 20, an i7-4770 host, and kernel-3.14.2-200.fc20.x86_64.

I notice that it occurs with the processor set to "Hypervisor Default" or "Haswell" in the qemu-kvm definition.

I am reopening this because it is a LARGE problem for me.  However, I have a question:

This was reported against the kernel, but is that correct?  Perhaps this should be reported against kvm or qemu.

There may not be a problem with not handling rdmsr or wrmsr, but THERE IS A PROBLEM THAT THE SYSTEM LOCKS UP!

If it is the kernel, has it been reported upstream?

Comment 16 Gene Czarcinski 2014-05-10 19:48:01 UTC
I am not sure what is going on, but as of 2014-05-10, with all available updates applied, the problem ("vcpu0 unhandled wrmsr") has disappeared!  The "vcpu0 unhandled rdmsr" messages are still present but do not appear to cause any problems.  The last time I had a problem was on May 3rd, and I do not see an update applied between then and now that would appear to fix the problem.

Comment 17 Zbigniew Jędrzejewski-Szmek 2014-05-11 12:20:25 UTC
The messages are still there at ERROR priority with latest rawhide kernel.

Comment 18 Gene Czarcinski 2014-05-11 14:24:15 UTC
My error in attempting to close this.  If it is any consolation, this "problem" disappeared on Fedora 20 a few days ago.  I am not sure what got updated/changed, but something must have.  If there was a real change in a Fedora 20 update, then that change should be in rawhide "real soon now".

I sure would like to know what changed to fix this problem!

Comment 19 Justin M. Forbes 2014-05-21 19:37:30 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through, and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.14.4-200.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.

Comment 20 Josh Boyer 2014-06-18 13:56:54 UTC
This should have been fixed a while ago.  If it isn't please open a new bug with relevant information.

Comment 21 Ken Coar 2014-10-10 21:41:11 UTC
(In reply to Josh Boyer from comment #20)
> This should have been fixed a while ago.  If it isn't please open a new bug
> with relevant information.

What was the fix, and in what version should it have appeared?  I'm seeing it with the latest RHEL 6.5.

# cat /etc/system-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
# uname -a
Linux elided 2.6.32-431.29.2.el6.x86_64 #1 SMP Sun Jul 27 15:55:46 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux

I am seeing these with the following rdmsr values, in roughly these proportions:

 o 0xc0010001 × n
 o 0xc0010112 × n
 o 0xc001100d × 2n

It's all very well to want to see these in the kernel ring buffer, but as they continue to be emitted, they eventually displace any actually useful information in dmesg.

If they're harmless, should be DEBUG, won't be fixed, and the sysadmin can't do anything about them, I'm afraid I don't see the point of logging them to the kernel buffer at all, much less giving them the opportunity to poison it.

Just MHO.

Thanks!
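Until the message priority is lowered, the chatter can at least be filtered out when reading the log. A minimal sketch (the `quiet_kvm_msrs` name is made up here; pipe `dmesg` into it in practice):

```shell
#!/bin/sh
# Hide the harmless kvm "unhandled/unimplemented ...msr" chatter when
# reading the log; every other line passes through untouched.
quiet_kvm_msrs() {
  grep -vE 'vcpu[0-9]+ (unhandled|unimplemented)'
}

# Demo on canned input; in practice: dmesg | quiet_kvm_msrs
quiet_kvm_msrs <<'EOF'
[195317.093358] kvm [4670]: vcpu0 unhandled rdmsr: 0x345
[196924.000000] EXT4-fs (sda1): mounted filesystem with ordered data mode
EOF
```

The demo prints only the EXT4-fs line; the kvm MSR line is dropped.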

Comment 22 Josh Boyer 2014-10-11 12:43:22 UTC
(In reply to Ken Coar from comment #21)
> (In reply to Josh Boyer from comment #20)
> > This should have been fixed a while ago.  If it isn't please open a new bug
> > with relevant information.
> 
> What was the fix, and in what version should it have appeared?  I'm seeing
> it with the latest RHEL 6.5.

I don't recall exactly what the fix was, but this isn't a RHEL bug and we have no idea whether it's fixed in RHEL.  You should open a bug against RHEL if you're seeing this issue so the RHEL maintainers are aware of it.

Comment 23 Jan Pokorný [poki] 2015-06-23 13:28:59 UTC
When looking into journal (for an unrelated issue), I spotted these:

Jun 01 17:35:49 juicyfruit kernel: kvm: zapping shadow pages for mmio generation wraparound
Jun 01 17:35:50 juicyfruit avahi-daemon[707]: Registering new address record for fe80::fc54:ff:fe77:6533 on vnet0.*.
Jun 01 17:35:50 juicyfruit kernel: virbr0: port 2(vnet0) entered learning state
Jun 01 17:35:52 juicyfruit kernel: virbr0: topology change detected, propagating
Jun 01 17:35:52 juicyfruit kernel: virbr0: port 2(vnet0) entered forwarding state
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x611
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x639
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x641
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x619
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x611
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x639
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x641
Jun 01 17:35:55 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x619

[...]

Jun 01 17:40:02 juicyfruit kernel: virbr0: port 3(vnet1) entered learning state
Jun 01 17:40:04 juicyfruit kernel: virbr0: topology change detected, propagating
Jun 01 17:40:04 juicyfruit kernel: virbr0: port 3(vnet1) entered forwarding state
Jun 01 17:40:06 juicyfruit kernel: kvm [843]: vcpu0 unhandled rdmsr: 0x606

[...]

Jun 01 17:55:25 juicyfruit kernel: virbr0: port 3(vnet1) entered learning state
Jun 01 17:55:27 juicyfruit kernel: virbr0: topology change detected, propagating
Jun 01 17:55:27 juicyfruit kernel: virbr0: port 3(vnet1) entered forwarding state
Jun 01 17:55:30 juicyfruit kernel: kvm [5290]: vcpu0 unhandled rdmsr: 0x606
[...]
Jun 01 17:55:53 juicyfruit kernel: kvm [5290]: vcpu0 unhandled rdmsr: 0x611
Jun 01 17:55:53 juicyfruit kernel: kvm [5290]: vcpu0 unhandled rdmsr: 0x639
Jun 01 17:55:53 juicyfruit kernel: kvm [5290]: vcpu0 unhandled rdmsr: 0x641
Jun 01 17:55:53 juicyfruit kernel: kvm [5290]: vcpu0 unhandled rdmsr: 0x619

[...]

Jun 01 18:02:28 juicyfruit kernel: kvm: zapping shadow pages for mmio generation wraparound
Jun 01 18:02:29 juicyfruit avahi-daemon[707]: Registering new address record for fe80::fc54:ff:fe6f:ac54 on vnet1.*.
Jun 01 18:02:30 juicyfruit kernel: virbr0: port 3(vnet1) entered learning state
Jun 01 18:02:32 juicyfruit kernel: virbr0: topology change detected, propagating
Jun 01 18:02:32 juicyfruit kernel: virbr0: port 3(vnet1) entered forwarding state
Jun 01 18:02:32 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x606

[...]

Jun 01 18:02:34 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x611
Jun 01 18:02:34 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x639
Jun 01 18:02:34 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x641
Jun 01 18:02:34 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x619
[...]
Jun 01 18:02:35 juicyfruit kernel: kvm [6940]: vcpu0 unhandled rdmsr: 0x1ad

[...]

Jun 16 13:35:01 juicyfruit kernel: virbr0: port 4(vnet2) entered learning state
Jun 16 13:35:03 juicyfruit kernel: virbr0: topology change detected, propagating
Jun 16 13:35:03 juicyfruit kernel: virbr0: port 4(vnet2) entered forwarding state
Jun 16 13:35:06 juicyfruit kernel: kvm [6029]: vcpu0 unhandled rdmsr: 0x606

[...]

Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x611
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x639
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x641
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x619
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x611
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x639
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x641
Jun 23 11:03:31 juicyfruit kernel: kvm [31653]: vcpu0 unhandled rdmsr: 0x619
Jun 23 11:03:52 juicyfruit systemd-logind[711]: Failed to abandon session scope: Connection timed out

[...]


 * * *

Stats:

0x611/0x619/0x639/0x641: 6x

0x606: 4x
- (reliably so far) preceded with virbr0 messages

0x1ad: 1x


Wanted to share my findings; I am not completely sure whether this
deserves separate tracking.

Comment 24 Jan Pokorný [poki] 2015-06-23 13:30:34 UTC
I forgot to add that the host is Fedora 22 and the guests are RHEL 6.7
and 7.2 snapshots (and, for June 01 and 16, possibly also Fedora 21).

Comment 25 ILMostro 2015-07-22 10:46:13 UTC
(In reply to Josh Boyer from comment #22)
For what it's worth, I'm seeing this on a RHEL 7.1 host; I don't feel like rebooting into Fedora.  I'm pretty sure the RHEL maintainers know about this, since it's an issue with KVM.  Besides, RHEL maintainers' attention seems unattainable if/when users have self-support.  I'm not here asking for help; take the info if it's useful, otherwise ignore it like the KVM messages.

According to Ubuntu's bug report, a temporary workaround is to ignore these messages with `echo 1 > /sys/module/kvm/parameters/ignore_msrs`, unless you feel they need to keep flooding the journal.

As Jan Pokorný confirmed, this behavior continues in Fedora 22, so I'm not sure why this bug is still closed.
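For anyone wanting the `ignore_msrs` workaround above to survive reboots, it can also be set as a module option. A sketch, assuming the stock `kvm` module and an arbitrary file name (this is a root-only config change, and a policy trade-off: it also hides MSR accesses that could matter when debugging a misbehaving guest):

```shell
# Apply immediately (same as the echo above; lost at reboot):
echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs

# Persist across reboots; the file name is arbitrary:
echo 'options kvm ignore_msrs=1' | sudo tee /etc/modprobe.d/kvm-ignore-msrs.conf

# Check the current value; the bool parameter reads back as Y when set:
cat /sys/module/kvm/parameters/ignore_msrs
```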

Comment 26 Marcin Haba 2015-08-06 00:40:47 UTC
Hello,

I am also seeing this "unhandled rdmsr" message in dmesg on Fedora 22.

[  112.530632] device vnet0 entered promiscuous mode
[  112.548782] virbr0: port 2(vnet0) entered listening state
[  112.548803] virbr0: port 2(vnet0) entered listening state
[  113.835600] kvm: zapping shadow pages for mmio generation wraparound
[  114.555163] virbr0: port 2(vnet0) entered learning state
[  115.245344] device vnet1 entered promiscuous mode
[  115.254470] virbr0: port 3(vnet1) entered listening state
[  115.254484] virbr0: port 3(vnet1) entered listening state
[  115.914037] kvm: zapping shadow pages for mmio generation wraparound
[  116.562042] virbr0: topology change detected, propagating
[  116.562064] virbr0: port 2(vnet0) entered forwarding state
[  117.258952] virbr0: port 3(vnet1) entered learning state
[  119.265772] virbr0: topology change detected, propagating
[  119.265777] virbr0: port 3(vnet1) entered forwarding state
[  119.818412] kvm [3245]: vcpu0 unhandled rdmsr: 0xc0011021
[  119.818424] kvm [3245]: vcpu0 unhandled rdmsr: 0xc0010112
[  119.956546] kvm [3245]: vcpu0 unimplemented perfctr wrmsr: 0xc0010004 data 0xffff
[  119.969982] kvm [3245]: vcpu1 unhandled rdmsr: 0xc0011021
[  119.981802] kvm [3245]: vcpu2 unhandled rdmsr: 0xc0011021
[  122.076303] kvm [3419]: vcpu0 unhandled rdmsr: 0xc001100d
[  122.076318] kvm [3419]: vcpu0 unhandled rdmsr: 0xc0010112
[  122.211591] kvm [3419]: vcpu0 unimplemented perfctr wrmsr: 0xc0010004 data 0xffff
[  122.224509] kvm [3419]: vcpu1 unhandled rdmsr: 0xc001100d
[  134.069710] kvm [3245]: vcpu0 unhandled rdmsr: 0xc0010061

My kernel is:

# uname -a
Linux ganiwork.lan 4.0.8-300.fc22.x86_64 #1 SMP Fri Jul 10 21:04:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

# modinfo kvm
filename:       /lib/modules/4.0.8-300.fc22.x86_64/kernel/arch/x86/kvm/kvm.ko.xz
license:        GPL
author:         Qumranet
depends:        
intree:         Y
vermagic:       4.0.8-300.fc22.x86_64 SMP mod_unload 
signer:         Fedora kernel signing key
sig_key:        72:32:5B:2B:44:7A:F5:6E:6E:8C:3B:96:7C:39:75:27:57:39:28:05
sig_hashalgo:   sha256
parm:           allow_unsafe_assigned_interrupts:Enable device assignment on platforms without interrupt remapping support. (bool)
parm:           ignore_msrs:bool
parm:           min_timer_period_us:uint
parm:           tsc_tolerance_ppm:uint
parm:           lapic_timer_advance_ns:uint
parm:           halt_poll_ns:uint

I hope that the above listings will help in some way.

Comment 27 Stuart James 2016-03-21 14:57:22 UTC
I see this error as well on the latest RHEL 7.2, running on an OpenStack 6 platform:

kvm [5026]: vcpu0 unhandled rdmsr: 0xc0011021
kvm [5026]: vcpu0 unhandled rdmsr: 0xc0010112


kernel 3.10.0-327.10.1.el7.x86_64
Version: AMD FX(tm)-6350 Six-Core Processor

Here is some further information on what appears to be the possible cause of the issue: https://bugzilla.kernel.org/show_bug.cgi?id=102651

Comment 28 Davide Repetto 2016-08-19 13:04:00 UTC
The error messages are still there on Fedora kernel-4.6.6-300.fc24.x86_64


> ago 18 14:49:51 dave.idp.it kernel: kvm [22874]: vcpu0, guest rIP: 0xffffffff81065a52 unhandled rdmsr: 0xc0010048
> ago 18 14:49:57 dave.idp.it kernel: kvm [22874]: vcpu0, guest rIP: 0xffffffff81065a52 unhandled rdmsr: 0x3a
> ago 18 14:49:57 dave.idp.it kernel: kvm [22874]: vcpu0, guest rIP: 0xffffffff81065a52 unhandled rdmsr: 0xd90

Comment 29 Adam Williamson 2016-09-02 16:02:36 UTC
I see a bunch of these on the openQA worker host boxes, which run Fedora 24.

Comment 30 Laura Abbott 2016-09-23 19:10:49 UTC
*********** MASS BUG UPDATE **************
 
We apologize for the inconvenience.  There are a large number of bugs to go through, and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 24 kernel bugs.

Fedora 24 has now been rebased to 4.7.4-200.fc24.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.
 
If you have moved on to Fedora 25, and are still experiencing this issue, please change the version to Fedora 25.
 
If you experience different issues, please open a new bug report for those.

Comment 31 Sergei LITVINENKO 2016-10-16 14:44:55 UTC
F24

[root@homedesk ~]# uname -r
4.7.7-200.fc24.x86_64


[25458.152878] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x606
[25458.357802] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x60d
[25458.357815] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x3f8
[25458.357819] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x3f9
[25458.357822] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x3fa
[25458.357826] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x630
[25458.357829] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x631
[25458.357832] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x632
[25458.394558] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x60d
[25458.394569] kvm [12496]: vcpu0, guest rIP: 0xffffffffa705d722 unhandled rdmsr: 0x3f8
[25472.880631] br0: port 3(vnet0) entered forwarding state
[25472.880638] br0: topology change detected, sending tcn bpdu
[26523.888095] kvm_get_msr_common: 13 callbacks suppressed
[26523.888099] kvm [12496]: vcpu0, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x34
[26525.573957] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x606
[26525.651311] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x60d
[26525.651318] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x3f8
[26525.651320] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x3f9
[26525.651321] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x3fa
[26525.651323] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x630
[26525.651325] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x631
[26525.651327] kvm [12496]: vcpu1, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x632
[26525.664882] kvm [12496]: vcpu0, guest rIP: 0xffffffff8205d722 unhandled rdmsr: 0x60d
[27160.920389] kvm_get_msr_common: 21 callbacks suppressed
[27160.920393] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x34
[27162.865570] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x606
[27162.962378] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x60d
[27162.962384] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x3f8
[27162.962386] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x3f9
[27162.962388] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x3fa
[27162.962390] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x630
[27162.962391] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x631
[27162.962393] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x632
[27162.968883] kvm [12496]: vcpu0, guest rIP: 0xffffffff9605d722 unhandled rdmsr: 0x611
[27201.262244] do_trap: 18 callbacks suppressed
[27201.262250] traps: virt-manager[12448] trap int3 ip:7f6615e4915b sp:7fffd919f960 error:0 in libglib-2.0.so.0.4800.2[7f6615df9000+10d000]

Comment 32 Zbigniew Jędrzejewski-Szmek 2016-10-24 03:41:18 UTC
One more example, systemd regression tests:
/bin/qemu-kvm -smp 1 -net none -m 512M -nographic -kernel /boot/519a16632fbd4c71966ce9305b360c9c/4.8.0-0.rc7.git0.1.fc25.x86_64/linux -drive format=raw,cache=unsafe,file=/var/tmp/systemd-test.by5tMu/rootdisk.img -machine accel=kvm -enable-kvm -cpu host -append 'root=/dev/sda1 raid=noautodetect loglevel=2 init=/usr/lib/systemd/systemd ro console=ttyS0 selinux=0'

$ uname -r
4.8.0-0.rc7.git0.1.fc25.x86_64
$ rpm -q kernel
kernel-4.8.0-0.rc7.git0.1.fc25.x86_64

1477280231.513610 kernel: kvm: zapping shadow pages for mmio generation wraparound
1477280231.516609 kernel: kvm: zapping shadow pages for mmio generation wraparound
1477280232.104559 kernel: kvm [17755]: vcpu0, guest rIP: 0xffffffff9705d722 unhandled rdmsr: 0x1c9

(The image is nothing special; probably any Fedora image will generate the same result. The image is 400 MB, so I'm not attaching it, but I can post it somewhere if necessary.)

Comment 33 Sergei LITVINENKO 2017-01-06 18:35:53 UTC
[root@homedesk ~]# uname -r ; dmesg
4.8.13-300.fc25.x86_64
[ 1217.812281] tun: Universal TUN/TAP device driver, 1.6
[ 1217.812285] tun: (C) 1999-2004 Max Krasnyansky <maxk>
[ 1217.865078] br0: port 3(vnet0) entered blocking state
[ 1217.865083] br0: port 3(vnet0) entered disabled state
[ 1217.865164] device vnet0 entered promiscuous mode
[ 1217.876605] br0: port 3(vnet0) entered blocking state
[ 1217.876610] br0: port 3(vnet0) entered listening state
[ 1218.862791] kvm: zapping shadow pages for mmio generation wraparound
[ 1218.865932] kvm: zapping shadow pages for mmio generation wraparound
[ 1226.788286] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x34
[ 1233.021564] br0: port 3(vnet0) entered learning state
[ 1240.646043] kvm [13561]: vcpu1, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x606
[ 1241.007371] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x611
[ 1241.007379] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x639
[ 1241.007382] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x641
[ 1241.007385] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x619
[ 1241.018498] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x611
[ 1241.018504] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x639
[ 1241.018507] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x641
[ 1241.018510] kvm [13561]: vcpu0, guest rIP: 0xffffffff8706c072 unhandled rdmsr: 0x619
[ 1248.379882] br0: port 3(vnet0) entered forwarding state
[ 1248.379893] br0: topology change detected, sending tcn bpdu

Comment 34 Sergei LITVINENKO 2017-01-06 18:37:13 UTC
+ Version: 25

Comment 35 loberman 2017-01-18 19:32:37 UTC
There seems to be a simple fix for this, as already mentioned:

https://bugzilla.kernel.org/show_bug.cgi?id=102651

https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?id=73fdeb66592ee80dffb16fb8a9b7378a00c1a826

We have this issue on RHEL 7.x and many customers are looking to get these messages out of their logs.

Nick Almond will attach this to the appropriate SFDC case

Comment 36 sL1pKn07 2017-02-17 00:58:35 UTC
Same here on Arch Linux.

Linux sL1pKn07 4.9.8-1-ARCH #1 SMP PREEMPT Mon Feb 6 12:59:40 CET 2017 x86_64 GNU/Linux


[27769.698916] kvm_get_msr_common: 422 callbacks suppressed
[27769.698919] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0xce
[27769.698935] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1fc
[27769.698947] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27769.698955] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27769.698967] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1ad
[27769.698981] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a2
[27769.699002] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fc
[27769.699011] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fd
[27770.003471] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0xce
[27770.003486] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1fc
[27774.796053] kvm_get_msr_common: 403 callbacks suppressed
[27774.796055] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0xce
[27774.796068] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1fc
[27774.796077] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27774.796083] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27774.796092] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1ad
[27774.796110] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a2
[27774.796132] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3f8
[27774.796138] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3f9
[27774.796145] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fa
[27774.796157] kvm [28334]: vcpu0, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fc
[27779.799993] kvm_get_msr_common: 433 callbacks suppressed
[27779.799995] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0xce
[27779.800006] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1fc
[27779.800014] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27779.800020] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a4
[27779.800028] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1ad
[27779.800037] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1a2
[27779.800052] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fc
[27779.800063] kvm [28334]: vcpu2, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x3fd
[27779.805854] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0xce
[27779.805867] kvm [28334]: vcpu4, guest rIP: 0xfffff80796f617f3 ignored rdmsr: 0x1fc


Tons of these, in random order.

libvirtd (libvirt) 3.1.0 built from git (commit 4337bc57b)
QEMU emulator version 2.8.50 (v2.8.0-1321-gad584d37f2-dirty)


Host CPU: dual Intel Xeon X5650
└───╼  cat /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           X5650  @ 2.67GHz
stepping        : 2
microcode       : 0x14
cpu MHz         : 1600.000
cache size      : 12288 KB
physical id     : 0
siblings        : 12
core id         : 0
cpu cores       : 6
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb tpr_shadow vnmi flexpriority ept vpid dtherm ida arat
bugs            :
bogomips        : 5445.04
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

(x24 times)

┌─┤[$]|[sl1pkn07]|[sL1pKn07]|[~]|
└───╼  systool -vm kvm
Module = "kvm"

  Attributes:
    coresize            = "524288"
    initsize            = "0"
    initstate           = "live"
    refcnt              = "1"
    taint               = ""
    uevent              = <store method only>

  Parameters:
    allow_unsafe_assigned_interrupts= "N"
    halt_poll_ns_grow   = "2"
    halt_poll_ns_shrink = "0"
    halt_poll_ns        = "400000"
    ignore_msrs         = "Y"
    kvmclock_periodic_sync= "Y"
    lapic_timer_advance_ns= "0"
    min_timer_period_us = "500"
    mmu_audit           = "N"
    tsc_tolerance_ppm   = "250"
    vector_hashing      = "Y"

  Sections:
    .altinstr_aux       = "0xffffffffa090f5d0"
    .altinstr_replacement= "0xffffffffa090f53c"
    .altinstructions    = "0xffffffffa0920d89"
    .bss                = "0xffffffffa0935100"
    .data..read_mostly  = "0xffffffffa0931580"
    .data.unlikely      = "0xffffffffa0931560"
    .data               = "0xffffffffa0926000"
    .fixup              = "0xffffffffa090f63c"
    .gnu.linkonce.this_module= "0xffffffffa0934dc0"
    .init.text          = "0xffffffffa0947000"
    .note.gnu.build-id  = "0xffffffffa0910000"
    .parainstructions   = "0xffffffffa0924cd0"
    .ref.data           = "0xffffffffa0931aa0"
    .rodata.str1.1      = "0xffffffffa091dabc"
    .rodata.str1.8      = "0xffffffffa091f528"
    .rodata             = "0xffffffffa0911320"
    .smp_locks          = "0xffffffffa091d7e4"
    .strtab             = "0xffffffffa095eef8"
    .symtab             = "0xffffffffa094a000"
    .text               = "0xffffffffa08c6000"
    .text.unlikely      = "0xffffffffa090f6b4"
    __bug_table         = "0xffffffffa0920f9e"
    __ex_table          = "0xffffffffa0924fec"
    __jump_table        = "0xffffffffa0930528"
    __kcrctab           = "0xffffffffa0910cd0"
    __kcrctab_gpl       = "0xffffffffa0910cd8"
    __ksymtab_gpl       = "0xffffffffa0910040"
    __ksymtab_strings   = "0xffffffffa09211ba"
    __ksymtab           = "0xffffffffa0910030"
    __mcount_loc        = "0xffffffffa0922e68"
    __param             = "0xffffffffa0922238"
    __tracepoints_ptrs  = "0xffffffffa09223f0"
    __tracepoints       = "0xffffffffa09336c0"
    __tracepoints_strings= "0xffffffffa0922680"
    __verbose           = "0xffffffffa0934ab8"
    _ftrace_events      = "0xffffffffa0931820"

Comment 37 sL1pKn07 2017-02-17 01:00:00 UTC
Oops, is this a Fedora-only bug tracker?

Sorry, then.

Comment 38 Yves L'ECUYER 2017-02-28 19:51:39 UTC
(In reply to ILMostro from comment #25)
> (In reply to Josh Boyer from comment #22)
> For what it's worth, I'm seeing this on RHEL 7.1 host; don't feel like
> rebooting into Fedora.  I'm pretty sure RHEL maintainers know about this,
> since it's an issue with KVM.  Besides, RHEL maintainers' attention seems
> unattainable if/when users have Self-support.  I'm not here asking for help;
> take the info if it's useful, otherwise ignore it like the KVM messages.
> 
> According to Ubuntu's bug report, temporary workaround for this is to ignore
> messages with `echo 1 > /sys/module/kvm/parameters/ignore_msrs`, unless you
> feel they need to be explicitly flooding journal.
> 
> As @Jan Pokorny confirmed this behavior continues in Fedora22, I'm not sure
> why this bug is still closed.

echo 1 > /sys/module/kvm/parameters/ignore_msrs
is NOT a workaround for the "ignored rdmsr" message flooding!

The people discussing this here are affected precisely because they DO have virtual hypervisors currently running on Intel hardware, with KVM as the native hypervisor.
And in order to do so, they have already set:
    options kvm ignore_msrs=1
in kvm.conf, or in a specific file such as kvm-intel.conf, under /etc/modprobe.d/.

Consequently, when the kvm kernel modules are loaded,
/sys/module/kvm/parameters/ignore_msrs
is already set to Y (i.e. 1).
Otherwise they would have already gotten a PSOD at VM startup
(and in that case, no more messages, of course).
=============
So today, with Fedora 25 and the latest 4.8 or 4.9 kernels, the annoying messages are still there!
=============
I have not found a way to filter them so that, at the very least, they are not reported in the main journal (/var/log/messages).
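
For reference, a minimal shell sketch of what the comments above describe: checking the runtime value of ignore_msrs, with the runtime and persistent ways to enable it noted in comments (persisting requires root; file name under /etc/modprobe.d/ is conventional, not mandated):

```shell
# Check whether KVM is currently ignoring unknown MSR accesses (Y/N).
param=/sys/module/kvm/parameters/ignore_msrs
if [ -r "$param" ]; then
    echo "ignore_msrs=$(cat "$param")"
else
    echo "kvm module not loaded"
fi

# To enable at runtime (lost when the module is reloaded):
#   echo 1 > /sys/module/kvm/parameters/ignore_msrs
# To persist across reboots:
#   echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
```

Note that, as comment 38 points out, this only suppresses the guest-visible failure, not necessarily the log messages themselves on older kernels.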

Comment 39 Justin M. Forbes 2017-04-11 14:41:48 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 24 kernel bugs.

Fedora 25 has now been rebased to 4.10.9-100.fc24.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 26, and are still experiencing this issue, please change the version to Fedora 26.

If you experience different issues, please open a new bug report for those.

Comment 40 Sergei LITVINENKO 2017-04-17 15:46:19 UTC
After migrating to 4.10.9-200.fc25.x86_64, it looks much better:


[root@homedesk ~]# dmesg
[24052.249892] tun: Universal TUN/TAP device driver, 1.6
[24052.249897] tun: (C) 1999-2004 Max Krasnyansky <maxk>
[24052.295602] br0: port 3(vnet0) entered blocking state
[24052.295608] br0: port 3(vnet0) entered disabled state
[24052.295702] device vnet0 entered promiscuous mode
[24052.315305] br0: port 3(vnet0) entered blocking state
[24052.315311] br0: port 3(vnet0) entered listening state
[24067.230085] kvm [31716]: vcpu0, guest rIP: 0xffffffffa5060b94 disabled perfctr wrmsr: 0xc2 data 0xffff
[24067.550227] br0: port 3(vnet0) entered learning state
[24082.909485] br0: port 3(vnet0) entered forwarding state
[24082.909498] br0: topology change detected, sending tcn bpdu

Comment 41 sL1pKn07 2017-04-17 15:58:15 UTC
Hi

Was this silenced by a Fedora patchset, or by an upstream change?

Greetings

Comment 42 Simon 2017-05-04 12:52:32 UTC
As of today, this is still an issue.

Kernel 3.10.0-514.16.1.el7.x86_64
RHEL 7.3
Libvirt 2.0.0
Using library: libvirt 2.0.0
Using API: QEMU 2.0.0
Running hypervisor: QEMU 1.5.3

This bug has been there for a while now... 

[Tue Apr 25 09:26:28 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x606
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x611
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x639
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x641
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x619
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x611
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x639
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x641
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu0 unhandled rdmsr: 0x619
[Tue Apr 25 09:26:31 2017] kvm [23644]: vcpu6 unhandled rdmsr: 0x60d

Comment 43 yalzhang@redhat.com 2017-05-07 09:32:50 UTC
same on 3.10.0-640.el7.x86_64
libvirt-3.2.0-4.el7.x86_64
qemu-kvm-rhev-2.9.0-2.el7.x86_64

with cpu "Intel(R) Core(TM)2 Quad CPU    Q9500  @ 2.83GHz"

# dmesg | grep vcpu0
[151159.813221] kvm [6835]: vcpu0 unhandled rdmsr: 0x60d
[151159.813263] kvm [6835]: vcpu0 unhandled rdmsr: 0x3f8
[151159.813959] kvm [6835]: vcpu0 unhandled rdmsr: 0x3f9
[151159.814639] kvm [6835]: vcpu0 unhandled rdmsr: 0x3fa
[151159.815472] kvm [6835]: vcpu0 unhandled rdmsr: 0x630
[151159.816214] kvm [6835]: vcpu0 unhandled rdmsr: 0x631
[151159.816854] kvm [6835]: vcpu0 unhandled rdmsr: 0x632
[155193.376807] kvm [7427]: vcpu0 unhandled rdmsr: 0x60d
[155193.377476] kvm [7427]: vcpu0 unhandled rdmsr: 0x3f8
[155193.378132] kvm [7427]: vcpu0 unhandled rdmsr: 0x3f9
[155193.378878] kvm [7427]: vcpu0 unhandled rdmsr: 0x3fa
[155193.379586] kvm [7427]: vcpu0 unhandled rdmsr: 0x630
[155193.380224] kvm [7427]: vcpu0 unhandled rdmsr: 0x631
[155193.380830] kvm [7427]: vcpu0 unhandled rdmsr: 0x632
[157687.114395] kvm [9304]: vcpu0 unhandled rdmsr: 0x60d
[157687.115020] kvm [9304]: vcpu0 unhandled rdmsr: 0x3f8
[157687.115760] kvm [9304]: vcpu0 unhandled rdmsr: 0x3f9
[157687.116372] kvm [9304]: vcpu0 unhandled rdmsr: 0x3fa
[157687.116986] kvm [9304]: vcpu0 unhandled rdmsr: 0x630
[157687.117562] kvm [9304]: vcpu0 unhandled rdmsr: 0x631
[157687.118193] kvm [9304]: vcpu0 unhandled rdmsr: 0x632
[157726.643637] kvm [9413]: vcpu0 unhandled rdmsr: 0x60d
[157726.644175] kvm [9413]: vcpu0 unhandled rdmsr: 0x3f8
[157726.644678] kvm [9413]: vcpu0 unhandled rdmsr: 0x3f9
[157726.645216] kvm [9413]: vcpu0 unhandled rdmsr: 0x3fa
[157726.645766] kvm [9413]: vcpu0 unhandled rdmsr: 0x630
[157726.646271] kvm [9413]: vcpu0 unhandled rdmsr: 0x631
[157726.646809] kvm [9413]: vcpu0 unhandled rdmsr: 0x632
[157776.471263] kvm [9541]: vcpu0 unhandled rdmsr: 0x60d
[157776.471738] kvm [9541]: vcpu0 unhandled rdmsr: 0x3f8
[157776.472171] kvm [9541]: vcpu0 unhandled rdmsr: 0x3f9
[157776.472694] kvm [9541]: vcpu0 unhandled rdmsr: 0x3fa
[157776.473108] kvm [9541]: vcpu0 unhandled rdmsr: 0x630
[157776.473570] kvm [9541]: vcpu0 unhandled rdmsr: 0x631
[157776.473941] kvm [9541]: vcpu0 unhandled rdmsr: 0x632
[158815.511611] kvm [10079]: vcpu0 unhandled rdmsr: 0x60d
[158815.511997] kvm [10079]: vcpu0 unhandled rdmsr: 0x3f8
[158815.512357] kvm [10079]: vcpu0 unhandled rdmsr: 0x3f9
[158815.512827] kvm [10079]: vcpu0 unhandled rdmsr: 0x3fa
[158815.513170] kvm [10079]: vcpu0 unhandled rdmsr: 0x630
[158815.513550] kvm [10079]: vcpu0 unhandled rdmsr: 0x631
[158815.513871] kvm [10079]: vcpu0 unhandled rdmsr: 0x632
[159224.098980] kvm [10484]: vcpu0 unhandled rdmsr: 0x60d
[159224.099294] kvm [10484]: vcpu0 unhandled rdmsr: 0x3f8
[159224.099572] kvm [10484]: vcpu0 unhandled rdmsr: 0x3f9
[159224.099849] kvm [10484]: vcpu0 unhandled rdmsr: 0x3fa
[159224.100118] kvm [10484]: vcpu0 unhandled rdmsr: 0x630
[159224.100463] kvm [10484]: vcpu0 unhandled rdmsr: 0x631
[159224.100703] kvm [10484]: vcpu0 unhandled rdmsr: 0x632
[159390.929848] kvm [10609]: vcpu0 unhandled rdmsr: 0x60d
[159390.930084] kvm [10609]: vcpu0 unhandled rdmsr: 0x3f8
[159390.930297] kvm [10609]: vcpu0 unhandled rdmsr: 0x3f9
[159390.930528] kvm [10609]: vcpu0 unhandled rdmsr: 0x3fa
[159390.930741] kvm [10609]: vcpu0 unhandled rdmsr: 0x630
[159390.930954] kvm [10609]: vcpu0 unhandled rdmsr: 0x631
[159390.931178] kvm [10609]: vcpu0 unhandled rdmsr: 0x632
[159468.403139] kvm [10696]: vcpu0 unhandled rdmsr: 0x60d
[159468.403371] kvm [10696]: vcpu0 unhandled rdmsr: 0x3f8
[159468.403583] kvm [10696]: vcpu0 unhandled rdmsr: 0x3f9
[159468.403811] kvm [10696]: vcpu0 unhandled rdmsr: 0x3fa
[159468.404038] kvm [10696]: vcpu0 unhandled rdmsr: 0x630
[159468.404324] kvm [10696]: vcpu0 unhandled rdmsr: 0x631
[159468.404566] kvm [10696]: vcpu0 unhandled rdmsr: 0x632
[160074.598894] kvm [11315]: vcpu0 unhandled rdmsr: 0x60d
[160074.599129] kvm [11315]: vcpu0 unhandled rdmsr: 0x3f8
[160074.599340] kvm [11315]: vcpu0 unhandled rdmsr: 0x3f9
[160074.599550] kvm [11315]: vcpu0 unhandled rdmsr: 0x3fa
[160074.599798] kvm [11315]: vcpu0 unhandled rdmsr: 0x630
[160074.600025] kvm [11315]: vcpu0 unhandled rdmsr: 0x631
[160074.600357] kvm [11315]: vcpu0 unhandled rdmsr: 0x632
[164956.623224] kvm [13275]: vcpu0 unhandled rdmsr: 0x60d
[164956.623451] kvm [13275]: vcpu0 unhandled rdmsr: 0x3f8
[164956.623680] kvm [13275]: vcpu0 unhandled rdmsr: 0x3f9
[164956.623893] kvm [13275]: vcpu0 unhandled rdmsr: 0x3fa
[164956.624121] kvm [13275]: vcpu0 unhandled rdmsr: 0x630
[164956.624443] kvm [13275]: vcpu0 unhandled rdmsr: 0x631
[164956.624757] kvm [13275]: vcpu0 unhandled rdmsr: 0x632
[165361.116575] kvm [13880]: vcpu0 unhandled rdmsr: 0x60d
[165361.116802] kvm [13880]: vcpu0 unhandled rdmsr: 0x3f8
[165361.117042] kvm [13880]: vcpu0 unhandled rdmsr: 0x3f9
[165361.117390] kvm [13880]: vcpu0 unhandled rdmsr: 0x3fa
[165361.117604] kvm [13880]: vcpu0 unhandled rdmsr: 0x630
[165361.117819] kvm [13880]: vcpu0 unhandled rdmsr: 0x631
[165361.118039] kvm [13880]: vcpu0 unhandled rdmsr: 0x632
[165864.655533] kvm [14859]: vcpu0 unhandled rdmsr: 0x60d
[165864.655760] kvm [14859]: vcpu0 unhandled rdmsr: 0x3f8
[165864.655967] kvm [14859]: vcpu0 unhandled rdmsr: 0x3f9
[165864.656218] kvm [14859]: vcpu0 unhandled rdmsr: 0x3fa
[165864.656544] kvm [14859]: vcpu0 unhandled rdmsr: 0x630
[165864.656788] kvm [14859]: vcpu0 unhandled rdmsr: 0x631
[165864.656991] kvm [14859]: vcpu0 unhandled rdmsr: 0x632
[170616.952925] kvm [16870]: vcpu0 unhandled rdmsr: 0x60d
[170616.953157] kvm [16870]: vcpu0 unhandled rdmsr: 0x3f8
[170616.953365] kvm [16870]: vcpu0 unhandled rdmsr: 0x3f9
[170616.953570] kvm [16870]: vcpu0 unhandled rdmsr: 0x3fa
[170616.953776] kvm [16870]: vcpu0 unhandled rdmsr: 0x630
[170616.954000] kvm [16870]: vcpu0 unhandled rdmsr: 0x631
[170616.954420] kvm [16870]: vcpu0 unhandled rdmsr: 0x632
[170717.446882] kvm [17047]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[172297.109617] kvm [18313]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[172969.254234] kvm [18840]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[173136.612260] kvm [19147]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[173277.430382] kvm [19246]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[174317.205378] kvm [19782]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[174446.503906] kvm [20167]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[174704.652282] kvm [20560]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff
[175062.746961] kvm [20822]: vcpu0 disabled perfctr wrmsr: 0xc2 data 0xffff

Comment 44 Zbigniew Jędrzejewski-Szmek 2017-05-25 01:00:44 UTC
Still happens in F26:
kernel-core-4.11.0-2.fc26.x86_64
qemu-kvm-2.9.0-1.fc26.x86_64

Comment 45 Vit Ry 2017-05-31 15:49:37 UTC
Still happens on the latest CentOS 7.3.

Comment 46 jiyan 2017-06-09 07:08:57 UTC
Happens in RHEL 7.4, too.
Version:
libvirt-3.2.0-9.el7.x86_64
kernel-3.10.0-679.el7.x86_64
qemu-kvm-rhev-2.9.0-9.el7.x86_64

CPU model: Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz

Actual Results:
# dmesg |grep vcpu0
[  370.086592] kvm [11605]: vcpu0 unhandled rdmsr: 0x606
[  390.809376] kvm [11605]: vcpu0 unhandled rdmsr: 0x611
[  390.809405] kvm [11605]: vcpu0 unhandled rdmsr: 0x639
[  390.809422] kvm [11605]: vcpu0 unhandled rdmsr: 0x641
[  390.809438] kvm [11605]: vcpu0 unhandled rdmsr: 0x619
[ 1214.389086] kvm [11920]: vcpu0 unhandled rdmsr: 0x606
[ 1216.764114] kvm [11920]: vcpu0 unhandled rdmsr: 0x611
[ 1216.764140] kvm [11920]: vcpu0 unhandled rdmsr: 0x639
[ 1216.764206] kvm [11920]: vcpu0 unhandled rdmsr: 0x641
[ 1216.764223] kvm [11920]: vcpu0 unhandled rdmsr: 0x619

Comment 47 Cole Robinson 2017-07-13 12:34:10 UTC
These aren't really indicative of errors, AFAIK. Upstream made them dependent on a debug config, though:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ae0f5499511

The patch is in 4.10+.

It doesn't look like the 'vcpu0 disabled perfctr wrmsr' pattern was silenced, though; maybe that one is indicative of an error. That should be a separate bug, I think.
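
Given the above, the kernel version is a rough indicator of whether these messages are still expected. A hedged sketch (simple major.minor comparison of the release string; it deliberately ignores distro backports, e.g. RHEL's patched 3.10-based kernels, which do still print the messages per the reports in this bug):

```shell
# Return success if the kernel release string is >= 4.10, the first
# mainline release where unhandled-MSR messages became debug-only.
kernel_has_fix() {
    ver=${1%%-*}              # "4.10.9-200.fc25.x86_64" -> "4.10.9"
    major=${ver%%.*}
    rest=${ver#*.}
    minor=${rest%%.*}
    [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 10 ]; }
}

if kernel_has_fix "$(uname -r)"; then
    echo "kernel >= 4.10: unhandled-MSR messages should be silenced"
else
    echo "kernel < 4.10: messages are still expected"
fi
```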

Comment 48 Fedora End Of Life 2017-07-25 18:31:09 UTC
This message is a reminder that Fedora 24 is nearing its end of life.
Approximately 2 (two) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 24. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '24'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 24 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 49 Cole Robinson 2017-07-26 14:53:05 UTC
zbyszek, can you post an output example for a current kernel? It's not a debug kernel, correct? My understanding is that this should have been 'fixed' in 4.10.

Comment 50 Zbigniew Jędrzejewski-Szmek 2017-07-26 15:11:35 UTC
May 22 19:21:13 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x619
May 22 19:21:13 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x641
May 22 19:21:13 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x639
May 22 19:21:13 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x611
May 22 19:21:12 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x606
May 22 19:21:12 bupkis kernel: kvm [21585]: vcpu1, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x606
May 22 19:21:06 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x34
May 22 19:21:03 bupkis kernel: kvm [21585]: vcpu3, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x140
May 22 19:21:03 bupkis kernel: kvm [21585]: vcpu2, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x140
May 22 19:21:03 bupkis kernel: kvm [21585]: vcpu1, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x140
May 22 19:21:03 bupkis kernel: kvm [21585]: vcpu0, guest rIP: 0xffffffff97069fc2 unhandled rdmsr: 0x140
So my report in comment #c44 that it still happens came right after that. And that's the last occurrence I can find in the logs.

On another machine, the last entry is on May 9th, with kernel 4.9.14-200.fc25.x86_64. After that I upgraded to 4.11.0-2.fc26.x86_64 and it didn't happen again.

So from my POV this appears to be fixed now.

Comment 51 Cole Robinson 2017-07-26 15:53:08 UTC
Okay, I think we can close this then. If anyone is still seeing messages like this with kernel 4.10 or newer, IMO it's best to open a new bug.

Comment 52 loberman 2017-08-01 20:14:43 UTC
There is concern that this still exists on 7.3+, so we should look at a new BZ for 7.4, and see if we can get this corrected in RHEL 7.4 and maybe a z-stream for 7.3.

Comment 54 Troels Just 2017-08-02 21:57:31 UTC
(In reply to loberman from comment #52)
> There is concern here that this still exists on 7.3+ so we should look at a
> new BZ for 7.4 and see if we can get this corrected in RHEL 7.4 and maybe
> zstream for 7.3.

I just came across this bug report via Google and would like to add that I have just set up CentOS 7, fully updated as of today, and I am indeed seeing this error. Although it does not seem to affect my VMs, it nonetheless shows up on my console.

The precise error is:

[32.512910] kvm [2845]: vcpu0 unhandled rdmsr: 0xc0011021
[32.667070] kvm [2845]: vcpu1 unhandled rdmsr: 0xc0011021

Comment 55 Philip Prindeville 2017-08-02 22:20:24 UTC
(In reply to Troels Just from comment #54)
> (In reply to loberman from comment #52)
> > There is concern here that this still exists on 7.3+ so we should look at a
> > new BZ for 7.4 and see if we can get this corrected in RHEL 7.4 and maybe
> > zstream for 7.3.
> 
> I just came across this bug report via Google, and I would just like to add
> that I have just set up CentOS 7, fully updated as of today, and I am indeed
> seeing this error. Although it does not seem to affect my VMs, it
> nonetheless shows in my console.
> 
> The precise error is:
> 
> [32.512910] kvm [2845]: vcpu0 unhandled rdmsr: 0xc0011021
> [32.667070] kvm [2845]: vcpu1 unhandled rdmsr: 0xc0011021

You might need to pull from updates-testing if it hasn't yet gone into the released updates...

Comment 56 huang yi xuan 2017-12-05 09:32:20 UTC
I also see this issue on kernel 3.10.0-693.5.2.el7.x86_64, on a machine with OpenStack installed (a compute node, CentOS 7.4); there are lots of 'vcpu0 unhandled rdmsr' messages on the console.

Comment 58 Pupkur 2018-01-26 08:21:20 UTC
Also seen on kernel 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux.

Comment 59 Vincent S. Cojot 2018-02-20 20:57:03 UTC
Also found on Xeon v1 CPUs (Dell R720xd) running the latest RHEL 7.4 kernel:
[root@vkvm1 .ssh]# uname -r
3.10.0-693.17.1.el7.x86_64
[root@vkvm1 .ssh]# grep Xeon /proc/cpuinfo |sort -u
model name	: Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz

[Tue Feb 20 15:31:35 2018] kvm [9123]: vcpu0 unhandled rdmsr: 0x641
[Tue Feb 20 15:31:35 2018] kvm [9123]: vcpu0 unhandled rdmsr: 0x619
[Tue Feb 20 15:36:27 2018] perf: interrupt took too long (8117 > 8070), lowering kernel.perf_event_max_sample_rate to 24000
[Tue Feb 20 15:46:26 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x1c9
[Tue Feb 20 15:46:26 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x1a6
[Tue Feb 20 15:46:26 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x1a7
[Tue Feb 20 15:46:26 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x3f6
[Tue Feb 20 15:46:26 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x606
[Tue Feb 20 15:46:27 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x611
[Tue Feb 20 15:46:27 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x639
[Tue Feb 20 15:46:27 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x641
[Tue Feb 20 15:46:27 2018] kvm [31279]: vcpu0 unhandled rdmsr: 0x619
[Tue Feb 20 15:46:29 2018] kvm [31343]: vcpu0 unhandled rdmsr: 0x1c9
[Tue Feb 20 15:46:32 2018] kvm_get_msr_common: 8 callbacks suppressed
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x1c9
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x1a6
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x1a7
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x3f6
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x606
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x611
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x639
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x641
[Tue Feb 20 15:46:32 2018] kvm [31401]: vcpu0 unhandled rdmsr: 0x619
[Tue Feb 20 15:47:19 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x1c9
[Tue Feb 20 15:47:19 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x1a6
[Tue Feb 20 15:47:19 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x1a7
[Tue Feb 20 15:47:19 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x3f6
[Tue Feb 20 15:47:19 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x606
[Tue Feb 20 15:47:20 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x611
[Tue Feb 20 15:47:20 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x639
[Tue Feb 20 15:47:20 2018] kvm [31474]: vcpu0 unhandled rdmsr: 0x641

Comment 60 Eduardo Kienetz 2018-04-18 18:22:20 UTC
CentOS 7.4.1708.

[root@host1 ~]# uname -r
3.10.0-693.21.1.el7.x86_64
[root@host1 ~]# grep Xeon /proc/cpuinfo |sort -u
model name	: Intel(R) Xeon(R) Platinum 8176 CPU @ 2.10GHz

The kvm [xxxxx]: prefix has been removed to allow sorting.
[root@host1 ~]# dmesg | grep rdmsr | cut -d: -f2- | sort | uniq
 vcpu0 unhandled rdmsr: 0x140
 vcpu0 unhandled rdmsr: 0x606
 vcpu0 unhandled rdmsr: 0x611
 vcpu0 unhandled rdmsr: 0x619
 vcpu0 unhandled rdmsr: 0x639
 vcpu0 unhandled rdmsr: 0x641
 vcpu10 unhandled rdmsr: 0x606
 vcpu12 unhandled rdmsr: 0x606
 vcpu15 unhandled rdmsr: 0x606
 vcpu16 unhandled rdmsr: 0x606
 vcpu18 unhandled rdmsr: 0x606
 vcpu19 unhandled rdmsr: 0x606
 vcpu1 unhandled rdmsr: 0x140
 vcpu1 unhandled rdmsr: 0x606
 vcpu2 unhandled rdmsr: 0x140
 vcpu3 unhandled rdmsr: 0x140
 vcpu3 unhandled rdmsr: 0x606
 vcpu4 unhandled rdmsr: 0x140
 vcpu5 unhandled rdmsr: 0x140
 vcpu5 unhandled rdmsr: 0x606
 vcpu6 unhandled rdmsr: 0x140
 vcpu6 unhandled rdmsr: 0x606
 vcpu7 unhandled rdmsr: 0x140
 vcpu7 unhandled rdmsr: 0x606
 vcpu8 unhandled rdmsr: 0x140
 vcpu8 unhandled rdmsr: 0x606
 vcpu9 unhandled rdmsr: 0x140
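For what it's worth, most of the rdmsr addresses reported above are the Intel RAPL energy-metering MSRs, which guest tools such as turbostat or powertop typically probe. A minimal sketch of decoding them (the address-to-name mapping is taken from the Intel SDM; it is illustrative, not part of this bug's tooling):

```shell
#!/bin/sh
# Map MSR addresses from "unhandled rdmsr" lines to their Intel SDM names.
# Only the RAPL MSRs seen in this report are covered; anything else is "unknown".
decode_msr() {
  case "$1" in
    0x606) echo "MSR_RAPL_POWER_UNIT" ;;
    0x611) echo "MSR_PKG_ENERGY_STATUS" ;;
    0x619) echo "MSR_DRAM_ENERGY_STATUS" ;;
    0x639) echo "MSR_PP0_ENERGY_STATUS" ;;
    0x641) echo "MSR_PP1_ENERGY_STATUS" ;;
    *)     echo "unknown ($1)" ;;
  esac
}

# Example run against two sample dmesg lines (same shape as the logs above):
# extract the unique MSR addresses, then print each with its name.
printf '%s\n' \
  'kvm [31279]: vcpu0 unhandled rdmsr: 0x606' \
  'kvm [31279]: vcpu0 unhandled rdmsr: 0x641' |
grep -o 'rdmsr: 0x[0-9a-f]*' | awk '{print $2}' | sort -u |
while read -r msr; do
  printf '%s -> %s\n' "$msr" "$(decode_msr "$msr")"
done
```

Against a real host, replace the printf with "dmesg |" in front of the grep.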

Comment 61 hulyom 2018-04-24 15:23:55 UTC
[root@host ~]# uname -r
3.10.0-693.21.1.el7.x86_64

[root@host ~]# grep "model name" /proc/cpuinfo
model name	: AMD A8-3870 APU with Radeon(tm) HD Graphics
[...]

[root@host ~]# dmesg | grep rdmsr | cut -d: -f2- | sort | uniq
 vcpu0 unhandled rdmsr: 0xc001100d
 vcpu0 unhandled rdmsr: 0xc0011029
 vcpu1 unhandled rdmsr: 0xc001100d
 vcpu1 unhandled rdmsr: 0xc0011029

Comment 62 Sergei LITVINENKO 2018-04-24 19:22:47 UTC
No issue on Fedora 27 with an i7-5960X CPU:


[root@homedesk ~]# uname -r
4.15.17-300.fc27.x86_64
[root@homedesk ~]# head /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 63
model name      : Intel(R) Core(TM) i7-5960X CPU @ 3.00GHz
stepping        : 2
microcode       : 0x3c
cpu MHz         : 2313.725
cache size      : 20480 KB
physical id     : 0
[root@homedesk ~]# uname -r
4.15.17-300.fc27.x86_64
[root@homedesk ~]# dmesg
[19340.465000] br0: port 3(vnet0) entered forwarding state
[19340.465004] br0: topology change detected, sending tcn bpdu