Bug 920032
Summary: | hypervkvpd segfault when cgred is running | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | jason wang <jasowang> |
Component: | hypervkvpd | Assignee: | Tomáš Hozza <thozza> |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | unspecified | Docs Contact: | |
Priority: | medium | ||
Version: | 6.5 | CC: | jingli, kys, leiwang, lnovich, ovasik, shwang, thozza, yacao |
Target Milestone: | rc | Keywords: | Patch |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | hypervkvpd-0-0.10.el6 | Doc Type: | Bug Fix |
Doc Text: |
Cause: Previously, hypervkvpd registered to two NetLink multicast groups, one of which was the group used by 'cgred'.
Consequence:
When hypervkvpd received a NetLink message (from cgred), it blindly interpreted it as its own. This caused hypervkvpd to segfault.
Fix:
Hypervkvpd now registers only to its own NetLink multicast group and checks the type of incoming NetLink messages.
Result:
Hypervkvpd no longer segfaults when 'cgred' is running.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2013-11-21 04:51:38 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
jason wang
2013-03-11 08:28:35 UTC
(In reply to comment #0)

> Description of problem:
>
> From vaughan <vaughan.cao@xxxxxxxxxx>
>
> I guess I found a bug -- hypervkvpd running alone without hv_utils
> loaded encounters a segfault when `service cgred start` is run on RHEL 6.4.
> It occurs with both 0.8 and 0.9, regardless of i686 or x86_64.
>
> I read in hv_kvp_daemon.c that the user-mode component should first
> register with the kernel component, but in my test the handshake phase
> was ignored. Things happen like this:
> hv_utils.ko and hv_vmbus.ko are not loaded; starting hypervkvpd is fine.
> Then I start cgred with the default configuration. cgroup also uses the
> NETLINK_CONNECTOR protocol and sends messages with cb_id{1,1}. Hypervkvpd
> receives messages without checking their source, so some messages with
> cb_id{1,1} were received and blindly interpreted as hv_kvp_msg.
> Since the handshake check is as below:
>
>     if ((in_hand_shake) && (op == KVP_OP_REGISTER1)) {
>         ...
>         continue;
>     }
>     /* handle kvp messages */
>     switch (op) { ... }
>
> the register phase is also skipped.
> Every time the KVP_OP_SET opcode is reached, kvp_key_add_or_modify() is
> invoked with a very large key_size. After several iterations, a segfault
> occurs in memcpy(record[i].key, key, key_size) (key_size is negative by then).
>
> I'm not very familiar with connector, but I ran the sample in
> Documentation/connector/ and found that a NETLINK_CONNECTOR socket will
> always receive some messages with cb_id{1,1}. So blindly assuming all
> messages are kvp_msg is not correct. hypervkvpd should check the source of
> messages and perhaps even check nlmsg_type in the nlmsghdr.

I did some testing. It doesn't matter whether hv_utils is loaded or not. The handling of incoming NetLink messages is not very fortunate and should definitely be changed. The registration phase appears to be mandatory: the kernel part of the Hyper-V KVP functionality depends on the user-space daemon registering with it.
What's strange is that the kernel module registers with the daemon only the first time. If the daemon is restarted, the registration with the kernel module never completes again. I think this might have something to do with Bug #886781. I have some ideas, but I need to test them first.

The problem was in setting sockaddr_nl.nl_groups to CN_KVP_IDX, which has the value "9". Since nl_groups is a bit mask, hypervkvpd was also getting NetLink messages from group number "1". I changed sockaddr_nl.nl_groups to "0" (subscribe to no multicast group) and instead subscribed to the CN_KVP_IDX group using setsockopt(). I also added a check of nlmsg_type in the received NetLink message header, so the daemon now processes a message only if its type is NLMSG_DONE; otherwise the message is ignored.

I tested the solution on a Windows Server 2012 host with a RHEL 6.4 guest and everything worked fine. The host was able to get information from the guest, and after starting/restarting the cgred service a couple of times, hypervkvpd kept running just fine.

I sent patches upstream:

http://driverdev.linuxdriverproject.org/pipermail/devel/2013-March/036284.html
http://driverdev.linuxdriverproject.org/pipermail/devel/2013-March/036285.html
http://driverdev.linuxdriverproject.org/pipermail/devel/2013-March/036286.html

Patches were accepted upstream.

Verified this bug on x86_64 and i386 guests; no segfault occurred. The kernels were 2.6.32-415.0.1.el6.x86_64 / 2.6.32-415.0.1.el6.i686, and the hypervkvpd packages were hypervkvpd-0-0.12.el6.x86_64 / hypervkvpd-0-0.12.el6.i686.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1539.html