Bug 986473

Summary: Cannot add a child to a node that uses ri-records (hivex_node_add_child: Assertion `old_offs != 0' failed.)
Product: Red Hat Enterprise Linux 6
Component: hivex
Version: 6.4
Reporter: Anitha Udgiri <audgiri>
Assignee: Richard W.M. Jones <rjones>
QA Contact: Virtualization Bugs <virt-bugs>
Status: CLOSED DEFERRED
Severity: medium
Priority: medium
CC: acathrow, audgiri, bfan, jcoscia, leiwang, lkong, rjones, wshi
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-05-20 12:35:50 UTC
Bug Depends On: 987463
Attachments: Debug log

Description Anitha Udgiri 2013-07-19 21:17:16 UTC
Created attachment 775972 [details]
Debug log

Description of problem:
The customer is using virt-v2v to convert a VMware VM to a RHEV VM.
The following error is seen at the end of the virt-v2v process:

~~~
virt-v2v -ic esx://192.168.10.118/?no_verify=1 -o rhev -os 192.168.10.120:/nfs --network rhevm test
test_test: 100% [====================================================]D 0h13m48s
perl: hivex.c:2416: hivex_node_add_child: Assertion `old_offs != 0' failed.
~~~

The debug log is attached. I see the following errors:

libguestfs: recv_from_daemon: 52 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 01 | 00 12 34 12 | ...
libguestfs: trace: case_sensitive_path = "/winnt"
libguestfs: trace: case_sensitive_path "/winnt/system32"
libguestfs: send_to_daemon: 64 bytes: 00 00 00 3c | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 00 | ...
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
guestfsd: main_loop: new request, len 0x3c
guestfsd: error: winnt: no file or directory found with this name
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
libguestfs: recv_from_daemon: 96 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 01 | 00 12 34 13 | ...
libguestfs: trace: case_sensitive_path = NULL (error)
libguestfs: trace: case_sensitive_path "/win32"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x34
libguestfs: recv_from_daemon: 52 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 01 | 00 12 34 14 | ...
libguestfs: trace: case_sensitive_path = "/win32"
libguestfs: trace: case_sensitive_path "/win32/system32"
libguestfs: send_to_daemon: 64 bytes: 00 00 00 3c | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 00 | ...
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
guestfsd: main_loop: new request, len 0x3c
guestfsd: error: win32: no file or directory found with this name
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
libguestfs: recv_from_daemon: 96 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 01 | 00 12 34 15 | ...
libguestfs: trace: case_sensitive_path = NULL (error)
libguestfs: trace: case_sensitive_path "/win"

libguestfs: send_to_daemon: 64 bytes: 00 00 00 3c | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 00 | ...
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
guestfsd: main_loop: new request, len 0x3c
guestfsd: error: win: no file or directory found with this name
guestfsd: main_loop: proc 197 (case_sensitive_path) took 0.00 seconds
libguestfs: recv_from_daemon: 96 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 c5 | 00 00 00 01 | 00 12 34 36 | ...
libguestfs: trace: case_sensitive_path = NULL (error)
libguestfs: trace: case_sensitive_path "/System Volume Information"

lvm lvs -o vg_name,lv_name --noheadings --separator /
  No volume groups found
libguestfs: recv_from_daemon: 44 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 0b | 00 00 00 01 | 00 12 34 67 | ...
libguestfs: trace: lvs = []
libguestfs: trace: inspect_os = ["/dev/sda2"]
libguestfs: trace: inspect_get_mountpoints "/dev/sda2"
libguestfs: trace: inspect_get_mountpoints = ["/", "/dev/sda2"]
libguestfs: trace: mount_options "" "/dev/sda2" "/"
libguestfs: send_to_daemon: 72 bytes: 00 00 00 44 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 4a | 00 00 00 00 | ...
guestfsd: main_loop: proc 11 (lvs) took 0.17 seconds
guestfsd: main_loop: new request, len 0x44
mount -o  /dev/sda2 /sysroot/
The disk contains an unclean file system (0, 0).
The file system wasn't safely closed on Windows. Fixing.

Comment 10 Anitha Udgiri 2013-07-22 19:41:02 UTC
Thanks Javier !
Richard,
   The files are uploaded. Please let us know if you need anything else.

Comment 27 Lingfei Kong 2013-10-14 04:38:57 UTC
I have reproduced this bug, and the result is the same as in Comment 15.

Steps to reproduce:
1. Download the SYSTEM attachment: https://bugzilla.redhat.com/attachment.cgi?id=777019

2. Install the debuginfo packages (get the packages from https://brewweb.devel.redhat.com):
# rpm -ivh readline-debuginfo-6.0-4.el6.x86_64.rpm
# rpm -ivh ncurses-debuginfo-5.7-3.20090208.el6.x86_64.rpm
# rpm -ivh glibc-debuginfo-common-2.12-1.107.el6.x86_64.rpm
# rpm -ivh glibc-debuginfo-2.12-1.107.el6.x86_64.rpm
# rpm -ivh hivex-debuginfo-1.3.3-4.2.el6.x86_64.rpm

3. Run the following commands:
# hivexsh -w SYSTEM
Welcome to hivexsh, the hivex interactive shell for examining
Windows Registry binary hive files.

Type: 'help' for help summary
 'quit' to quit the shell

SYSTEM\> cd \ControlSet001\Control\CriticalDeviceDatabase
SYSTEM\ControlSet001\Control\CriticalDeviceDatabase> 
SYSTEM\ControlSet001\Control\CriticalDeviceDatabase> add pci#ven_1af4&dev_1001&subsys_00000000
hivexsh: hivex.c:2416: hivex_node_add_child: Assertion `old_offs != 0' failed.
Aborted (core dumped)
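For context (illustration, not part of the original report): a hive node's children are held in a subkey-list cell whose signature is one of "lf", "lh", "li", or "ri"; an "ri" record is indirect, pointing at other subkey lists rather than at child nodes, which is the case the add-child path in hivex 1.3.3 did not handle (old_offs stayed 0, tripping the assertion). A minimal Python sketch of the documented on-disk layout, using a made-up helper name and synthetic offsets:

```python
import struct

def parse_subkey_list(data, offset=0):
    """Parse one subkey-list record from a registry hive cell.

    Documented layout:
      bytes 0-1: signature, one of b"lf", b"lh", b"li", b"ri"
      bytes 2-3: number of entries (little-endian uint16)
      per entry:
        "li"/"ri": 4-byte offset ("ri" entries point at OTHER
                   subkey lists, i.e. a list of lists)
        "lf"/"lh": 4-byte offset plus a 4-byte name hint / hash
    """
    sig = data[offset:offset + 2]
    (count,) = struct.unpack_from("<H", data, offset + 2)
    entry_size = 4 if sig in (b"li", b"ri") else 8
    entries = []
    for i in range(count):
        (offs,) = struct.unpack_from("<I", data, offset + 4 + i * entry_size)
        entries.append(offs)
    return sig.decode(), entries

# A synthetic "ri" cell pointing at two further subkey lists
# (offsets 0x1000 and 0x2000 are invented for illustration).
ri = b"ri" + struct.pack("<H", 2) + struct.pack("<II", 0x1000, 0x2000)
print(parse_subkey_list(ri))   # ('ri', [4096, 8192])

# An "lf" cell with one entry: offset 0x3000 plus a 4-char name hint.
lf = b"lf" + struct.pack("<H", 1) + struct.pack("<I", 0x3000) + b"pci#"
print(parse_subkey_list(lf))   # ('lf', [12288])
```

Code that rewrites a subkey list after an add has to recognize all four signatures; searching only for "lf"/"lh" cells under a parent whose list is "ri" finds nothing, which matches the failed assertion above.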

4. hivexsh aborted and dumped a core file. The stack trace is:
(gdb) bt
#0 0x0000003ca5c328a5 in raise (sig=6)
 at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x0000003ca5c34085 in abort () at abort.c:92
#2 0x0000003ca5c2ba1e in __assert_fail_base (fmt=<value optimized out>, 
 assertion=0x3cb2a0b2ba "old_offs != 0", file=0x3cb2a0b216 "hivex.c", 
 line=<value optimized out>, function=<value optimized out>) at assert.c:96
#3 0x0000003ca5c2bae0 in __assert_fail (assertion=0x3cb2a0b2ba "old_offs != 0", 
 file=0x3cb2a0b216 "hivex.c", line=2416, 
 function=0x3cb2a0d680 "hivex_node_add_child") at assert.c:105
#4 0x0000003cb2a06278 in hivex_node_add_child (h=0x13ff030, parent=119928, 
 name=0x141a374 "pci#ven_1af4&dev_1001&subsys_00000000") at hivex.c:2416
#5 0x0000000000402c0a in cmd_add (argc=<value optimized out>, 
 argv=<value optimized out>) at hivexsh.c:1099
#6 dispatch (argc=<value optimized out>, argv=<value optimized out>)
 at hivexsh.c:424
#7 main (argc=<value optimized out>, argv=<value optimized out>) at hivexsh.c:214

Comment 28 Richard W.M. Jones 2014-05-20 12:35:50 UTC
Let's not do this.  It's a 20-part patch series to fix this
properly, and that's too dangerous for hivex in RHEL 6.  (I
would be happier with a rebase, actually.)

Note this bug is fixed properly in RHEL 7 GA.  The customer
has a hotfix.