Bug 865546 - RHEL 6.3 produces spurious stderr output from kpartx
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: device-mapper-multipath
6.3
x86_64 Linux
unspecified Severity low
: pre-dev-freeze
: 6.5
Assigned To: Ben Marzinski
Red Hat Kernel QE team
Depends On: 682115 687526
Blocks:
Reported: 2012-10-11 14:01 EDT by R P Herrold
Modified: 2015-10-14 10:30 EDT (History)
18 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 682115
Environment:
Last Closed: 2015-10-14 10:30:11 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
  None
Description R P Herrold 2012-10-11 14:01:41 EDT
+++ This bug was initially created as a clone of Bug #682115 +++

We are seeing this noise from kpartx-0.4.7-48

It is unsightly, but seemingly harmless.  A fix was known and tested months ago, in the cloned bug

(we see this)

/dev/mapper/MainGroup-kvm_vm_82751_1349937911p1: mknod for
MainGroup-kvm_vm_82751_1349937911p1 failed: File exists
/dev/mapper/MainGroup-kvm_vm_07831_1349954272p1: mknod for
MainGroup-kvm_vm_07831_1349954272p1 failed: File exists

Any chance of getting the fixing patch applied here?

-- Russ herrold

==========================
prior bug narrative

Description of problem:
Some lines of the v7 fv_tests execute successfully but raise an Exception.
Job link: https://beaker.engineering.redhat.com/jobs/58847

Example 1:
--------------------------- 
Error log: 
...
virt-install --name v7x86_64 --ram 512 --disk path=/var/lib/libvirt/images/v7x86_64.img --disk path=/var/lib/libvirt/images/v7data.img --network network:default --vnc --import  --noreboot --noautoconsole
Error: could not generate guest config file.
...

The command ran successfully, and the guest had already started.

Run manually: 
# virt-install --name v7x86_64 --ram 512 --disk path=/var/lib/libvirt/images/v7x86_64.img --disk path=/var/lib/libvirt/images/v7data.img --network network:default --vnc --import  --noreboot --noautoconsole

Starting install...
Creating domain...                                       |    0 B     00:00     
Domain creation completed. You can restart your domain by running:
  virsh --connect qemu:///system start v7x86_64

# virsh list
 Id Name                 State
----------------------------------
  1 v7x86_64             running

# virsh start v7x86_64
error: Domain is already active


Example 2: 
---------------------------------------------
Error log: 
...
Using loopback device /dev/loop0 for guest data image
Error: could not mount data image
"kpartx -av /dev/loop0" has output on stderr
...

The command "kpartx -av /dev/loop0" runs successfully: 
# kpartx -l /dev/loop0
loop0p1 : 0 192717 /dev/loop0 63

In the above cases, the v7 test run aborts and fails. 
But the guest can be started manually with "virsh start v7x86_64" and the test then passes.

Version-Release number of selected component (if applicable):
RHEL6.1
v7-1.3-10

How reproducible:
always

Steps to Reproduce:
1. install v7 and run one of fv_tests.
  
Actual results:
FAIL

Expected results:
PASS

Additional info:

--- Additional comment from yuchen@redhat.com on 2011-03-04 03:01:15 EST ---

*** Bug 678946 has been marked as a duplicate of this bug. ***

--- Additional comment from yuchen@redhat.com on 2011-03-14 02:52:00 EDT ---

Run manually : 
# losetup /dev/loop0 /var/lib/libvirt/images/v7x86_64.img  

# kpartx -av /dev/loop0
add map loop0p1 (253:3): 0 208782 linear /dev/loop0 63
add map loop0p2 (253:4): 0 12370050 linear /dev/loop0 208845
/dev/mapper/loop0p1: mknod for loop0p1 failed: File exists

# echo $?
0

Although the error message appears, the command "kpartx -av /dev/loop0" still exits successfully.
But when the command is run as "Command("kpartx -av " + self.loopBackDevice).echo()", the _checkErrors() function in command.py treats the stderr output as a failure and raises an exception. Then v7 aborts. 

The same issue happens when executing "Command(virtInstall).echo()". 

I think the _run() and _checkErrors() functions in command.py need to be modified.
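The suggested change can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual command.py code: `run_checked` and `ignore_stderr_patterns` are illustrative names. The idea is to fail on a nonzero exit status, and to tolerate known-benign stderr noise (like the kpartx "File exists" warning) instead of treating any stderr output as an error:

```python
import subprocess

def run_checked(cmd, ignore_stderr_patterns=()):
    """Run a shell command; trust the exit status rather than the mere
    presence of stderr output.  Hypothetical sketch of the modification
    suggested for command.py's _run()/_checkErrors()."""
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    stderr = proc.stderr.strip()
    if proc.returncode != 0:
        raise RuntimeError(
            "command failed (exit %d): %s" % (proc.returncode, stderr))
    # Only treat stderr as a failure if it is not known-benign noise.
    if stderr and not any(p in stderr for p in ignore_stderr_patterns):
        raise RuntimeError("unexpected stderr output: %s" % stderr)
    return proc.stdout
```

With this shape, "kpartx -av ..." printing "mknod ... failed: File exists" while exiting 0 would no longer abort the test run, provided the caller whitelists that pattern.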

--- Additional comment from gnichols@redhat.com on 2011-03-14 09:17:52 EDT ---

Created attachment 484175 [details]
fvtest.py with some added debug output

--- Additional comment from gnichols@redhat.com on 2011-03-14 09:18:56 EDT ---

Please try the above patch.

--- Additional comment from gnichols@redhat.com on 2011-03-14 11:59:59 EDT ---

I've reproduced this on a RHEL 6.1 system.  Note the redirect of stderr:

[root@hp-xw8400-01 ~]# virt-install --name v7x86_64 --ram 512 --disk path=/var/lib/libvirt/images/v7x86_64.img --disk path=/var/lib/libvirt/images/v7data.img --network network:default --vnc --import  --noreboot --noautoconsole 2>stderr.txt

Starting install...
Domain creation completed. You can restart your domain by running:
  virsh --connect qemu:///system start v7x86_64
[root@hp-xw8400-01 ~]# more stderr.txt 
Creating domain...                                       |    0 B     00:00     
[root@hp-xw8400-01 ~]# 

So, it looks like virsh is putting some status messages on stderr now in 6.1.  Note that this is a change (regression?) from 6.0.

--- Additional comment from gnichols@redhat.com on 2011-03-14 12:02:07 EDT ---

Sorry, I meant that virtinstall was putting output on stderr, not virsh.

--- Additional comment from gnichols@redhat.com on 2011-03-14 16:56:21 EDT ---

I've also reproduced the kpartx error:

Guest has shutdown
Using loopback device /dev/loop0 for guest data image
Error: could not mount data image
"kpartx -av /dev/loop0" has output on stderr
/dev/mapper/loop0p1: mknod for loop0p1 failed: File exists

As noted above, re-issuing this command manually works fine.  This error looks more real than the virt-install one.

--- Additional comment from gnichols@redhat.com on 2011-03-15 15:52:59 EDT ---

Created attachment 485594 [details]
fv test patch to work around spurious stderr output in virt-install and kpartx

This patch uses some new V7CommandException subclasses to selectively ignore stderr output in these two cases.
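The shape of that workaround might look something like the following. This is a hypothetical sketch based only on the comment above, not the attached patch itself; the class and function names beyond V7CommandException are invented for illustration. A dedicated exception subclass marks "command succeeded but wrote to stderr", so callers can selectively catch and ignore it for commands known to be noisy:

```python
class V7CommandException(Exception):
    """Base exception for command failures (name taken from the comment
    above; the body is an illustrative reconstruction)."""

class V7CommandStderrException(V7CommandException):
    """Raised when a command exits 0 but still writes to stderr."""

def check_errors(returncode, stderr, tolerate_stderr=False):
    # Hypothetical helper mirroring the patch's stated intent: a real
    # failure (nonzero exit) always raises, while stderr-only noise
    # raises a distinct subclass that callers may choose to swallow.
    if returncode != 0:
        raise V7CommandException("exit status %d: %s" % (returncode, stderr))
    if stderr and not tolerate_stderr:
        raise V7CommandStderrException(stderr)
```

A caller wrapping virt-install or kpartx could then catch V7CommandStderrException specifically, while genuine failures still propagate as the base class.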

--- Additional comment from gnichols@redhat.com on 2011-03-15 15:53:54 EDT ---

Created attachment 485595 [details]
command.py adding new V7CommandException subclasses.

--- Additional comment from czhang@redhat.com on 2011-05-01 05:58:17 EDT ---

This bug does not need a Tech Note because of:

1. A Red Hatter reported it
2. It only happened in an intermediate version.

--- Additional comment from czhang@redhat.com on 2011-05-01 05:58:17 EDT ---


    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
In v7 1.2, fv_* tests reported errors even when the tests finished successfully, because stderr output was treated as a failure. This issue has been fixed in v7 1.3; fv_* tests now return zero when the tests finish successfully.

--- Additional comment from errata-xmlrpc@redhat.com on 2011-05-09 12:15:04 EDT ---

An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0497.html
Comment 1 R P Herrold 2012-10-15 10:01:49 EDT
I was trying to build a smaller reproducer for this bug, and noticed that it actually shows up when we SSH FROM a 5 series enterprise box TO a 6 series box and run kpartx on the remote 6 box.

So, sadly, I have miscategorized the erroring version.


Correct information is: kpartx-0.4.9-56.el6_3.1
RHEL 6
owning SRPM: device-mapper-multipath-0.4.9-56.el6_3.1.src.rpm
Comment 2 R P Herrold 2012-12-07 10:49:26 EST
I see announcement of a RHEL 6.4 beta

Is this fairly small patch likely for that version?

-- Russ herrold
Comment 3 Ben Marzinski 2012-12-13 14:59:22 EST
Sorry, this seems to have slipped through the cracks.  The title states 6.3, but the version in the bugzilla is 5.9. Also, I'm assuming that the patch you are talking about is the same one as in 682115.  That's a patch to the test suite. If that's the patch you want applied, this bug should get reassigned to the Test Suite component.
Comment 4 R P Herrold 2012-12-31 13:39:40 EST
As noted in comment 1:

Correct information is: kpartx-0.4.9-56.el6_3.1
Product: RHEL 6
owning SRPM: device-mapper-multipath-0.4.9-56.el6_3.1.src.rpm

I tried to adjust the header, as I seem to have not 'cloned' it well, but Bugzilla is 'fighting me' by refreshing the drop-down box when I try to move it from 5 to 6

I have a permissions problem in bugzilla, somehow, as I am told:
  You are not permitted to change products for this bug.

Please move to RHEL 6, at 6.3

I don't think it is a Test Suite matter, as we get this spurious noise (and more) on a RHEL 6 class unit.  The FIX is (I think) in 682115, but it did not make it into the product

-- Russ herrold
Comment 5 R P Herrold 2012-12-31 13:47:05 EST
Trying to run back through recent bug closings for kpartx, I am largely unable to read the underlying bugs:

Blocked (non-readable) bugs: 
831045
837594
799842
812832
769527
467709
744756
752989

World-readable bugs
662433
802630
Comment 7 Alasdair Kergon 2012-12-31 18:04:53 EST
But could you describe afresh the actual problem you're trying to get fixed?  (I think cloning the other bug has only confused things.)
Comment 8 RHEL Product and Program Management 2013-01-04 01:47:12 EST
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 9 Ben Marzinski 2015-10-14 10:30:11 EDT
I really have no idea what this bugzilla is even asking for, and the reporter hasn't responded to questions for years.
