Description of problem:
install rhevh via cdrom/usb livecd failed with Exception "OSError(30, 'Read-only file system')".

2014-10-14 06:57:47,885 ERROR Installer transaction failed
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ovirt/node/installer/core/progress_page.py", line 126, in __run
  File "/usr/lib/python2.6/site-packages/ovirt/node/plugins.py", line 188, in dry_or
  File "/usr/lib/python2.6/site-packages/ovirt/node/installer/core/progress_page.py", line 120, in do_commit
  File "/usr/lib/python2.6/site-packages/ovirt/node/installer/core/progress_page.py", line 284, in commit
  File "/usr/lib/python2.6/site-packages/ovirtnode/install.py", line 483, in ovirt_boot_setup
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
OSError: [Errno 30] Read-only file system: '/liveos/grub'

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Hypervisor release 6.6 (20141008.0.el6ev)
ovirt-node-3.1.0-0.22.20141010git96b7ca3.el6.noarch

How reproducible:
50%

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
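For context on the traceback above: ovirt_boot_setup ends up calling os.makedirs('/liveos/grub'), and that call raises errno 30 (EROFS) when the filesystem backing /liveos is mounted read-only. The sketch below only illustrates that failure mode and is not the actual install.py code; mount_opts() is a hypothetical helper that walks /proc/mounts.

import errno
import os


def mount_opts(path):
    """Hypothetical helper: return the mount options of the filesystem
    covering `path`, found by walking up from `path` until a mountpoint
    listed in /proc/mounts matches."""
    mounts = {}
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            mounts[fields[1]] = fields[3].split(",")
    p = os.path.abspath(path)
    while True:
        if p in mounts:
            return mounts[p]
        if p == "/":
            return []
        p = os.path.dirname(p)


try:
    os.makedirs("/liveos/grub")
except OSError as e:
    if e.errno == errno.EROFS:
        # Errno 30, as in the installer traceback above: the mount that
        # covers /liveos carries the "ro" option, so nothing can be written.
        raise RuntimeError("/liveos is read-only, mount options: %s"
                           % ",".join(mount_opts("/liveos")))
    raise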
Created attachment 946774 [details] attached ovirt-node.log
Created attachment 946776 [details] attached ovirt.log
Raising the priority, but not making it a blocker because it can not be reliably reproduced.
I have to raise the priority of this bug to urgent because I encountered it on RHEV-H 6.6 for the RHEV 3.4.z build.

rhev-hypervisor6-6.6-20141021.0.el6ev.noarch.rpm
ovirt-node-3.0.1-19.el6.18.noarch
If you can reproduce this again, can you check what's mounted on /liveos? Just "grep live /proc/mounts" should be good enough.
(In reply to Ying Cui from comment #4)
> I have to raise the priority of this bug to urgent because I encountered it
> on RHEV-H 6.6 for the RHEV 3.4.z build.
>
> rhev-hypervisor6-6.6-20141021.0.el6ev.noarch.rpm
> ovirt-node-3.0.1-19.el6.18.noarch

Can you also please provide the exact hardware and installation method you used, with as much detail as possible.
(In reply to Ryan Barry from comment #5)
> If you can reproduce this again, can you check what's mounted on /liveos?
> Just "grep live /proc/mounts" should be good enough.

# grep live /proc/mounts
/dev/sr1 /dev/.initramfs/live iso9660 ro,relatime 0 0
/dev/mapper/live-rw / ext2 ro,seclabel,relatime,errors=continue,user_xattr,acl 0 0
/dev/sr1 /live iso9660 ro,relatime 0 0
For the record, there are squashfs errors in dmesg. So it seems to be a low-level issue; the question is where it comes from.
(In reply to Fabian Deutsch from comment #9)
> For the record, there are squashfs errors in dmesg. So it seems to be a
> low-level issue; the question is where it comes from.

This happened once, but the bug was also seen on a machine without these errors.
Let me be clear here, the reproduction steps are:
1. Install an old RHEV-H build.
2. Reboot.
3. Boot from cdrom/virtual media/usb and do the TUI upgrade directly.
4. Input the old/new password.
5. Save.
Then the issue can be reproduced. Thanks!
Needs a clone to z-stream because we can reproduce this bug on RHEV-H 6.6 for the 3.4.z build (rhev-hypervisor6-6.6-20141021.0.el6ev.noarch.rpm).
I can still reproduce this issue after a complete ISO check, and the check passed. I will add enforcing=0 and try again.
Hi fabiand,

I can still reproduce this issue after adding enforcing=0 and making sure the ISO check passes.
I'm still not able to reproduce this, but I'm looking back at the logs, and something seems weird. The last thing we see is "END of temporary log". Is this the end of the log in /var/log/ovirt.log, or is this the log from /tmp? If it's the one from /var/log, can you add /tmp/ovirt.log?

I'm expecting to see some drive get labeled with "e2label $drive RootNew" and then mounted at /liveos, but that's not in the log. At that point, leaving the machine up would be helpful, so one of us can go see whether we can actually mount the named drive at /liveos read-write, or why it failed to find it and didn't report that, etc.
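For reference, the step described above (label the target partition RootNew, then mount it at /liveos so the grub files can be written) would look roughly like the sketch below. It is a simplified illustration with a made-up device name, not the actual ovirt_boot_setup code:

import os
import subprocess


def prepare_liveos(drive, mountpoint="/liveos"):
    """Simplified sketch: label the candidate boot partition RootNew so later
    boots can find it by label, then mount it read-write where the installer
    expects to write the grub files."""
    subprocess.check_call(["e2label", drive, "RootNew"])
    if not os.path.isdir(mountpoint):
        os.makedirs(mountpoint)
    subprocess.check_call(["mount", "-o", "rw", drive, mountpoint])


# Hypothetical device name; the real installer picks the drive itself:
# prepare_liveos("/dev/sda2")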
(In reply to Ryan Barry from comment #15)
…
> At that point, leaving the machine up would be helpful, so one of us can go
> see whether we can actually mount the named drive at /liveos read-write, or
> why it failed to find it and didn't report that, etc.

I at least tried mounting the LABEL=Root partition at /liveos and was told it was busy, but I couldn't find out why it was busy, because I ran into the squashfs errors, which appeared in dmesg as soon as I used tools like findmnt etc.
I'm still not able to reproduce this. If you can reproduce it again, can you let us know the kernel cmdline, please?
Because this bug is hard to reproduce and debug, and most end users install RHEV-H via PXE, I'm lowering the priority to high and removing the test blocker. Any test cases that need CD or USB boot should use a server in the lab for testing. But we (QE and Devel) can't give up on reproducing it. Any different thoughts?
(In reply to Ryan Barry from comment #18)
> I'm still not able to reproduce this.
>
> If you can reproduce it again, can you let us know the kernel cmdline,
> please?

Hi Ryan,

I still can't reproduce this bug today. I didn't add any parameters to the kernel cmdline; all settings should be the default. If I can reproduce it again, I will leave the environment up for you. Thanks!
Test version:
rhevh-6.6-20141119.0.el6ev.iso
ovirt-node-3.1.0-0.27.20141119git24e087e.el6.noarch

I did not hit this issue when installing rhevh-6.6-20141119.0.el6ev.iso via cdrom/usb livecd, so this bug has been fixed in the RHEV-H 6.6 build for RHEV 3.5. Changing the status to VERIFIED. If anyone can reproduce it again, please reopen this bug.
Hi fabiand,

I noticed that this bug was reported against the 3.5-6.6 version, but I hit this issue again with the 3.5-7.0 version. Should I report a new bug to track this issue, or reopen this one?

Test version:
RHEV-H 7.0-20141202.el7ev
ovirt-node-3.1.0-0.28.20141126git25ce016.el7

Test result:
Exception "OSError (30, 'Read-only file system')"

Log:
I have uploaded all log info as attachments:
/var/log/*.*
/tmp/ovirt.log
sosreport

Thanks!
(In reply to shaochen from comment #23)
...
> Log:
> I have uploaded all log info as attachments:
> /var/log/*.*
> /tmp/ovirt.log
> sosreport

Where did you upload these new files? I do not see them attached.

Once we know what happened, we can decide whether to reopen this bug or create a new one.
Created attachment 966151 [details] log.tar.gz
(In reply to shaochen from comment #23)
...
> Test version:
> RHEV-H 7.0-20141202.el7ev
> ovirt-node-3.1.0-0.28.20141126git25ce016.el7

Chen, if you still have RHEV-H 7 20141204 available, please check with that build.

Also: How reproducible is this bug?
(In reply to Fabian Deutsch from comment #26)
> (In reply to shaochen from comment #23)
> ...
> > Test version:
> > RHEV-H 7.0-20141202.el7ev
> > ovirt-node-3.1.0-0.28.20141126git25ce016.el7
>
> Chen, if you still have RHEV-H 7 20141204 available, please check with that
> build.

The reason for this bug: in pre-1204 builds, multipath was claiming the device during installation. Using find_multipath fixes this issue.
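For anyone hitting this later: "find_multipath" here presumably maps to device-mapper-multipath's find_multipaths setting, which stops multipathd from claiming every non-blacklisted device and only sets up multipath on devices that actually have more than one path. A minimal multipath.conf sketch of that setting (it is an assumption on my part that this is the option the fix relies on):

defaults {
    find_multipaths yes
}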
(In reply to Fabian Deutsch from comment #26)
> (In reply to shaochen from comment #23)
> ...
> > Test version:
> > RHEV-H 7.0-20141202.el7ev
> > ovirt-node-3.1.0-0.28.20141126git25ce016.el7
>
> Chen, if you still have RHEV-H 7 20141204 available, please check with that
> build.

Hi fabiand,

I tested several times with the RHEV-H 7 20141204 build and didn't hit this issue again. After appending find_multipath to the cmdline, I still can't reproduce it.

> Also: How reproducible is this bug?

RHEV-H 7.0-20141202.el7ev, how reproducible: 30%
RHEV-H 7.0-20141204, how reproducible: 0%

Thanks!
> > Also: How reproducible is this bug?
> RHEV-H 7.0-20141202.el7ev, how reproducible: 30%
> RHEV-H 7.0-20141204, how reproducible: 0%

I tried almost ten times with RHEV-H 7.0-20141204 and didn't hit this issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2015-0160.html