Bug 1006838 - Update from RHEV-H 3.2 to RHEV-H 3.3 modifies the network and puts the host into a non-responsive state
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ovirt-node
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Fabian Deutsch
QA Contact: Virtualization Bugs
Keywords: Regression
Depends On: 1021647 1048932
Blocks:
Reported: 2013-09-11 07:12 EDT by Artyom
Modified: 2014-01-06 09:32 EST (History)
16 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-10-25 04:53:09 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Screenshots and engine log (130.00 KB, application/gzip)
2013-09-11 07:12 EDT, Artyom
Description Artyom 2013-09-11 07:12:28 EDT
Created attachment 796344 [details]
Screenshots and engine log

Description of problem:
The upgrade of RHEV-H from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev succeeded, but it modified the network configuration; as a result, the host was in a non-responsive state after the restart and unreachable over SSH. Also, when entering the TUI (physical access to the PC is available), the Network tab lists, under 'Available System NICs', four bonds with MAC 00:00:00:00:00:00 (see the screenshots for more details).
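The zero-MAC bond symptom described above can be checked from a shell on the host. This is a minimal sketch using sysfs, not something from the bug report itself; the interface names it reports are whatever the host actually exposes:

```shell
# Scan all network interfaces and collect those whose MAC address is
# all zeros -- the symptom shown in the TUI screenshots above.
zero_mac_ifaces=""
for dev in /sys/class/net/*; do
    # "|| true" keeps the loop going for devices without an address file
    mac=$(cat "$dev/address" 2>/dev/null || true)
    if [ "$mac" = "00:00:00:00:00:00" ]; then
        zero_mac_ifaces="$zero_mac_ifaces $(basename "$dev")"
    fi
done
echo "zero-MAC interfaces:${zero_mac_ifaces:- none}"
```

On an affected host this should list the four phantom bonds (note that the loopback device also legitimately reports an all-zero MAC).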

Version-Release number of selected component (if applicable):
host - from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev
rhevm - is13

How reproducible:
Always

Steps to Reproduce:
1. Have host with rhev-hypervisor6-6.4-20130815.0.el6_4
2. Install on rhevm rhev-hypervisor6-6.5-20130910.2.el6ev
3. Put host to maintenance and upgrade

Actual results:
The host installs the new ISO and then reboots; after the reboot, the host is in a non-responsive state.

Expected results:
Host state UP

Additional info:
The same thing happens after the network is repaired (by changing the mode of one NIC to DHCP) and the host is re-added: I again see four bonds with MAC 00:00:00:00:00:00, but the network continues to work.
Comment 2 Fabian Deutsch 2013-09-16 11:23:13 EDT
Has any NIC/bond got a valid IP configuration?
Comment 3 Artyom 2013-09-17 01:44:09 EDT
All NICs and bonds show an unconfigured status in the TUI, but SSH works fine, and as root, ifconfig shows that the rhevm NIC has the correct IP.
Comment 4 Fabian Deutsch 2013-10-09 10:17:55 EDT
I'd suggest that we disable the network page when RHEV-H is managed. We'll still need to fix the status screen to display assigned IPs.

Mike,

does this align with the expected and previous behavior?
Comment 5 Mike Burns 2013-10-09 10:34:34 EDT
Yes, we lock networking when we're registered in previous versions and we should continue that going forward.

Sounds like we have a bug with upgrade...
Comment 6 Fabian Deutsch 2013-10-22 12:50:50 EDT
(In reply to Artyom from comment #0)
> Created attachment 796344 [details]
> Screenshots and engine log
> 
> Description of problem:
> Upgrade rhevh from rhev-hypervisor6-6.4-20130815.0.el6_4 to
> rhev-hypervisor6-6.5-20130910.2.el6ev was success 

Just to confirm: The update went well and the host was responsive after the update?

> but modify network and as
> result, host in non-responsive state after restart and without ssh. 

What exactly did you configure?

> Also
> when enter to TUI(pc physical available) under Network tab under 'Available
> System Nics' you can see 4 bonds with macs 00:00:00:00:00:00(see screenshot
> for more details)

That's a different issue tracked in bug 989419.

> Steps to Reproduce:
> 1. Have host with rhev-hypervisor6-6.4-20130815.0.el6_4
> 2. Install on rhevm rhev-hypervisor6-6.5-20130910.2.el6ev
> 3. Put host to maintenance and upgrade

Don't you also need the next step you described above:
4. Reconfigure networking?
Comment 7 Artyom 2013-10-23 10:33:45 EDT
I upgraded RHEV-H and the upgrade process succeeded, but the upgrade process, not I, dropped the network settings, which put the host into a non-responsive state (no IP). So after the upgrade I needed physical access to the host (power management is also an option) to reconfigure the network via the TUI. After reconfiguration, the host was in the Up state with the new RHEV-H version (which is why I said the upgrade was successful).
Comment 8 Fabian Deutsch 2013-10-23 10:58:36 EDT
(In reply to Artyom from comment #7)
> I upgraded rhevh and upgrade process was success, but upgrade process, not
> I, dropped network setting and it put host to non responsive state(no ip),
> so after upgrade I was need physical access to host(via power management
> also option) to reconfigure network via TUI. After reconfiguration host in
> up state with new version of rhevh(because this I said that upgrade was
> success)

Right.
Can you provide all vdsm, rhevh and rhevm related log files?
For node they are /var/log/ovirt.log and /var/log/ovirt-node.log.
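Collecting the files named above can be sketched as follows. The archive name is hypothetical, and the vdsm path is the conventional location rather than one stated in this bug; only files that actually exist get bundled:

```shell
# Bundle the RHEV-H node and vdsm log files requested above into a
# single archive for attachment to the bug.
out=/tmp/bug1006838-logs.tar.gz   # hypothetical bundle name
files=""
for f in /var/log/ovirt.log /var/log/ovirt-node.log /var/log/vdsm/vdsm.log; do
    if [ -f "$f" ]; then
        files="$files $f"
    fi
done
if [ -n "$files" ]; then
    tar czf "$out" $files
    echo "bundled:$files -> $out"
else
    echo "no log files found on this machine"
fi
```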
Comment 9 Artyom 2013-10-23 14:23:34 EDT
OK, I did the upgrade again, from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20131011.0.el6, and it succeeded: after the restart the host responded and was in the Up state, and SSH to the host also worked fine. So I think the problem was with the build rhev-hypervisor6-6.5-20130910.2.el6ev.
So I think it is possible to close the bug.
If I hit the same problem in the future, I will reopen this bug or open a new one.
Comment 10 Fabian Deutsch 2013-10-25 04:53:09 EDT
Will close the bug given the reasons in comment 9.
