Bug 1006838

Summary: Update from rhevh 3.2 to rhevh 3.3 modifies the network and puts the host into a non-responsive state
Product: Red Hat Enterprise Linux 6
Reporter: Artyom <alukiano>
Component: ovirt-node
Assignee: Fabian Deutsch <fdeutsch>
Status: CLOSED NOTABUG
QA Contact: Virtualization Bugs <virt-bugs>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 6.5
CC: acathrow, alukiano, bsarathy, cshao, gouyang, hadong, huiwa, iheim, jboggs, leiwang, lsong, mburns, ovirt-maint, yaniwang, ycui, yeylon
Target Milestone: rc
Keywords: Regression
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-10-25 08:53:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1021647, 1048932
Bug Blocks:
Attachments: Screenshots and engine log (flags: none)

Description Artyom 2013-09-11 11:12:28 UTC
Created attachment 796344 [details]
Screenshots and engine log

Description of problem:
Upgrading RHEV-H from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev succeeded, but the upgrade modified the network configuration; as a result, the host is in a non-responsive state after the reboot and is unreachable over SSH. Also, when entering the TUI (physical access to the machine is available), under the Network tab under 'Available System NICs' you can see 4 bonds with MAC 00:00:00:00:00:00 (see the screenshots for details).
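
A minimal sketch of how the all-zero bond MACs could be checked from a shell on the host, assuming a standard Linux sysfs layout (a bonding device exposes a /sys/class/net/<iface>/bonding directory); this is illustrative, not taken from the RHEV-H code:

    import os

    SYS_NET = "/sys/class/net"

    def zero_mac_bonds():
        """Return bond interfaces whose MAC address is all zeros."""
        bonds = []
        for iface in os.listdir(SYS_NET):
            # The 'bonding' subdirectory exists only for bonding devices.
            if not os.path.isdir(os.path.join(SYS_NET, iface, "bonding")):
                continue
            with open(os.path.join(SYS_NET, iface, "address")) as f:
                mac = f.read().strip()
            if mac == "00:00:00:00:00:00":
                bonds.append(iface)
        return bonds

    if __name__ == "__main__":
        # On the affected host this should list the 4 bogus bonds.
        print(zero_mac_bonds())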

Version-Release number of selected component (if applicable):
host - from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev
rhevm - is13

How reproducible:
Always

Steps to Reproduce:
1. Have a host with rhev-hypervisor6-6.4-20130815.0.el6_4
2. Install rhev-hypervisor6-6.5-20130910.2.el6ev on the RHEV-M
3. Put the host into maintenance and upgrade it

Actual results:
Host installs the new ISO and reboots; after the reboot the host is in a non-responsive state.

Expected results:
Host state UP

Additional info:
The same thing happens again: after the network was repaired (by changing one NIC's mode to DHCP) and the host was re-added, I again see 4 bonds with MAC 00:00:00:00:00:00, but the network continues to work.

Comment 2 Fabian Deutsch 2013-09-16 15:23:13 UTC
Has any NIC/bond got a valid IP configuration?

Comment 3 Artyom 2013-09-17 05:44:09 UTC
All NICs and bonds show an unconfigured status in the TUI, but SSH works fine; also, as root, ifconfig shows the rhevm interface with the correct IP.
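
A minimal sketch of cross-checking what the TUI reports against the kernel, assuming a Linux host; 'rhevm' is the management interface name mentioned above, and SIOCGIFADDR is the standard ioctl behind what ifconfig prints:

    import fcntl
    import socket
    import struct

    SIOCGIFADDR = 0x8915  # standard Linux ioctl: read an interface's IPv4 address

    def ipv4_of(ifname):
        """Return the IPv4 address of an interface, as ifconfig would show it."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            req = struct.pack("256s", ifname[:15].encode())
            addr = fcntl.ioctl(s.fileno(), SIOCGIFADDR, req)[20:24]
            return socket.inet_ntoa(addr)
        finally:
            s.close()

    if __name__ == "__main__":
        # 'rhevm' is the management bridge name from the comment above.
        print(ipv4_of("rhevm"))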

Comment 4 Fabian Deutsch 2013-10-09 14:17:55 UTC
I'd suggest that we disable the network page when RHEV-H is managed. We'll still need to fix the status screen to display assigned IPs.

Mike,

does this align with the expected and previous behavior?

Comment 5 Mike Burns 2013-10-09 14:34:34 UTC
Yes, we lock networking when we're registered in previous versions and we should continue that going forward.

Sounds like we have a bug with the upgrade...

Comment 6 Fabian Deutsch 2013-10-22 16:50:50 UTC
(In reply to Artyom from comment #0)
> Created attachment 796344 [details]
> Screenshots and engine log
> 
> Description of problem:
> Upgrading RHEV-H from rhev-hypervisor6-6.4-20130815.0.el6_4 to
> rhev-hypervisor6-6.5-20130910.2.el6ev succeeded,

Just to confirm: The update went well and the host was responsive after the update?

> but the upgrade modified the network configuration; as a result, the host
> is in a non-responsive state after the reboot and is unreachable over SSH.

What exactly did you configure?

> Also, when entering the TUI (physical access to the machine is available),
> under the Network tab under 'Available System NICs' you can see 4 bonds
> with MAC 00:00:00:00:00:00 (see the screenshots for details)

That's a different issue tracked in bug 989419.

> Steps to Reproduce:
> 1. Have a host with rhev-hypervisor6-6.4-20130815.0.el6_4
> 2. Install rhev-hypervisor6-6.5-20130910.2.el6ev on the RHEV-M
> 3. Put the host into maintenance and upgrade it

Don't you also need the next step you described above?
4. Reconfigure networking

Comment 7 Artyom 2013-10-23 14:33:45 UTC
I upgraded RHEV-H and the upgrade process itself was a success, but the upgrade process, not I, dropped the network settings and put the host into a non-responsive state (no IP). So after the upgrade I needed physical access to the host (power management is also an option) to reconfigure the network via the TUI. After the reconfiguration the host is in the UP state with the new RHEV-H version (that is why I said the upgrade was a success).

Comment 8 Fabian Deutsch 2013-10-23 14:58:36 UTC
(In reply to Artyom from comment #7)
> I upgraded RHEV-H and the upgrade process itself was a success, but the
> upgrade process, not I, dropped the network settings and put the host into
> a non-responsive state (no IP). So after the upgrade I needed physical
> access to the host (power management is also an option) to reconfigure the
> network via the TUI. After the reconfiguration the host is in the UP state
> with the new RHEV-H version (that is why I said the upgrade was a success).

Right.
Can you provide all vdsm, rhevh and rhevm related log files?
For node they are /var/log/ovirt.log and /var/log/ovirt-node.log.
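
A minimal sketch of bundling those logs for attachment, assuming the Node paths named above; the vdsm path (/var/log/vdsm) is a common default and an assumption here, not something stated in this bug:

    import os
    import tarfile

    # /var/log/ovirt.log and /var/log/ovirt-node.log are the Node logs named
    # above; /var/log/vdsm is a common vdsm log location (assumption).
    LOG_PATHS = ["/var/log/ovirt.log", "/var/log/ovirt-node.log", "/var/log/vdsm"]

    def collect_logs(out="node-logs.tar.gz", paths=LOG_PATHS):
        """Pack the given log files/directories into a gzipped tarball."""
        with tarfile.open(out, "w:gz") as tar:
            for path in paths:
                if os.path.exists(path):
                    tar.add(path)
        return out

    if __name__ == "__main__":
        print(collect_logs())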

Comment 9 Artyom 2013-10-23 18:23:34 UTC
OK, I performed the upgrade again, from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20131011.0.el6, and it was a success: after the restart the host responds and is in the UP state, and SSH to the host also works fine. So I think the problem was with the rhev-hypervisor6-6.5-20130910.2.el6ev build.
So I think it is possible to close this bug.
If I run into the same problem in the future, I will reopen this bug or open a new one.

Comment 10 Fabian Deutsch 2013-10-25 08:53:09 UTC
Will close the bug given the reasons in comment 9.