Bug 1006838 - Update from rhevh 3.2 to rhevh 3.3 modifies the network and puts the host into a non-responsive state
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ovirt-node
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Fabian Deutsch
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1021647 1048932
Blocks:
Reported: 2013-09-11 11:12 UTC by Artyom
Modified: 2014-01-06 14:32 UTC
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-10-25 08:53:09 UTC
Target Upstream Version:
Embargoed:


Attachments
Screenshots and engine log (130.00 KB, application/gzip)
2013-09-11 11:12 UTC, Artyom

Description Artyom 2013-09-11 11:12:28 UTC
Created attachment 796344 [details]
Screenshots and engine log

Description of problem:
Upgrading RHEV-H from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev succeeded, but the upgrade modified the network configuration and, as a result, the host was in a non-responsive state after the restart, with no SSH access. Also, when entering the TUI (physical access to the machine is available), under the Network tab under 'Available System NICs' there are 4 bonds with MACs 00:00:00:00:00:00 (see the screenshots for more details).
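
For reference, the zero-MAC bonds can also be checked from a shell on the host. A minimal Python sketch, assuming only the standard Linux sysfs layout under /sys/class/net:

import os

SYS_NET = "/sys/class/net"

for dev in sorted(os.listdir(SYS_NET)):
    # Bond masters expose a "bonding" subdirectory in sysfs.
    if not os.path.isdir(os.path.join(SYS_NET, dev, "bonding")):
        continue
    with open(os.path.join(SYS_NET, dev, "address")) as f:
        mac = f.read().strip()
    if mac == "00:00:00:00:00:00":
        print("bond with zero MAC: %s" % dev)

On an affected host this should print one line per bond reporting 00:00:00:00:00:00.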

Version-Release number of selected component (if applicable):
host - from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20130910.2.el6ev
rhevm - is13

How reproducible:
Always

Steps to Reproduce:
1. Have a host with rhev-hypervisor6-6.4-20130815.0.el6_4
2. Install rhev-hypervisor6-6.5-20130910.2.el6ev on the rhevm machine
3. Put the host into maintenance and upgrade it (one scripted way is sketched below)
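
Step 3 can also be driven over the RHEV-M REST API instead of the web UI. A hedged sketch, assuming the oVirt/RHEV 3.x "deactivate" (maintenance) and "install" (re-image with a RHEV-H ISO) host actions; the engine URL, credentials, host id, and image name are placeholders:

import requests  # assumption: python-requests is installed on the client

ENGINE = "https://rhevm.example.com/api"  # placeholder engine address
AUTH = ("admin@internal", "password")     # placeholder credentials
HOST = ENGINE + "/hosts/<host-id>"        # fill in the real host id
HEADERS = {"Content-Type": "application/xml"}

# Put the host into maintenance.
requests.post(HOST + "/deactivate", data="<action/>",
              headers=HEADERS, auth=AUTH, verify=False)

# Re-image the host with the new RHEV-H ISO (placeholder image name).
body = "<action><image>rhev-hypervisor.iso</image></action>"
requests.post(HOST + "/install", data=body,
              headers=HEADERS, auth=AUTH, verify=False)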

Actual results:
The host proceeds to installation of the new ISO and then reboots; after the reboot, the host is in a non-responsive state.

Expected results:
Host state UP

Additional info:
The same thing happens again: after the network was repaired (changed the mode of one NIC to DHCP) and the host added again, I see 4 bonds with MACs 00:00:00:00:00:00 once more, but the network continues to work.

Comment 2 Fabian Deutsch 2013-09-16 15:23:13 UTC
Has any NIC/bond got a valid IP configuration?

Comment 3 Artyom 2013-09-17 05:44:09 UTC
All NICs and bonds show an unconfigured status in the TUI, but SSH works fine; also, as root, ifconfig shows that the rhevm NIC has the correct IP.
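
One way to double-check that from the shell; a small Python sketch, assuming the iproute "ip" binary is present (as on RHEL 6):

import subprocess

# List every device that actually holds an IPv4 address.
out = subprocess.check_output(["ip", "-o", "-4", "addr", "show"])
for line in out.decode().splitlines():
    # "ip -o" prints one record per line: "<idx> <dev> inet <addr>/<prefix> ..."
    fields = line.split()
    print("%s has %s" % (fields[1], fields[3]))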

Comment 4 Fabian Deutsch 2013-10-09 14:17:55 UTC
I'd suggest that we disable the network page when RHEV-H is managed. We'll still need to fix the status screen to display assigned IPs.

Mike,

does this align with the expected and previous behavior?

Comment 5 Mike Burns 2013-10-09 14:34:34 UTC
Yes, in previous versions we lock networking once the host is registered, and we should continue that going forward.

Sounds like we have a bug with the upgrade...

Comment 6 Fabian Deutsch 2013-10-22 16:50:50 UTC
(In reply to Artyom from comment #0)
> Created attachment 796344 [details]
> Screenshots and engine log
> 
> Description of problem:
> Upgrading RHEV-H from rhev-hypervisor6-6.4-20130815.0.el6_4 to
> rhev-hypervisor6-6.5-20130910.2.el6ev succeeded

Just to confirm: The update went well and the host was responsive after the update?

> but the upgrade modified the network configuration and, as a result, the
> host was in a non-responsive state after the restart, with no SSH access.

What exactly did you configure?

> Also, when entering the TUI (physical access to the machine is available),
> under the Network tab under 'Available System NICs' there are 4 bonds with
> MACs 00:00:00:00:00:00 (see the screenshots for more details)

That's a different issue tracked in bug 989419.

> Steps to Reproduce:
> 1. Have a host with rhev-hypervisor6-6.4-20130815.0.el6_4
> 2. Install rhev-hypervisor6-6.5-20130910.2.el6ev on the rhevm machine
> 3. Put the host into maintenance and upgrade it

Don't you also need the next step you described above:
4. Reconfigure networking?

Comment 7 Artyom 2013-10-23 14:33:45 UTC
I upgraded RHEV-H and the upgrade process succeeded, but the upgrade process, not I, dropped the network settings, and that put the host into a non-responsive state (no IP). So after the upgrade I needed physical access to the host (power management is also an option) to reconfigure the network via the TUI. After reconfiguration the host is in the Up state with the new RHEV-H version (that is why I said the upgrade succeeded).

Comment 8 Fabian Deutsch 2013-10-23 14:58:36 UTC
(In reply to Artyom from comment #7)
> I upgraded RHEV-H and the upgrade process succeeded, but the upgrade
> process, not I, dropped the network settings, and that put the host into a
> non-responsive state (no IP). So after the upgrade I needed physical access
> to the host (power management is also an option) to reconfigure the network
> via the TUI. After reconfiguration the host is in the Up state with the new
> RHEV-H version (that is why I said the upgrade succeeded).

Right.
Can you provide all vdsm, rhevh and rhevm related log files?
For the node, they are /var/log/ovirt.log and /var/log/ovirt-node.log.
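
A minimal Python sketch for gathering these into one attachment; the /var/log/vdsm path for the vdsm logs is an assumption based on vdsm's default location:

import glob
import os
import tarfile

logs = ["/var/log/ovirt.log", "/var/log/ovirt-node.log"]
logs += glob.glob("/var/log/vdsm/*.log")  # assumed default vdsm log location

# Bundle everything into one archive to attach to the bug.
with tarfile.open("/tmp/node-logs.tar.gz", "w:gz") as tar:
    for path in logs:
        if os.path.exists(path):  # skip anything not present on this build
            tar.add(path)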

Comment 9 Artyom 2013-10-23 18:23:34 UTC
OK, I did the upgrade again, from rhev-hypervisor6-6.4-20130815.0.el6_4 to rhev-hypervisor6-6.5-20131011.0.el6, and it succeeded: after the restart the host responds and is in the Up state, and SSH to the host also works fine. So I think the problem was with the rhev-hypervisor6-6.5-20130910.2.el6ev build.
So I think it is possible to close the bug.
If I hit the same problem in the future, I will reopen this bug or open a new one.

Comment 10 Fabian Deutsch 2013-10-25 08:53:09 UTC
Closing the bug for the reasons given in comment 9.

