Bug 1118252 - [Doc]Examples in 5.2 appear to have an error
Summary: [Doc]Examples in 5.2 appear to have an error
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: doc-Virtualization_Deployment_and_Administration_Guide
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Dayle Parker
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-10 09:48 UTC by Brian (bex) Exelbierd
Modified: 2019-03-06 01:06 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-20 04:41:21 UTC
Target Upstream Version:
Embargoed:



Description Brian (bex) Exelbierd 2014-07-10 09:48:03 UTC
Description of problem:

The examples in Section 5.2 rely on a bridge device, br0, that is neither created prior to the example nor mentioned in the preceding text. As far as I know, the default interface is virbr0 and should be used instead.

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Guest_virtual_machine_installation_overview-Creating_guests_with_virt_install.html
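For context, the examples in question are virt-install invocations along these lines (a hypothetical sketch, not the guide's exact command; the guest name, disk path, and install location are made up), where --network bridge=br0 presumes a host bridge named "br0" already exists:

```shell
# Illustrative sketch only -- not the guide's literal example.
# "--network bridge=br0" attaches the guest to a host bridge named
# "br0", which must already have been created in the host's network
# configuration.
virt-install \
    --name rhel7-guest \
    --ram 2048 \
    --vcpus 2 \
    --disk path=/var/lib/libvirt/images/rhel7-guest.img,size=8 \
    --location http://example.com/rhel7/os/ \
    --network bridge=br0 \
    --graphics none
```

If no such bridge exists on the host, the command fails at guest definition time, which is the gap in the forward text this bug reports.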


Comment 5 Laine Stump 2014-10-09 06:52:23 UTC
Oh, so to answer the initial question - this BZ should be closed as NOTABUG. The reporter is mixed up about where the virbrX names are used.

Comment 10 Laine Stump 2015-06-18 14:45:31 UTC
> libvirt-created bridges are called "virbr0"

A very common problem we encounter is this: people see the "virbr0"-style bridge device names that libvirt uses when it creates transient bridges for libvirt virtual networks, and decide that is a nice naming convention for bridges they create themselves. *Then* they have problems and send parts of their ifconfig or other output to a mailing list, and the people who see the device name there assume they are using a libvirt-created network and offer advice accordingly, only for that advice to lead them further astray.

Because of this, whenever I see a discussion of these naming conventions I feel the need to point out that bridges created by libvirt are not used in the same manner as bridges that are created by the system network config - mixing best practices for the two leads to a horrible mess that is best remedied with a stick of dynamite and a match. The corollary is that it is *very* important to not use the virbrX style in any set of instructions detailing how to create a bridge in the system network configuration.
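To make the distinction concrete, a system-created bridge on RHEL 7 is typically defined in the network-scripts configuration along these lines (an illustrative sketch; the DHCP setting and the "eth0" interface name are assumptions about a particular host):

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (illustrative sketch)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-eth0  (enslave the physical NIC)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0
```

Note the bridge is deliberately named "br0", not "virbrX": the virbrX namespace is reserved in practice for the transient bridges libvirt creates for its own virtual networks.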

Also, I want to point out that, while it may work to connect to a libvirt virtual network using "<interface type='bridge'> <source bridge='virbr0'/>...", this is an extremely bad practice and should never be recommended or suggested. Instead, the guest should be connected with "<interface type='network'> <source network='default'/>..." (or whatever is the name of the libvirt network). Using type='bridge' is performing an "end-run" around libvirt's management of the network and could lead to problems in the future.
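The two attachment styles described above look like this in the guest's domain XML (fragments only; "default" is libvirt's standard NAT network name):

```xml
<!-- Discouraged: names libvirt's transient bridge directly, doing an
     end-run around libvirt's management of the network -->
<interface type='bridge'>
  <source bridge='virbr0'/>
</interface>

<!-- Recommended: attach to the libvirt virtual network by name, and
     let libvirt resolve it to the right bridge -->
<interface type='network'>
  <source network='default'/>
</interface>
```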

> (ie. if you have to shut down libvirt, you lose network connectivity). 

I don't understand what you're referencing here - stopping or restarting libvirtd will never cause a loss of network connectivity (the only network-related effect will be to reload all libvirt-added iptables rules which, if anything, would *restore* connectivity that may have been disrupted by some third party adding iptables rules that interfere with those previously added by libvirt).

Any networks created by libvirt will remain in service even after libvirtd is stopped. This is by design, so that upgrades can be performed easily without any service disruptions. It is true that explicitly stopping a libvirt-created *network* will cause guests using that network to lose their connectivity (and they will need to be restarted, after restarting the network, to get it back), but that does not match your statement.

Comment 11 Dayle Parker 2015-06-29 02:06:59 UTC
Thanks for the explanation, Laine -- that's a very good reason for keeping any instructions with "br0" as the example.

Re:
> (ie. if you have to shut down libvirt, you lose network connectivity). 

I must have misunderstood some information I'd seen, as I had thought that if libvirtd stops, any libvirt-created bridges lose connectivity. Thanks for clarifying that, it's good to know.

Comment 12 Dayle Parker 2015-07-20 04:40:32 UTC
As the work is complete and has been verified by SMEs, moving to VERIFIED.

Comment 13 Dayle Parker 2015-07-20 04:41:21 UTC
The revised guest installation chapter is now publicly available as part of the most recent asynchronous docs release, and can be viewed here:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/chap-Virtual_machine_installation.html

Closing as CURRENTRELEASE.

