Bug 1470018

Summary: [Task-based rework] Virtual storage
Product: Red Hat Enterprise Linux 7
Reporter: Jiri Herrmann <jherrman>
Component: doc-Virtualization_Deployment_and_Administration_Guide
Assignee: Yehuda Zimmerman <yzimmerm>
Status: CLOSED CURRENTRELEASE
QA Contact: Jaroslav Suchanek <jsuchane>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 7.5
CC: jferlan, mkalinin, rhel-docs
Target Milestone: rc
Keywords: Documentation
Target Release: ---
Flags: yzimmerm: needinfo+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 12:46:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1430025

Comment 3 John Ferlan 2018-03-08 13:31:20 UTC
Unclear why there are two bugs tracking exactly the same thing (bz 1258793 points at the same link above). If one or the other is preferred for updates, let me know; otherwise, I'll assume this one is preferred. I'll work section by section - never quite sure how long it'll take to finish, but I am working on it now and will provide some feedback shortly.

Comment 4 John Ferlan 2018-03-08 13:32:21 UTC
"Editorial comment' - rather than "storage pool", would it be more
appropriate to use "Storage Pool"? That would make it Storage Volume.
It's not that important and I defer to the documentation experise over
a personal preference.


......

[Section 13.1]

Paragraph 1:
Change:
The volumes are then assigned to guest virtual machines as block devices.

To:
Each Storage Volume is assigned to a guest virtual machine as a block
device on a guest bus.


Paragraph 3, 4, & 5 consider:

The libvirt APIs can be used to query the list of Volumes within the
Storage Pool or to get information regarding the Capacity, Allocation,
and Available storage within the Pool. A Storage Volume within the
Pool may be queried to get information such as Allocation and Capacity,
which may differ for sparse volumes.

For Storage Pools that support it, the libvirt APIs can be used to
Create, Clone, Resize, or Delete Volumes, Upload data to Volumes,
Download data from Volumes, or Wipe data on the Volume.

Once a Storage Pool is started, a Storage Volume may be assigned
to a guest using the Storage Pool name and Volume name rather than
needing to provide the host path to the volume in the domain XML.

Storage Pools may be stopped or destroyed, thus removing the abstraction
of the data but keeping the data itself intact.
For example, a storage administrator responsible for an NFS server that
uses "mount -t nfs nfs.example.com:/path/to/share /path/to/data" could
define an NFS Storage Pool on the virtualization host to describe the
exported server path and the client target path in order to allow libvirt
to perform the mount either automatically when libvirt is started or as
needed while libvirt is running. Files within the NFS Server's exported
directory are listed as Storage Volumes within the NFS Storage Pool.
When the Volume is added to the guest, the Administrator does not
need to specify the target path to the volume, just the pool by name and
the volume by name. This eases guest administration if
the target client path needs to be changed.
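
To make this concrete, a rough sketch of what the NFS pool and the
corresponding guest disk element might look like (host name, paths, and
volume name here are illustrative, not from the guide):

   <pool type='netfs'>
     <name>nfspool</name>
     <source>
       <host name='nfs.example.com'/>
       <dir path='/path/to/share'/>
       <format type='nfs'/>
     </source>
     <target>
       <path>/path/to/data</path>
     </target>
   </pool>

   <disk type='volume' device='disk'>
     <driver name='qemu' type='raw'/>
     <source pool='nfspool' volume='guest1.img'/>
     <target dev='vdb' bus='virtio'/>
   </disk>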

Comment 5 John Ferlan 2018-03-08 13:34:55 UTC
[Section 13.2.1]

I found it a bit redundant to indicate within the "Networked (shared)
storage pools" section that "Networked storage pools are managed
by libvirt."  A Networked Storage Pool just exposes the Volumes for
guest usage that are defined by some networked storage protocol.
The management of the source storage server is outside the scope
of libvirt's management. In a way libvirt ends up being a client
to the storage pool server and then provides the capabilities or
mechanism to expose the volumes for guest usage without needing
to provide the server information within the guest XML. Perhaps
the best way to think about this is via an example. Consider that
the guest XML can either be:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='iscsi' name='iqn.2013-06.com.example:iscsi-pool/1'>
      <host name='iscsi.example.com' port='3260'/>
      <auth username='myuser'>
        <secret type='iscsi' usage='libvirtiscsi'/>
      </auth>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

or

  <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='iscsi-pool' volume='unit:0:0:1'/>
    <target dev='vda' bus='virtio'/>
  </disk>

where the 'iscsi-pool' would be defined as:

  <pool type="iscsi">
    <name>iscsi-pool</name>
    <source>
      <host name="iscsi.example.com"/>
      <device path="iqn.2013-06.com.example:iscsi-pool"/>
      <auth type='chap' username='myuser'>
        <secret usage='libvirtiscsi'/>
      </auth>
    </source>
    <target>
      <path>/dev/disk/by-path</path>
    </target>
  </pool>

The guest XML refers to the same actual volume, but the volume
format within the guest XML removes the need to continually repeat
the source protocol and authentication information for each volume
on the storage source that's being exposed.

.....

Your reference to "Partition-based" Storage Pool initially confused
me until I read Section 13.2.3.3 where I figured out what you are
referring to is a "Filesystem Based Storage Pool". When I think of
Partition I'm thinking of the Disk Storage Pool since essentially what
one does is partition the disk via 'parted'.

So, let's call it a "Filesystem-based Storage Pool". Also, technically
there's a SCSI-based Storage Pool *and* an iSCSI-based Storage Pool.
The vHBA-based Storage Pool is just an "implementation detail" of a
SCSI-based pool.

As an aside, starting with RHEL 7.5, adding a volume from a Veritas
HyperScale daemon is supported for domain XML only. The concept of
a Storage Pool for VxHS doesn't apply. Not sure if it needs to be
mentioned or not, but figured I would mention it here.

Comment 6 John Ferlan 2018-03-08 13:38:12 UTC
[Section 13.2.2.1]

Editorial comment - item #2 Define the Storage Pool starts on a separate
line from the actual number "2" unlike the "1. Read recommendations ...".

To answer highlighted questions:

Persistent Storage Pools survive hypervisor host reboot while a Transient
Storage Pool exists only as long as the hypervisor Host is running. A
transient Storage Pool would also be removed if the Storage Pool is
destroyed via some virsh command or API call. All that is removed is
the "concept" of the Storage Pool; the actual physical storage associated
with the pool is not removed.


The 'fs' example should not include the <host...> - that's only needed
for networked storage pools.

The 'fs' example should not include the <dir...> - that's only needed
for directory, nfs, or gluster pools.

The 'fs' example should not include the <name...> - that's only needed
for logical, rbd, sheepdog, and gluster pools to provide the source
name element.

Another example:

<pool type='dir'>
  <name>etcpool</name>
  <source>
  </source>
  <target>
    <path>/etc</path>
  </target>
</pool>

This would create a Storage Pool listing all the files in the /etc
directory.  Volumes can then be added to or removed from the pool.

The "virsh pool-define" command is used to create a Persistent Pool
while the "virsh pool-create" would be used to create a Transient Pool.
The pool-define requires a pool-start, while the pool-create automatically
will start the pool.
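
For example, assuming the etcpool XML above were saved to a file named
etcpool.xml (the file name is mine), the two flows would look roughly like:

   # Persistent pool: survives a host reboot, but must be started explicitly
   virsh pool-define etcpool.xml
   virsh pool-start etcpool
   virsh pool-autostart etcpool    # optional: start the pool automatically

   # Transient pool: started immediately, gone after pool-destroy or a reboot
   virsh pool-create etcpool.xml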

.....

I note that in step 3 the "Verify that the pool was created" text is on the
subsequent line (similar to step 2, but different from steps 1 & 4).

In addition to the 'virsh pool-list --all', once a pool is defined or
created one can also use 'virsh pool-dumpxml' in order to list the XML
details for the pool. For example:

# virsh pool-dumpxml etcpool
<pool type='dir'>
  <name>etcpool</name>
  <uuid>ca82ddbc-fcfb-4fc4-b42f-5b9afc85a060</uuid>
  <capacity unit='bytes'>52710469632</capacity>
  <allocation unit='bytes'>39396331520</allocation>
  <available unit='bytes'>13314138112</available>
  <source>
  </source>
  <target>
    <path>/etc</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:etc_t:s0</label>
    </permissions>
  </target>
</pool>


.....
Step 4...

You call it the mount point - I call it the Storage Pool target path.
The pool-build command doesn't "create the mount point"; rather, it
formats the source device for usage.

Building the target path is only necessary for the Disk, Filesystem, and
Logical Storage Pools. The building of a pool initializes the storage
source device and defines the format of the data. If libvirt detects that
the source storage device has a different format, it will fail to build
in order to protect existing data. Usage of the --overwrite qualifier
will override the failure.

For the virsh pool-create option the target path should already exist
prior to the pool-create command, but supplying the --build option on
the virsh command line will perform the build just like the pool-build
command option. Similarly, the --overwrite option is available for
the pool-create command.

Defining the format is storage pool dependent. A filesystem build performs
a mkfs for the specified storage source format. A disk build performs
the 'parted' partition creation command for the specified storage
source format. A logical build performs the 'vgcreate' volume group
creation command using the pool name as the volume group name.
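
As a rough sketch (the pool name is from the guide's example; the XML file
name is mine):

   # Build (format) the source device; this fails if an existing format is
   # detected, unless --overwrite is supplied
   virsh pool-build guest_images_fs
   virsh pool-build guest_images_fs --overwrite

   # For a transient pool, the build can be folded into the create step
   virsh pool-create guest_images_fs.xml --build --overwrite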


----

Step 5 and Step 7 repeat the "mount point" which should be changed.

For Step 5, a pool-start command performs the pool specific command
in order to make the storage volumes available for usage by guests.
For example, for the Filesystem pool a start will mount the file
system; whereas, for a Logical pool the 'vgchange' command is run
to make the volume group active.
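
For example (pool name from the guide; the grep pattern assumes the target
path contains "guest_images"):

   virsh pool-start guest_images_fs
   # the source device should now be mounted on the pool's target path
   mount | grep guest_images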

In the Step 6 example there's an extra 's' in the example (e.g.
guest_images_fss - it should just be guest_images_fs).

For Step 7, it's a straight up change of "mount point" for "target
path".

[Section 13.2.2.2]

I think the "Note" is not all that important. Your example is doing
essentially the same operation using virt-manager instead of virsh.


In Step 3, the 3rd paragraph, which has "This example uses:", seems
to be missing a space before "fs: Pre-Formatted Block Device". Could
just be the rendering, though.

The "Note" below Figure 13.3 uses the "partition-based" nomenclature
instead of "Filesystem-based".

Comment 7 John Ferlan 2018-03-08 13:49:34 UTC
[Section 13.2.3]

I like the way these are laid out - especially the description and
various options for XML, pool command, and VMM related terminology.
FWIW: The pool-define-as and pool-create-as commands accept mostly
the same options, except for the build and overwrite options that
only pool-create-as allows for transient pools.

[Section 13.2.3.1]

Table 13.1 - pool-define-as uses just "target" not "target-dev".
It's the Target Path for the pool.

I'd change "FS_directory" to be "dirpool" using the command:

   virsh pool-define-as dirpool dir --target /guest_images

Similarly for Figure 13.4, using "dirpool" for the name and
just "/guest_images" for the Target Path.

Perhaps add a note that '/guest_images' would need to exist prior
to starting the pool.
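
Something along these lines would do (a sketch, reusing the names above):

   # The target path must exist before the pool is started
   mkdir -p /guest_images
   virsh pool-define-as dirpool dir --target /guest_images
   virsh pool-start dirpool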


------

[Section 13.2.3.2]

The "reformat and erase" only occurs if one overrides the current
format. More recent versions of libvirt will check and fail to build
or start if the detected format on the source device is different
than what is defined for the pool. I suppose the recommendation to
back up prior to usage is true if one cares about what currently
exists on the disk.

The prerequisites section essentially describes what the pool-build
command would do! One can perform the pool-define-as without reformatting
the source device; however, pool-start or using pool-create-as would
require either a matching format or the ability to overwrite the format.

That sequence of commands listed could be adjusted to list or print
the current format in order to ensure that there's nothing currently
on the disk. For example,

# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: IET VIRTUAL-DISK (scsi)
Disk /dev/sdb: 1074MB
Sector size (logical/physical): 512B/4096B
Partition Table: mac
Disk Flags:

Number  Start  End     Size    File system  Name   Flags
 1      512B   32.8kB  32.3kB               Apple

(parted) quit
#

This would indicate that something already exists on the source device.
Conversely, output such as:

# parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: IET VIRTUAL-DISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
(parted) quit
#

would indicate that the source device hasn't been formatted and
would not require an overwrite. Not all disk formats are known to
the parted command and libvirt does perform additional checks beyond
parted looking for formatted devices to ensure that data is not
overwritten.


In Table 13.2...

For the XML column, instead of path='source_path', use /dev/sdb,
thus "<device path='/dev/sdb'/>".  You should also split it across multiple
lines like the <target> row/column.

For the pool-define-as column, it's just 'target' not 'target-dev'.

For the virsh example:

  virsh pool-define-as phy_disk disk gpt --source-dev=/dev/sdb --target /dev

Your example listed 'fs' when it should have been 'disk', and you had
/dev/sdx instead of /dev/sdb.  The quotes around /dev aren't necessary.

......

[Section 13.2.3.3]

It's not a partition-based, it's "Filesystem-based"

Here like the "Disk" section, the prerequisite is essentally what the
pool-build command does with the source format. In order to define the
pool it's not required to perform the mkfs.ext4 command. The only reason
to have the prerequisite is if you don't perform the pool-build afterwards.

I forget at the moment how to determine what format would be on a source
device, although I suppose a mount command may tell one.
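
FWIW, I believe 'blkid' on the source device will show an existing
filesystem type, e.g. (output shape approximate):

   # blkid /dev/sdc1
   /dev/sdc1: UUID="..." TYPE="ext4"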

For Table 13.3

Let's add "For example, 'ext4' filesystem type" to the filesystem type row.

XML column, the "\name" should be "/name"
The <source> should be on its own line (like the <target> is)
and the </source> should also be on its own line.

The virsh command could be:

    virsh pool-define-as guest_images_fs fs --source-dev /dev/sd1
                                            --target /guest_images

(rather than using the dashes to indicate no option when parsing positionally
 based arguments).

Figure 13.6, your example using Name = "part_fs" doesn't match the virsh
example where the "name" is "guest_images_fs". Likewise, the target path
"/guest_images/part_fs" doesn't match the virsh "/guest_images".

It's not that important, unless you expect things to match!

Comment 8 John Ferlan 2018-03-08 13:50:39 UTC
dang - just noticed right as I'm pressing "save changes" that my example should be "--source-dev /dev/sdc1" (not /dev/sd1).

Comment 9 John Ferlan 2018-03-08 18:35:45 UTC
[Section 13.2.3.5]

NB: I don't typically use 'targetcli' as I find its syntax to be incredibly
    obtuse and difficult to use...

w/r/t the query in Procedure 13.5 Step 3 "{Is this correct}" - not 100%
      sure, but since the example uses "/dev/vdb1" - that means something
      has created/formatted "vdb1". First off, I think "sdb" should be used
      instead as "vdb" is more related to guest virtual devices; whereas,
      "sdb" indicates more of a SCSI-based storage device to me.  Secondly,
      the "sdb1" would further indicate to me that someone partitioned
      /dev/sdb (or /dev/vdb in your example) using disk partitioning rather
      than perhaps LVM where I'd expect the syntax to be /dev/{vg}/{lv},
      where {vg} is volume group name and {lv} is logical volume name.
      Using the "vgs" command you can find all your volume groups and the
      "lvs" command would find your logical volumes within the volume group.
      For example, "lvs fedora" would list a few lv's within the group. In
      my case, "root" is an lv. So "/dev/fedora/root" is where the device
      would be located.

Anyway, so to answer the question, I'd just say /dev/sdb1 is the first
partition of /dev/sdb and leave it at that.

Step 3.a.i, I think you mean the "backstores/block" not "blockstores" (at
least that's what targetcli shows me).  In fact Step 7 uses "backstores".

Step 4.b creates "iqn.2010-05.com.example.server1:iscsirhel7guest", but
Step 5 seems to use "iqn.iqn.2010-05.com.example.server1:iscsirhel7guest/tpg1".
Whatever a TPG is - no idea as it's not required for an iSCSI pool. That
same "iqn.iqn..." is replicated in step 7.a output.

Step 8a output shows "iqn.1994-05.com.redhat", perhaps that should be
"iqn.2010-05.com.example.server1"?  Certainly step c and the 2nd output
bubble seems to imply that, although the ".foo" there probably should be
".server1".

FWIW: I prefer to use the direct file manipulation method and manual creation
      updating /etc/tgt and /etc/iscsi files although it doesn't seem the two
      methods mix very well as targetcli doesn't see anything I've created
      using the files directly. I assume what I do is the "older" way of
      doing things.

So that said, whether the instructions work is up to your investigation.
Whether or not the ACLs are required for each initiator - again, I'm
not clear on that. If these instructions are copied from some other book
or somewhere else, perhaps it's best to just leave a pointer in this guide
to wherever those instructions are kept and leave it at that. That way
we know that whatever instructions are listed here won't become out of
date at some point in the future.

As for the optional procedures (13.6) - again, no idea. Makes me wonder if
it confuses the reader more or just lets them know that it is possible.

Procedure 13.7 - you list using "--portal server1.example.com" and the
output has 127.0.0.1:3260,1 - that differs from step 6.b which uses an
IP address of 143.22.16.33.  The 127.0.0.1 is also known as the 'localhost'
address.

Procedure 13.8 output bubbles indicate IP address of "10.0.0.1". As long
as they're consistent it doesn't matter. My examples typically show my
"virbr0" IP address of "192.168.122.1".

Procedure 13.9 could be titled "Using libvirt secrets for an iSCSI Storage
Pool". Essentially this is "required" if someone defined a "userid" and
"password" when setting up their iSCSI server (e.g. the 'set auth userid='
and 'set auth password=' commands in step 8, bubble 4 in procedure 13.5).

The example XML should be:

<secret ephemeral='no' private='yes'>
   <description>Passphrase for the iSCSI example.com server</description>
   <usage type='iscsi'>
      <target>iscsirhel7secret</target>
   </usage>
</secret>
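
For completeness, the commands to register the secret and give it a value
would be roughly as follows (the XML file name and password are placeholders;
the secret UUID comes from the define step):

   # virsh secret-define iscsi-secret.xml
   # MYSECRET=$(printf %s "password123" | base64)
   # virsh secret-set-value <secret-uuid> $MYSECRET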


A "note" to your note, the XML format for a <disk> element differs slightly:

<auth username='redhat'>
  <secret type='iscsi' usage='iscsirhel7secret'/>
</auth>

to the pool example of:

<auth type='chap' username='redhat'>
  <secret usage='iscsirhel7secret'/>
</auth>


Procedure 13.10.

While I understand the desire to describe everything related to using iSCSI
Storage Pools, the whole 'direct' and 'host' discussion can be quite confusing.
I always have to go back and look at the code. Perhaps the "best way" I can
describe it is: when using the "host" mode, the path to the iSCSI LUN is the
host-based path, e.g. /dev/disk/by-path; whereas, for "direct" mode the path
to the iSCSI LUN is based on the Uniform Resource Identifier (or URI). The
reason for this is to allow those that define guests on their iSCSI Server
to be able to use the host path to the guest rather than using the network.

In any case, I'm not sure it's really necessary to describe, but leave it
up to you.
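
If it is kept, maybe showing just the two <source> lines side by side makes
the difference clearer (pool and volume names reused from the comment 5
example):

   <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
   <source pool='iscsi-pool' volume='unit:0:0:1' mode='direct'/>

where 'host' resolves the LUN through the host path (e.g. /dev/disk/by-path)
and 'direct' addresses the LUN by its iSCSI URI over the network.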

Table 13.5...

Fix the label from "Disk-based" to "iSCSI-based"

XML column the "\name" should be "/name"

If possible, put <source> and <host... on separate lines.

The "device path" should be "<device path" *and* putting the </source>
on it's own line would be preferred.

If possible, the /target_path could be /dev/disk/by-path

To answer your question, it's the highly undocumented:

    <initiator>
      <iqn name='initiator0'/>
    </initiator>

which is a child of the <source> element like <host>... <device>...

I don't ever use it and learned by reading this that the usage would be
for if there was an ACL defined. As for a virsh option, there is none.
One would have to add the "--print-xml" option to their pool-define-as
command, pipe the output to a file, edit the file and add the above XML
snippet to add the initiator iqn name, then use the pool-define of that
XML file. A round-about way of doing things.
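
In other words, something along these lines (the pool name and file name are
mine; the host and IQN are from the earlier examples):

   # virsh pool-define-as --print-xml iscsirhel7pool iscsi \
         --source-host server1.example.com \
         --source-dev iqn.2010-05.com.example.server1:iscsirhel7guest \
         --target /dev/disk/by-path > iscsirhel7pool.xml

   (edit iscsirhel7pool.xml to add the <initiator> element under <source>)

   # virsh pool-define iscsirhel7pool.xml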

Another command that I was just reminded of while looking for initiator
information in virsh is:

# virsh find-storage-pool-sources-as iscsi 192.168.122.1
<sources>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-1g-disks'/>
  </source>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-chap-netpool'/>
  </source>
</sources>

might be useful to fit in somewhere to help someone figure out what
needs to be provided for the <source> based on what's discovered as
active iSCSI IQNs on the host.

[BTW: this is about all I've "saved up" while reviewing this guide on my various recent travels.  Continued updates will be more in the real time it takes me to review each section]

Comment 10 John Ferlan 2018-03-08 19:40:12 UTC
[Section 13.2.3.6]

From the recommendations section... I think you meant to say "LVM-based
Storage Pools do not provide the full flexibility of LVM". Rather than
indicating "Thin provisioning is currently not possible", perhaps it'd
be better noted that libvirt supports thin logical volumes, but does not
provide the features of thin pools.

Table 13.6 - mislabeled as "Disk-based" - should be "LVM-based" or more
technically "Logical-based".

Similar to previous comments, the XML column has problems

 * "<\name>" should be "</name>"
 * The <source> and <device path..." should be on separate lines
 * The only format type allowed is 'lvm2', so just use that
 * The </source> should be on its own line
 * The <target>, <path='...'>, and </target> should each be on their own line

NB: There may be multiple source device paths added if the volume group is
    to be made up from multiple disk partitions, e.g.:

   <source>
     <device path='/dev/sda1'/>
     <device path='/dev/sdb1'/>
     <device path='/dev/sdc2'/>
     ...
   </source>


The virsh command should only use virsh once and just use the switches, e.g.

   virsh pool-define-as guest_images_lvm logical --source-dev=/dev/sdc
                        --source-name libvirt_lvm --target /dev/libvirt_lvm

In order to be consistent, Figure 13.9 should use "/dev/libvirt_lvm" as the
target path as that's the VG name from the virsh example.

Comment 11 John Ferlan 2018-03-08 19:51:45 UTC
[Section 13.2.3.7]

Prerequisites:

To create a Network File System (NFS)-based Storage Pool, an NFS Server
should already be configured to be used by the host machine.

Table 13.7:

XML column

 * The "<\name>" should be "</name>"
 * The <source> and <host name..." should be on separate lines
 * The </source> should also be on a separate line

pool-define-as column

 * Use "source-host" not "sourcehost"

To be consistent with the XML output, the virsh command, and the vmm app, 
the virsh example would be:

  virsh pool-define-as nfspool netfs --source-host localhost
                       --source-path /home/net_mount
                       --target /var/lib/libvirt/images/nfspool

Comment 12 John Ferlan 2018-03-08 22:33:46 UTC
[Section 13.2.3.8]


The paragraph just before Prerequisites seems to have a few stray words...
I think you're trying to indicate "For more information on <path> and the
elements within <target>, see..."  BTW: If you add the #StoragePoolTarget
onto the html link you have, you'll link directly to the <path> and <target>
discussion, e.g.:

https://libvirt.org/formatstorage.html#StoragePoolTarget


Within the Prerequisites section, Step 3, you have "scsi_host#" which
should be "scsi_host3".

Table 13.8

XML column:

 * "<\name>" should be "</name>"
 * <source> and <adapter...> on separate lines
 * </source> on a separate line

pool-define-as column:

 * By using the --adapter-name NAME, --adapter-wwnn WWNN, and
   --adapter-wwpn WWPN, an fc_host type adapter is automagically defined.

VMM -

 * I've never used it - so I'm not sure it has all the required fields; it
   probably could be dropped, with a note that it doesn't have the
   functionality to define a vHBA SCSI-based Storage Pool.

Please remove the file permissions row - it'll be generated automagically.
It can be provided, but it's not required.

The "Important" dialog box - it took me a bit to figure out what was meant.
Using the "/dev/" will generate the unique short device path (e.g. /dev/sdX
device) for the volume device path instead of the physical host path, such as
/dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0. This allows
the same volume to be listed in multiple guests by multiple pools. If there
were multiple pools using the /dev/disk/by-path, the volumes listed in the
pool all display the same path value and that can cause duplicate device
type warnings.

In the "Note" dialog box... The parent is the physical HBA parent from which
the NPIV LUNs by varying paths can be used. This would be the scsi_hostN
with the "vports" and "max_vports" attributes from procedure 13.11 step 2.
The parent, parent_wwnn/parent_wwpn, or parent_fabric_wwn give varying
degrees of assurance that subsequent reboots use the same parent. Without
providing a parent, the libvirt code will find the first scsi_hostN adapter
that supports NPIV and use it. Using the parent_wwnn/parent_wwpn provides
the most assurance that across host reboots the same HBA is used. Using
only parent could result in problems if additional scsi_hostN's are added
into the configuration.  Using parent_fabric_wwn provides a way to pick
an HBA on the same fabric regardless of which exact scsi_hostN is used.
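
For instance, the parent_wwnn/parent_wwpn variant would look something like
this (the parent values here are placeholders):

   <source>
     <adapter type='fc_host' parent_wwnn='20000000c9831b4b'
              parent_wwpn='10000000c9831b4b'
              wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
   </source>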


In your "Examples" section there's a "THe" that should be "The".

Your first XML output example - doesn't need the <permissions> stanza and
will just pick the first HBA that is NPIV capable in order to create a vHBA.
The /dev/disk/by-path will list the volumes within the pool by their physical
host path.

Your second XML output example - again doesn't need the <permissions> stanza
and will use the existing "scsi_host3" as the HBA in order to create the vHBA.
Use of the /dev/ in the path will result in the volumes listed by their
/dev/sdX path.

Currently the virsh command doesn't provide a way to define the parent_wwnn,
parent_wwpn, or parent_fabric_wwn fields.

So the virsh command should be:

# virsh pool-define-as vhbapool_host3 scsi --adapter-parent scsi_host3
            --adapter-wwnn 5001a4a93526d0a1 --adapter-wwpn 5001a4ace3ee047d
            --target /dev/disk/by-path

generates the XML:

<pool type='scsi'>
  <name>vhbapool_host3</name>
  <source>
    <adapter type='fc_host' parent='scsi_host3' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee047d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

NB: One uses either "--adapter-name" OR "--adapter-wwnn", "--adapter-wwpn",
and optionally "--adapter-parent". Using just --adapter-name is for the
non 'fc_host' type adapter. 

(BTW: For virsh, you can provide the --print-xml option to see the XML that
      would be generated without having the define actually occur).

I would remove the Add a New Storage Pool dialog box examples - they won't
do what you expect.

Comment 13 John Ferlan 2018-03-12 17:20:42 UTC
[Section 13.2.4]


Could be useful to have a virsh pool-list command before you go through the
pool-destroy command sequence - just to show the 'guest_images_disk' pool
exists.

.....

[Section 13.3]


[Section 13.3.1]

In addition to the 'vol-info' output, perhaps add a similar dialog box for a
"virsh vol-list $POOL" (e.g. virsh vol-list guest_images) in order to
show/list all the volumes in the $POOL, then show the vol-info command
to provide more details of a volume in the $POOL.
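
That is, something like (pool/volume names from the guide's examples):

   # virsh vol-list guest_images
   # virsh vol-info --pool guest_images volume1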

[Section 13.3.2.1]
I'd just remove the lines for <permissions> ... </permissions>. You could
keep them in, but they're not required nor are they discussed in the section.
Besides, the <group> and <mode> aren't aligned under <owner>.  If you care,
the <owner> field relates to the file's owner. In this case, 107 would be
"qemu", as would the <group> - using the "id" command to translate the
number into the user and group values. The mode value is an octal (hence
why it starts with a 0 - zero) representing the value to provide to a
'chmod' command for the file mode bits (rwxrwxrwx for owner, group,
and world). The <label> is an SELinux label for the volume.

The first output bubble in section b needs to indicate which storage pool
the volume will be created in, so the output should be:

   virsh vol-create storage_pool ~/guest_volume.xml

The third display bubble needs a slight adjustment for "volume 1", e.g.

   virsh vol-clone --pool storage_pool volume1 clone1

NB: If you wanted to change "storage_pool" to "guest_images_dir" to be
consistent with the VMM example that'd be fine too.

FWIW:
Your VMM example creates a qcow2 file format, which is different from
the virsh example, which creates a raw file.

[Section 13.3.3]

In the first sentence, the "virsh vol-list" appears to run into "command".
IOW: perhaps a space is required before "command".

As for the output example, that output path doesn't appear to be right,
but I'd need to see "more" of how guest_images_disk is defined. Since
your example in the prior section was using "guest_images_dir", let's
continue using that so that the output would be:

volume1   /home/VirtualMachines/guest_images_dir/volume1
volume2   /home/VirtualMachines/guest_images_dir/volume2
volume3   /home/VirtualMachines/guest_images_dir/volume3

If this were a Disk pool, then the output would be:

sdb1  /dev/sdb1
sdb2  /dev/sdb2
sdb3  /dev/sdb3

and the examples would be "virsh vol-create-as guest_images_disk sdb1 8G"
(etc. for sdb2 and sdb3).

[Section 13.3.4]

[Section 13.3.4.1]

The storage pool is not necessarily optional for the vol-delete command.
If just a name is provided, then the volume cannot be found, since there
would be no way to know from which pool to delete "volume1" if it existed
in both the "default" pool and the "guest_images_dir" pool. That's why the
man page says to provide the volume by key or path. In that case, the path
to the pool's volume is provided, thus it's easy to know how to delete the
volume.  Hopefully that makes sense.

For example:

   virsh vol-delete /home/VirtualMachines/guest_images_dir/volume1

is the same as

   virsh vol-delete volume1 guest_images_dir

where the --pool qualifier is optional since arguments are positionally based.

FWIW:
I'm in the process of cleaning up the man page description a bit too.

[Section 13.3.4.3]

First paragraph last sentence... "specified" instead of 'specefied'.

Comment 14 John Ferlan 2018-03-12 18:38:41 UTC
[Section 13.3.5]


[Section 13.3.5.1]

The example needs to define the target bus, e.g.

Instead of "<target dev='vdb'/>" it should be "<target dev='vdb' bus='virtio'/>

[Section 13.3.5.2.1 and 13.3.5.2.2]

Use "To" instead of "Too".

[Section 13.3.5.3]

Let's not use "three" - let's be a bit more vague, e.g.

"There are multiple ways to expose a host SCSI LUN entirely to the guest.
Exposing the SCSI LUN to the guest provides the capability to execute SCSI
commands directly to the LUN on the guest. This is useful... "

NB: There's actually another way via the <hostdev...> option, but no need
to describe that here!

NB2: For each of these options, you'll note that the device='lun' should
be used since we're making the LUN available and we want to run commands
on it.

The first bubble needs to have a minor format adjustment for the "<disk..."
line - it should be aligned on the left column.

The <shareable /> should be <shareable/> and needs to be aligned under <target>.

The second and third output bubbles need a lot of alignment help. The second
one also needs to use "device='lun'" not "device='disk'", especially since
the sgio='unfiltered' is provided.

Note also the movement of the <auth...> to under <source...> -
that's an option from RHEL 7.5 and beyond. The former syntax does work, but
it's preferable to show <auth...> under <source...>:

<disk type='network' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-net-pool/1'>
    <host name='example.com' port='3260'/>
    <auth username='myuser'>
      <secret type='iscsi' usage='libvirtiscsi'/>
    </auth>
  </source>
  <target dev='sda' bus='scsi'/>
  <shareable/>
</disk>

and

<disk type='volume' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source pool='iscsi-net-pool' volume='unit:0:0:1' mode='host'/>
  <target dev='sda' bus='scsi'/>
  <shareable/>
</disk>

And to keep going with the theme (note the addition of the device path too):

# virsh pool-dumpxml iscsi-net-pool
<pool type='iscsi'>
  <name>iscsi-net-pool</name>
  <capacity unit='bytes'>11274289152</capacity>
  <allocation unit='bytes'>11274289152</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='192.168.122.1' port='3260'/>
    <device path='iqn.2013-12.com.example:iscsi-chap-netpool'/>
    <auth type='chap' username='redhat'>
      <secret usage='libvirtiscsi'/>
    </auth>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0755</mode>
    </permissions>
  </target>
</pool>

For the NPIV/vHBA example it's (note the shareable difference too):

<disk type='volume' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source pool='vhbapool_host3' volume='unit:0:1:0'/>
  <target dev='sda' bus='scsi'/>
  <shareable/>
</disk>

and

# virsh pool-dumpxml vhbapool_host3
<pool type='scsi'>
  <name>vhbapool_host3</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <adapter type='fc_host' parent='scsi_host3' managed='yes' wwnn='5001a4a93526d0a1' wwpn='5001a4ace3ee045d'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>


Procedure 13.14

# cat sda.xml
<disk type='volume' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source pool='vhbapool_host3' volume='unit:0:1:0'/>
  <target dev='sda' bus='scsi'/>
  <shareable/>
</disk>

....

[Section 13.3.5.4]

FWIW: More recent versions of libvirt have done a much better job at
generating the controller automagically; however, it's still a good
idea to provide a controller since what libvirt may choose as a default
might be "older". For example, if not provided, libvirt may choose an
"lsilogic" SCSI controller model rather than the more recent "virtio-scsi".


[Section 13.3.6.1]

Use "virsh detach-disk Guest1 vdb"  (e.g. not vdb1 - vdb1 could work but
vdb is good enough).

Comment 17 John Ferlan 2018-06-27 16:36:59 UTC
First off - changes look great. I certainly find this quite readable, providing some amount of depth and plenty of examples!

* Probably a rendering thing, but Point 2 and Point 3 in section 13.2.2.1 show a blank line before the actual heading

* Section 13.2.2.1, Point 4 "Note". The note is fine as it exists; you could also add another sentence noting that using the "--overwrite" qualifier will allow the build to succeed by forcing the expected format for the build.

 BTW: It's also possible to provide the --build (and --overwrite) qualifiers to the pool-create and pool-create-as commands to avoid the separate command. Obviously this is useful for transient pools which would need to be created, built, and started.

* Section 13.2.2.1, Point 6 - there's an extra 's' in the bubble for the guest_images_fs pool name example.

* Table 13.3 has in the XML column, first row, "<pool type='fs'>, for example ext4" - the "for example ext4" part should move to the 3rd row (file system type), after the "<format type='fs_type' />"

* Section 13.2.3.6 - second bullet in recommendations has a typo; it should be "does" not "dos".

* Procedure 13.10. Creating a vHBA... In the examples sub section, the second example bubble in subsection "Configuring a virtual machine to use a vHBA LUN" has the "<disk type='volume' device='lun' sgio='unfiltered'>" out of alignment (could just be a rendering thing)

* Section 13.2.4.2, step 5 example bubble - add the "--all" qualifier to the command since by default 'pool-list' only shows active pools; whereas, the --all qualifier should show the inactive ones.  Nothing should show up in the output, but this is just a "formatting nit".

* Not sure whether you want to add it or not, but since you mention vol-wipe in section 13.3.4.3, it jiggles the memory thread that there are also 'vol-upload', 'vol-download', and 'vol-resize' examples that could be generated. Those are further described in sections 20.39 and 20.40.  Maybe instead of having 13.3.4.3, there should be a section between 13.3.3 (Viewing) and 13.3.4 (Deleting) called "Managing Data" (or something like that) that would (at first) encompass the 'vol-wipe' (13.3.4.3) section and add sections for upload, download, and resize.  I believe the 'vol-clone' at the end of 13.3.2.1 could be moved into the new section as well, so as to keep various manipulation command examples together.

For vol-upload - it's essentially taking data from some other pool/volume and "moving" it into the already defined volume in the pool. Similarly, vol-download will take data from the named pool/volume and "move" it into the named target file.  Finally, vol-resize does exactly what it sounds like - it allows one to resize their volume.

Note that not all pools support each volume manipulation command. The examples could be quite sparse:

    virsh vol-upload sde1 /tmp/data500m.empty disk-pool

where 'sde1' is a volume in the 'disk-pool' using data from '/tmp/data500m.empty' which for example purposes was created via 'dd if=/dev/zero of=/tmp/data500m.empty bs=1M count=512'

    virsh vol-download sde1 /tmp/data-sde1.tmp disk-pool

takes the data recently uploaded into volume 'sde1' in pool 'disk-pool' and downloads it into the file '/tmp/data-sde1.tmp'.

The 'virsh vol-resize file.img 100M default' could be run after, for example, a 'virsh vol-create-as default file.img 50M'.

You can also use the "--pool" qualifier in place of my positional ordering of the command to be consistent with other vol-* command documented.


* I think there are pieces of "13.3.5.2.1 Adding a storage volume to a guest" and "13.3.5.2.2 Adding default storage to a guest" that could be merged together. The first two steps are identical, so the only difference is whether an existing volume is selected from the list or a new volume is created.

  Not sure of the "best way" to combine and it's not a requirement, but I do think it's certainly possible.

Comment 19 John Ferlan 2018-07-19 19:05:36 UTC
Sorry for the delay - new section looks good...  Two issues:

1. Example 13.2 (in 13.3.4.3. Downloading Data to a Storage Volume) should state:

"In this example sde1 is a volume in the disk-pool storage pool. The data in sde1 is downloaded to /tmp/data-sde1.tmp."

since the command line was "virsh vol-download sde1 /tmp/data-sde1.tmp disk-pool"

2. Example 13.3:

Should be:

virsh vol-resize --pool disk-pool sde1 100M

or

virsh vol-resize sde1 100M --pool disk-pool

not

virsh vol-resize --pool sde1 100M disk-pool