Bug 1326480

Summary: [DOC RFE] improve documentation on creating multiple bricks on same physical device
Product: Red Hat Gluster Storage
Component: doc-Administration_Guide
Sub component: Default
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Version: rhgs-3.1
Target Milestone: ---
Target Release: RHGS 3.1.3
Hardware: Unspecified
OS: Unspecified
Keywords: FutureFeature, ZStream
Doc Type: Enhancement
Type: Bug
Reporter: Manoj Pillai <mpillai>
Assignee: Divya <divya>
QA Contact: krishnaram Karthick <kramdoss>
CC: asriram, kramdoss, lpabon, mhideo, mpillai, nlevinki, rcyriac, rhinduja, rhs-bugs, rnachimu, rwheeler, sasundar, storage-doc, surs
Last Closed: 2016-06-29 14:19:41 UTC
Bug Blocks: 1311845

Description Manoj Pillai 2016-04-12 19:04:01 UTC
Document URL: 

Section Number and Name: 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/Brick_Configuration.html

Describe the issue: 

Customers typically need to create multiple bricks on a single physical device. The description of how this should be done needs to be improved.

Suggestions for improvement: 

Will update bz with suggested text.

Additional information:

Comment 2 Manoj Pillai 2016-04-13 09:33:49 UTC
Current Text:

<body>
Creating a Thin Logical Volume
After the thin pool has been created as mentioned above, a thinly
provisioned logical volume can be created in the thin pool to
serve as storage for a brick of a Red Hat Gluster Storage volume.

LVM allows multiple thinly-provisioned LVs to share a thin pool;
this allows a common pool of physical storage to be used for
multiple Red Hat Gluster Storage bricks and simplifies
provisioning. However, such sharing of the thin pool metadata and
data devices can impact performance in a number of ways.

NOTE
To avoid performance problems resulting from the sharing of the
same thin pool, Red Hat Gluster Storage recommends that the LV
for each Red Hat Gluster Storage brick have a dedicated thin pool
of its own. As Red Hat Gluster Storage volume snapshots are
created, snapshot LVs will get created and share the thin pool
with the brick LV

lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
</body>

Suggested Text:

<body>
Creating a Thin Logical Volume

After the thin pool has been created as described above, a thinly
provisioned logical volume needs to be created in the thin pool
to serve as storage for a brick of a Red Hat Gluster Storage volume.

lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool

Creating Multiple Bricks on the Same Physical Device

The steps above cover the case where a single brick is being
created on a physical device. This sub-section shows how to adapt
these steps when multiple bricks need to be created on a physical
device.

For simplicity, we make the following assumptions in the steps
below:
- 2 bricks need to be created on the same physical device
- one brick needs to be of size 4 TiB and the other 2 TiB.
- the device is /dev/sdb, and is a RAID-6 device with 12 disks
- the 12-disk RAID-6 device has been created according to the
recommendations in this chapter, i.e. with a stripe unit size of
128 KiB

1. create a single physical volume using pvcreate

pvcreate --dataalignment 1280k /dev/sdb

2. create a single volume group on the device

vgcreate --physicalextentsize 1280k vg1 /dev/sdb

3. create a separate thin pool for each brick

lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G

lvchange --zero n vg1/thin_pool_1

lvcreate --thinpool vg1/thin_pool_2 --size 2T --chunksize 1280K --poolmetadatasize 16G

lvchange --zero n vg1/thin_pool_2

In the examples above, the size of each thin pool is chosen to be
the same as the size of the brick that will be created in it.
With thin provisioning, there are many possible ways of managing
space, and a discussion of these is outside the scope of this
section.

4. create a thin logical volume for each brick

lvcreate --thin --name lv1 --virtualsize 4T vg1/thin_pool_1

lvcreate --thin --name lv2 --virtualsize 2T vg1/thin_pool_2

5. follow the recommendations in this chapter for creating and
mounting filesystems for each of the thin logical volumes

mkfs.xfs <options> /dev/vg1/lv1
mkfs.xfs <options> /dev/vg1/lv2

mount <options> /dev/vg1/lv1 <mount point 1>
mount <options> /dev/vg1/lv2 <mount point 2>
</body>
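The suggested steps can be sketched end to end as a small script that prints the provisioning commands for review rather than executing them (device name, volume group name, and brick sizes follow the assumptions above; the 1280 KiB alignment is derived as the 128 KiB stripe unit times the 10 data disks of a 12-disk RAID 6):

```shell
#!/bin/sh
# Sketch: print the LVM commands for two bricks on one RAID-6 device.
# Assumptions (from the text above): /dev/sdb, 12-disk RAID 6,
# 128 KiB stripe unit, bricks of 4 TiB and 2 TiB.

STRIPE_KIB=128      # stripe unit size in KiB
DATA_DISKS=10       # 12 disks minus 2 parity disks for RAID 6
ALIGN_KIB=$((STRIPE_KIB * DATA_DISKS))   # full-stripe alignment: 1280 KiB

echo "pvcreate --dataalignment ${ALIGN_KIB}k /dev/sdb"
echo "vgcreate --physicalextentsize ${ALIGN_KIB}k vg1 /dev/sdb"

i=1
for size in 4T 2T; do
  # one dedicated thin pool plus one thin LV per brick
  echo "lvcreate --thinpool vg1/thin_pool_${i} --size ${size} --chunksize ${ALIGN_KIB}K --poolmetadatasize 16G"
  echo "lvchange --zero n vg1/thin_pool_${i}"
  echo "lvcreate --thin --name lv${i} --virtualsize ${size} vg1/thin_pool_${i}"
  i=$((i + 1))
done
```

Printing the commands first, instead of running them, makes the sketch safe to inspect on any machine before an administrator runs the real sequence as root.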

Comment 5 Divya 2016-05-04 06:55:43 UTC
Manoj,

I have updated section "13.2. Brick Configuration" based on your inputs.

Link to the latest guide: http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/571/artifact/tmp/en-US/html-single/index.html#Brick_Configuration

Please review and sign-off.

Also, one clarification in-line.

(In reply to Manoj Pillai from comment #2)
> Current Text:
> 
> <body>
> Creating a Thin Logical Volume
> After the thin pool has been created as mentioned above, a thinly
> provisioned logical volume can be created in the thin pool to
> serve as storage for a brick of a Red Hat Gluster Storage volume.
> 
> LVM allows multiple thinly-provisioned LVs to share a thin pool;
> this allows a common pool of physical storage to be used for
> multiple Red Hat Gluster Storage bricks and simplifies
> provisioning. However, such sharing of the thin pool metadata and
> data devices can impact performance in a number of ways.
> 
> NOTE
> To avoid performance problems resulting from the sharing of the
> same thin pool, Red Hat Gluster Storage recommends that the LV
> for each Red Hat Gluster Storage brick have a dedicated thin pool
> of its own. As Red Hat Gluster Storage volume snapshots are
> created, snapshot LVs will get created and share the thin pool
> with the brick LV
> 
> lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
> </body>
> 
> Suggested Text:
> 
> <body>
> Creating a Thin Logical Volume
> 
> After the thin pool has been created as described above, a thinly
> provisioned logical volume needs to be created in the thin pool
> to serve as storage for a brick of a Red Hat Gluster Storage volume.
> 
> lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
> 
> Creating Multiple Bricks on the Same Physical Device
> 
> The steps above cover the case where a single brick is being
> created on a physical device. This sub-section shows how to adapt
> these steps when multiple bricks need to be created on a physical
> device.
> 
> For simplicity, we make the following assumptions in the steps
> below:
> - 2 bricks need to be created on the same physical device
> - one brick needs to be of size 4 TiB and the other 2 TiB.
> - the device is /dev/sdb, and is a RAID-6 device with 12 disks
> - the 12-disk RAID-6 device has been created according to the
> recommendations in this chapter, i.e. with a stripe unit size of
> 128 KiB
> 
> 1. create a single physical volume using pvcreate
> 
> pvcreate --dataalignment 1280k /dev/sdb
> 
> 2. create a single volume group on the device
> 
> vgcreate --physicalextentsize 1280k vg1 /dev/sdb
> 
> 3. create a separate thin pool for each brick
> 
> lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K
> --poolmetadatasize 16G
> 
> lvchange --zero n vg1/thin_pool_1
> 
> lvcreate --thinpool vg1/thin_pool_2 --size 2T --chunksize 1280K
> --poolmetadatasize 16G
> 
> lvchange --zero n vg1/thin_pool_2
> 
> In the examples above the size of each thin pool is chosen to be
> the same as the size of the brick that will be created in it.
> With thin provisioning, there are many possible ways of managing
> space, and a discussion of these is outside the scope of this
> section.
> 
> 4. create a thin logical volume for each brick
> 
> lvcreate --thin --name lv1 --virtualsize 4T vg1/thin_pool_1
> 
> lvcreate --thin --name lv2 --virtualsize 2T vg1/thin_pool_2
> 
> 5. follow the recommendations in this chapter for creating and
> mounting filesystems for each of the thin logical volumes

When you say "recommendations in this chapter", do you mean the configurations listed in "XFS RAID Alignment " and "Logical Block Size for the Directory" subsection? Please confirm, so that I can add them as a cross-reference to these sections.


> 
> mkfs.xfs <options> /dev/vg1/lv1
> mkfs.xfs <options> /dev/vg1/lv2
> 
> mount <options> /dev/vg1/lv1 <mount point 1>
> mount <options> /dev/vg1/lv2 <mount point 2>
> </body>

Comment 6 Manoj Pillai 2016-05-04 09:27:32 UTC
(In reply to Divya from comment #5)
> Manoj,
> 
> I have updated section "13.2. Brick Configuration" based on your inputs.
> 
> Link to the latest guide;
> http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-
> Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/571/
> artifact/tmp/en-US/html-single/index.html#Brick_Configuration
> 
> Please review and sign-off.

This needs to be reorganized. It will be easier to explain the problems in a discussion.

> 
> Also, one clarification in-line.
> 
> > 
> > 5. follow the recommendations in this chapter for creating and
> > mounting filesystems for each of the thin logical volumes
> 
> When you say "recommendations in this chapter", do you mean the
> configurations listed in "XFS RAID Alignment " and "Logical Block Size for
> the Directory" subsection? Please confirm, so that I can add them as a
> cross-reference to these sections.

It will include "XFS inode size", "XFS RAID alignment", "Logical Block Size for the Directory", "Allocation Strategy" and "Access Time". Basically, all the sub-sections dealing with mkfs.xfs and mount options.
 
> 
> 
> > 
> > mkfs.xfs <options> /dev/vg1/lv1
> > mkfs.xfs <options> /dev/vg1/lv2
> > 
> > mount <options> /dev/vg1/lv1 <mount point 1>
> > mount <options> /dev/vg1/lv2 <mount point 2>
> > </body>

Spotted an error in the "Allocation Strategy" sub-section: it says "use the -o inode64 option with the mkfs.xfs command". That should read with "the mount command".
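As a concrete illustration of step 5 with those sub-sections applied (and with inode64 correctly given to the mount command, not mkfs.xfs), here is a sketch that prints the commands for review; the specific option values are assumptions standing in for the chapter's recommendations, not authoritative numbers:

```shell
#!/bin/sh
# Sketch of step 5 for the two-brick example. Option values are
# placeholders standing in for the chapter's "XFS inode size",
# "Logical Block Size for the Directory", "XFS RAID alignment",
# "Allocation Strategy" and "Access Time" recommendations;
# consult the guide for the authoritative values.

MKFS_OPTS="-f -i size=512 -n size=8192 -d su=128k,sw=10"
MOUNT_OPTS="-o inode64,noatime"   # inode64 is a mount option, not a mkfs.xfs option

for lv in lv1 lv2; do
  echo "mkfs.xfs ${MKFS_OPTS} /dev/vg1/${lv}"
done
echo "mount ${MOUNT_OPTS} /dev/vg1/lv1 /rhgs/brick1"   # mount points are hypothetical
echo "mount ${MOUNT_OPTS} /dev/vg1/lv2 /rhgs/brick2"
```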

Comment 7 Manoj Pillai 2016-05-05 05:41:11 UTC
Also, in bz #1329486, we are changing the way we disable block zeroing. Steps in comment #2 should be changed to incorporate that.

E.g, instead of:
lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G
lvchange --zero n vg1/thin_pool_1

Use:
lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G --zero n
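Applied to both thin pools from comment #2, the revised step 3 collapses to two lvcreate commands with no lvchange calls; a sketch that prints them for review:

```shell
#!/bin/sh
# Revised step 3: "--zero n" at creation time replaces the
# separate lvchange commands for each pool.
CMD1="lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G --zero n"
CMD2="lvcreate --thinpool vg1/thin_pool_2 --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n"
echo "$CMD1"
echo "$CMD2"
```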

Comment 8 Divya 2016-05-17 10:26:36 UTC
(In reply to Manoj Pillai from comment #6)
> (In reply to Divya from comment #5)
> > Manoj,
> > 
> > I have updated section "13.2. Brick Configuration" based on your inputs.
> > 
> > Link to the latest guide;
> > http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-
> > Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/571/
> > artifact/tmp/en-US/html-single/index.html#Brick_Configuration
> > 
> > Please review and sign-off.
> 
> This needs to be reorganized. Will be easier to explain the problems over a
> discussion.

I had a discussion with Ken (Content Strategist for Storage) regarding reorganization of the content.

Here is the draft of the new reorganized content: https://docs.google.com/document/d/1fOb-8TMbEawVpIBp4r_9AblGozUykJLu_IiKThB8uBU/edit?ts=5739db4d

Could you review and let me know if this is fine? Based on your confirmation, I will make updates to the Administration Guide.

> 
> > 
> > Also, one clarification in-line.
> > 
> > > 
> > > 5. follow the recommendations in this chapter for creating and
> > > mounting filesystems for each of the thin logical volumes
> > 
> > When you say "recommendations in this chapter", do you mean the
> > configurations listed in "XFS RAID Alignment " and "Logical Block Size for
> > the Directory" subsection? Please confirm, so that I can add them as a
> > cross-reference to these sections.
> 
> It will include "XFS inode size", "XFS RAID alignment", "Logical Block Size
> for the Directory", "Allocation Strategy" and "Access Time". Basically, all
> the sub-sections dealing with mkfs.xfs and mount options.
>  
> > 
> > 
> > > 
> > > mkfs.xfs <options> /dev/vg1/lv1
> > > mkfs.xfs <options> /dev/vg1/lv2
> > > 
> > > mount <options> /dev/vg1/lv1 <mount point 1>
> > > mount <options> /dev/vg1/lv2 <mount point 2>
> > > </body>
> 
> Spotted an error in the "Allocation Strategy" sub-section: it says "use the
> -o inode64 option with the mkfs.xfs command". That should read with "the
> mount command".

Comment 9 Manoj Pillai 2016-05-17 11:17:52 UTC
(In reply to Divya from comment #8)

So in this rewrite every sub-section has been modified to show the command format and an example for two cases: (a) a single brick and (b) multiple bricks per physical device. Two problems I see: (1) the assumptions we use to simplify the multiple-brick examples are at the top of the section, far removed from the examples; (2) some steps (pvcreate, vgcreate) are identical in both cases.

I really prefer having the multiple bricks case as a separate sub-section. It keeps all the necessary information for this case in one short sub-section.

I'd suggest adding a few more reviewers to get input on which approach they'd prefer.

Comment 10 Divya 2016-05-19 08:43:17 UTC
(In reply to Manoj Pillai from comment #9)
> (In reply to Divya from comment #8)
> 
> So in this rewrite every sub-section has been modified to have command
> format and example for the two cases: (a) single brick (b) multiple bricks
> per physical device. Some problems I see are (a) the assumptions we use to
> simplify the multiple brick examples are at the top of the section, far
> removed from the examples (b) some steps (pvcreate, vgcreate) are the same
> in both cases.
> 
> I really prefer having the multiple bricks case as a separate sub-section.
> It keeps all the necessary information for this case in one short
> sub-section.
> 
> I'd suggest adding a few more reviewers to get input on which approach
> they'd prefer.

Manoj,

Based on our meeting yesterday, I have re-organized the content to enhance the usability of the docs.

Link to the latest doc: http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#Brick_Configuration 


PS: I am still figuring out how to link the text in the prelude of the LVM Layer step to the newly added example. The example is a bulleted list, which makes linking to it challenging. I will discuss with my team, add the links, and update the bug as soon as I do.

Meanwhile, request you to let me know if the changes are fine.

Thanks!

Comment 11 Manoj Pillai 2016-05-19 09:38:59 UTC
(In reply to Divya from comment #10)

I like this organization much better. Thanks!

Some corrections:
1. "Writeback caching" should not be part of XFS recommendations. But "percentage of space allocation to inodes" should be part of XFS recommendations.

2. See comment #7. The lvcreate commands in the newly added text need to be changed to add the "--zero n" option. With that change you can remove the lvchange commands.

Comment 12 Divya 2016-05-19 13:30:50 UTC
(In reply to Manoj Pillai from comment #11)

I have made the changes suggested in Comment 11.

Link to the updated doc: http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#Brick_Configuration

I had a discussion with the team about adding a cross-reference from the text in the prelude of the LVM Layer step to the newly added example. Unfortunately, it is not possible to add a cross-reference to a listitem (bullet point). Hence, I have added it as a citetitle, calling it out by heading name.

Please review and let me know if the changes are fine.

Comment 13 Manoj Pillai 2016-05-19 15:48:05 UTC
(In reply to Divya from comment #12)

1. The lvchange commands need to be removed

2. "--zero n" is only for the lvcreate command for thin pool creation.

"Create a thin logical volume for each brick
# lvcreate --thin --name lv1 --virtualsize 4T vg1/thin_pool_1 --zero n
# lvcreate --thin --name lv2 --virtualsize 2T vg1/thin_pool_2 --zero n"

The "--zero n" in these commands needs to be removed

3.
"mount <options> /dev/vg1/lv1 <mount point 1>
mount <options> /dev/vg1/lv1 <mount point 2>"

Change to:

mount <options> /dev/vg1/lv1 <mount_point_1>
mount <options> /dev/vg1/lv2 <mount_point_2>
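Folding all three corrections into the earlier example, steps 3-5 might read as follows; a sketch that prints the commands for review, with the mount placeholders left as placeholders:

```shell
#!/bin/sh
# Sketch of steps 3-5 after comment 13's corrections: "--zero n" stays
# only on thin pool creation, the lvchange calls are gone, and the
# thin LV creation commands carry no "--zero n".
POOL1="lvcreate --thinpool vg1/thin_pool_1 --size 4T --chunksize 1280K --poolmetadatasize 16G --zero n"
POOL2="lvcreate --thinpool vg1/thin_pool_2 --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n"
LV1="lvcreate --thin --name lv1 --virtualsize 4T vg1/thin_pool_1"
LV2="lvcreate --thin --name lv2 --virtualsize 2T vg1/thin_pool_2"

for cmd in "$POOL1" "$POOL2" "$LV1" "$LV2"; do
  echo "$cmd"
done
echo "mount <options> /dev/vg1/lv1 <mount_point_1>"
echo "mount <options> /dev/vg1/lv2 <mount_point_2>"
```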

Comment 15 Manoj Pillai 2016-05-20 10:10:11 UTC
looks good to me.

Comment 16 Divya 2016-05-20 10:14:07 UTC
Based on Comment 15, moving the bug to ON_QA.