Red Hat Bugzilla – Attachment 913798 Details for Bug 1115130
ssm: Clean up wording in documentation
Description: [PATCH] doc: various minor fixes in documentation wording
Filename:    ssm.patch
MIME Type:   text/plain
Creator:     Lukáš Czerner
Created:     2014-07-01 15:58:09 UTC
Size:        114.87 KB
Flags:       patch, obsolete
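The attachment below is a mailbox-formatted patch (the output of git format-patch), so it can be applied to a local checkout of the System Storage Manager sources in the usual way. A minimal sketch follows; the repository URL is a placeholder, since the project documentation only says the git tree is hosted on SourceForge, and the path assumes the attachment was saved as ssm.patch:

   # Apply the attached patch to a local checkout of the ssm sources.
   # <repo-url> is a placeholder for the SourceForge git repository.
   git clone <repo-url> ssm
   cd ssm
   git checkout devel            # contributors are asked to base work on "devel"
   git am /path/to/ssm.patch     # preserves the author and commit message
   # Without the git metadata, a plain apply also works:
   # patch -p1 < /path/to/ssm.patch

Using git am rather than patch keeps the From: and Signed-off-by: lines from the email intact, which preserves attribution when the change is merged.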
>From rayoub@redhat.com Tue Jul 1 17:07:21 2014 >Date: Tue, 1 Jul 2014 11:05:22 -0400 (EDT) >From: Ricky Ayoub <rayoub@redhat.com> >To: ssm-devel <storagemanager-devel@lists.sourceforge.net> >Cc: Tom Coughlan <coughlan@redhat.com> >Subject: [ssm-devel] [PATCH] doc: various minor fixes in documentation wording > >Signed-off-by: Ricky Ayoub <rayoub@redhat.com> >--- > INSTALL | 12 +- > README | 532 +++++++++++++++-------------- > doc/_build/man/ssm.8 | 345 +++++++++---------- > doc/src/backends/backends_introduction.rst | 6 +- > doc/src/backends/btrfs.rst | 79 +++-- > doc/src/backends/crypt.rst | 24 +- > doc/src/backends/lvm.rst | 18 +- > doc/src/backends/md.rst | 2 +- > doc/src/commands/add.txt | 14 +- > doc/src/commands/check.txt | 6 +- > doc/src/commands/commands_introduction.rst | 6 +- > doc/src/commands/create.txt | 38 +-- > doc/src/commands/list.txt | 24 +- > doc/src/commands/mount.txt | 10 +- > doc/src/commands/remove.txt | 17 +- > doc/src/commands/resize.txt | 14 +- > doc/src/commands/snapshot.txt | 10 +- > doc/src/description.rst | 6 +- > doc/src/download.rst | 16 +- > doc/src/env_variables.rst | 18 +- > doc/src/examples.rst | 46 +-- > doc/src/for_developers.rst | 78 ++--- > doc/src/install.rst | 4 +- > doc/src/man_examples.rst | 6 +- > doc/src/requirements.rst | 8 +- > 25 files changed, 663 insertions(+), 676 deletions(-) > >diff --git a/INSTALL b/INSTALL >index 339124e..8cff865 100644 >--- a/INSTALL >+++ b/INSTALL >@@ -9,10 +9,10 @@ To install System Storage Manager into your system simply run: > python setup.py install > > as root in the System Storage Manager directory. Make sure that your >-system configuration meet the *requirements* in order for ssm to work >+system configuration meets the *requirements* in order for ssm to work > correctly. > >-Note that you can run **ssm** even without installation from using the >+Note that you can run **ssm** even without installation by using the > local sources with: > > bin/ssm.local >@@ -22,12 +22,12 @@ Requirements > ************ > > Python 2.6 or higher is required to run this tool. System Storage >-Manager can only be run as root since most of the commands requires >+Manager can only be run as root since most of the commands require > root privileges. > >-There are other requirements listed bellow, but note that you do not >-necessarily need all dependencies for all backends, however if some of >-the tools required by the backend is missing, the backend would not >+There are other requirements listed below, but note that you do not >+necessarily need all dependencies for all backends. However if some of >+the tools required by a backend are missing, that backend will not > work. > > >diff --git a/README b/README >index 71b09e8..e3ed256 100644 >--- a/README >+++ b/README >@@ -8,8 +8,8 @@ A single tool to manage your storage. > Description > *********** > >-System Storage Manager provides easy to use command line interface to >-manage your storage using various technologies like lvm, btrfs, >+System Storage Manager provides an easy to use command line interface >+to manage your storage using various technologies like lvm, btrfs, > encrypted volumes and more. > > In more sophisticated enterprise storage environments, management with >@@ -51,39 +51,42 @@ Commands > Introduction > ************ > >-System Storage Manager have several commands you can specify on the >-command line as a first argument to the ssm. They all have specific >-use and its own arguments, but global ssm arguments are propagated to >-all commands. 
>+System Storage Manager has several commands that you can specify on >+the command line as a first argument to ssm. They all have a specific >+use and their own arguments, but global ssm arguments are propagated >+to all commands. > > > Create command > ************** > >-This command creates a new volume with defined parameters. If >-**device** is provided it will be used to create a volume, hence it >-will be added into the **pool** prior the volume creation (See *Add >-command section*). More devices can be used to create a volume. >- >-If the **device** is already used in the different pool, then **ssm** >-will ask you whether you want to remove it from the original pool. If >-you decline, or the removal fails, then the **volume** creation fails >-if the *SIZE* was not provided. On the other hand, if the *SIZE* is >-provided and some devices can not be added to the **pool** the volume >-creation might succeed if there is enough space in the **pool**. >- >-*POOL* name can be specified as well. If the pool exists new volume >-will be created from that pool (optionally adding **device** into the >-pool). However if the *POOL* does not exist **ssm** will attempt to >-create a new pool with provided **device** and then create a new >-volume from this pool. If **--backend** argument is omitted, the >-default **ssm** backend will be used. Default backend is *lvm*. >- >-**ssm** also supports creating RAID configuration, however some back- >-ends might not support all the levels, or it might not support RAID at >-all. In this case, volume creation will fail. >- >-If **mount** point is provided **ssm** will attempt to mount the >+This command creates a new volume with defined parameters. If a >+**device** is provided it will be used to create the volume, hence it >+will be added into the **pool** prior to volume creation (See *Add >+command section*). More than one device can be used to create a >+volume. >+ >+If the **device** is already being used in a different pool, then >+**ssm** will ask you whether you want to remove it from the original >+pool. If you decline, or the removal fails, then the **volume** >+creation fails if the *SIZE* was not provided. On the other hand, if >+the *SIZE* is provided and some devices can not be added to the >+**pool**, the volume creation might still succeed if there is enough >+space in the **pool**. >+ >+The *POOL* name can be specified as well. If the pool exists, a new >+volume will be created from that pool (optionally adding **device** >+into the pool). However if the *POOL* does not exist, then **ssm** >+will attempt to create a new pool with the provided **device**, and >+then create a new volume from this pool. If the **--backend** argument >+is omitted, the default **ssm** backend will be used. The default >+backend is *lvm*. >+ >+**ssm** also supports creating a RAID configuration, however some >+back-ends might not support all RAID levels, or may not even support >+RAID at all. In this case, volume creation will fail. >+ >+If a **mount** point is provided, **ssm** will attempt to mount the > volume after it is created. However it will fail if mountable file > system is not present on the volume. > >@@ -91,19 +94,19 @@ system is not present on the volume. > List command > ************ > >-List informations about all detected devices, pools, volumes and >-snapshots found in the system. **list** command can be used either >-alone to list all the information, or you can request specific section >-only. 
>+Lists information about all detected devices, pools, volumes and >+snapshots found on the system. The **list** command can be used either >+alone to list all of the information, or you can request specific >+sections only. > >-Following sections can be specified: >+The following sections can be specified: > > {volumes | vol} > List information about all **volumes** found in the system. > > {devices | dev} >- List information about all **devices** found in the system. Some >- devices are intentionally hidden, like for example cdrom, or DM/MD >+ List information about all **devices** found on the system. Some >+ devices are intentionally hidden, like for example cdrom or DM/MD > devices since those are actually listed as volumes. > > {pools | pool} >@@ -115,52 +118,54 @@ Following sections can be specified: > > {snapshots | snap} > List information about all **snapshots** found in the system. Note >- that some back-ends does not support snapshotting and some can not >- distinguish between snapshot and regular volume. in this case >- **ssm** will try to recognize volume name in order to identify >+ that some back-ends do not support snapshotting and some cannot >+ distinguish snapshot from regular volumes. In this case, **ssm** >+ will try to recognize the volume name in order to identify a > **snapshot**, but if the **ssm** regular expression does not match >- the snapshot pattern, this snapshot will not be recognized. >+ the snapshot pattern, the problematic snapshot will not be >+ recognized. > > > Remove command > ************** > >-This command removes **item** from the system. Multiple items can be >-specified. If the **item** can not be removed for some reason, it will >-be skipped. >+This command removes an **item** from the system. Multiple items can >+be specified. If the **item** cannot be removed for some reason, it >+will be skipped. > >-**item** can represent: >+An **item** can be any of the following: > > device >- Remove **device** from the pool. Note that this can not be done in >- some cases where the device is used by pool. You can use **-f** >- argument to *force* removal. If the device does not belong to any >- pool, it will be skipped. >+ Remove a **device** from the pool. Note that this cannot be done in >+ some cases where the device is being used by the pool. You can use >+ the **-f** argument to *force* removal. If the device does not >+ belong to any pool, it will be skipped. > > pool >- Remove the **pool** from the system. This will also remove all >+ Remove a **pool** from the system. This will also remove all > volumes created from that pool. > > volume >- Remove the **volume** from the system. Note that this will fail if >- the **volume** is mounted and it can not be *forced* with **-f**. >+ Remove a **volume** from the system. Note that this will fail if >+ the **volume** is mounted and cannot be *forced* with **-f**. > > > Resize command > ************** > > Change size of the **volume** and file system. If there is no file >-system only the **volume** itself will be resized. You can specify >+system, only the **volume** itself will be resized. You can specify a > **device** to add into the **volume** pool prior the resize. Note that >-**device** will only be added into the pool if the **volume** size is >-going to grow. >+the **device** will only be added into the pool if the **volume** size >+is going to grow. > >-If the **device** is already used in the different pool, then **ssm** >-will ask you whether you want to remove it from the original pool. 
>+If the **device** is already used in a different pool, then **ssm** >+will ask you whether or not you want to remove it from the original >+pool. > >-In some cases file system has to be mounted in order to resize. This >-will be handled by **ssm** automatically by mounting the **volume** >-temporarily. >+In some cases, the file system has to be mounted in order to resize. >+This will be handled by **ssm** automatically by mounting the >+**volume** temporarily. > > Note that resizing btrfs subvolume is not supported, only the whole > file system can be resized. >@@ -173,20 +178,20 @@ Check the file system consistency on the **volume**. You can specify > multiple volumes to check. If there is no file system on the > **volume**, this **volume** will be skipped. > >-In some cases file system has to be mounted in order to check the file >-system This will be handled by **ssm** automatically by mounting the >-**volume** temporarily. >+In some cases the file system has to be mounted in order to check the >+file system. This will be handled by **ssm** automatically by >+mounting the **volume** temporarily. > > > Snapshot command > **************** > >-Take a snapshot of existing **volume**. This operation will fail if >-back-end which the **volume** belongs to does not support >-snapshotting. Note that you can not specify both *NAME* and *DESC* >+Take a snapshot of an existing **volume**. This operation will fail if >+the back-end to which the **volume** belongs to does not support >+snapshotting. Note that you cannot specify both *NAME* and *DESC* > since those options are mutually exclusive. > >-In some cases file system has to be mounted in order to take a >+In some cases the file system has to be mounted in order to take a > snapshot of the **volume**. This will be handled by **ssm** > automatically by mounting the **volume** temporarily. > >@@ -194,14 +199,14 @@ automatically by mounting the **volume** temporarily. > Add command > *********** > >-This command adds **device** into the pool. The **device** will not be >-added if it's already part of different pool by default, but user will >-be asked whether to remove the device from it's pool. When multiple >-devices are provided, all of them are added into the pool. If one of >-the devices can not be added into the pool for any reason, add command >-will fail. If no pool is specified, default pool will be chosen. In >-the case of non existing pool, it will be created using provided >-devices. >+This command adds a **device** into the pool. By default, the >+**device** will not be added if it's already a part of a different >+pool, but the user will be asked whether or not to remove the device >+from its pool. When multiple devices are provided, all of them are >+added into the pool. If one of the devices cannot be added into the >+pool for any reason, the add command will fail. If no pool is >+specified, the default pool will be chosen. In the case of a non >+existing pool, it will be created using the provided devices. > > > Backends >@@ -211,18 +216,19 @@ Backends > Introduction > ************ > >-Ssm aims to create unified user interface for various technologies >+Ssm aims to create a unified user interface for various technologies > like Device Mapper (dm), Btrfs file system, Multiple Devices (md) and > possibly more. In order to do so we have a core abstraction layer in > "ssmlib/main.py". 
This abstraction layer should ideally know nothing > about the underlying technology, but rather comply with **device**, >-**pool** and **volume** abstraction. >+**pool** and **volume** abstractions. > > Various backends can be registered in "ssmlib/main.py" in order to >-handle specific storage technology implementing methods like *create*, >-*snapshot*, or *remove* volumes and pools. The core will then call >-these methods to manage the storage without needing to know what lies >-underneath it. There are already several backends registered in ssm. >+handle specific storage technology, implementing methods like >+*create*, *snapshot*, or *remove* volumes and pools. The core will >+then call these methods to manage the storage without needing to know >+what lies underneath it. There are already several backends registered >+in ssm. > > > Btrfs backend >@@ -237,64 +243,66 @@ Pools, volumes and snapshots can be created with btrfs backend and > here is what it means from the btrfs point of view: > > pool >- Pool is actually a btrfs file system itself, because it can be >- extended by adding more devices, or shrink by removing devices from >+ A pool is actually a btrfs file system itself, because it can be >+ extended by adding more devices, or shrunk by removing devices from > it. Subvolumes and snapshots can also be created. When the new >- btrfs pool should be created **ssm** simply creates a btrfs file >+ btrfs pool should be created, **ssm** simply creates a btrfs file > system, which means that every new btrfs pool has one volume of the > same name as the pool itself which can not be removed without >- removing the entire pool. Default btrfs pool name is >+ removing the entire pool. The default btrfs pool name is > **btrfs_pool**. > >- When creating new btrfs pool, the name of the pool is used as the >- file system label. If there is already existing btrfs file system >- in the system without a label, btrfs pool name will be generated >- for internal use in the following format "btrfs_{device base >- name}". >+ When creating a new btrfs pool, the name of the pool is used as the >+ file system label. If there is an already existing btrfs file >+ system in the system without a label, a btrfs pool name will be >+ generated for internal use in the following format "btrfs_{device >+ base name}". > >- Btrfs pool is created when **create** or **add** command is used >- with devices specified and non existing pool name. >+ A btrfs pool is created when the **create** or **add** command is >+ used with specified devices and non existing pool name. > > volume >- Volume in btrfs back-end is actually just btrfs subvolume with the >- exception of the first volume created on btrfs pool creation, which >- is the file system itself. Subvolumes can only be created on btrfs >- file system when it is mounted, but user does not have to worry >- about that since **ssm** will automatically mount the file system >- temporarily in order to create a new subvolume. >- >- Volume name is used as subvolume path in the btrfs file system and >- every object in this path must exists in order to create a volume. >- Volume name for internal tracking and for representing to the user >- is generated in the format "{pool_name}:{volume name}", but volumes >- can be also referenced with its mount point. >- >- Btrfs volumes are only shown in the *list* output, when the file >- system is mounted, with the exception of the main btrfs volume - >- the file system itself. 
>- >- Also note that btrfs volumes and subvolumes can not be resized. >- This is mainly limitation of the btrfs tools which currently does >- not work reliably. >- >- New btrfs volume can be created with **create** command. >+ A volume in the btrfs back-end is actually just btrfs subvolume >+ with the exception of the first volume created on btrfs pool >+ creation, which is the file system itself. Subvolumes can only be >+ created on the btrfs file system when it is mounted, but the user >+ does not have to worry about that since **ssm** will automatically >+ mount the file system temporarily in order to create a new >+ subvolume. >+ >+ The volume name is used as subvolume path in the btrfs file system >+ and every object in this path must exist in order to create a >+ volume. The volume name for internal tracking and that is visible >+ to the user is generated in the format "{pool_name}:{volume name}", >+ but volumes can be also referenced by its mount point. >+ >+ The btrfs volumes are only shown in the *list* output, when the >+ file system is mounted, with the exception of the main btrfs volume >+ - the file system itself. >+ >+ Also note that btrfs volumes and subvolumes cannot be resized. This >+ is mainly limitation of the btrfs tools which currently do not work >+ reliably. >+ >+ A new btrfs volume can be created with the **create** command. > > snapshot >- Btrfs file system support subvolume snapshotting, so you can take a >- snapshot of any btrfs volume in the system with **ssm**. However >- btrfs does not distinguish between subvolumes and snapshots, >- because snapshot actually is just a subvolume with some block >- shared with different subvolume. It means, that **ssm** is not able >- to recognize btrfs snapshot directly, but instead it is trying to >- recognize special name format of the btrfs volume. However, if the >- *NAME* is specified when creating snapshot which does not match the >+ The btrfs file system supports subvolume snapshotting, so you can >+ take a snapshot of any btrfs volume in the system with **ssm**. >+ However btrfs does not distinguish between subvolumes and >+ snapshots, because a snapshot is actually just a subvolume with >+ some blocks shared with a different subvolume. This means, that >+ **ssm** is not able to directly recognize a btrfs snapshot. >+ Instead, **ssm** will try to recognize a special name format of the >+ btrfs volume that denotes it is a snapshot. However, if the *NAME* >+ is specified when creating snapshot which does not match the > special pattern, snapshot will not be recognized by the **ssm** and > it will be listed as regular btrfs volume. > >- New btrfs snapshot can be created with **snapshot** command. >+ A new btrfs snapshot can be created with the **snapshot** command. > > device >- Btrfs does not require any special device to be created on. >+ Btrfs does not require a special device to be created on. > > > Lvm backend >@@ -304,66 +312,68 @@ Pools, volumes and snapshots can be created with lvm, which pretty > much match the lvm abstraction. > > pool >- Lvm pool is just *volume group* in lvm language. It means that it >- is grouping devices and new logical volumes can be created out of >- the lvm pool. Default lvm pool name is **lvm_pool**. >+ An lvm pool is just a *volume group* in lvm language. It means that >+ it is grouping devices and new logical volumes can be created out >+ of the lvm pool. The default lvm pool name is **lvm_pool**. 
> >- Lvm pool is created when **create** or **add** command is used with >- devices specified and non existing pool name. >+ An lvm pool is created when the **create** or **add** commands are >+ used with specified devices and a non existing pool name. > > volume >- Lvm volume is just *logical volume* in lvm language. Lvm volume can >- be created wit **create** command. >+ An lvm volume is just a *logical volume* in lvm language. An lvm >+ volume can be created with the **create** command. > > snapshot > Lvm volumes can be snapshotted as well. When a snapshot is created >- from the lvm volume, new *snapshot* volume is created, which can be >- handled as any other lvm volume. Unlike *btrfs* lvm is able to >+ from the lvm volume, a new *snapshot* volume is created, which can >+ be handled as any other lvm volume. Unlike *btrfs* lvm is able to > distinguish snapshot from regular volume, so there is no need for a > snapshot name to match special pattern. > > device >- Lvm requires *physical device* to be created on the device, but >+ Lvm requires a *physical device* to be created on the device, but > with **ssm** this is transparent for the user. > > > Crypt backend > ************* > >-Crypt backend in **ssm** uses cryptsetup and dm-crypt target to manage >-encrypted volumes. Crypt backend can be used as a regular backend for >-creating encrypted volumes on top of regular block devices, or even >-other volumes (lvm or md volumes for example). Or it can be used to >-create encrypted lvm volumes right away in a single step. >+The crypt backend in **ssm** uses cryptsetup and dm-crypt target to >+manage encrypted volumes. The crypt backend can be used as a regular >+backend for creating encrypted volumes on top of regular block >+devices, or even other volumes (lvm or md volumes for example). Or it >+can be used to create encrypted lvm volumes right away in a single >+step. > > Only volumes can be created with crypt backend. This backend does not > support pooling and does not require special devices. > > pool >- Crypt backend does not support pooling it is not possible to create >- crypt pool or add a device into a pool. >+ The crypt backend does not support pooling, and it is not possible >+ to create crypt pool or add a device into a pool. > > volume >- Volume in crypt backend is the volume created by dm-crypt which >- represent the data on the original encrypted device in unencrypted >- form. Crypt backend does not support pooling, so only one device >- can be used to create crypt volume. It also does not support raid >- or any device concatenation. >- >- Currently two modes, or extensions are supported luks and plain. >- Luks is used by default.For more information about the extensions >+ A volume in the crypt backend is the volume created by dm-crypt >+ which represents the data on the original encrypted device in >+ unencrypted form. The crypt backend does not support pooling, so >+ only one device can be used to create crypt volume. It also does >+ not support raid or any device concatenation. >+ >+ Currently two modes, or extensions are supported: luks and plain. >+ Luks is used by default. For more information about the extensions, > please see **cryptsetup** manual page. > > snapshot >- Crypt backend does not support snapshotting, however if the >- encrypted volume is created on top of the lvm volume, the lvm >- volume itself can be snapshotted. The snapshot can be then opened >- by using **cryptsetup**. 
It is possible that this might change in >- the future so that **ssm** will be able to activate the volume >- directly without the extra step. >+ The crypt backend does not support snapshotting, however if the >+ encrypted volume is created on top of an lvm volume, the lvm volume >+ itself can be snapshotted. The snapshot can be then opened by using >+ **cryptsetup**. It is possible that this might change in the future >+ so that **ssm** will be able to activate the volume directly >+ without the extra step. > > device >- Crypt backend does not require any special device to be created on. >+ The crypt backend does not require a special device to be created >+ on. > > > Environment variables >@@ -371,22 +381,22 @@ Environment variables > > SSM_DEFAULT_BACKEND > Specify which backend will be used by default. This can be >- overridden by specifying **-b** or **--backend** argument. >- Currently only *lvm* and *btrfs* is supported. >+ overridden by specifying the **-b** or **--backend** argument. >+ Currently only *lvm* and *btrfs* are supported. > > SSM_LVM_DEFAULT_POOL >- Name of the default lvm pool to be used if **-p** or **--pool** >+ Name of the default lvm pool to be used if the **-p** or **--pool** > argument is omitted. > > SSM_BTRFS_DEFAULT_POOL >- Name of the default btrfs pool to be used if **-p** or **--pool** >- argument is omitted. >+ Name of the default btrfs pool to be used if the **-p** or >+ **--pool** argument is omitted. > > SSM_PREFIX_FILTER >- When this is set **ssm** will filter out all devices, volumes and >- pools which name does not start with this prefix. It is used mainly >- in **ssm** test suite to make sure that we do not scramble local >- system configuration. >+ When this is set, **ssm** will filter out all devices, volumes and >+ pools whose name does not start with this prefix. It is used mainly >+ in the **ssm** test suite to make sure that we do not scramble the >+ local system configuration. > > > Quick examples >@@ -418,8 +428,9 @@ List system storage: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ------------------------------------------------------------------------------ > >-Creating a volume of defined size with the defined file system. The >-default back-end is set to lvm and lvm default pool name is lvm_pool: >+Create a volume of the defined size with the defined file system. 
The >+default back-end is set to lvm and the lvm default pool name (volume >+group) is lvm_pool: > > # ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1 > >@@ -428,17 +439,18 @@ volume to 10GB: > > # ssm resize -s-5G /dev/lvm_pool/lvol001 > >-Resize the volume to 100G, but it would require to add more devices >-into the pool: >+Resize the volume to 100G, but it may require adding more devices into >+the pool: > >- # ssm resize -s 25G /dev/lvm_pool/lvol001 /dev/loop2 >+ # ssm resize -s 100G /dev/lvm_pool/lvol001 /dev/loop2 > >-Now we can try to create new lvm volume named 'myvolume' from the >-remaining pool space with xfs file system and mount it to /mnt/test1: >+Now we can try to create a new lvm volume named 'myvolume' from the >+remaining pool space with the xfs file system and mount it to >+/mnt/test1: > > # ssm create --fs xfs --name myvolume /mnt/test1 > >-List all volumes with file system: >+List all volumes with file systems: > > # ssm list filesystems > ----------------------------------------------------------------------------------------------- >@@ -451,19 +463,19 @@ List all volumes with file system: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ----------------------------------------------------------------------------------------------- > >-You can then easily remove the old volume by: >+You can then easily remove the old volume with: > > # ssm remove /dev/lvm_pool/lvol001 > >-Now lest try to create btrfs volume. Btrfs is separate backend, not >-just a file system. That is because btrfs itself have integrated >-volume manager. Defaul btrfs pool name is btrfs_pool.: >+Now let's try to create a btrfs volume. Btrfs is a separate backend, >+not just a file system. That is because btrfs itself has an integrated >+volume manager. The default btrfs pool name is btrfs_pool.: > > # ssm -b btrfs create /dev/loop3 /dev/loop4 > >-Now create we btrfs subvolumes. Note that btrfs file system has to be >-mounted in order to create subvolumes. However ssm will handle it for >-you.: >+Now we create btrfs subvolumes. Note that the btrfs file system has to >+be mounted in order to create subvolumes. However ssm will handle this >+for you.: > > # ssm create -p btrfs_pool > # ssm create -n new_subvolume -p btrfs_pool >@@ -502,10 +514,10 @@ you.: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ----------------------------------------------------------------------------------------------- > >-Now let's free up some of the loop devices so we cat try to add them >-into then btrfs_pool. So we'll simply remove lvm mvolume and resize >-lvol001 so we can remove /dev/loop2. Note that myvolume is mounted so >-we have to unmount it first.: >+Now let's free up some of the loop devices so that we can try to add >+them into the btrfs_pool. So we'll simply remove lvm myvolume and >+resize lvol001 so we can remove /dev/loop2. Note that myvolume is >+mounted so we have to unmount it first.: > > # umount /mnt/test1 > # ssm remove /dev/lvm_pool/myvolume >@@ -516,8 +528,8 @@ Add device to the btrfs file system: > > # ssm add /dev/loop2 -p btrfs_pool > >-Set' see what happend. Note that to actually see btrfs subvolumes you >-have to mount the file system first: >+Now let's see what happened. 
Note that to actually see btrfs >+subvolumes you have to mount the file system first: > > # mount -L btrfs_pool /mnt/test1/ > # ssm list volumes >@@ -533,9 +545,9 @@ have to mount the file system first: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ------------------------------------------------------------------------------------------------------------------------ > >-Remove the whole lvm pool and one of the btrfs subvolume, and one >-unused device from the btrfs pool btrfs_loop3. Note that with btrfs, >-pool have the same name as the volume: >+Remove the whole lvm pool, one of the btrfs subvolumes, and one unused >+device from the btrfs pool btrfs_loop3. Note that with btrfs, pools >+have the same name as their volumes: > > # ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/ > >@@ -546,14 +558,14 @@ Snapshots can also be done with ssm: > > With lvm, you can also create snapshots: > >- root# ssm create -s 10G /dev/loop[01] >+ # ssm create -s 10G /dev/loop[01] > # ssm snapshot /dev/lvm_pool/lvol001 > > Now list all snapshots. Note that btrfs snapshots are actually just > subvolumes with some blocks shared with the original subvolume, so >-there currently no way to distinguish between those. ssm is using a >-little trick to search for name patters to recognize snapshots, so if >-you specify your own name for the snapshot ssm will not recognize it >+there is currently no way to distinguish between those. ssm is using a >+little trick to search for name patterns to recognize snapshots, so if >+you specify your own name for the snapshot, ssm will not recognize it > as snapshot, but rather as regular volume (subvolume). This problem > does not exist with lvm.: > >@@ -574,10 +586,10 @@ To install System Storage Manager into your system simply run: > python setup.py install > > as root in the System Storage Manager directory. Make sure that your >-system configuration meet the *requirements* in order for ssm to work >+system configuration meets the *requirements* in order for ssm to work > correctly. > >-Note that you can run **ssm** even without installation from using the >+Note that you can run **ssm** even without installation by using the > local sources with: > > bin/ssm.local >@@ -587,12 +599,12 @@ Requirements > ************ > > Python 2.6 or higher is required to run this tool. System Storage >-Manager can only be run as root since most of the commands requires >+Manager can only be run as root since most of the commands require > root privileges. > >-There are other requirements listed bellow, but note that you do not >-necessarily need all dependencies for all backends, however if some of >-the tools required by the backend is missing, the backend would not >+There are other requirements listed below, but note that you do not >+necessarily need all dependencies for all backends. However if some of >+the tools required by a backend are missing, that backend will not > work. > > >@@ -665,7 +677,7 @@ Crypt backend > For developers > ************** > >-We are accepting patches! If you're interested contributing to the >+We are accepting patches! If you're interested in contributing to the > System Storage Manager code, just checkout the git repository located > on SourceForge. Please, base all of your work on the "devel" branch > since it is more up-to-date and it will save us some work when merging >@@ -680,9 +692,9 @@ are appreciated. See *Mailing list section* section. 
> Tests > ===== > >-System Storage Manager contains regression testing suite to make sure >-that we do not break thing that should already work. And we recommend >-every developer to run tests before sending patches: >+System Storage Manager contains a regression testing suite to make >+sure that we do not break things that should already work. We >+recommend that every developer run these tests before sending patches: > > python test.py > >@@ -699,100 +711,102 @@ Tests in System Storage Manager are divided into four levels. > configuration in any way. It actually should not invoke any shell > command, and if it does it's a bug. > >-3. Second part of unittests is backend testing. We are mainly testing >- whether ssm commands result in proper backend operations. It does >- not require root permissions and it does not touch your system >- configuration in any way. It actually should not invoke any shell >- command and if it does it's a bug. >+3. Second part of unittests is backend testing. We are mainly >+ testing whether ssm commands result in proper backend operations. >+ It does not require root permissions and it does not touch your >+ system configuration in any way. It actually should not invoke any >+ shell command and if it does it's a bug. > >-4. And finally there are real bash tests located in "tests/bashtests". >- Bash tests are divided into files. Each file tests one command for >- one backend and it containing series of test cases followed by >- checks whether the command created the expected result. In order to >- test real system commands we have to create system device to test >- on and not touch any of the existing system configuration. >+4. And finally there are real bash tests located in >+ "tests/bashtests". Bash tests are divided into files. Each file >+ tests one command for one backend and it contains a series of test >+ cases followed by checks as to whether the command created the >+ expected result. In order to test real system commands we have to >+ create a system device to test on and not touch the existing system >+ configuration. > > Before each test a number of devices are created using *dmsetup* in > the test directory. These devices will be used in test cases >- instead of real devices. Real operation are performed in those >- devices as it would on the real system devices. It implies that >- this phase requires root privileges and it would not be run >- otherwise. In order to make sure that **ssm** does not touch any >- existing system configuration, each device, poor and volume name is >- include special prefix and SSM_PREFIX_FILTER environment variable >- is set to make **ssm** to exclude all items which does not match >- this filter. >- >- Even though we tried hard to make sure that the bash tests does not >- change any of your system configuration the recommendation is >- **not** to run tests as with root privileges on your work or >- production system, but rather run it on your testing machine. >+ instead of real devices. Real operations are performed in those >+ devices as they would be on the real system devices. This phase >+ requires root privileges and it will not be run otherwise. In order >+ to make sure that **ssm** does not touch any existing system >+ configuration, each device, pool and volume name includes a special >+ prefix, and the SSM_PREFIX_FILTER environment variable is set to >+ make **ssm** to exclude all items which does not match this special >+ prefix. 
>+ >+ Even though we tried hard to make sure that the bash tests do not >+ change your system configuration, we recommend you **not** to run >+ tests with root privileges on your work or production system, but >+ rather to run them on your testing machine. > > If you change or create new functionality, please make sure that it is > covered by the System Storage Manager regression test suite to make > sure that we do not break it unintentionally. > >-Important: Please, make sure to run full tests before you send a patch to the >- mailing list. To do so, simply run "python test.py" as root on your >- test machine. >+Important: Please, make sure to run full tests before you send a >+ patch to the mailing list. To do so, simply run "python test.py" as >+ root on your test machine. > > > Documentation > ============= > >-System Storage Manager documentation is stored in "doc/" directory. >-The documentation is build using **sphinx** software which help us not >-to duplicate texts for different type of documentation (man page, html >-pages, readme). If you are going to modify documentation, please make >-sure not to modify manual page, html pages or README directly, but >-rather modify "doc/*.rst" and "doc/src/*.rst" files accordingly so the >-change is propagated to all documents. >+System Storage Manager documentation is stored in the "doc/" >+directory. The documentation is built using **sphinx** software which >+helps us not to duplicate text for different types of documentation >+(man page, html pages, readme). If you are going to modify >+documentation, please make sure not to modify manual page, html pages >+or README directly, but rather modify the "doc/*.rst" and >+"doc/src/*.rst" files accordingly so that the change is propagated to >+all documents. > > Moreover, parts of the documentation such as *synopsis* or ssm command >-*options* are parsed directly from the ssm help output. It means that >-when you're going to add or change argument into **ssm** the only >-thing you have to do is to add or change it in the "ssmlib/main.py" >-source code and then run "make dist" in the "doc/" directory and all >-the documents should be updated automatically. >+*options* are parsed directly from the ssm help output. This means >+that when you're going to add or change arguments into **ssm** the >+only thing you have to do is to add or change it in the >+"ssmlib/main.py" source code and then run "make dist" in the "doc/" >+directory and all the documents should be updated automatically. > >-Important: Please make sure you update the documentation when you add or change >- **ssm** functionality if the format of the change requires it. Then >- regenerate all the documents using "make dist" and include changes >- in the patch. >+Important: Please make sure you update the documentation when you >+ add or change **ssm** functionality if the format of the change >+ requires it. Then regenerate all the documents using "make dist" and >+ include changes in the patch. > > > Mailing list > ============ > > System Storage Manager developers communicate via the mailing list. >-Address of our mailing list is storagemanager- >+The address of our mailing list is storagemanager- > devel@lists.sourceforge.net and you can subscribe on the SourceForge > project page https://lists.sourceforge.net/lists/listinfo > /storagemanager-devel. Mailing list archives can be found here > http://sourceforge.net/mailarchive/forum.php?forum_name > =storagemanager-devel. 
> >-This is also the list where to send patches and where the review >-process is happening. We do not have separate *user* mailing list, so >-feel free to drop your questions there as well. >+This is also the list where patches are sent and where the review >+process is happening. We do not have a separate *user* mailing list, >+so feel free to drop your questions there as well. > > > Posting patches > =============== > > As already mentioned, we are accepting patches! And we are very happy >-for every contribution. If you're going to send a path in, please make >-sure to follow some simple rules: >+for every contribution. If you're going to send a patch in, please >+make sure to follow some simple rules: > > 1. Before you're going to post a patch, please run our regression > testing suite to make sure that your change does not break someone >- else work. See *Tests section* >+ else's work. See *Tests section* > >-2. If you're making a change that might require documentation update, >- please update the documentation as well. See *Documentation >+2. If you're making a change that might require documentation >+ update, please update the documentation as well. See *Documentation > section* > >-3. Make sure your patch have all the requisites such as *short >+3. Make sure your patch has all the requisites such as a *short > description* preferably 50 characters long at max describing the > main idea of the change. *Long description* describing what was > changed with and why and finally Signed-off-by tag. >@@ -801,6 +815,6 @@ sure to follow some simple rules: > the patch inlined in the email body. It is much better for review > process. > >-Hint: You can use **git** to do all the work for you. "git format-patch" >- and "git send-email" will help you with creating and sending the >- patch. >+Hint: You can use **git** to do all the work for you. "git format- >+ patch" and "git send-email" will help you with creating and sending >+ the patch. >diff --git a/doc/_build/man/ssm.8 b/doc/_build/man/ssm.8 >index ab777bf..ead01d2 100644 >--- a/doc/_build/man/ssm.8 >+++ b/doc/_build/man/ssm.8 >@@ -1,6 +1,6 @@ > .\" Man page generated from reStructuredText. > . >-.TH "SSM" "8" "October 02, 2013" "0.4" "System Storage Manager" >+.TH "SSM" "8" "June 30, 2014" "0.4" "System Storage Manager" > .SH NAME > ssm \- System Storage Manager: a single tool to manage your storage > . >@@ -30,33 +30,6 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] > .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] > .in \\n[rst2man-indent\\n[rst2man-indent-level]]u > .. >-. >-.nr rst2man-indent-level 0 >-. >-.de1 rstReportMargin >-\\$1 \\n[an-margin] >-level \\n[rst2man-indent-level] >-level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] >-- >-\\n[rst2man-indent0] >-\\n[rst2man-indent1] >-\\n[rst2man-indent2] >-.. >-.de1 INDENT >-.\" .rstReportMargin pre: >-. RS \\$1 >-. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] >-. nr rst2man-indent-level +1 >-.\" .rstReportMargin post: >-.. >-.de UNINDENT >-. RE >-.\" indent \\n[an-margin] >-.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] >-.nr rst2man-indent-level -1 >-.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] >-.in \\n[rst2man-indent\\n[rst2man-indent-level]]u >-.. > .SH SYNOPSIS > .sp > \fBssm\fP [\fB\-h\fP] [\fB\-\-version\fP] [\fB\-v\fP] [\fB\-f\fP] [\fB\-b\fP BACKEND] [\fB\-n\fP] {check,resize,create,list,add,remove,snapshot,mount} ... 
>@@ -78,9 +51,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] > \fBssm\fP \fBmount\fP [\fB\-h\fP] [\fB\-o\fP OPTIONS] \fBvolume\fP directory > .SH DESCRIPTION > .sp >-System Storage Manager provides easy to use command line interface to manage >-your storage using various technologies like lvm, btrfs, encrypted volumes and >-more. >+System Storage Manager provides an easy to use command line interface to >+manage your storage using various technologies like lvm, btrfs, encrypted >+volumes and more. > .sp > In more sophisticated enterprise storage environments, management with Device > Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is >@@ -123,38 +96,38 @@ debugging purposes. > .SH SYSTEM STORAGE MANAGER COMMANDS > .SS Introduction > .sp >-System Storage Manager have several commands you can specify on the command >-line as a first argument to the ssm. They all have specific use and its own >-arguments, but global ssm arguments are propagated to all commands. >+System Storage Manager has several commands that you can specify on the >+command line as a first argument to ssm. They all have a specific use and >+their own arguments, but global ssm arguments are propagated to all commands. > .SS Create command > .sp > \fBssm\fP \fBcreate\fP [\fB\-h\fP] [\fB\-s\fP SIZE] [\fB\-n\fP NAME] [\fB\-\-fstype\fP FSTYPE] [\fB\-r\fP LEVEL] [\fB\-I\fP STRIPESIZE] [\fB\-i\fP STRIPES] [\fB\-p\fP POOL] [\fB\-e\fP [{luks,plain}]] [\fBdevice\fP [\fBdevice\fP ...]] [mount] > .sp >-This command creates a new volume with defined parameters. If \fBdevice\fP is >-provided it will be used to create a volume, hence it will be added into the >-\fBpool\fP prior the volume creation (See \fIAdd command section\fP). More devices can be used to create a volume. >+This command creates a new volume with defined parameters. If a \fBdevice\fP is >+provided it will be used to create the volume, hence it will be added into the >+\fBpool\fP prior to volume creation (See \fIAdd command section\fP). More than one device can be used to create a volume. > .sp >-If the \fBdevice\fP is already used in the different pool, then \fBssm\fP will >+If the \fBdevice\fP is already being used in a different pool, then \fBssm\fP will > ask you whether you want to remove it from the original pool. If you decline, >-or the removal fails, then the \fBvolume\fP creation fails if the \fISIZE\fP was >-not provided. On the other hand, if the \fISIZE\fP is provided and some devices >-can not be added to the \fBpool\fP the volume creation might succeed if there >+or the removal fails, then the \fBvolume\fP creation fails if the \fISIZE\fP was not >+provided. On the other hand, if the \fISIZE\fP is provided and some devices can >+not be added to the \fBpool\fP, the volume creation might still succeed if there > is enough space in the \fBpool\fP\&. > .sp >-\fIPOOL\fP name can be specified as well. If the pool exists new volume will be >-created from that pool (optionally adding \fBdevice\fP into the pool). However >-if the \fIPOOL\fP does not exist \fBssm\fP will attempt to create a new pool with >-provided \fBdevice\fP and then create a new volume from this pool. If >-\fB\-\-backend\fP argument is omitted, the default \fBssm\fP backend will be used. >-Default backend is \fIlvm\fP\&. >+The \fIPOOL\fP name can be specified as well. If the pool exists, a new volume >+will be created from that pool (optionally adding \fBdevice\fP into the pool). 
>+However if the \fIPOOL\fP does not exist, then \fBssm\fP will attempt to create a >+new pool with the provided \fBdevice\fP, and then create a new volume from this >+pool. If the \fB\-\-backend\fP argument is omitted, the default \fBssm\fP backend >+will be used. The default backend is \fIlvm\fP\&. > .sp >-\fBssm\fP also supports creating RAID configuration, however some back\-ends >-might not support all the levels, or it might not support RAID at all. In >+\fBssm\fP also supports creating a RAID configuration, however some back\-ends >+might not support all RAID levels, or may not even support RAID at all. In > this case, volume creation will fail. > .sp >-If \fBmount\fP point is provided \fBssm\fP will attempt to mount the volume after >-it is created. However it will fail if mountable file system is not present >-on the volume. >+If a \fBmount\fP point is provided, \fBssm\fP will attempt to mount the volume >+after it is created. However it will fail if mountable file system is not >+present on the volume. > .INDENT 0.0 > .TP > .B \-h\fP,\fB \-\-help >@@ -213,19 +186,19 @@ specified. > .sp > \fBssm\fP \fBlist\fP [\fB\-h\fP] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}] > .sp >-List informations about all detected devices, pools, volumes and snapshots found >-in the system. \fBlist\fP command can be used either alone to list all the >-information, or you can request specific section only. >+Lists information about all detected devices, pools, volumes and snapshots found >+on the system. The \fBlist\fP command can be used either alone to list all of the >+information, or you can request specific sections only. > .sp >-Following sections can be specified: >+The following sections can be specified: > .INDENT 0.0 > .TP > .B {volumes | vol} > List information about all \fBvolumes\fP found in the system. > .TP > .B {devices | dev} >-List information about all \fBdevices\fP found in the system. Some devices are >-intentionally hidden, like for example cdrom, or DM/MD devices since those >+List information about all \fBdevices\fP found on the system. Some devices >+are intentionally hidden, like for example cdrom or DM/MD devices since those > are actually listed as volumes. > .TP > .B {pools | pool} >@@ -236,12 +209,12 @@ List information about all volumes containing \fBfilesystems\fP found in > the system. > .TP > .B {snapshots | snap} >-List information about all \fBsnapshots\fP found in the system. Note that some >-back\-ends does not support snapshotting and some can not distinguish between >-snapshot and regular volume. in this case \fBssm\fP will try to recognize >-volume name in order to identify \fBsnapshot\fP, but if the \fBssm\fP regular >-expression does not match the snapshot pattern, this snapshot will not be >-recognized. >+List information about all \fBsnapshots\fP found in the system. Note that >+some back\-ends do not support snapshotting and some cannot distinguish >+snapshot from regular volumes. In this case, \fBssm\fP will try to recognize the >+volume name in order to identify a \fBsnapshot\fP, but if the \fBssm\fP regular >+expression does not match the snapshot pattern, the problematic snapshot will >+not be recognized. > .UNINDENT > .INDENT 0.0 > .TP >@@ -252,25 +225,26 @@ show this help message and exit > .sp > \fBssm\fP \fBremove\fP [\fB\-h\fP] [\fB\-a\fP] [\fBitems\fP [\fBitems\fP ...]] > .sp >-This command removes \fBitem\fP from the system. Multiple items can be specified. 
>-If the \fBitem\fP can not be removed for some reason, it will be skipped. >+This command removes an \fBitem\fP from the system. Multiple items can be >+specified. If the \fBitem\fP cannot be removed for some reason, it will be >+skipped. > .sp >-\fBitem\fP can represent: >+An \fBitem\fP can be any of the following: > .INDENT 0.0 > .TP > .B device >-Remove \fBdevice\fP from the pool. Note that this can not be done in some >-cases where the device is used by pool. You can use \fB\-f\fP argument to >+Remove a \fBdevice\fP from the pool. Note that this cannot be done in some >+cases where the device is being used by the pool. You can use the \fB\-f\fP argument to > \fIforce\fP removal. If the device does not belong to any pool, it will be > skipped. > .TP > .B pool >-Remove the \fBpool\fP from the system. This will also remove all volumes >+Remove a \fBpool\fP from the system. This will also remove all volumes > created from that pool. > .TP > .B volume >-Remove the \fBvolume\fP from the system. Note that this will fail if the >-\fBvolume\fP is mounted and it can not be \fIforced\fP with \fB\-f\fP\&. >+Remove a \fBvolume\fP from the system. Note that this will fail if the >+\fBvolume\fP is mounted and cannot be \fIforced\fP with \fB\-f\fP\&. > .UNINDENT > .INDENT 0.0 > .TP >@@ -284,16 +258,16 @@ Remove all pools in the system. > .sp > \fBssm\fP \fBresize\fP [\fB\-h\fP] [\fB\-s\fP SIZE] \fBvolume\fP [\fBdevice\fP [\fBdevice\fP ...]] > .sp >-Change size of the \fBvolume\fP and file system. If there is no file system only >-the \fBvolume\fP itself will be resized. You can specify \fBdevice\fP to add into >-the \fBvolume\fP pool prior the resize. Note that \fBdevice\fP will only be added >+Change size of the \fBvolume\fP and file system. If there is no file system, only >+the \fBvolume\fP itself will be resized. You can specify a \fBdevice\fP to add into >+the \fBvolume\fP pool prior the resize. Note that the \fBdevice\fP will only be added > into the pool if the \fBvolume\fP size is going to grow. > .sp >-If the \fBdevice\fP is already used in the different pool, then \fBssm\fP will >-ask you whether you want to remove it from the original pool. >+If the \fBdevice\fP is already used in a different pool, then \fBssm\fP will >+ask you whether or not you want to remove it from the original pool. > .sp >-In some cases file system has to be mounted in order to resize. This will be >-handled by \fBssm\fP automatically by mounting the \fBvolume\fP temporarily. >+In some cases, the file system has to be mounted in order to resize. This will >+be handled by \fBssm\fP automatically by mounting the \fBvolume\fP temporarily. > .sp > Note that resizing btrfs subvolume is not supported, only the whole file > system can be resized. >@@ -319,9 +293,9 @@ Check the file system consistency on the \fBvolume\fP\&. You can specify multipl > volumes to check. If there is no file system on the \fBvolume\fP, this \fBvolume\fP > will be skipped. > .sp >-In some cases file system has to be mounted in order to check the file system >-This will be handled by \fBssm\fP automatically by mounting the \fBvolume\fP >-temporarily. >+In some cases the file system has to be mounted in order to check the file >+system. This will be handled by \fBssm\fP automatically by mounting the >+\fBvolume\fP temporarily. 
> .INDENT 0.0 > .TP > .B \-h\fP,\fB \-\-help >@@ -331,12 +305,12 @@ show this help message and exit > .sp > \fBssm\fP \fBsnapshot\fP [\fB\-h\fP] [\fB\-s\fP SIZE] [\fB\-d\fP DEST | \fB\-n\fP NAME] volume > .sp >-Take a snapshot of existing \fBvolume\fP\&. This operation will fail if back\-end >-which the \fBvolume\fP belongs to does not support snapshotting. Note that >-you can not specify both \fINAME\fP and \fIDESC\fP since those options are mutually >-exclusive. >+Take a snapshot of an existing \fBvolume\fP\&. This operation will fail if the >+back\-end to which the \fBvolume\fP belongs to does not support snapshotting. >+Note that you cannot specify both \fINAME\fP and \fIDESC\fP since those options are >+mutually exclusive. > .sp >-In some cases file system has to be mounted in order to take a snapshot of >+In some cases the file system has to be mounted in order to take a snapshot of > the \fBvolume\fP\&. This will be handled by \fBssm\fP automatically by mounting the > \fBvolume\fP temporarily. > .INDENT 0.0 >@@ -366,13 +340,13 @@ specified default backend policy will be performed. > .sp > \fBssm\fP \fBadd\fP [\fB\-h\fP] [\fB\-p\fP POOL] \fBdevice\fP [\fBdevice\fP ...] > .sp >-This command adds \fBdevice\fP into the pool. The \fBdevice\fP will not be added if >-it\(aqs already part of different pool by default, but user will be asked whether >-to remove the device from it\(aqs pool. When multiple devices are provided, >-all of them are added into the pool. If one of the devices can not be added >-into the pool for any reason, add command will fail. If no pool is specified, >-default pool will be chosen. In the case of non existing pool, it will be >-created using provided devices. >+This command adds a \fBdevice\fP into the pool. By default, the \fBdevice\fP will >+not be added if it\(aqs already a part of a different pool, but the user will be >+asked whether or not to remove the device from its pool. When multiple devices >+are provided, all of them are added into the pool. If one of the devices >+cannot be added into the pool for any reason, the add command will fail. If no >+pool is specified, the default pool will be chosen. In the case of a non >+existing pool, it will be created using the provided devices. > .INDENT 0.0 > .TP > .B \-h\fP,\fB \-\-help >@@ -386,13 +360,13 @@ pool is used. > .sp > \fBssm\fP \fBmount\fP [\fB\-h\fP] [\fB\-o\fP OPTIONS] \fBvolume\fP directory > .sp >-This command will mount the \fBvolume\fP at specified \fBdirectory\fP\&. The >-\fBvolume\fP can be specified in the same way as with \fBmount(8)\fP, however >-in addition one can also specify \fBvolume\fP in the format as it appear in >-the \fBssm list\fP table. >+This command will mount the \fBvolume\fP at the specified \fBdirectory\fP\&. The >+\fBvolume\fP can be specified in the same way as with \fBmount(8)\fP, however in >+addition, one can also specify a \fBvolume\fP in the format as it appears in the >+\fBssm list\fP table. > .sp > For example, instead of finding out what the device and subvolume id of the >-btrfs subvolume "btrfs_pool:vol001" is in order to mount it, on can simply >+btrfs subvolume "btrfs_pool:vol001" is in order to mount it, one can simply > call \fBssm mount btrfs_pool:vol001 /mnt/test\fP\&. > .sp > One can also specify \fIOPTIONS\fP in the same way as with \fBmount(8)\fP\&. >@@ -409,14 +383,14 @@ equivalent to the same mount(8) option. 
> .SH BACK-ENDS > .SS Introduction > .sp >-Ssm aims to create unified user interface for various technologies like Device >+Ssm aims to create a unified user interface for various technologies like Device > Mapper (dm), Btrfs file system, Multiple Devices (md) and possibly more. In > order to do so we have a core abstraction layer in \fBssmlib/main.py\fP\&. This > abstraction layer should ideally know nothing about the underlying technology, >-but rather comply with \fBdevice\fP, \fBpool\fP and \fBvolume\fP abstraction. >+but rather comply with \fBdevice\fP, \fBpool\fP and \fBvolume\fP abstractions. > .sp > Various backends can be registered in \fBssmlib/main.py\fP in order to handle >-specific storage technology implementing methods like \fIcreate\fP, \fIsnapshot\fP, or >+specific storage technology, implementing methods like \fIcreate\fP, \fIsnapshot\fP, or > \fIremove\fP volumes and pools. The core will then call these methods to manage > the storage without needing to know what lies underneath it. There are already > several backends registered in ssm. >@@ -432,61 +406,60 @@ is what it means from the btrfs point of view: > .INDENT 0.0 > .TP > .B pool >-Pool is actually a btrfs file system itself, because it can be extended >-by adding more devices, or shrink by removing devices from it. Subvolumes >-and snapshots can also be created. When the new btrfs pool should be created >-\fBssm\fP simply creates a btrfs file system, which means that every new >-btrfs pool has one volume of the same name as the pool itself which can >-not be removed without removing the entire pool. Default btrfs pool name is >-\fBbtrfs_pool\fP\&. >-.sp >-When creating new btrfs pool, the name of the pool is used as the file >-system label. If there is already existing btrfs file system in the system >-without a label, btrfs pool name will be generated for internal use >-in the following format "btrfs_{device base name}". >-.sp >-Btrfs pool is created when \fBcreate\fP or \fBadd\fP command is used with >-devices specified and non existing pool name. >+A pool is actually a btrfs file system itself, because it can be extended >+by adding more devices, or shrunk by removing devices from it. Subvolumes >+and snapshots can also be created. When the new btrfs pool should be >+created, \fBssm\fP simply creates a btrfs file system, which means that every >+new btrfs pool has one volume of the same name as the pool itself which can >+not be removed without removing the entire pool. The default btrfs pool >+name is \fBbtrfs_pool\fP\&. >+.sp >+When creating a new btrfs pool, the name of the pool is used as the file >+system label. If there is an already existing btrfs file system in the system >+without a label, a btrfs pool name will be generated for internal use in the >+following format "btrfs_{device base name}". >+.sp >+A btrfs pool is created when the \fBcreate\fP or \fBadd\fP command is used >+with specified devices and non existing pool name. > .TP > .B volume >-Volume in btrfs back\-end is actually just btrfs subvolume with the >-exception of the first volume created on btrfs pool creation, which is >-the file system itself. Subvolumes can only be created on btrfs file >-system when it is mounted, but user does not have to >-worry about that since \fBssm\fP will automatically mount the file >-system temporarily in order to create a new subvolume. >-.sp >-Volume name is used as subvolume path in the btrfs file system and every >-object in this path must exists in order to create a volume. 
Volume name >-for internal tracking and for representing to the user is generated in >-the format "{pool_name}:{volume name}", but volumes can be also referenced >-with its mount point. >-.sp >-Btrfs volumes are only shown in the \fIlist\fP output, when the file system is >+A volume in the btrfs back\-end is actually just btrfs subvolume with the >+exception of the first volume created on btrfs pool creation, which is the >+file system itself. Subvolumes can only be created on the btrfs file system >+when it is mounted, but the user does not have to worry about that since >+\fBssm\fP will automatically mount the file system temporarily in order to >+create a new subvolume. >+.sp >+The volume name is used as subvolume path in the btrfs file system and >+every object in this path must exist in order to create a volume. The volume >+name for internal tracking and that is visible to the user is generated in the >+format "{pool_name}:{volume name}", but volumes can be also referenced by its >+mount point. >+.sp >+The btrfs volumes are only shown in the \fIlist\fP output, when the file system is > mounted, with the exception of the main btrfs volume \- the file system > itself. > .sp >-Also note that btrfs volumes and subvolumes can not be resized. This is >-mainly limitation of the btrfs tools which currently does not work >-reliably. >+Also note that btrfs volumes and subvolumes cannot be resized. This is >+mainly limitation of the btrfs tools which currently do not work reliably. > .sp >-New btrfs volume can be created with \fBcreate\fP command. >+A new btrfs volume can be created with the \fBcreate\fP command. > .TP > .B snapshot >-Btrfs file system support subvolume snapshotting, so you can take a snapshot >-of any btrfs volume in the system with \fBssm\fP\&. However btrfs does not >-distinguish between subvolumes and snapshots, because snapshot actually is >-just a subvolume with some block shared with different subvolume. It means, >-that \fBssm\fP is not able to recognize btrfs snapshot directly, but instead >-it is trying to recognize special name format of the btrfs volume. However, >-if the \fINAME\fP is specified when creating snapshot which does not match the >-special pattern, snapshot will not be recognized by the \fBssm\fP and it will >-be listed as regular btrfs volume. >-.sp >-New btrfs snapshot can be created with \fBsnapshot\fP command. >+The btrfs file system supports subvolume snapshotting, so you can take a >+snapshot of any btrfs volume in the system with \fBssm\fP\&. However btrfs does >+not distinguish between subvolumes and snapshots, because a snapshot is >+actually just a subvolume with some blocks shared with a different subvolume. >+This means, that \fBssm\fP is not able to directly recognize a btrfs snapshot. >+Instead, \fBssm\fP will try to recognize a special name format of the btrfs >+volume that denotes it is a snapshot. However, if the \fINAME\fP is specified when >+creating snapshot which does not match the special pattern, snapshot will not >+be recognized by the \fBssm\fP and it will be listed as regular btrfs volume. >+.sp >+A new btrfs snapshot can be created with the \fBsnapshot\fP command. > .TP > .B device >-Btrfs does not require any special device to be created on. >+Btrfs does not require a special device to be created on. > .UNINDENT > .SS Lvm backend > .sp >@@ -495,32 +468,32 @@ the lvm abstraction. > .INDENT 0.0 > .TP > .B pool >-Lvm pool is just \fIvolume group\fP in lvm language. 
It means that it is >-grouping devices and new logical volumes can be created out of the lvm >-pool. Default lvm pool name is \fBlvm_pool\fP\&. >+An lvm pool is just a \fIvolume group\fP in lvm language. It means that it is >+grouping devices and new logical volumes can be created out of the lvm pool. >+The default lvm pool name is \fBlvm_pool\fP\&. > .sp >-Lvm pool is created when \fBcreate\fP or \fBadd\fP command is used with >-devices specified and non existing pool name. >+An lvm pool is created when the \fBcreate\fP or \fBadd\fP commands are used >+with specified devices and a non existing pool name. > .TP > .B volume >-Lvm volume is just \fIlogical volume\fP in lvm language. Lvm volume can be >-created wit \fBcreate\fP command. >+An lvm volume is just a \fIlogical volume\fP in lvm language. An lvm volume >+can be created with the \fBcreate\fP command. > .TP > .B snapshot > Lvm volumes can be snapshotted as well. When a snapshot is created from >-the lvm volume, new \fIsnapshot\fP volume is created, which can be handled as >+the lvm volume, a new \fIsnapshot\fP volume is created, which can be handled as > any other lvm volume. Unlike \fIbtrfs\fP lvm is able > to distinguish snapshot from regular volume, so there is no need for a > snapshot name to match special pattern. > .TP > .B device >-Lvm requires \fIphysical device\fP to be created on the device, but with >+Lvm requires a \fIphysical device\fP to be created on the device, but with > \fBssm\fP this is transparent for the user. > .UNINDENT > .SS Crypt backend > .sp >-Crypt backend in \fBssm\fP uses cryptsetup and dm\-crypt target to manage >-encrypted volumes. Crypt backend can be used as a regular backend for >+The crypt backend in \fBssm\fP uses cryptsetup and dm\-crypt target to manage >+encrypted volumes. The crypt backend can be used as a regular backend for > creating encrypted volumes on top of regular block devices, or even other > volumes (lvm or md volumes for example). Or it can be used to create > encrypted lvm volumes right away in a single step. >@@ -530,35 +503,35 @@ support pooling and does not require special devices. > .INDENT 0.0 > .TP > .B pool >-Crypt backend does not support pooling it is not possible to create >-crypt pool or add a device into a pool. >+The crypt backend does not support pooling, and it is not possible to >+create crypt pool or add a device into a pool. > .TP > .B volume >-Volume in crypt backend is the volume created by dm\-crypt which >-represent the data on the original encrypted device in unencrypted form. >-Crypt backend does not support pooling, so only one device can be used >+A volume in the crypt backend is the volume created by dm\-crypt which >+represents the data on the original encrypted device in unencrypted form. >+The crypt backend does not support pooling, so only one device can be used > to create crypt volume. It also does not support raid or any device > concatenation. > .sp >-Currently two modes, or extensions are supported luks and plain. Luks >-is used by default.For more information about the extensions please see >+Currently two modes, or extensions are supported: luks and plain. Luks >+is used by default. For more information about the extensions, please see > \fBcryptsetup\fP manual page. 
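As an illustrative sketch only, assuming the create command's -e/--encrypt option accepts the luks and plain extensions named above (check ssm create --help before relying on it), an encrypted volume might be created with::

    # ssm create -e luks /dev/sdc /mnt/secret

Only a single device is given, since the crypt backend does not pool devices; the trailing mount point is optional.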
> .TP > .B snapshot >-Crypt backend does not support snapshotting, however if the encrypted >-volume is created on top of the lvm volume, the lvm volume itself can >+The crypt backend does not support snapshotting, however if the encrypted >+volume is created on top of an lvm volume, the lvm volume itself can > be snapshotted. The snapshot can be then opened by using \fBcryptsetup\fP\&. > It is possible that this might change in the future so that \fBssm\fP will > be able to activate the volume directly without the extra step. > .TP > .B device >-Crypt backend does not require any special device to be created on. >+The crypt backend does not require a special device to be created on. > .UNINDENT > .SS MD backend > .sp > MD backend in \fBssm\fP is currently limited to only gather the information > about MD volumes in the system. You can not create or manage MD volumes >-or pools, but it will be extended in the future. >+or pools, but this functionality will be extended in the future. > .SH EXAMPLES > .sp > \fBList\fP system storage information: >@@ -585,7 +558,7 @@ or pools, but it will be extended in the future. > .UNINDENT > .UNINDENT > .sp >-\fBCreate\fP a new 100GB \fBvolume\fP with default lvm backend using \fI/dev/sda\fP and >+\fBCreate\fP a new 100GB \fBvolume\fP with the default lvm backend using \fI/dev/sda\fP and > \fI/dev/sdb\fP with xfs file system: > .INDENT 0.0 > .INDENT 3.5 >@@ -598,7 +571,7 @@ or pools, but it will be extended in the future. > .UNINDENT > .UNINDENT > .sp >-\fBCreate\fP a new \fBvolume\fP with btrfs backend using \fI/dev/sda\fP and \fI/dev/sdb\fP and >+\fBCreate\fP a new \fBvolume\fP with a btrfs backend using \fI/dev/sda\fP and \fI/dev/sdb\fP and > let the volume to be RAID 1: > .INDENT 0.0 > .INDENT 3.5 >@@ -611,7 +584,7 @@ let the volume to be RAID 1: > .UNINDENT > .UNINDENT > .sp >-Using lvm backend \fBcreate\fP a RAID 0 \fBvolume\fP with devices \fI/dev/sda\fP and >+Using the lvm backend \fBcreate\fP a RAID 0 \fBvolume\fP with devices \fI/dev/sda\fP and > \fI/dev/sdb\fP with 128kB stripe size, ext4 file system and mount it on > \fI/home\fP: > .INDENT 0.0 >@@ -703,21 +676,21 @@ Using lvm backend \fBcreate\fP a RAID 0 \fBvolume\fP with devices \fI/dev/sda\fP > .TP > .B SSM_DEFAULT_BACKEND > Specify which backend will be used by default. This can be overridden by >-specifying \fB\-b\fP or \fB\-\-backend\fP argument. Currently only \fIlvm\fP and \fIbtrfs\fP >-is supported. >+specifying the \fB\-b\fP or \fB\-\-backend\fP argument. Currently only \fIlvm\fP and >+\fIbtrfs\fP are supported. > .TP > .B SSM_LVM_DEFAULT_POOL >-Name of the default lvm pool to be used if \fB\-p\fP or \fB\-\-pool\fP argument >-is omitted. >+Name of the default lvm pool to be used if the \fB\-p\fP or \fB\-\-pool\fP >+argument is omitted. > .TP > .B SSM_BTRFS_DEFAULT_POOL >-Name of the default btrfs pool to be used if \fB\-p\fP or \fB\-\-pool\fP argument >-is omitted. >+Name of the default btrfs pool to be used if the \fB\-p\fP or \fB\-\-pool\fP >+argument is omitted. > .TP > .B SSM_PREFIX_FILTER >-When this is set \fBssm\fP will filter out all devices, volumes and pools >-which name does not start with this prefix. It is used mainly in \fBssm\fP >-test suite to make sure that we do not scramble local system >+When this is set, \fBssm\fP will filter out all devices, volumes and pools >+whose name does not start with this prefix. It is used mainly in the \fBssm\fP >+test suite to make sure that we do not scramble the local system > configuration. 
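For instance, a session combining these variables might look like the following; the pool name and prefix are made up for illustration::

    # export SSM_DEFAULT_BACKEND=btrfs
    # export SSM_BTRFS_DEFAULT_POOL=backup_pool
    # ssm create /dev/sda /dev/sdb
    # SSM_PREFIX_FILTER=backup_ ssm list

Here create picks the btrfs backend and pool name from the environment, and the final list shows only items whose names begin with backup_.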
> .UNINDENT > .SH LICENCE >@@ -739,11 +712,11 @@ along with this program. If not, see <\fI\%http://www.gnu.org/licenses/\fP>. > .SH REQUIREMENTS > .sp > Python 2.6 or higher is required to run this tool. System Storage Manager >-can only be run as root since most of the commands requires root privileges. >+can only be run as root since most of the commands require root privileges. > .sp >-There are other requirements listed bellow, but note that you do not >-necessarily need all dependencies for all backends, however if some of the >-tools required by the backend is missing, the backend would not work. >+There are other requirements listed below, but note that you do not >+necessarily need all dependencies for all backends. However if some of the >+tools required by a backend are missing, that backend will not work. > .SS Python modules > .INDENT 0.0 > .IP \(bu 2 >@@ -809,7 +782,7 @@ cryptsetup > .sp > \fBSystem storage manager\fP is available from > \fI\%http://storagemanager.sourceforge.net\fP\&. You can subscribe to >-\fI\%storagemanager-devel@lists.sourceforge.net\fP to follow the current development. >+\fI\%storagemanager\-devel@lists.sourceforge.net\fP to follow the current development. > .SH AUTHOR > Lukáš Czerner <lczerner@redhat.com> > .SH COPYRIGHT >diff --git a/doc/src/backends/backends_introduction.rst b/doc/src/backends/backends_introduction.rst >index 6ac0313..1c5d044 100644 >--- a/doc/src/backends/backends_introduction.rst >+++ b/doc/src/backends/backends_introduction.rst >@@ -1,14 +1,14 @@ > Introduction > ============ > >-Ssm aims to create unified user interface for various technologies like Device >+Ssm aims to create a unified user interface for various technologies like Device > Mapper (dm), Btrfs file system, Multiple Devices (md) and possibly more. In > order to do so we have a core abstraction layer in ``ssmlib/main.py``. This > abstraction layer should ideally know nothing about the underlying technology, >-but rather comply with **device**, **pool** and **volume** abstraction. >+but rather comply with **device**, **pool** and **volume** abstractions. > > Various backends can be registered in ``ssmlib/main.py`` in order to handle >-specific storage technology implementing methods like *create*, *snapshot*, or >+specific storage technology, implementing methods like *create*, *snapshot*, or > *remove* volumes and pools. The core will then call these methods to manage > the storage without needing to know what lies underneath it. There are already > several backends registered in ssm. >diff --git a/doc/src/backends/btrfs.rst b/doc/src/backends/btrfs.rst >index 89abd2e..e2c92dd 100644 >--- a/doc/src/backends/btrfs.rst >+++ b/doc/src/backends/btrfs.rst >@@ -12,58 +12,57 @@ Pools, volumes and snapshots can be created with btrfs backend and here > is what it means from the btrfs point of view: > > pool >- Pool is actually a btrfs file system itself, because it can be extended >- by adding more devices, or shrink by removing devices from it. Subvolumes >- and snapshots can also be created. When the new btrfs pool should be created >- **ssm** simply creates a btrfs file system, which means that every new >- btrfs pool has one volume of the same name as the pool itself which can >- not be removed without removing the entire pool. Default btrfs pool name is >- **btrfs_pool**. >+ A pool is actually a btrfs file system itself, because it can be extended >+ by adding more devices, or shrunk by removing devices from it. Subvolumes >+ and snapshots can also be created. 
When the new btrfs pool should be
>+ created, **ssm** simply creates a btrfs file system, which means that every
>+ new btrfs pool has one volume of the same name as the pool itself which
>+ cannot be removed without removing the entire pool. The default btrfs pool
>+ name is **btrfs_pool**.
>
>- When creating new btrfs pool, the name of the pool is used as the file
>- system label. If there is already existing btrfs file system in the system
>- without a label, btrfs pool name will be generated for internal use
>- in the following format "btrfs_{device base name}".
>+ When creating a new btrfs pool, the name of the pool is used as the file
>+ system label. If there is an already existing btrfs file system in the system
>+ without a label, a btrfs pool name will be generated for internal use in the
>+ following format "btrfs_{device base name}".
>
>- Btrfs pool is created when **create** or **add** command is used with
>- devices specified and non existing pool name.
>+ A btrfs pool is created when the **create** or **add** command is used
>+ with specified devices and a non existing pool name.
>
> volume
>- Volume in btrfs back-end is actually just btrfs subvolume with the
>- exception of the first volume created on btrfs pool creation, which is
>- the file system itself. Subvolumes can only be created on btrfs file
>- system when it is mounted, but user does not have to
>- worry about that since **ssm** will automatically mount the file
>- system temporarily in order to create a new subvolume.
>+ A volume in the btrfs back-end is actually just a btrfs subvolume with the
>+ exception of the first volume created on btrfs pool creation, which is the
>+ file system itself. Subvolumes can only be created on the btrfs file system
>+ when it is mounted, but the user does not have to worry about that since
>+ **ssm** will automatically mount the file system temporarily in order to
>+ create a new subvolume.
>
>- Volume name is used as subvolume path in the btrfs file system and every
>- object in this path must exists in order to create a volume. Volume name
>- for internal tracking and for representing to the user is generated in
>- the format "{pool_name}:{volume name}", but volumes can be also referenced
>- with its mount point.
>+ The volume name is used as a subvolume path in the btrfs file system and
>+ every object in this path must exist in order to create a volume. The volume
>+ name used for internal tracking and visible to the user is generated in the
>+ format "{pool_name}:{volume name}", but volumes can also be referenced by
>+ their mount point.
>
>- Btrfs volumes are only shown in the *list* output, when the file system is
>+ The btrfs volumes are only shown in the *list* output when the file system is
> mounted, with the exception of the main btrfs volume - the file system
> itself.
>
>- Also note that btrfs volumes and subvolumes can not be resized. This is
>- mainly limitation of the btrfs tools which currently does not work
>- reliably.
>+ Also note that btrfs volumes and subvolumes cannot be resized. This is
>+ mainly a limitation of the btrfs tools which currently do not work reliably.
>
>- New btrfs volume can be created with **create** command.
>+ A new btrfs volume can be created with the **create** command.
>
> snapshot
>- Btrfs file system support subvolume snapshotting, so you can take a snapshot
>- of any btrfs volume in the system with **ssm**. However btrfs does not
>- distinguish between subvolumes and snapshots, because snapshot actually is
>- just a subvolume with some block shared with different subvolume. It means,
>- that **ssm** is not able to recognize btrfs snapshot directly, but instead
>- it is trying to recognize special name format of the btrfs volume. However,
>- if the *NAME* is specified when creating snapshot which does not match the
>- special pattern, snapshot will not be recognized by the **ssm** and it will
>- be listed as regular btrfs volume.
>+ The btrfs file system supports subvolume snapshotting, so you can take a
>+ snapshot of any btrfs volume in the system with **ssm**. However btrfs does
>+ not distinguish between subvolumes and snapshots, because a snapshot is
>+ actually just a subvolume with some blocks shared with a different subvolume.
>+ This means that **ssm** is not able to directly recognize a btrfs snapshot.
>+ Instead, **ssm** will try to recognize a special name format of the btrfs
>+ volume that denotes it is a snapshot. However, if the *NAME* is specified when
>+ creating a snapshot which does not match the special pattern, the snapshot will
>+ not be recognized by **ssm** and it will be listed as a regular btrfs volume.
>
>- New btrfs snapshot can be created with **snapshot** command.
>+ A new btrfs snapshot can be created with the **snapshot** command.
>
> device
>- Btrfs does not require any special device to be created on.
>+ Btrfs does not require a special device to be created on.
>diff --git a/doc/src/backends/crypt.rst b/doc/src/backends/crypt.rst
>index a4462d2..6c70380 100644
>--- a/doc/src/backends/crypt.rst
>+++ b/doc/src/backends/crypt.rst
>@@ -1,8 +1,8 @@
> Crypt backend
> =============
>
>-Crypt backend in **ssm** uses cryptsetup and dm-crypt target to manage
>-encrypted volumes. Crypt backend can be used as a regular backend for
>+The crypt backend in **ssm** uses cryptsetup and dm-crypt target to manage
>+encrypted volumes. The crypt backend can be used as a regular backend for
> creating encrypted volumes on top of regular block devices, or even other
> volumes (lvm or md volumes for example). Or it can be used to create
> encrypted lvm volumes right away in a single step.
>@@ -12,26 +12,26 @@ support pooling and does not require special devices.
>
>
> pool
>- Crypt backend does not support pooling it is not possible to create
>- crypt pool or add a device into a pool.
>+ The crypt backend does not support pooling, and it is not possible to
>+ create a crypt pool or add a device into a pool.
>
> volume
>- Volume in crypt backend is the volume created by dm-crypt which
>- represent the data on the original encrypted device in unencrypted form.
>- Crypt backend does not support pooling, so only one device can be used
>+ A volume in the crypt backend is the volume created by dm-crypt which
>+ represents the data on the original encrypted device in unencrypted form.
>+ The crypt backend does not support pooling, so only one device can be used
> to create crypt volume. It also does not support raid or any device
> concatenation.
>
>- Currently two modes, or extensions are supported luks and plain. Luks
>- is used by default.For more information about the extensions please see
>+ Currently two modes, or extensions are supported: luks and plain. Luks
>+ is used by default. For more information about the extensions, please see
> **cryptsetup** manual page.
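A plain-mode counterpart to a luks volume might be created as sketched below, again assuming a -e/--encrypt option on create rather than a synopsis documented here::

    # ssm create -e plain /dev/sdc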
> > snapshot >- Crypt backend does not support snapshotting, however if the encrypted >- volume is created on top of the lvm volume, the lvm volume itself can >+ The crypt backend does not support snapshotting, however if the encrypted >+ volume is created on top of an lvm volume, the lvm volume itself can > be snapshotted. The snapshot can be then opened by using **cryptsetup**. > It is possible that this might change in the future so that **ssm** will > be able to activate the volume directly without the extra step. > > device >- Crypt backend does not require any special device to be created on. >+ The crypt backend does not require a special device to be created on. >diff --git a/doc/src/backends/lvm.rst b/doc/src/backends/lvm.rst >index 52ca373..cfc6100 100644 >--- a/doc/src/backends/lvm.rst >+++ b/doc/src/backends/lvm.rst >@@ -5,24 +5,24 @@ Pools, volumes and snapshots can be created with lvm, which pretty much match > the lvm abstraction. > > pool >- Lvm pool is just *volume group* in lvm language. It means that it is >- grouping devices and new logical volumes can be created out of the lvm >- pool. Default lvm pool name is **lvm_pool**. >+ An lvm pool is just a *volume group* in lvm language. It means that it is >+ grouping devices and new logical volumes can be created out of the lvm pool. >+ The default lvm pool name is **lvm_pool**. > >- Lvm pool is created when **create** or **add** command is used with >- devices specified and non existing pool name. >+ An lvm pool is created when the **create** or **add** commands are used >+ with specified devices and a non existing pool name. > > volume >- Lvm volume is just *logical volume* in lvm language. Lvm volume can be >- created wit **create** command. >+ An lvm volume is just a *logical volume* in lvm language. An lvm volume >+ can be created with the **create** command. > > snapshot > Lvm volumes can be snapshotted as well. When a snapshot is created from >- the lvm volume, new *snapshot* volume is created, which can be handled as >+ the lvm volume, a new *snapshot* volume is created, which can be handled as > any other lvm volume. Unlike :ref:`btrfs <btrfs-backend>` lvm is able > to distinguish snapshot from regular volume, so there is no need for a > snapshot name to match special pattern. > > device >- Lvm requires *physical device* to be created on the device, but with >+ Lvm requires a *physical device* to be created on the device, but with > **ssm** this is transparent for the user. >diff --git a/doc/src/backends/md.rst b/doc/src/backends/md.rst >index d2bfa75..522c9ce 100644 >--- a/doc/src/backends/md.rst >+++ b/doc/src/backends/md.rst >@@ -3,4 +3,4 @@ MD backend > > MD backend in **ssm** is currently limited to only gather the information > about MD volumes in the system. You can not create or manage MD volumes >-or pools, but it will be extended in the future. >+or pools, but this functionality will be extended in the future. >diff --git a/doc/src/commands/add.txt b/doc/src/commands/add.txt >index ac76da8..58d7b83 100644 >--- a/doc/src/commands/add.txt >+++ b/doc/src/commands/add.txt >@@ -1,8 +1,8 @@ >-This command adds **device** into the pool. The **device** will not be added if >-it's already part of different pool by default, but user will be asked whether >-to remove the device from it's pool. When multiple devices are provided, >-all of them are added into the pool. If one of the devices can not be added >-into the pool for any reason, add command will fail. If no pool is specified, >-default pool will be chosen. 
In the case of non existing pool, it will be >-created using provided devices. >+This command adds a **device** into the pool. By default, the **device** will >+not be added if it's already a part of a different pool, but the user will be >+asked whether or not to remove the device from its pool. When multiple devices >+are provided, all of them are added into the pool. If one of the devices >+cannot be added into the pool for any reason, the add command will fail. If no >+pool is specified, the default pool will be chosen. In the case of a non >+existing pool, it will be created using the provided devices. > >diff --git a/doc/src/commands/check.txt b/doc/src/commands/check.txt >index d4b8899..8e7a23e 100644 >--- a/doc/src/commands/check.txt >+++ b/doc/src/commands/check.txt >@@ -2,7 +2,7 @@ Check the file system consistency on the **volume**. You can specify multiple > volumes to check. If there is no file system on the **volume**, this **volume** > will be skipped. > >-In some cases file system has to be mounted in order to check the file system >-This will be handled by **ssm** automatically by mounting the **volume** >-temporarily. >+In some cases the file system has to be mounted in order to check the file >+system. This will be handled by **ssm** automatically by mounting the >+**volume** temporarily. > >diff --git a/doc/src/commands/commands_introduction.rst b/doc/src/commands/commands_introduction.rst >index 6c4328c..c9592e5 100644 >--- a/doc/src/commands/commands_introduction.rst >+++ b/doc/src/commands/commands_introduction.rst >@@ -1,6 +1,6 @@ > Introduction > ============ > >-System Storage Manager have several commands you can specify on the command >-line as a first argument to the ssm. They all have specific use and its own >-arguments, but global ssm arguments are propagated to all commands. >+System Storage Manager has several commands that you can specify on the >+command line as a first argument to ssm. They all have a specific use and >+their own arguments, but global ssm arguments are propagated to all commands. >diff --git a/doc/src/commands/create.txt b/doc/src/commands/create.txt >index 6f69094..20b3a14 100644 >--- a/doc/src/commands/create.txt >+++ b/doc/src/commands/create.txt >@@ -1,27 +1,27 @@ >-This command creates a new volume with defined parameters. If **device** is >-provided it will be used to create a volume, hence it will be added into the >-**pool** prior the volume creation (See :ref:`Add command section >-<add-command>`). More devices can be used to create a volume. >+This command creates a new volume with defined parameters. If a **device** is >+provided it will be used to create the volume, hence it will be added into the >+**pool** prior to volume creation (See :ref:`Add command section >+<add-command>`). More than one device can be used to create a volume. > >-If the **device** is already used in the different pool, then **ssm** will >+If the **device** is already being used in a different pool, then **ssm** will > ask you whether you want to remove it from the original pool. If you decline, >-or the removal fails, then the **volume** creation fails if the *SIZE* was >-not provided. On the other hand, if the *SIZE* is provided and some devices >-can not be added to the **pool** the volume creation might succeed if there >+or the removal fails, then the **volume** creation fails if the *SIZE* was not >+provided. 
On the other hand, if the *SIZE* is provided and some devices can >+not be added to the **pool**, the volume creation might still succeed if there > is enough space in the **pool**. > >-*POOL* name can be specified as well. If the pool exists new volume will be >-created from that pool (optionally adding **device** into the pool). However >-if the *POOL* does not exist **ssm** will attempt to create a new pool with >-provided **device** and then create a new volume from this pool. If >-**--backend** argument is omitted, the default **ssm** backend will be used. >-Default backend is *lvm*. >+The *POOL* name can be specified as well. If the pool exists, a new volume >+will be created from that pool (optionally adding **device** into the pool). >+However if the *POOL* does not exist, then **ssm** will attempt to create a >+new pool with the provided **device**, and then create a new volume from this >+pool. If the **--backend** argument is omitted, the default **ssm** backend >+will be used. The default backend is *lvm*. > >-**ssm** also supports creating RAID configuration, however some back-ends >-might not support all the levels, or it might not support RAID at all. In >+**ssm** also supports creating a RAID configuration, however some back-ends >+might not support all RAID levels, or may not even support RAID at all. In > this case, volume creation will fail. > >-If **mount** point is provided **ssm** will attempt to mount the volume after >-it is created. However it will fail if mountable file system is not present >-on the volume. >+If a **mount** point is provided, **ssm** will attempt to mount the volume >+after it is created. However it will fail if mountable file system is not >+present on the volume. > >diff --git a/doc/src/commands/list.txt b/doc/src/commands/list.txt >index 32c58d6..0886ee3 100644 >--- a/doc/src/commands/list.txt >+++ b/doc/src/commands/list.txt >@@ -1,15 +1,15 @@ >-List informations about all detected devices, pools, volumes and snapshots found >-in the system. **list** command can be used either alone to list all the >-information, or you can request specific section only. >+Lists information about all detected devices, pools, volumes and snapshots found >+on the system. The **list** command can be used either alone to list all of the >+information, or you can request specific sections only. > >-Following sections can be specified: >+The following sections can be specified: > > {volumes | vol} > List information about all **volumes** found in the system. > > {devices | dev} >- List information about all **devices** found in the system. Some devices are >- intentionally hidden, like for example cdrom, or DM/MD devices since those >+ List information about all **devices** found on the system. Some devices >+ are intentionally hidden, like for example cdrom or DM/MD devices since those > are actually listed as volumes. > > {pools | pool} >@@ -20,10 +20,10 @@ Following sections can be specified: > the system. > > {snapshots | snap} >- List information about all **snapshots** found in the system. Note that some >- back-ends does not support snapshotting and some can not distinguish between >- snapshot and regular volume. in this case **ssm** will try to recognize >- volume name in order to identify **snapshot**, but if the **ssm** regular >- expression does not match the snapshot pattern, this snapshot will not be >- recognized. >+ List information about all **snapshots** found in the system. 
Note that
>+ some back-ends do not support snapshotting and some cannot distinguish
>+ snapshot from regular volumes. In this case, **ssm** will try to recognize the
>+ volume name in order to identify a **snapshot**, but if the **ssm** regular
>+ expression does not match the snapshot pattern, the problematic snapshot will
>+ not be recognized.
>
>diff --git a/doc/src/commands/mount.txt b/doc/src/commands/mount.txt
>index ab57ec4..a151334 100644
>--- a/doc/src/commands/mount.txt
>+++ b/doc/src/commands/mount.txt
>@@ -1,10 +1,10 @@
>-This command will mount the **volume** at specified **directory**. The
>-**volume** can be specified in the same way as with **mount(8)**, however
>-in addition one can also specify **volume** in the format as it appear in
>-the **ssm list** table.
>+This command will mount the **volume** at the specified **directory**. The
>+**volume** can be specified in the same way as with **mount(8)**, however in
>+addition, one can also specify a **volume** in the format as it appears in the
>+**ssm list** table.
>
> For example, instead of finding out what the device and subvolume id of the
>-btrfs subvolume "btrfs_pool:vol001" is in order to mount it, on can simply
>+btrfs subvolume "btrfs_pool:vol001" is in order to mount it, one can simply
> call **ssm mount btrfs_pool:vol001 /mnt/test**.
>
> One can also specify *OPTIONS* in the same way as with **mount(8)**.
>diff --git a/doc/src/commands/remove.txt b/doc/src/commands/remove.txt
>index 4698e0b..a347031 100644
>--- a/doc/src/commands/remove.txt
>+++ b/doc/src/commands/remove.txt
>@@ -1,20 +1,21 @@
>-This command removes **item** from the system. Multiple items can be specified.
>-If the **item** can not be removed for some reason, it will be skipped.
>+This command removes an **item** from the system. Multiple items can be
>+specified. If the **item** cannot be removed for some reason, it will be
>+skipped.
>
>-**item** can represent:
>+An **item** can be any of the following:
>
> device
>- Remove **device** from the pool. Note that this can not be done in some
>- cases where the device is used by pool. You can use **-f** argument to
>+ Remove a **device** from the pool. Note that this cannot be done in some
>+ cases where the device is being used by the pool. You can use the **-f** argument to
> *force* removal. If the device does not belong to any pool, it will be
> skipped.
>
> pool
>- Remove the **pool** from the system. This will also remove all volumes
>+ Remove a **pool** from the system. This will also remove all volumes
> created from that pool.
>
> volume
>- Remove the **volume** from the system. Note that this will fail if the
>- **volume** is mounted and it can not be *forced* with **-f**.
>+ Remove a **volume** from the system. Note that this will fail if the
>+ **volume** is mounted and cannot be *forced* with **-f**.
>
>
>diff --git a/doc/src/commands/resize.txt b/doc/src/commands/resize.txt
>index b3da17b..1a980b4 100644
>--- a/doc/src/commands/resize.txt
>+++ b/doc/src/commands/resize.txt
>@@ -1,13 +1,13 @@
>-Change size of the **volume** and file system. If there is no file system only
>-the **volume** itself will be resized. You can specify **device** to add into
>-the **volume** pool prior the resize. Note that **device** will only be added
>+Change the size of the **volume** and file system. If there is no file system, only
>+the **volume** itself will be resized. You can specify a **device** to add into
>+the **volume** pool prior to the resize. Note that the **device** will only be added
> into the pool if the **volume** size is going to grow.
>
>-If the **device** is already used in the different pool, then **ssm** will
>-ask you whether you want to remove it from the original pool.
>+If the **device** is already used in a different pool, then **ssm** will
>+ask you whether or not you want to remove it from the original pool.
>
>-In some cases file system has to be mounted in order to resize. This will be
>-handled by **ssm** automatically by mounting the **volume** temporarily.
>+In some cases, the file system has to be mounted in order to resize. This will
>+be handled by **ssm** automatically by mounting the **volume** temporarily.
>
> Note that resizing btrfs subvolume is not supported, only the whole file
> system can be resized.
>diff --git a/doc/src/commands/snapshot.txt b/doc/src/commands/snapshot.txt
>index 134f108..05c9e4d 100644
>--- a/doc/src/commands/snapshot.txt
>+++ b/doc/src/commands/snapshot.txt
>@@ -1,9 +1,9 @@
>-Take a snapshot of existing **volume**. This operation will fail if back-end
>-which the **volume** belongs to does not support snapshotting. Note that
>-you can not specify both *NAME* and *DESC* since those options are mutually
>-exclusive.
>+Take a snapshot of an existing **volume**. This operation will fail if the
>+back-end to which the **volume** belongs does not support snapshotting.
>+Note that you cannot specify both *NAME* and *DESC* since those options are
>+mutually exclusive.
>
>-In some cases file system has to be mounted in order to take a snapshot of
>+In some cases the file system has to be mounted in order to take a snapshot of
> the **volume**. This will be handled by **ssm** automatically by mounting the
> **volume** temporarily.
>
>diff --git a/doc/src/description.rst b/doc/src/description.rst
>index 50cbacb..5e970fb 100644
>--- a/doc/src/description.rst
>+++ b/doc/src/description.rst
>@@ -1,9 +1,9 @@
> Description
> ===========
>
>-System Storage Manager provides easy to use command line interface to manage
>-your storage using various technologies like lvm, btrfs, encrypted volumes and
>-more.
>+System Storage Manager provides an easy to use command line interface to
>+manage your storage using various technologies like lvm, btrfs, encrypted
>+volumes and more.
>
> In more sophisticated enterprise storage environments, management with Device
> Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is
>diff --git a/doc/src/download.rst b/doc/src/download.rst
>index 505d371..d1a519f 100644
>--- a/doc/src/download.rst
>+++ b/doc/src/download.rst
>@@ -3,15 +3,15 @@ Download
>
> You can get System Storage Manager from the git repository on SourceForge
> project page http://sourceforge.net/p/storagemanager/code/. There are two
>-branches: ``master`` which contains stable release and ``devel``
>-when the development is happening. Once in a while ``devel`` branch is
>-merged into ``master`` releasing new version of System Storage Manager.
>-Obviously ``devel`` branch is more up-to-date, however it might not be
>-as stable as ``master`` branch. System Storage Manager have its own
>-regression testing suite, so we're trying hard not to break things
>-that already works.
>+branches: ``master`` which contains the stable release and ``devel`` where
>+the development is happening. Once in a while the ``devel`` branch is
>+merged into ``master``, releasing a new version of System Storage Manager.
>+Obviously the ``devel`` branch is more up-to-date, however it might not be >+as stable as the ``master`` branch. System Storage Manager has its own >+regression testing suite, so we're trying hard not to break things that >+already work. > >-You can check out ``master`` branch of the git repository:: >+You can check out the ``master`` branch of the git repository:: > > git clone git://git.code.sf.net/p/storagemanager/code ssm > >diff --git a/doc/src/env_variables.rst b/doc/src/env_variables.rst >index f27f639..7931979 100644 >--- a/doc/src/env_variables.rst >+++ b/doc/src/env_variables.rst >@@ -3,19 +3,19 @@ Environment variables > > SSM_DEFAULT_BACKEND > Specify which backend will be used by default. This can be overridden by >- specifying **-b** or **--backend** argument. Currently only *lvm* and *btrfs* >- is supported. >+ specifying the **-b** or **--backend** argument. Currently only *lvm* and >+ *btrfs* are supported. > > SSM_LVM_DEFAULT_POOL >- Name of the default lvm pool to be used if **-p** or **--pool** argument >- is omitted. >+ Name of the default lvm pool to be used if the **-p** or **--pool** >+ argument is omitted. > > SSM_BTRFS_DEFAULT_POOL >- Name of the default btrfs pool to be used if **-p** or **--pool** argument >- is omitted. >+ Name of the default btrfs pool to be used if the **-p** or **--pool** >+ argument is omitted. > > SSM_PREFIX_FILTER >- When this is set **ssm** will filter out all devices, volumes and pools >- which name does not start with this prefix. It is used mainly in **ssm** >- test suite to make sure that we do not scramble local system >+ When this is set, **ssm** will filter out all devices, volumes and pools >+ whose name does not start with this prefix. It is used mainly in the **ssm** >+ test suite to make sure that we do not scramble the local system > configuration. >diff --git a/doc/src/examples.rst b/doc/src/examples.rst >index c677207..87641e2 100644 >--- a/doc/src/examples.rst >+++ b/doc/src/examples.rst >@@ -29,8 +29,8 @@ List system storage:: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ------------------------------------------------------------------------------ > >-Creating a volume of defined size with the defined file system. The default >-back-end is set to lvm and lvm default pool name is lvm_pool:: >+Create a volume of the defined size with the defined file system. 
The default >+back-end is set to lvm and the lvm default pool name (volume group) is lvm_pool:: > > # ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1 > >@@ -41,17 +41,17 @@ to 10GB:: > # ssm resize -s-5G /dev/lvm_pool/lvol001 > > >-Resize the volume to 100G, but it would require to add more devices into the >+Resize the volume to 100G, but it may require adding more devices into the > pool:: > >- # ssm resize -s 25G /dev/lvm_pool/lvol001 /dev/loop2 >+ # ssm resize -s 100G /dev/lvm_pool/lvol001 /dev/loop2 > >-Now we can try to create new lvm volume named 'myvolume' from the remaining pool >-space with xfs file system and mount it to /mnt/test1:: >+Now we can try to create a new lvm volume named 'myvolume' from the remaining pool >+space with the xfs file system and mount it to /mnt/test1:: > > # ssm create --fs xfs --name myvolume /mnt/test1 > >-List all volumes with file system:: >+List all volumes with file systems:: > > # ssm list filesystems > ----------------------------------------------------------------------------------------------- >@@ -64,18 +64,18 @@ List all volumes with file system:: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ----------------------------------------------------------------------------------------------- > >-You can then easily remove the old volume by:: >+You can then easily remove the old volume with:: > > # ssm remove /dev/lvm_pool/lvol001 > >-Now lest try to create btrfs volume. Btrfs is separate backend, not just a >-file system. That is because btrfs itself have integrated volume manager. >-Defaul btrfs pool name is btrfs_pool.:: >+Now let's try to create a btrfs volume. Btrfs is a separate backend, not just a >+file system. That is because btrfs itself has an integrated volume manager. >+The default btrfs pool name is btrfs_pool.:: > > # ssm -b btrfs create /dev/loop3 /dev/loop4 > >-Now create we btrfs subvolumes. Note that btrfs file system has to be mounted >-in order to create subvolumes. However ssm will handle it for you.:: >+Now we create btrfs subvolumes. Note that the btrfs file system has to be mounted >+in order to create subvolumes. However ssm will handle this for you.:: > > # ssm create -p btrfs_pool > # ssm create -n new_subvolume -p btrfs_pool >@@ -114,8 +114,8 @@ in order to create subvolumes. However ssm will handle it for you.:: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ----------------------------------------------------------------------------------------------- > >-Now let's free up some of the loop devices so we cat try to add them into >-then btrfs_pool. So we'll simply remove lvm mvolume and resize lvol001 so we >+Now let's free up some of the loop devices so that we can try to add them into >+the btrfs_pool. So we'll simply remove lvm myvolume and resize lvol001 so we > can remove /dev/loop2. Note that myvolume is mounted so we have to unmount it > first.:: > >@@ -128,7 +128,7 @@ Add device to the btrfs file system:: > > # ssm add /dev/loop2 -p btrfs_pool > >-Set' see what happend. Note that to actually see btrfs subvolumes you have to >+Now let's see what happened. 
Note that to actually see btrfs subvolumes you have to > mount the file system first:: > > # mount -L btrfs_pool /mnt/test1/ >@@ -145,9 +145,9 @@ mount the file system first:: > /dev/sda5 49.44 GB ext4 49.44 GB 29.77 GB part /mnt/test > ------------------------------------------------------------------------------------------------------------------------ > >-Remove the whole lvm pool and one of the btrfs subvolume, and one >-unused device from the btrfs pool btrfs_loop3. Note that with btrfs, pool >-have the same name as the volume:: >+Remove the whole lvm pool, one of the btrfs subvolumes, and one unused device >+from the btrfs pool btrfs_loop3. Note that with btrfs, pools have the same >+name as their volumes:: > > # ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/ > >@@ -158,14 +158,14 @@ Snapshots can also be done with ssm:: > > With lvm, you can also create snapshots:: > >- root# ssm create -s 10G /dev/loop[01] >+ # ssm create -s 10G /dev/loop[01] > # ssm snapshot /dev/lvm_pool/lvol001 > > Now list all snapshots. Note that btrfs snapshots are actually just subvolumes >-with some blocks shared with the original subvolume, so there currently no >+with some blocks shared with the original subvolume, so there is currently no > way to distinguish between those. ssm is using a little trick to search for >-name patters to recognize snapshots, so if you specify your own name for the >-snapshot ssm will not recognize it as snapshot, but rather as regular volume >+name patterns to recognize snapshots, so if you specify your own name for the >+snapshot, ssm will not recognize it as snapshot, but rather as regular volume > (subvolume). This problem does not exist with lvm.:: > > # ssm list snapshots >diff --git a/doc/src/for_developers.rst b/doc/src/for_developers.rst >index 589f779..be58d83 100644 >--- a/doc/src/for_developers.rst >+++ b/doc/src/for_developers.rst >@@ -3,7 +3,7 @@ > For developers > ============== > >-We are accepting patches! If you're interested contributing to the System >+We are accepting patches! If you're interested in contributing to the System > Storage Manager code, just checkout the git repository located on > SourceForge. Please, base all of your work on the ``devel`` branch since > it is more up-to-date and it will save us some work when merging your >@@ -19,9 +19,9 @@ appreciated. See :ref:`Mailing list section <mailing-list>` section. > Tests > ----- > >-System Storage Manager contains regression testing suite to make sure that we >-do not break thing that should already work. And we recommend every developer >-to run tests before sending patches:: >+System Storage Manager contains a regression testing suite to make sure that we >+do not break things that should already work. We recommend that every developer >+run these tests before sending patches:: > > python test.py > >@@ -44,24 +44,24 @@ Tests in System Storage Manager are divided into four levels. > > #. And finally there are real bash tests located in ``tests/bashtests``. Bash > tests are divided into files. Each file tests one command for one backend >- and it containing series of test cases followed by checks whether the command >- created the expected result. In order to test real system commands we have >- to create system device to test on and not touch any of the existing system >+ and it contains a series of test cases followed by checks as to whether the >+ command created the expected result. 
In order to test real system commands we
>+ have to create a system device to test on and not touch the existing system
> configuration.
>
>- Before each test a number of devices are created using *dmsetup* in the test
>- directory. These devices will be used in test cases instead of real devices.
>- Real operation are performed in those devices as it would on the real system
>- devices. It implies that this phase requires root privileges and it would
>- not be run otherwise. In order to make sure that **ssm** does not touch any
>- existing system configuration, each device, poor and volume name is include
>- special prefix and SSM_PREFIX_FILTER environment variable is set to make
>- **ssm** to exclude all items which does not match this filter.
>+ Before each test a number of devices are created using *dmsetup* in the
>+ test directory. These devices will be used in test cases instead of real
>+ devices. Real operations are performed on those devices as they would be on
>+ the real system devices. This phase requires root privileges and it will not
>+ be run otherwise. In order to make sure that **ssm** does not touch any
>+ existing system configuration, each device, pool and volume name includes a
>+ special prefix, and the SSM_PREFIX_FILTER environment variable is set to make
>+ **ssm** exclude all items which do not match this special prefix.
>
>- Even though we tried hard to make sure that the bash tests does not change
>- any of your system configuration the recommendation is **not** to run tests
>- as with root privileges on your work or production system, but rather run
>- it on your testing machine.
>+ Even though we tried hard to make sure that the bash tests do not change
>+ your system configuration, we recommend that you **not** run tests with root
>+ privileges on your work or production system, but rather run them on your
>+ testing machine.
>
> If you change or create new functionality, please make sure that it is covered
> by the System Storage Manager regression test suite to make sure that we do not
>@@ -77,20 +77,20 @@ break it unintentionally.
>
> Documentation
> -------------
>
>-System Storage Manager documentation is stored in ``doc/`` directory. The
>-documentation is build using **sphinx** software which help us not to duplicate
>-texts for different type of documentation (man page, html pages, readme). If
>-you are going to modify documentation, please make sure not to modify manual
>-page, html pages or README directly, but rather modify ``doc/*.rst`` and
>-``doc/src/*.rst`` files accordingly so the change is propagated to all
>-documents.
>+System Storage Manager documentation is stored in the ``doc/`` directory. The
>+documentation is built using **sphinx** software which helps us not to
>+duplicate text for different types of documentation (man page, html pages,
>+readme). If you are going to modify documentation, please make sure not to
>+modify manual page, html pages or README directly, but rather modify the
>+``doc/*.rst`` and ``doc/src/*.rst`` files accordingly so that the change is
>+propagated to all documents.
>
> Moreover, parts of the documentation such as *synopsis* or ssm command
>-*options* are parsed directly from the ssm help output. It means that when
>-you're going to add or change argument into **ssm** the only thing you
>-have to do is to add or change it in the ``ssmlib/main.py`` source code and
>-then run ``make dist`` in the ``doc/`` directory and all the documents should
>-be updated automatically.
>+*options* are parsed directly from the ssm help output. This means that when >+you're going to add or change arguments into **ssm** the only thing you have >+to do is to add or change it in the ``ssmlib/main.py`` source code and then >+run ``make dist`` in the ``doc/`` directory and all the documents should be >+updated automatically. > > .. important:: > Please make sure you update the documentation when you add or change >@@ -103,33 +103,33 @@ be updated automatically. > Mailing list > ------------ > >-System Storage Manager developers communicate via the mailing list. Address of >-our mailing list is storagemanager-devel@lists.sourceforge.net and you can >-subscribe on the SourceForge project page >+System Storage Manager developers communicate via the mailing list. The >+address of our mailing list is storagemanager-devel@lists.sourceforge.net and >+you can subscribe on the SourceForge project page > https://lists.sourceforge.net/lists/listinfo/storagemanager-devel. Mailing > list archives can be found here > http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel. > >-This is also the list where to send patches and where the review process is >-happening. We do not have separate *user* mailing list, so feel free to drop >+This is also the list where patches are sent and where the review process is >+happening. We do not have a separate *user* mailing list, so feel free to drop > your questions there as well. > > Posting patches > --------------- > > As already mentioned, we are accepting patches! And we are very happy for every >-contribution. If you're going to send a path in, please make sure to follow >+contribution. If you're going to send a patch in, please make sure to follow > some simple rules: > > #. Before you're going to post a patch, please run our regression testing suite >- to make sure that your change does not break someone else work. See >+ to make sure that your change does not break someone else's work. See > :ref:`Tests section <test-section>` > > #. If you're making a change that might require documentation update, please > update the documentation as well. See :ref:`Documentation section > <documentation-section>` > >-#. Make sure your patch have all the requisites such as *short description* >+#. Make sure your patch has all the requisites such as a *short description* > preferably 50 characters long at max describing the main idea of the change. > *Long description* describing what was changed with and why and finally > Signed-off-by tag. >diff --git a/doc/src/install.rst b/doc/src/install.rst >index a8e1d41..a787754 100644 >--- a/doc/src/install.rst >+++ b/doc/src/install.rst >@@ -6,10 +6,10 @@ To install System Storage Manager into your system simply run:: > python setup.py install > > as root in the System Storage Manager directory. Make sure that your system >-configuration meet the :ref:`requirements <ssm-requirements>` in order for ssm >+configuration meets the :ref:`requirements <ssm-requirements>` in order for ssm > to work correctly. 
>
>-Note that you can run **ssm** even without installation from using the local
>+Note that you can run **ssm** even without installation by using the local
> sources with::
>
>    bin/ssm.local
>diff --git a/doc/src/man_examples.rst b/doc/src/man_examples.rst
>index b83b797..6c6f1db 100644
>--- a/doc/src/man_examples.rst
>+++ b/doc/src/man_examples.rst
>@@ -9,17 +9,17 @@ Examples
>
>    # ssm list pools
>
>-**Create** a new 100GB **volume** with default lvm backend using */dev/sda* and
>+**Create** a new 100GB **volume** with the default lvm backend using */dev/sda* and
> */dev/sdb* with xfs file system::
>
>    # ssm create --size 100G --fs xfs /dev/sda /dev/sdb
>
>-**Create** a new **volume** with btrfs backend using */dev/sda* and */dev/sdb* and
>+**Create** a new **volume** with a btrfs backend using */dev/sda* and */dev/sdb* and
> let the volume to be RAID 1::
>
>    # ssm -b btrfs create --raid 1 /dev/sda /dev/sdb
>
>-Using lvm backend **create** a RAID 0 **volume** with devices */dev/sda* and
>+Using the lvm backend **create** a RAID 0 **volume** with devices */dev/sda* and
> */dev/sdb* with 128kB stripe size, ext4 file system and mount it on
> */home*::
>
>diff --git a/doc/src/requirements.rst b/doc/src/requirements.rst
>index 7d14547..8ba078c 100644
>--- a/doc/src/requirements.rst
>+++ b/doc/src/requirements.rst
>@@ -4,11 +4,11 @@ Requirements
> ============
>
> Python 2.6 or higher is required to run this tool. System Storage Manager
>-can only be run as root since most of the commands requires root privileges.
>+can only be run as root since most of the commands require root privileges.
>
>-There are other requirements listed bellow, but note that you do not
>-necessarily need all dependencies for all backends, however if some of the
>-tools required by the backend is missing, the backend would not work.
>+There are other requirements listed below, but note that you do not
>+necessarily need all dependencies for all backends. However if some of the
>+tools required by a backend are missing, that backend will not work.
>
>
> Python modules
>--
>1.9.3