Who | When | What | Removed | Added
Red Hat Bugzilla Rules Engine 2016-10-27 17:26:36 UTC Target Release 2.2 2.1
Hemanth Kumar 2016-10-27 17:28:14 UTC Target Release 2.1 2.2
Harish NV Rao 2016-10-27 17:31:38 UTC Target Release 2.2 2.1
CC hnallurv
Jason Dillaman 2016-10-27 18:08:26 UTC CC mchristi
Mike Christie 2016-10-27 18:17:34 UTC CC jdillama
Assignee jdillama mchristi
Mike Christie 2016-10-27 18:20:46 UTC CC pcuzner
Ken Dreyer (Red Hat) 2016-10-27 21:30:58 UTC CC kdreyer
John Poelstra 2016-11-01 00:49:22 UTC Target Release 2.1 2.2
Mike Christie 2016-11-01 18:58:33 UTC Blocks 1379890
Hemanth Kumar 2016-11-03 07:35:22 UTC Flags needinfo?(mchristi)
Mike Christie 2016-11-03 18:05:59 UTC Flags needinfo?(mchristi)
Mike Christie 2016-11-03 20:37:31 UTC Doc Text Cause:

The ceph-iscsi-ansible playbook will enable dm-multipath for all devices and disable kpartx. This will cause the multipath layer to claim devices before Ceph has had a chance to, and it will disable automatic partition setup for other system disks that are using dm-multipath.


Consequence:

After a reboot, Ceph's OSDs will fail to initialize and system disks with partitions will not be automatically mounted.


Workaround (if any):


1. After ansible-playbook ceph-iscsi-gw.yml is run, log into each node running an iSCSI target and run

multipath -ll

If there are disks that the user did not intend to be used by dm-multipath, for example disks being used by OSDs, run

multipath -w device_name
multipath -f device_name

Example:

multipath -w mpatha
multipath -f mpatha

2. Open /etc/multipath.conf on each node running an iSCSI target. In the defaults section, remove the global skip_kpartx setting and change the global user_friendly_names value to yes:

defaults {
    user_friendly_names yes
    find_multipaths no
}

3. By default, the ansible iscsi modules remove the blacklist for all devices. Unless you are using dm-multipath for specific devices, you can blacklist everything again by adding

devnode ".*"

to the uncommented blacklist {} section at the bottom of the file so that it looks like this:

blacklist {
    devnode ".*"
}

4. We do want dm-multipath for rbd devices, so add an exception for them by adding a blacklist_exceptions section to multipath.conf, or by adding rbd to an existing one:

blacklist_exceptions {
    devnode "^rbd[0-9]"
}

5. For rbd devices, add the following to multipath.conf (a consolidated sketch of the finished file follows the Result section below):

devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}

6. Reboot the node.


Result:

The OSD and iSCSI gateway services will initialize automatically after the system has been rebooted.
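
For reference, here is a consolidated sketch of what the relevant /etc/multipath.conf sections look like once steps 2 through 5 above have been applied. This just combines the fragments shown above; find_multipaths and the "^rbd[0-9]" regex may need adjusting for a given environment:

# defaults: no global skip_kpartx; user_friendly_names enabled
defaults {
    user_friendly_names yes
    find_multipaths no
}

# blacklist everything by default
blacklist {
    devnode ".*"
}

# except rbd devices, which we do want multipathed
blacklist_exceptions {
    devnode "^rbd[0-9]"
}

# per-device settings for rbd
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}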
Doc Type If docs needed, set a value Known Issue
Mike Christie 2016-11-03 20:40:06 UTC Doc Text Cause:

The ceph-iscsi-ansible playbook will enable dm-multipath for all devices and disable kpartx. This will cause the multipath layer to claim devices before Ceph has had a chance to, and it will disable automatic partition setup for other system disks that are using dm-multipath.


Consequence:

After a reboot, Ceph's OSDs will fail to initialize and system disks with partitions will not be automatically mounted.


Workaround (if any):


1. After ansible-playbook ceph-iscsi-gw.yml is run, log into each node running an iSCSI target and run

multipath -ll

If there are disks that the user did not intend to be used by dm-multipath, for example disks being used by OSDs, run

multipath -w device_name
multipath -f device_name

Example:

multipath -w mpatha
multipath -f mpatha

2. Open /etc/multipath.conf on each node running an iSCSI target. In the defaults section, remove the global skip_kpartx setting and change the global user_friendly_names value to yes:

defaults {
    user_friendly_names yes
    find_multipaths no
}

3. By default, the ansible iscsi modules remove the blacklist for all devices. Unless you are using dm-multipath for specific devices, you can blacklist everything again by adding

devnode ".*"

to the uncommented blacklist {} section at the bottom of the file so that it looks like this:

blacklist {
    devnode ".*"
}

4. We do want dm-multipath for rbd devices, so add an exception for them by adding a blacklist_exceptions section to multipath.conf, or by adding rbd to an existing one:

blacklist_exceptions {
    devnode "^rbd[0-9]"
}

5. For rbd devices, add the following to multipath.conf:

devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}

6. Reboot the node.


Result:

The OSD and iSCSI gateway services will initialize automatically after the system has been rebooted.
Cause:

The ceph-iscsi-ansible playbook will enable dm-multipath for all devices and disable kpartx. This will cause the multipath layer to claim devices before Ceph has had a chance to, and it will disable automatic partition setup for other system disks that are using dm-multipath.


Consequence:

After a reboot, Ceph's OSDs will fail to initialize and system disks using dm-multipath with partitions will not be automatically mounted, so the system could fail to boot.

Workaround (if any):


1. After ansible-playbook ceph-iscsi-gw.yml is run, log into each node running an iSCSI target and run

multipath -ll

If there are disks that the user did not intend to be used by dm-multipath, for example disks being used by OSDs, run

multipath -w device_name
multipath -f device_name

Example:

multipath -w mpatha
multipath -f mpatha

2. Open /etc/multipath.conf on each node running an iSCSI target. In the defaults section, remove the global skip_kpartx setting and change the global user_friendly_names value to yes:

defaults {
    user_friendly_names yes
    find_multipaths no
}

3. By default, the ansible iscsi modules remove the blacklist for all devices. Unless you are using dm-multipath for specific devices, you can blacklist everything again by adding

devnode ".*"

to the uncommented blacklist {} section at the bottom of the file so that it looks like this:

blacklist {
    devnode ".*"
}

4. We do want dm-multipath for rbd devices, so add an exception for them by adding a blacklist_exceptions section to multipath.conf, or by adding rbd to an existing one:

blacklist_exceptions {
    devnode "^rbd[0-9]"
}

5. For rbd devices, add the following to multipath.conf:

devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}

6. Reboot the node.


Result:

The OSD and iSCSI gateway services will initialize automatically after the system has been rebooted.
Harish NV Rao 2016-11-18 07:56:51 UTC Blocks 1379890 1383917
Bara Ancincova 2016-11-21 13:48:54 UTC Docs Contact bancinco
Doc Text Cause:

The ceph-iscsi-ansible playbook will enable dm-multipath for all devices and disable kpartx. This will cause the multipath layer to claim devices before Ceph has had a chance to, and it will disable automatic partition setup for other system disks that are using dm-multipath.


Consequence:

After a reboot, Ceph's OSDs will fail to initialize and system disks using dm-multipath with partitions will not be automatically mounted, so the system could fail to boot.

Workaround (if any):


1. After ansible-playbook ceph-iscsi-gw.yml is run, log into each node running an iSCSI target and run

multipath -ll

If there are disks that the user did not intend to be used by dm-multipath, for example disks being used by OSDs, run

multipath -w device_name
multipath -f device_name

Example:

multipath -w mpatha
multipath -f mpatha

2. Open /etc/multipath.conf on each node running an iSCSI target. In the defaults section, remove the global skip_kpartx setting and change the global user_friendly_names value to yes:

defaults {
    user_friendly_names yes
    find_multipaths no
}

3. By default, the ansible iscsi modules remove the blacklist for all devices. Unless you are using dm-multipath for specific devices, you can blacklist everything again by adding

devnode ".*"

to the uncommented blacklist {} section at the bottom of the file so that it looks like this:

blacklist {
    devnode ".*"
}

4. We do want dm-multipath for rbd devices, so add an exception for them by adding a blacklist_exceptions section to multipath.conf, or by adding rbd to an existing one:

blacklist_exceptions {
    devnode "^rbd[0-9]"
}

5. For rbd devices, add the following to multipath.conf:

devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}

6. Reboot the node.


Result:

The OSD and iSCSI gateway services will initialize automatically after the system has been rebooted.
.Ceph OSD daemons fail to initialize and DM-Multipath disks are not automatically mounted on iSCSI nodes

The `ceph-iscsi-gw.yml` Ansible playbook enables device mapper multipathing (DM-Multipath) for all devices and disables the `kpartx` utility. This causes the multipath layer to claim devices before Ceph has a chance to, and it disables automatic partition setup for other system disks that use DM-Multipath. Consequently, after a reboot, Ceph OSD daemons fail to initialize, and system disks that use DM-Multipath with partitions are not automatically mounted, so the system can fail to boot.

To work around this problem:

. After executing the `ceph-iscsi-gw.yml` playbook, log into each node that runs an iSCSI target and display the current multipath configuration:
+
----
$ multipath -ll
----

. If you see any devices that you did not intend to be used by DM-Multipath, for example OSD disks, remove them from the DM-Multipath configuration.

.. Remove their World Wide Identifiers (WWIDs) from the WWIDs file:
+
----
$ multipath -w <device_name>
----

.. Flush their multipath device maps:
+
----
$ multipath -f <device_name>
----

. Edit the `/etc/multipath.conf` file on each node that runs an iSCSI target.

.. Comment out the `skip_kpartx` variable.

.. Set the `user_friendly_names` variable to `yes`:
+
----
defaults {
    user_friendly_names yes
    find_multipaths no
}
----

.. Blacklist all devices:
+
----
blacklist {
    devnode ".*"
}
----

.. DM-Multipath is used with Ceph Block Devices. If you use them, add an exception. Edit `^rbd[0-9]` as needed:
+
----
blacklist_exceptions {
    devnode "^rbd[0-9]"
}
----
+
In addition, add the following entry for the Ceph Block Devices:
+
----
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}
----

. Reboot the nodes. The OSD and iSCSI gateway services will initialize automatically after the reboot.
Flags needinfo?(mchristi)
Mike Christie 2016-11-21 22:22:03 UTC Flags needinfo?(mchristi)
Bara Ancincova 2016-11-22 09:17:28 UTC Doc Text .Ceph OSD daemons fail to initialize and DM-Multipath disks are not automatically mounted on iSCSI nodes

The `ceph-iscsi-gw.yml` Ansible playbook enables device mapper multipathing (DM-Multipath) for all devices and disables the `kpartx` utility. This causes the multipath layer to claim devices before Ceph has a chance to, and it disables automatic partition setup for other system disks that use DM-Multipath. Consequently, after a reboot, Ceph OSD daemons fail to initialize, and system disks that use DM-Multipath with partitions are not automatically mounted, so the system can fail to boot.

To work around this problem:

. After executing the `ceph-iscsi-gw.yml` playbook, log into each node that runs an iSCSI target and display the current multipath configuration:
+
----
$ multipath -ll
----

. If you see any devices that you did not intend to be used by DM-Multipath, for example OSD disks, remove them from the DM-Multipath configuration.

.. Remove their World Wide Identifiers (WWIDs) from the WWIDs file:
+
----
$ multipath -w <device_name>
----

.. Flush their multipath device maps:
+
----
$ multipath -f <device_name>
----

. Edit the `/etc/multipath.conf` file on each node that runs an iSCSI target.

.. Comment out the `skip_kpartx` variable.

.. Set the `user_friendly_names` variable to `yes`:
+
----
defaults {
    user_friendly_names yes
    find_multipaths no
}
----

.. Blacklist all devices:
+
----
blacklist {
    devnode ".*"
}
----

.. DM-Multipath is used with Ceph Block Devices. If you use them, add an exception. Edit `^rbd[0-9]` as needed:
+
----
blacklist_exceptions {
    devnode "^rbd[0-9]"
}
----
+
In addition, add the following entry for the Ceph Block Devices:
+
----
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}
----

. Reboot the nodes. The OSD and iSCSI gateway services will initialize automatically after the reboot.
.Ceph OSD daemons fail to initialize and DM-Multipath disks are not automatically mounted on iSCSI nodes

The `ceph-iscsi-gw.yml` Ansible playbook enables device mapper multipathing (DM-Multipath) for all devices and disables the `kpartx` utility. This causes the multipath layer to claim devices before Ceph has a chance to, and it disables automatic partition setup for other system disks that use DM-Multipath. Consequently, after a reboot, Ceph OSD daemons fail to initialize, and system disks that use DM-Multipath with partitions are not automatically mounted, so the system can fail to boot.

To work around this problem:

. After executing the `ceph-iscsi-gw.yml` playbook, log into each node that runs an iSCSI target and display the current multipath configuration:
+
----
$ multipath -ll
----

. If you see any devices that you did not intend to be used by DM-Multipath, for example OSD disks, remove them from the DM-Multipath configuration.

.. Remove their World Wide Identifiers (WWIDs) from the WWIDs file:
+
----
$ multipath -w <device_name>
----

.. Flush their multipath device maps:
+
----
$ multipath -f <device_name>
----

. Edit the `/etc/multipath.conf` file on each node that runs an iSCSI target.

.. Comment out the `skip_kpartx` variable.

.. Set the `user_friendly_names` variable to `yes`:
+
----
defaults {
    user_friendly_names yes
    find_multipaths no
}
----

.. Blacklist all devices:
+
----
blacklist {
    devnode ".*"
}
----

.. DM-Multipath is used with Ceph Block Devices, so you must add an exception for them. Edit `^rbd[0-9]` as needed:
+
----
blacklist_exceptions {
    devnode "^rbd[0-9]"
}
----
+
In addition, add the following entry for the Ceph Block Devices:
+
----
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}
----

. Reboot the nodes. The OSD and iSCSI gateway services will initialize automatically after the reboot.
Bara Ancincova 2016-11-22 12:38:36 UTC Doc Text .Ceph OSD daemons fail to initialize and DM-Multipath disks are not automatically mounted on iSCSI nodes

The `ceph-iscsi-gw.yml` Ansible playbook enables device mapper multipathing (DM-Multipath) for all devices and disables the `kpartx` utility. This causes the multipath layer to claim devices before Ceph has a chance to, and it disables automatic partition setup for other system disks that use DM-Multipath. Consequently, after a reboot, Ceph OSD daemons fail to initialize, and system disks that use DM-Multipath with partitions are not automatically mounted, so the system can fail to boot.

To work around this problem:

. After executing the `ceph-iscsi-gw.yml` playbook, log into each node that runs an iSCSI target and display the current multipath configuration:
+
----
$ multipath -ll
----

. If you see any devices that you did not intend to be used by DM-Multipath, for example OSD disks, remove them from the DM-Multipath configuration.

.. Remove their World Wide Identifiers (WWIDs) from the WWIDs file:
+
----
$ multipath -w <device_name>
----

.. Flush their multipath device maps:
+
----
$ multipath -f <device_name>
----

. Edit the `/etc/multipath.conf` file on each node that runs an iSCSI target.

.. Comment out the `skip_kpartx` variable.

.. Set the `user_friendly_names` variable to `yes`:
+
----
defaults {
    user_friendly_names yes
    find_multipaths no
}
----

.. Blacklist all devices:
+
----
blacklist {
    devnode ".*"
}
----

.. DM-Multipath is used with Ceph Block Devices, so you must add an exception for them. Edit `^rbd[0-9]` as needed:
+
----
blacklist_exceptions {
    devnode "^rbd[0-9]"
}
----
+
In addition, add the following entry for the Ceph Block Devices:
+
----
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}
----

. Reboot the nodes. The OSD and iSCSI gateway services will initialize automatically after the reboot.
.Ceph OSD daemons fail to initialize and DM-Multipath disks are not automatically mounted on iSCSI nodes

The `ceph-iscsi-gw.yml` Ansible playbook enables device mapper multipathing (DM-Multipath) for all devices and disables the `kpartx` utility. This behavior causes the multipath layer to claim devices before Ceph has a chance to, and it disables automatic partition setup for other system disks that use DM-Multipath. Consequently, after a reboot, Ceph OSD daemons fail to initialize, and system disks that use DM-Multipath with partitions are not automatically mounted. Because of that, the system can fail to boot.

To work around this problem:

. After executing the `ceph-iscsi-gw.yml` playbook, log into each node that runs an iSCSI target and display the current multipath configuration:
+
----
$ multipath -ll
----

. If you see any devices that you did not intend to be used by DM-Multipath, for example OSD disks, remove them from the DM-Multipath configuration.

.. Remove the devices' World Wide Identifiers (WWIDs) from the WWIDs file:
+
----
$ multipath -w <device_name>
----

.. Flush the devices' multipath device maps:
+
----
$ multipath -f <device_name>
----

. Edit the `/etc/multipath.conf` file on each node that runs an iSCSI target.

.. Comment out the `skip_kpartx` variable.

.. Set the `user_friendly_names` variable to `yes`:
+
----
defaults {
    user_friendly_names yes
    find_multipaths no
}
----

.. Blacklist all devices:
+
----
blacklist {
    devnode ".*"
}
----

.. DM-Multipath is used with Ceph Block Devices, therefore you must add an exception for them. Edit `^rbd[0-9]` as needed:
+
----
blacklist_exceptions {
    devnode "^rbd[0-9]"
}
----

.. Add the following entry for the Ceph Block Devices:
+
----
devices {
    device {
        vendor "Ceph"
        product "RBD"
        skip_kpartx yes
        user_friendly_names no
    }
}
----

. Reboot the nodes. The OSD and iSCSI gateway services will initialize automatically after the reboot.
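+
As a quick post-reboot check (a sketch, not part of the documented workaround; `multipath -ll` and `ceph osd tree` are standard commands, and the `ceph-osd@*` pattern assumes the usual systemd unit naming for Ceph OSD daemons):
+
----
$ multipath -ll
$ systemctl status 'ceph-osd@*'
$ ceph osd tree
----
+
`multipath -ll` should list only the intended multipath devices, such as the `rbd` exceptions, and the OSDs should report as `up` in the `ceph osd tree` output.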
Jason Dillaman 2017-01-04 21:11:25 UTC Status NEW CLOSED
Resolution --- WONTFIX
Last Closed 2017-01-04 16:11:25 UTC
Drew Harris 2017-07-30 15:35:01 UTC Sub Component RBD
Component Ceph RBD
Veera Raghava Reddy 2022-02-21 18:06:03 UTC Severity unspecified medium
