Bug 1566107 - targetcli: On saveconfig dump control string max_data_area_mb [rhel-7.5.z]
Summary: targetcli: On saveconfig dump control string max_data_area_mb [rhel-7.5.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: targetcli
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Maurizio Lombardi
QA Contact: Martin Hoyer
Docs Contact: Marek Suchánek
URL:
Whiteboard:
Depends On: 1565063
Blocks: 1555191
 
Reported: 2018-04-11 14:20 UTC by Oneata Mircea Teodor
Modified: 2018-05-14 16:13 UTC (History)
CC: 10 users

Fixed In Version: targetcli-2.1.fb46-3.el7
Doc Type: If docs needed, set a value
Doc Text:
Previously, the "targetcli saveconfig" command did not save the max_data_area_mb control string. As a consequence, the property was lost when restarting the target service or rebooting the node. With this update, the property is saved and restored correctly.
Clone Of: 1565063
Environment:
Last Closed: 2018-05-14 16:12:42 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1403 0 None None None 2018-05-14 16:13:06 UTC

Description Oneata Mircea Teodor 2018-04-11 14:20:25 UTC
This bug has been copied from bug #1565063 and has been proposed to be backported to 7.5 z-stream (EUS).

Comment 4 Sweta Anandpara 2018-05-04 10:27:19 UTC
Validated this on a RHGS cluster having the package targetcli-2.1.fb46-4.el7.noarch

The default value of the key "max_data_area_mb" is 8. Used the command below to increase it to 32, and then checked that it was set successfully. Also verified that when the configuration is saved, the new (changed) value is stored in /etc/target/saveconfig.json, so the setting is not lost when the configuration is cleared and restored.

targetcli /backstores/user:glfs create ob1 1048576 ozone@10.70.47.65/block-store/74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a 74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a control="max_data_area_mb=32"

This is good to validate the patch delivered in this bug. Please feel free to move this bug to VERIFIED after a round of regression testing. Thanks!

Logs pasted below:

[root@dhcp47-65 ~]# targetcli ls; targetcli clearconfig confirm=True
o- / ........................................................................................................ [...]
  o- backstores ............................................................................................. [...]
  | o- block ................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................ [Storage Objects: 0]
  | o- pscsi ................................................................................. [Storage Objects: 0]
  | o- ramdisk ............................................................................... [Storage Objects: 0]
  | o- user:glfs ............................................................................. [Storage Objects: 1]
  |   o- ob1 ............ [ozone@10.70.47.65/block-store/74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a (1.0MiB) deactivated]
  |     o- alua .................................................................................. [ALUA Groups: 1]
  |       o- default_tg_pt_gp ...................................................... [ALUA state: Active/optimized]
  o- iscsi ........................................................................................... [Targets: 0]
  o- loopback ........................................................................................ [Targets: 0]
All configuration cleared
[root@dhcp47-65 ~]# targetcli ls
o- / ........................................................................................................ [...]
  o- backstores ............................................................................................. [...]
  | o- block ................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................ [Storage Objects: 0]
  | o- pscsi ................................................................................. [Storage Objects: 0]
  | o- ramdisk ............................................................................... [Storage Objects: 0]
  | o- user:glfs ............................................................................. [Storage Objects: 0]
  o- iscsi ........................................................................................... [Targets: 0]
  o- loopback ........................................................................................ [Targets: 0]
[root@dhcp47-65 ~]# 
[root@dhcp47-65 ~]# 
[root@dhcp47-65 ~]# targetcli /backstores/user:glfs create ob1 1048576 ozone@10.70.47.65/block-store/74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a 74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a control="max_data_area_mb=32"
Created user-backed storage object ob1 size 1048576.
[root@dhcp47-65 ~]# cat /sys/kernel/config/target/core/user_0/ob1/attrib/max_data_area_mb
32
[root@dhcp47-65 ~]#
[root@dhcp47-65 ~]# targetcli / saveconfig
Configuration saved to /etc/target/saveconfig.json
[root@dhcp47-65 ~]# cat /etc/target/saveconfig.json 
{
  "fabric_modules": [], 
  "storage_objects": [
    {
      "alua_tpgs": [
        {
          "alua_access_state": 0, 
          "alua_access_status": 0, 
          "alua_access_type": 3, 
          "alua_support_active_nonoptimized": 1, 
          "alua_support_active_optimized": 1, 
          "alua_support_offline": 1, 
          "alua_support_standby": 1, 
          "alua_support_transitioning": 1, 
          "alua_support_unavailable": 1, 
          "alua_write_metadata": 0, 
          "implicit_trans_secs": 0, 
          "name": "default_tg_pt_gp", 
          "nonop_delay_msecs": 100, 
          "preferred": 0, 
          "tg_pt_gp_id": 0, 
          "trans_delay_msecs": 0
        }
      ], 
      "attributes": {
        "cmd_time_out": 30, 
        "dev_size": 1048576, 
        "qfull_time_out": -1
      }, 
      "config": "glfs/ozone@10.70.47.65/block-store/74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a", 
      "control": "max_data_area_mb=32", 
      "hw_max_sectors": 128, 
      "name": "ob1", 
      "plugin": "user", 
      "size": 1048576, 
      "wwn": "74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a"
    }
  ], 
  "targets": []
}
[root@dhcp47-65 ~]# rpm -qa | grep targetcli python-configshell python-rtslib
grep: python-configshell: No such file or directory
grep: python-rtslib: No such file or directory
[root@dhcp47-65 ~]# rpm -qa | grep targetcli
targetcli-2.1.fb46-4.el7.noarch
[root@dhcp47-65 ~]# rpm -qa | grep configshell
python-configshell-1.1.fb23-4.el7_5.noarch
[root@dhcp47-65 ~]# rpm -qa | grep rtslib
python-rtslib-2.1.fb63-11.el7.noarch
[root@dhcp47-65 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-54.8.el7rhgs.x86_64
python-gluster-3.8.4-54.8.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-14.el7rhgs.noarch
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.8.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.8.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
glusterfs-events-3.8.4-54.8.el7rhgs.x86_64
gluster-block-0.2.1-14.1.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
glusterfs-3.8.4-54.8.el7rhgs.x86_64
glusterfs-server-3.8.4-54.8.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.8.el7rhgs.x86_64
glusterfs-rdma-3.8.4-54.8.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.8.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-api-3.8.4-54.8.el7rhgs.x86_64
[root@dhcp47-65 ~]#
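The persistence check performed above (confirming that the "control" string in /etc/target/saveconfig.json still carries max_data_area_mb after a save) can also be sketched programmatically. A minimal example, using a hypothetical sample fragment mirroring the dump above rather than reading the live file:

```python
import json

# Hypothetical sample mirroring the saveconfig.json dump above; on a real
# system you would read /etc/target/saveconfig.json instead.
SAMPLE = """
{
  "fabric_modules": [],
  "storage_objects": [
    {
      "config": "glfs/ozone@10.70.47.65/block-store/74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a",
      "control": "max_data_area_mb=32",
      "name": "ob1",
      "plugin": "user",
      "size": 1048576,
      "wwn": "74a1ac0a-d2ed-4cb5-a555-9a6a23a6576a"
    }
  ],
  "targets": []
}
"""

def control_value(config_text, so_name, key):
    """Return the value of `key` from the control string of the storage
    object named `so_name`, or None if absent (the pre-fix behaviour,
    where the control string was dropped on saveconfig)."""
    cfg = json.loads(config_text)
    for so in cfg["storage_objects"]:
        if so["name"] != so_name:
            continue
        # Control strings are key=value pairs; split and look up the key.
        for pair in so.get("control", "").split(","):
            k, _, v = pair.partition("=")
            if k == key:
                return v
    return None

print(control_value(SAMPLE, "ob1", "max_data_area_mb"))  # prints: 32
```

With the unfixed targetcli, the "control" key would simply be missing from the storage object and this check would return None, which is how the regression could be caught automatically.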

Comment 5 Martin Hoyer 2018-05-04 10:41:24 UTC
(In reply to Sweta Anandpara from comment #4)
Thank you very much!
Our regression tests have not found any issues on RHEL-7.5 with the following packages updated:
python-configshell-1.1.fb23-4.el7_5
python-rtslib-2.1.fb63-11.el7_5
targetcli-2.1.fb46-4.el7_5

Comment 10 errata-xmlrpc 2018-05-14 16:12:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1403

