Bug 1678631 - Establish environment variable to set D-Bus timeout parameter
Summary: Establish environment variable to set D-Bus timeout parameter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: stratis-cli
Version: 8.2
Hardware: ppc64le
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.2
Assignee: mulhern
QA Contact: guazhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-19 09:17 UTC by Jakub Krysl
Modified: 2021-09-06 15:22 UTC (History)
CC List: 3 users

Fixed In Version: 1.0.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 15:41:56 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-24213 0 None None None 2021-09-06 15:22:56 UTC
Red Hat Product Errata RHBA-2020:1634 0 None None None 2020-04-28 15:42:06 UTC

Description Jakub Krysl 2019-02-19 09:17:52 UTC
Description of problem:
When creating a snapshot on ppc64le, I receive the D-Bus NoReply error shown below. Command with --propagate:
# stratis --propagate fs snapshot test_pool test_fs test_snapshot
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/dbus_python_client_gen/_invokers.py", line 304, in dbus_func
    return dbus_method(*xformed_args, timeout=timeout)
  File "/usr/lib64/python3.6/site-packages/dbus/proxies.py", line 145, in __call__
    **keywords)
  File "/usr/lib64/python3.6/site-packages/dbus/connection.py", line 651, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/stratis_cli/_main.py", line 48, in the_func
    result.func(result)
  File "/usr/lib/python3.6/site-packages/stratis_cli/_actions/_logical.py", line 138, in snapshot_filesystem
    'snapshot_name': namespace.snapshot_name
  File "/usr/lib/python3.6/site-packages/dbus_python_client_gen/_invokers.py", line 306, in dbus_func
    raise DPClientInvocationError() from err
dbus_python_client_gen._errors.DPClientInvocationError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/bin/stratis", line 33, in <module>
    main()
  File "/usr/bin/stratis", line 29, in main
    return run()(sys.argv[1:])
  File "/usr/lib/python3.6/site-packages/stratis_cli/_main.py", line 52, in the_func
    raise StratisCliActionError(command_line_args, result) from err
stratis_cli._errors.StratisCliActionError: Action selected by command-line arguments ['--propagate', 'fs', 'snapshot', 'test_pool', 'test_fs', 'test_snapshot'] which were parsed to Namespace(func=<function LogicalActions.snapshot_filesystem at 0x7fff8a2b7ae8>, origin_name='test_fs', pool_name='test_pool', propagate=True, snapshot_name='test_snapshot') failed

Version-Release number of selected component (if applicable):
stratisd-1.0.3-1.el8.ppc64le

How reproducible:
100% (2*)

Steps to Reproduce:
1. stratis pool create test_pool /dev/sda
2. stratis fs create test_pool test_fs
3. stratis fs snapshot test_pool test_fs test_snapshot

Actual results:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply

Expected results:
Snapshot created

Additional info:

Comment 2 mulhern 2019-02-19 13:38:49 UTC
Some quick notes: This does not occur in the GetManagedObjects call, but in the snapshot call itself. We do not know why the CLI did not receive a reply.

First question: Was the snapshot properly created?

Comment 5 Tony Asleson 2019-02-19 21:09:42 UTC
If I'm following the log correctly, /dev/sda is an iSCSI-attached disk. Additionally, when it's cleared using dd:

INFO: [2019-02-18 11:38:02] Cleaning superblock of device /dev/sda with params cmd = dd if=/dev/zero of=/dev/sda bs=4k count=2
INFO: [2019-02-18 06:38:02] Running: 'dd if=/dev/zero of=/dev/sda bs=4k count=2'...
2+0 records in
2+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.078948 s, 104 kB/s

Its performance doesn't appear to be super fast.

As the snapshot does get created but the command line is timing out, my best guess is that the disk I/O is slow enough that we are exceeding the 2-minute timeout the command line uses for the D-Bus interface when creating a snapshot.  I started up a ppc64le system using a locally attached disk for Stratis, and it takes about 20 seconds to do a snapshot.


We may want to expose an environment variable or a command line argument which allows the user to specify a longer timeout.
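
For illustration only, a minimal sketch (not the stratis-cli implementation) of how a longer reply timeout can be passed to a dbus-python proxy call. The 'timeout' keyword is in seconds; the bus name, object path, and 600-second value below are assumptions, and Introspect() only stands in for the real snapshot method:

import dbus

bus = dbus.SystemBus()
# Assumed bus name and object path, for illustration only.
proxy = bus.get_object("org.storage.stratis1", "/org/storage/stratis1")

# dbus-python proxy calls accept a 'timeout' keyword in seconds (float);
# Introspect() is only a stand-in for the real snapshot method.
xml = proxy.Introspect(
    dbus_interface="org.freedesktop.DBus.Introspectable",
    timeout=600.0,  # ten minutes instead of the two-minute default
)
print(xml)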

Comment 7 Jakub Krysl 2019-02-20 08:37:01 UTC
(In reply to Tony Asleson from comment #5)
> If I'm following the log correctly, /dev/sda is an iSCSI attached disk.

Yes, you are correct.

> Additionally when it's cleared using dd
> 
> INFO: [2019-02-18 11:38:02] Cleaning superblock of device /dev/sda with
> params cmd = dd if=/dev/zero of=/dev/sda bs=4k count=2
> INFO: [2019-02-18 06:38:02] Running: 'dd if=/dev/zero of=/dev/sda bs=4k
> count=2'...
> 2+0 records in
> 2+0 records out
> 8192 bytes (8.2 kB, 8.0 KiB) copied, 0.078948 s, 104 kB/s
> 
> Its performance doesn't appear to be super fast.

This is way slower than it should be; I'll have to investigate.

> 
> As the snapshot does get created, but the command line is timing out, my
> best guess is the disk IO is slow enough that we are exceeding the 2 minute
> timeout the command line uses for the dbus interface when we are creating a
> snapshot.  I started up a ppc64le system using a locally attached disk for
> Stratis and it takes about 20 seconds to do a snapshot.
> 

If you check the attached test log, at the end is a test summary including how long certain tasks took. Here are the tasks that took much longer than the others (which took from 1 to 6 seconds):

  Setup name: stratis/setup/storage/setup_iscsi                                       Status: PASS       Elapsed Time: 03m35s
   Test name: stratis/stratis_cli/fs/create_destroy_success                           Status: PASS       Elapsed Time: 01m37s
  Setup name: stratis/setup/fs_create                                                 Status: PASS       Elapsed Time: 01m35s
   Test name: stratis/stratis_cli/fs/snapshot_success                                 Status: FAIL       Elapsed Time: 03m45s

stratis/setup/storage/setup_iscsi takes so long because there are 120-second waits to establish an NFS lock for the iSCSI LUN.
stratis/stratis_cli/fs/create_destroy_success creates and destroys a filesystem, nothing else.
stratis/setup/fs_create creates a filesystem. Compared to the previous one, filesystem creation seems to take quite a long time here; deletion does not.
stratis/stratis_cli/fs/snapshot_success creates a snapshot on an existing filesystem (and deletes it afterwards). I am not sure whether the creation itself took the 3m45s and deletion was almost instant, or whether it timed out after 2m and deletion took 1m45s. I am working on splitting these into separate tests so this information is available.

> We may want to expose an environmental variable or a command line argument
> which allows the user to specify a longer timeout.

That would be quite an elegant way of solving this. A CLI argument seems better, as you might want to specify different timeouts for different commands.

Comment 10 mulhern 2019-06-21 21:48:08 UTC
The plan is for stratis to observe an environment variable, STRATIS_DBUS_TIMEOUT, which it will read on invocation and which will allow the user to set the timeout.
Most likely the units will be seconds, perhaps milliseconds if that seems more desirable.
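
For illustration only, a minimal sketch of how such a variable could be read, assuming the unit is seconds; the helper name and default below are hypothetical and not the shipped stratis-cli code:

import os

DEFAULT_DBUS_TIMEOUT = 120.0  # assumed current default: two minutes, in seconds

def dbus_timeout():
    """Return the D-Bus reply timeout, overridable via STRATIS_DBUS_TIMEOUT."""
    value = os.environ.get("STRATIS_DBUS_TIMEOUT")
    if value is None:
        return DEFAULT_DBUS_TIMEOUT
    return float(value)  # a non-numeric value raises ValueError

A user on a slow, iSCSI-backed system could then run something like
STRATIS_DBUS_TIMEOUT=600 stratis fs snapshot test_pool test_fs test_snapshot
(hypothetical invocation; the final unit depends on the implementation).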

Comment 11 mulhern 2019-07-23 15:26:57 UTC
GitHub issue: https://github.com/stratis-storage/stratis-cli/issues/252.

Comment 12 Jakub Krysl 2019-10-15 14:49:05 UTC
Mass migration to Guangwu.

Comment 14 guazhang@redhat.com 2019-11-26 08:17:41 UTC
Hello

[root@ibm-p8-kvm-02-guest-01 ~]# rpm -qa |grep stratis
stratis-cli-2.0.0-1.el8.noarch
stratisd-2.0.0-4.el8.ppc64le
[root@ibm-p8-kvm-02-guest-01 ~]# uname -a
Linux ibm-p8-kvm-02-guest-01.virt.pnr.lab.eng.rdu2.redhat.com 4.18.0-151.el8.ppc64le #1 SMP Fri Nov 15 18:43:51 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux
[root@ibm-p8-kvm-02-guest-01 ~]# 


[root@ibm-p8-kvm-02-guest-01 ~]# stratis pool create test_pool /dev/sda
[root@ibm-p8-kvm-02-guest-01 ~]# stratis fs create test_pool test_fs

[root@ibm-p8-kvm-02-guest-01 ~]# 
[root@ibm-p8-kvm-02-guest-01 ~]# stratis fs snapshot test_pool test_fs test_snapshot

[root@ibm-p8-kvm-02-guest-01 ~]# 
[root@ibm-p8-kvm-02-guest-01 ~]# lsblk
NAME                                                                             MAJ:MIN RM  SIZE RO TYPE    MOUNTPOINT
sda                                                                                8:0    0    2G  0 disk    
`-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-physical-originsub          253:3    0    2G  0 stratis 
  |-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-thinmeta             253:4    0   16M  0 stratis 
  | `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-thinpool-pool           253:7    0  1.9G  0 stratis 
  |   |-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-9962f3cf8dcf4d9c961f9cc7515cb9b1
  |   |                                                                          253:8    0    1T  0 stratis 
  |   `-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-0ea9300e341b45988b5851bb5e35c768
  |                                                                              253:9    0    1T  0 stratis 
  |-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-thindata             253:5    0  1.9G  0 stratis 
  | `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-thinpool-pool           253:7    0  1.9G  0 stratis 
  |   |-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-9962f3cf8dcf4d9c961f9cc7515cb9b1
  |   |                                                                          253:8    0    1T  0 stratis 
  |   `-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-0ea9300e341b45988b5851bb5e35c768
  |                                                                              253:9    0    1T  0 stratis 
  `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-mdv                  253:6    0   16M  0 stratis 
vda                                                                              252:0    0   80G  0 disk    
|-vda1                                                                           252:1    0    4M  0 part    
|-vda2                                                                           252:2    0    1G  0 part    /boot
`-vda3                                                                           252:3    0   79G  0 part    
  |-rhel_ibm--p8--kvm--02--guest--01-root                                        253:0    0 47.7G  0 lvm     /
  |-rhel_ibm--p8--kvm--02--guest--01-swap                                        253:1    0    8G  0 lvm     [SWAP]
  `-rhel_ibm--p8--kvm--02--guest--01-home                                        253:2    0 23.3G  0 lvm     /home
[root@ibm-p8-kvm-02-guest-01 ~]# 




So, moving to VERIFIED.

Comment 17 errata-xmlrpc 2020-04-28 15:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1634

