Bug 1678631
Summary: | Establish environment variable to set D-Bus timeout parameter | |
---|---|---|---
Product: | Red Hat Enterprise Linux 8 | Reporter: | Jakub Krysl <jkrysl>
Component: | stratis-cli | Assignee: | mulhern <amulhern>
Status: | CLOSED ERRATA | QA Contact: | guazhang <guazhang>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 8.2 | CC: | amulhern, dkeefe, rhandlin
Target Milestone: | rc | |
Target Release: | 8.2 | |
Hardware: | ppc64le | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | 1.0.5 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-04-28 15:41:56 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Jakub Krysl
2019-02-19 09:17:52 UTC
Some quick notes: This does not occur in the GetManagedObjects call, but in the snapshot call itself. We do not know why the CLI did not receive a reply. First question: was the snapshot properly created?

If I'm following the log correctly, /dev/sda is an iSCSI-attached disk. Additionally, when it's cleared using dd:

INFO: [2019-02-18 11:38:02] Cleaning superblock of device /dev/sda with params cmd = dd if=/dev/zero of=/dev/sda bs=4k count=2
INFO: [2019-02-18 06:38:02] Running: 'dd if=/dev/zero of=/dev/sda bs=4k count=2'...
2+0 records in
2+0 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.078948 s, 104 kB/s

Its performance doesn't appear to be super fast.

As the snapshot does get created, but the command line is timing out, my best guess is that the disk IO is slow enough that we are exceeding the 2 minute timeout the command line uses for the dbus interface when we are creating a snapshot. I started up a ppc64le system using a locally attached disk for Stratis and it takes about 20 seconds to do a snapshot.

We may want to expose an environment variable or a command line argument which allows the user to specify a longer timeout.

(In reply to Tony Asleson from comment #5)

> If I'm following the log correctly, /dev/sda is an iSCSI-attached disk.

Yes, you are correct.

> Additionally, when it's cleared using dd:
>
> INFO: [2019-02-18 11:38:02] Cleaning superblock of device /dev/sda with params cmd = dd if=/dev/zero of=/dev/sda bs=4k count=2
> INFO: [2019-02-18 06:38:02] Running: 'dd if=/dev/zero of=/dev/sda bs=4k count=2'...
> 2+0 records in
> 2+0 records out
> 8192 bytes (8.2 kB, 8.0 KiB) copied, 0.078948 s, 104 kB/s
>
> Its performance doesn't appear to be super fast.

This is way slower than it should be; I'll have to investigate.

> As the snapshot does get created, but the command line is timing out, my best guess is that the disk IO is slow enough that we are exceeding the 2 minute timeout the command line uses for the dbus interface when we are creating a snapshot. I started up a ppc64le system using a locally attached disk for Stratis and it takes about 20 seconds to do a snapshot.

If you check the attached test log, at the end is a test summary including how long certain tasks took. Here are the tasks that took much longer than the others (which took from 1 to 6 seconds):

Setup name: stratis/setup/storage/setup_iscsi  Status: PASS  Elapsed Time: 03m35s
Test name: stratis/stratis_cli/fs/create_destroy_success  Status: PASS  Elapsed Time: 01m37s
Setup name: stratis/setup/fs_create  Status: PASS  Elapsed Time: 01m35s
Test name: stratis/stratis_cli/fs/snapshot_success  Status: FAIL  Elapsed Time: 03m45s

stratis/setup/storage/setup_iscsi takes so long because there are 120s waits to establish an NFS lock for the iSCSI LUN. stratis/stratis_cli/fs/create_destroy_success creates and destroys a filesystem, nothing else. stratis/setup/fs_create creates a filesystem; compared to the previous one, it seems creation of the filesystem takes quite long here, while deletion does not. stratis/stratis_cli/fs/snapshot_success creates a snapshot on an existing filesystem (and deletes it afterwards). I am not sure whether the creation itself took the 3m45s and deletion was almost instant, or whether it timed out after 2m and deletion took 1m45s. I am working on splitting these into separate tests so that this information is provided.

> We may want to expose an environment variable or a command line argument which allows the user to specify a longer timeout.

That would be quite an elegant way of solving this.
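As an aside, here is a minimal sketch of the timeout behaviour under discussion, assuming dbus-python as the client library: a proxy method accepts a per-call timeout in seconds, and a call that runs longer than that fails with a NoReply error even though stratisd may still finish the work. The bus name, object paths, interface, and method name below follow the stratisd D-Bus naming scheme but are illustrative only, not copied from the stratis-cli source.

```python
# Minimal sketch, assuming dbus-python; names are illustrative, not taken
# from the stratis-cli source.
import dbus

bus = dbus.SystemBus()

# Hypothetical pool object path; bus name and interface follow the stratisd
# D-Bus naming scheme.
pool = bus.get_object("org.storage.stratis1", "/org/storage/stratis1/pool/0")
snapshot = pool.get_dbus_method(
    "SnapshotFilesystem", dbus_interface="org.storage.stratis1.pool"
)

try:
    # dbus-python takes a per-call timeout in seconds; 120.0 mirrors the
    # two-minute limit discussed above.  On slow iSCSI-backed storage the
    # snapshot can outlive this, and the call fails with NoReply.
    result = snapshot(
        "/org/storage/stratis1/filesystem/0",  # origin filesystem (hypothetical path)
        "test_snapshot",
        timeout=120.0,
    )
except dbus.exceptions.DBusException as err:
    print("D-Bus call failed:", err.get_dbus_name())
```

Raising that per-call timeout, whether from an environment variable or a CLI option, is the change being proposed here.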
CLI argument seems better, as you might want to specify different timeouts for different commands.

The plan is for stratis to observe an environment variable, STRATIS_DBUS_TIMEOUT, which it will read on invocation and which will allow the user to set the timeout. Most likely the units will be seconds, perhaps milliseconds if that seems more desirable. GitHub issue: https://github.com/stratis-storage/stratis-cli/issues/252.

Mass migration to Guangwu.

Hello,

[root@ibm-p8-kvm-02-guest-01 ~]# rpm -qa | grep stratis
stratis-cli-2.0.0-1.el8.noarch
stratisd-2.0.0-4.el8.ppc64le
[root@ibm-p8-kvm-02-guest-01 ~]# uname -a
Linux ibm-p8-kvm-02-guest-01.virt.pnr.lab.eng.rdu2.redhat.com 4.18.0-151.el8.ppc64le #1 SMP Fri Nov 15 18:43:51 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux
[root@ibm-p8-kvm-02-guest-01 ~]#
[root@ibm-p8-kvm-02-guest-01 ~]# stratis pool create test_pool /dev/sda
[root@ibm-p8-kvm-02-guest-01 ~]# stratis fs create test_pool test_fs
[root@ibm-p8-kvm-02-guest-01 ~]#
[root@ibm-p8-kvm-02-guest-01 ~]# stratis fs snapshot test_pool test_fs test_snapshot
[root@ibm-p8-kvm-02-guest-01 ~]#
[root@ibm-p8-kvm-02-guest-01 ~]# lsblk
NAME                                                                     MAJ:MIN RM  SIZE RO TYPE    MOUNTPOINT
sda                                                                        8:0    0    2G  0 disk
`-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-physical-originsub 253:3    0    2G  0 stratis
  |-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-thinmeta    253:4    0   16M  0 stratis
  | `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-thinpool-pool  253:7    0  1.9G  0 stratis
  |   |-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-9962f3cf8dcf4d9c961f9cc7515cb9b1
  |   |                                                                 253:8    0    1T  0 stratis
  |   `-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-0ea9300e341b45988b5851bb5e35c768
  |                                                                     253:9    0    1T  0 stratis
  |-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-thindata    253:5    0  1.9G  0 stratis
  | `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-thinpool-pool  253:7    0  1.9G  0 stratis
  |   |-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-9962f3cf8dcf4d9c961f9cc7515cb9b1
  |   |                                                                 253:8    0    1T  0 stratis
  |   `-stratis-1-d4e7d7b6d1bb40c9af3d9e5f07901559-thin-fs-0ea9300e341b45988b5851bb5e35c768
  |                                                                     253:9    0    1T  0 stratis
  `-stratis-1-private-d4e7d7b6d1bb40c9af3d9e5f07901559-flex-mdv         253:6    0   16M  0 stratis
vda                                                                     252:0    0   80G  0 disk
|-vda1                                                                  252:1    0    4M  0 part
|-vda2                                                                  252:2    0    1G  0 part /boot
`-vda3                                                                  252:3    0   79G  0 part
  |-rhel_ibm--p8--kvm--02--guest--01-root                               253:0    0 47.7G  0 lvm  /
  |-rhel_ibm--p8--kvm--02--guest--01-swap                               253:1    0    8G  0 lvm  [SWAP]
  `-rhel_ibm--p8--kvm--02--guest--01-home                               253:2    0 23.3G  0 lvm  /home
[root@ibm-p8-kvm-02-guest-01 ~]#

So move to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1634
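For reference, reading STRATIS_DBUS_TIMEOUT once on invocation, as planned in the comment above, could look roughly like the sketch below. The choice of milliseconds, the 120000 ms default (the existing two-minute timeout), the upper bound, and the function name are assumptions for illustration, not the shipped stratis-cli code.

```python
# Rough sketch only: the units (milliseconds), default, cap, and function
# name are assumptions, not the shipped stratis-cli behaviour.
import os

_DEFAULT_TIMEOUT_MS = 120000      # the existing two-minute D-Bus timeout
_MAX_TIMEOUT_MS = 2 ** 31 - 1     # assumed cap; libdbus stores timeouts in an int

def dbus_timeout_ms():
    """Return the D-Bus timeout in milliseconds, honouring the
    STRATIS_DBUS_TIMEOUT environment variable when it is set."""
    raw = os.environ.get("STRATIS_DBUS_TIMEOUT")
    if raw is None:
        return _DEFAULT_TIMEOUT_MS
    try:
        timeout = int(raw)
    except ValueError:
        raise ValueError("STRATIS_DBUS_TIMEOUT must be an integer, got %r" % raw)
    if not 0 <= timeout <= _MAX_TIMEOUT_MS:
        raise ValueError("STRATIS_DBUS_TIMEOUT out of range: %d" % timeout)
    return timeout
```

With something like this in place, a run on slow iSCSI-backed storage could raise the limit for a single invocation (for example, STRATIS_DBUS_TIMEOUT=300000 stratis fs snapshot test_pool test_fs test_snapshot) without changing the default for other commands.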