Bug 1257762 - need better failure warning when thin pool meta device is full
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Reported: 2015-08-27 18:41 EDT by Corey Marthaler
Modified: 2017-08-02 02:49 EDT
CC List: 8 users

Type: Bug


Attachments: None
Description Corey Marthaler 2015-08-27 18:41:44 EDT
Description of problem:
I was verifying bug 1163530, but in this case did not set thin_pool_autoextend_threshold. However, shouldn't there still be a similarly easy-to-read message when the metadata space is full, like the one below?

"Free space in pool vg/pool is above threshold, new volumes are not allowed."




        thin_pool_autoextend_threshold = 100
        thin_pool_autoextend_percent = 20
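With thin_pool_autoextend_threshold = 100 as above, autoextension is disabled. For comparison, a sketch of how the activation section of lvm.conf would look with autoextension enabled; the 70/20 values are only an example:

        activation {
            # Autoextend the thin pool once usage crosses 70%, growing it
            # by 20% each time; 100 (the default) disables autoextension.
            thin_pool_autoextend_threshold = 70
            thin_pool_autoextend_percent = 20
        }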

[root@host-112 ~]# lvs -a -o +devices snapper_thinp/POOL
  LV   VG            Attr       LSize Pool Origin Data% Meta% Devices      
  POOL snapper_thinp twi-aotzM- 1.00g             0.00  99.41 POOL_tdata(0)
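For reference, the metadata usage can also be watched and grown by hand; a hedged sketch using the names from the report above, with an arbitrary +256M extension size:

  # Report data and metadata usage of the pool (Meta% is what hit 99.41 here).
  lvs -o lv_name,lv_size,data_percent,metadata_percent snapper_thinp/POOL

  # Grow the pool's metadata LV before it fills; the +256M figure is illustrative.
  lvextend --poolmetadatasize +256M snapper_thinp/POOL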


  Rounding up size to full physical extent 12.00 MiB
  WARNING: Sum of all thin volume sizes (12.54 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Logical volume "virt985" created.
  Rounding up size to full physical extent 12.00 MiB
  WARNING: Sum of all thin volume sizes (12.55 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  device-mapper: message ioctl on (253:4) failed: Operation not supported
  Failed to process thin pool message "set_transaction_id 986 987".
  Failed to suspend and send message snapper_thinp/POOL.
  Rounding up size to full physical extent 12.00 MiB
  device-mapper: message ioctl on (253:4) failed: Operation not supported
  Failed to process thin pool message "delete 987".
  Failed to suspend and send message snapper_thinp/POOL.
  Rounding up size to full physical extent 12.00 MiB
  device-mapper: message ioctl on (253:4) failed: Operation not supported
  Failed to process thin pool message "delete 987".
  Failed to suspend and send message snapper_thinp/POOL.
  Rounding up size to full physical extent 12.00 MiB
  device-mapper: message ioctl on (253:4) failed: Operation not supported
  Failed to process thin pool message "delete 987".
  Failed to suspend and send message snapper_thinp/POOL.
  Rounding up size to full physical extent 12.00 MiB


[89067.895695] device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
[89069.013823] device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
[89349.877969] device-mapper: space map metadata: unable to allocate new metadata block
[89349.878885] device-mapper: thin: 253:4: metadata operation 'dm_pool_commit_metadata' failed: error = -28
[89349.879933] device-mapper: thin: 253:4: aborting current metadata transaction
[89349.882392] device-mapper: thin: 253:4: switching pool to read-only mode
[89349.883430] device-mapper: thin: 253:4: unable to service pool target messages in READ_ONLY or FAIL mode
[89350.565263] device-mapper: thin: 253:4: unable to service pool target messages in READ_ONLY or FAIL mode
[89351.124968] device-mapper: thin: 253:4: unable to service pool target messages in READ_ONLY or FAIL mode
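Once the pool has switched to read-only mode as shown above, the kernel-side state can be inspected and the metadata repaired; a rough sketch, assuming the usual VG-LV-tpool device-mapper name for the active pool target:

  # Thin-pool status: field 5 is <used>/<total> metadata blocks, and the
  # ro/rw flag further along confirms the pool has dropped to read-only.
  dmsetup status snapper_thinp-POOL-tpool

  # Deactivate the pool, run thin_check/thin_repair on its metadata via
  # lvconvert, then grow the metadata LV to avoid an immediate repeat.
  lvchange -an snapper_thinp/POOL
  lvconvert --repair snapper_thinp/POOL
  lvextend --poolmetadatasize +256M snapper_thinp/POOL
  lvchange -ay snapper_thinp/POOL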



Version-Release number of selected component (if applicable):
3.10.0-306.el7.x86_64
lvm2-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
lvm2-libs-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
lvm2-cluster-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-libs-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-event-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-event-libs-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
Comment 1 Zdenek Kabelac 2015-08-28 04:53:35 EDT
Yeah - I've been thinking about this a couple of times - we will likely need to deploy more detection code inside the activation tree, i.e. checking the status of the thin pool directly before posting messages (as of now these are two quite separate tasks).

It may also be useful in a few other situations I'm trying to resolve.

So far the logic has been: if you don't want lvm2 to maintain the threshold, it's left completely up to you to ensure there is space in both the data and metadata areas.
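A minimal sketch of that kind of manual pre-check, done from a script before creating a new thin volume; the 90% cutoff, the new LV name, and the VG-LV-tpool device name are illustrative assumptions:

  #!/bin/sh
  # Hypothetical guard: query the kernel's view of the thin pool and refuse
  # to create another thin volume when metadata usage is above a cutoff.
  VG=snapper_thinp
  POOL=POOL
  CUTOFF=90    # percent, illustrative only

  # For a thin-pool target, field 5 of the dmsetup status line is
  # <used metadata blocks>/<total metadata blocks>.
  META=$(dmsetup status "${VG}-${POOL}-tpool" | awk '{print $5}')
  USED=${META%/*}
  TOTAL=${META#*/}
  PCT=$((USED * 100 / TOTAL))

  if [ "$PCT" -ge "$CUTOFF" ]; then
      echo "Refusing lvcreate: ${VG}/${POOL} metadata ${PCT}% full" >&2
      exit 1
  fi

  # Safe enough to proceed with a new 1G thin volume (name is illustrative).
  lvcreate -V 1G -T "${VG}/${POOL}" -n virt_next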

The solution will most likely not be part of the RHEL 7.2 release.
