Bug 1796132 - assertion "count to be initialized not in use" ((*journalValue == atomicLoad32(decrementCount)))
Keywords:
Status: CLOSED DUPLICATE of bug 1765253
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kmod-kvdo
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: vdo-internal
QA Contact: vdo-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-29 16:58 UTC by Corey Marthaler
Modified: 2021-09-06 15:32 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-29 17:04:26 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Issue Tracker RHELPLAN-33791 (Last Updated: 2021-09-06 15:32:27 UTC)

Description Corey Marthaler 2020-01-29 16:58:03 UTC
Description of problem:
I hit this while running pvmove testing on native lvm2 vdo volumes. 


Moving data from /dev/sdf2 (pv_shuffle_B) to /dev/sdm2 one LV at a time (pvmove -n) on hayes-02
hayes-02: pvmove  -n pv_shuffle_B/vdo -v /dev/sdf2 /dev/sdm2 &
  Archiving volume group "pv_shuffle_B" metadata (seqno 67).
  Creating logical volume pvmove0
  Moving 1024 extents of logical volume pv_shuffle_B/vpool0_vdata.
  activation/volume_list configuration setting not defined: Checking only host tags for pv_shuffle_B/vdo.
  Creating pv_shuffle_B-pvmove0
  Loading table for pv_shuffle_B-pvmove0 (253:80).
  Loading table for pv_shuffle_B-vpool0_vdata (253:72).
  Loading table for pv_shuffle_B-vpool0-vpool (253:73).
  Suppressed pv_shuffle_B-vpool0-vpool (253:73) identical table reload.
  Loading table for pv_shuffle_B-vdo (253:74).
  Suppressed pv_shuffle_B-vdo (253:74) identical table reload.
  Not monitoring pv_shuffle_B/vpool0 with libdevmapper-event-lvm2vdo.so
  Unmonitored LVM-A8imPrmNFpjLU5GqTnBcIZgO7r42UJ91nGa4SkCe9Vj8BwpiPvwbzm5xsqZFlLSO-vpool for events
  Suspending pv_shuffle_B-vdo (253:74) with device flush
  Suspending pv_shuffle_B-vpool0-vpool (253:73) with device flush
  Suspending pv_shuffle_B-vpool0_vdata (253:72) with device flush
  Loading table for pv_shuffle_B-vpool0-vpool (253:73).
  Suppressed pv_shuffle_B-vpool0-vpool (253:73) identical table reload.
  Loading table for pv_shuffle_B-vdo (253:74).
  Suppressed pv_shuffle_B-vdo (253:74) identical table reload.
  Resuming pv_shuffle_B-pvmove0 (253:80).
  Resuming pv_shuffle_B-vpool0_vdata (253:72).
  Resuming pv_shuffle_B-vpool0-vpool (253:73).
  Resuming pv_shuffle_B-vdo (253:74).
  Monitored LVM-A8imPrmNFpjLU5GqTnBcIZgO7r42UJ91nGa4SkCe9Vj8BwpiPvwbzm5xsqZFlLSO-vpool for events
  Creating volume group backup "/etc/lvm/backup/pv_shuffle_B" (seqno 68).
  activation/volume_list configuration setting not defined: Checking only host tags for pv_shuffle_B/pvmove0.
  Checking progress before waiting every 15 seconds.
  /dev/sdf2: Moved: 0.29%
  /dev/sdf2: Moved: 33.30%
  /dev/sdf2: Moved: 67.48%
  /dev/sdf2: Moved: 100.00%
  Polling finished successfully.
Device does not exist.
Unable to get copy percent, pvmove most likely finished.

Could not connect to hayes-02:5016, 22: Invalid argument
Could not connect to hayes-02:5016, 22: Invalid argument


 lvs -a -o +devices pv_shuffle_B
  vdo                 pv_shuffle_B vwi-aov--- 1016.00m vpool0                          0.43                                    vpool0(0)                                                                  
  vpool0              pv_shuffle_B dwi-------    4.00g                                 75.16                                   vpool0_vdata(0)                                                            
  [vpool0_vdata]      pv_shuffle_B Dwi-ao----    4.00g                                                                         /dev/sdm2(2668)                                                            



Jan 28 15:28:50 hayes-02 qarshd[27121]: Running cmdline: pvmove -n pv_shuffle_B/vdo -v /dev/sdf2 /dev/sdm2
Jan 28 15:28:50 hayes-02 lvm[7556]: No longer monitoring VDO pool pv_shuffle_B-vpool0-vpool.
Jan 28 15:28:50 hayes-02 kernel: kvdo38:pvmove: suspending device 'pv_shuffle_B-vpool0-vpool'
Jan 28 15:28:50 hayes-02 kernel: kvdo38:packerQ: compression is disabled
Jan 28 15:28:51 hayes-02 kernel: kvdo38:packerQ: compression is enabled
Jan 28 15:28:51 hayes-02 kernel: uds: kvdo38:dedupeQ: beginning save (vcn 4294967295)
Jan 28 15:28:59 hayes-02 kernel: uds: kvdo38:dedupeQ: finished save (vcn 4294967295)
Jan 28 15:28:59 hayes-02 kernel: kvdo38:pvmove: device 'pv_shuffle_B-vpool0-vpool' suspended
Jan 28 15:28:59 hayes-02 kernel: kvdo38:pvmove: resuming device 'pv_shuffle_B-vpool0-vpool'
Jan 28 15:28:59 hayes-02 kernel: kvdo38:pvmove: device 'pv_shuffle_B-vpool0-vpool' resumed
Jan 28 15:28:59 hayes-02 lvm[7556]: Monitoring VDO pool pv_shuffle_B-vpool0-vpool.
Jan 28 15:28:59 hayes-02 systemd[1]: Started LVM2 poll daemon.
Jan 28 15:29:09 hayes-02 lvm[7556]: /dev/mapper/pv_shuffle_B-corigin_corig: stat failed: No such file or directory
Jan 28 15:29:09 hayes-02 lvm[7556]: Path /dev/mapper/pv_shuffle_B-corigin_corig no longer valid for device(253,81)
Jan 28 15:29:09 hayes-02 lvm[7556]: /dev/dm-81: stat failed: No such file or directory
Jan 28 15:29:09 hayes-02 lvm[7556]: Path /dev/dm-81 no longer valid for device(253,81)
Jan 28 15:29:35 hayes-02 systemd[1]: Started qarsh Per-Connection Server (10.15.80.223:46998).
Jan 28 15:29:35 hayes-02 qarshd[27205]: Talking to peer ::ffff:10.15.80.223:spremotetablet (IPv6)
Jan 28 15:29:35 hayes-02 qarshd[27205]: Running cmdline: dmsetup status pv_shuffle_B-pvmove0-real
Jan 28 15:29:35 hayes-02 systemd[1]: Started qarsh Per-Connection Server (10.15.80.223:47002).
Jan 28 15:29:35 hayes-02 qarshd[27209]: Talking to peer ::ffff:10.15.80.223:47002 (IPv6)
Jan 28 15:29:35 hayes-02 qarshd[27209]: Running cmdline: dmsetup status pv_shuffle_B-pvmove0
Jan 28 15:29:45 hayes-02 lvmpolld[27148]: W: #011LVPOLL: PID 27151: STDERR: '  WARNING: This metadata update is NOT backed up.'

### There's quite a gap of time here where nothing else was running or going on...

Jan 29 06:13:19 hayes-02 kernel: uds: kvdo38:journalQ: assertion "count to be initialized not in use" ((*journalValue == atomicLoad32(decrementCount))) failed at /builddir/build/BUILD/kvdo-e77ddd6a10d86fb4f25c0f543a0bee06b37c7633/obj/./vdo/base/lockCounter.c:259
Jan 29 06:13:19 hayes-02 kernel: uds: kvdo38:journalQ: [backtrace]
Jan 29 06:13:19 hayes-02 kernel: CPU: 13 PID: 20203 Comm: kvdo38:journalQ Kdump: loaded Tainted: G           O     --------- -t - 4.18.0-173.el8.x86_64 #1
Jan 29 06:13:19 hayes-02 kernel: Hardware name: Dell Inc. PowerEdge R830/0VVT0H, BIOS 1.8.0 05/28/2018
Jan 29 06:13:19 hayes-02 kernel: Call Trace:
Jan 29 06:13:19 hayes-02 kernel: dump_stack+0x5c/0x80
Jan 29 06:13:19 hayes-02 kernel: assertionFailedLogOnly+0x49/0x70 [uds]
Jan 29 06:13:19 hayes-02 kernel: ? scheduleOperationWithContext+0xee/0x130 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: ? kvdoGetCurrentThreadID+0xa/0x20 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: initializeLockCount+0x6f/0x80 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: ? prepareToAssignEntry+0x1d0/0x1d0 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: prepareToAssignEntry+0x17a/0x1d0 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: assignEntries.part.6+0x9d/0xb0 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: workQueueRunner+0x1b9/0x660 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: ? finish_wait+0x80/0x80
Jan 29 06:13:19 hayes-02 kernel: ? kvdoCompareDataVIOs+0x90/0x90 [kvdo]
Jan 29 06:13:19 hayes-02 kernel: kthread+0x112/0x130
Jan 29 06:13:19 hayes-02 kernel: ? kthread_flush_work_fn+0x10/0x10
Jan 29 06:13:19 hayes-02 kernel: ret_from_fork+0x35/0x40


Version-Release number of selected component (if applicable):
kernel-4.18.0-173.el8    BUILT: Fri Jan 24 06:02:03 CST 2020
vdo-6.2.2.33-12.el8    BUILT: Mon Nov 25 16:26:28 CST 2019
kmod-kvdo-6.2.2.24-63.el8    BUILT: Tue Jan 14 15:03:22 CST 2020

lvm2-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-libs-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-dbusd-2.03.07-1.el8    BUILT: Mon Dec  2 00:12:23 CST 2019
device-mapper-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019


How reproducible:
Still looking into how reproducible this is, as well as providing more details...

Comment 1 Sweet Tea Dorminy 2020-01-29 17:04:26 UTC
We just fixed this bug :)

*** This bug has been marked as a duplicate of bug 1765253 ***

