Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1583501

Summary: The VDO storage usage is not updated after copying files.
Product: Red Hat Enterprise Linux 7
Reporter: Wei Wang <weiwang>
Component: kmod-kvdo
Assignee: Dennis Keefe <dkeefe>
Status: CLOSED NOTABUG
QA Contact: vdo-qe
Severity: medium
Docs Contact:
Priority: medium
Version: 7.5
CC: awalsh, bgurney, bugs, cshao, dkeefe, huzhao, jkrysl, limershe, mgoldboi, qiyuan, rbarry, sweettea, weiwang, yaniwang, ycui, yzhao
Target Milestone: pre-dev-freeze
Keywords: Regression
Target Release: 7.6
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-08-31 14:23:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
log files (flags: none)
vdo_logs (flags: none)

Description Wei Wang 2018-05-29 06:55:26 UTC
Created attachment 1445140 [details]
log files

Description of problem:
The VDO storage usage is not updated after copying files. The CLI shows no change after several files are copied; df and vdostats keep reporting the same sizes no matter what files are copied.
[root@dhcp-9-130 vdo]# df -h|grep vdo
/dev/mapper/vdo0                                              15G   33M   15G   1% /root/my_vdo/vdo
[root@dhcp-9-130 vdo]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo0         10.0G      4.0G      6.0G  40%           99%


Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.2-20180525.0
cockpit-system-165-3.el7.noarch
cockpit-storaged-165-3.el7.noarch
cockpit-165-3.el7.x86_64
cockpit-ws-165-3.el7.x86_64
imgbased-1.0.17-0.1.el7ev.noarch

How reproducible:
100%


Steps to Reproduce:
Prepare a USB block device with a 10GB partition.
1. Clean install redhat-virtualization-host-4.2-20180525.0.
2. Log in to the cockpit UI with the root account at https://<host IP>:9090
3. Create VDO storage with a 15GB logical size.
4. Format it with "Overwrite existing data".
5. Mount the VDO storage at /root/my_vdo/vdo.
6. Check the usage of the VDO storage:
    #df -h|grep vdo
    #vdostats --human-readable
7. Copy files (>2GB) to the mount path.
8. Check the usage of the VDO storage:
    #df -h|grep vdo
    #vdostats --human-readable
 
Actual results:
The vdo storage usage still displays the same GB size no matter what files are copied.
[root@dhcp-9-130 vdo]# df -h|grep vdo
/dev/mapper/vdo0                                              15G   33M   15G   1% /root/my_vdo/vdo
[root@dhcp-9-130 vdo]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo0         10.0G      4.0G      6.0G  40%           99%

Expected results:
The vdo storage usage should be updated after copying files.

Additional info:

Comment 1 Ryan Barry 2018-05-29 08:55:17 UTC
How is this a regression?

Comment 2 Red Hat Bugzilla Rules Engine 2018-05-29 08:55:24 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 3 Wei Wang 2018-05-29 10:05:00 UTC
(In reply to Ryan Barry from comment #1)
> How is this a regression?

This issue was not detected with the earlier RHVH version.

Comment 4 Sandro Bonazzola 2018-08-07 09:21:15 UTC
This doesn't look specific to oVirt Node; it seems to be related to vdo directly. Moving to platform.

Comment 6 Sweet Tea Dorminy 2018-08-07 15:37:53 UTC
How are you copying the files? Can you please elaborate that step?

(I'm worried that the files are in the page cache and haven't been written to VDO yet.)

Thanks!

Comment 7 Bryan Gurney 2018-08-07 15:43:50 UTC
In order to ensure that the files are written to the VDO volume, run "time sync; date" after the files have been copied.

The "sync" command will flush the filesystem buffers; the "time" command before it will time how long this operation takes, and the "date" command after it will display the time the operation completed (which helps when checking logs for any potential associated events).

If you run "vdostats vdo0 --verbose", you will see the verbose statistics for the VDO volume.  The statistic "logical blocks used" tracks the number of 4096-byte logical blocks currently used.  This will be 0 before any devices have written to it (i.e.: prior to mkfs), and will be a relatively small positive number after mkfs.  This should be a higher number after files have been written to the VDO volume.
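The flush-then-check sequence above can be scripted as a small before/after comparison (a sketch; it assumes a VDO volume named vdo0, as in the report, and that vdostats is on PATH):

```shell
# Record "logical blocks used" before the flush, force dirty pages
# out of the page cache with sync, then read the counter again.
before=$(vdostats vdo0 --verbose | awk -F: '/logical blocks used/ {gsub(/ /, "", $2); print $2}')
time sync; date
after=$(vdostats vdo0 --verbose | awk -F: '/logical blocks used/ {gsub(/ /, "", $2); print $2}')
echo "logical blocks used: $before -> $after"
```

If the file really reached the VDO device, the second number should be noticeably larger than the first.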

Comment 8 Wei Wang 2018-08-08 05:47:25 UTC
(In reply to Sweet Tea Dorminy from comment #6)
> How are you copying the files? Can you please elaborate that step?
> 
> (I'm worried that the files are in the page cache and haven't been written
> to VDO yet.)
> 
> Thanks!

1. scp from remote NFS storage:
scp <username>@<host ip>:<file path>/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./

2. Then check df -h and vdostats
3. Rename the copied ISO on the local host
4. Repeat steps 1-3 three times.

Comment 9 Sweet Tea Dorminy 2018-08-08 05:56:25 UTC
Okay. Based on that, and the df -h output above, I really think that the ISOs are still in the page cache. The df -h | grep vdo output is showing a 'Used' number of 33M, while a filesystem with several gigabytes of ISOs written to it should show a greater 'Used' number. 

Can you please try adding 'time sync;date' after your scp commands, and check whether the `df -h | grep vdo` output and the `vdostats` output changes? Thanks!

Comment 10 Wei Wang 2018-08-08 06:26:47 UTC
(In reply to Sweet Tea Dorminy from comment #9)
> Okay. Based on that, and the df -h output above, I really think that the
> ISOs are still in the page cache. The df -h | grep vdo output is showing a
> 'Used' number of 33M, while a filesystem with several gigabytes of ISOs
> written to it should show a greater 'Used' number. 
> 
> Can you please try adding 'time sync;date' after your scp commands, and
> check whether the `df -h | grep vdo` output and the `vdostats` output
> changes? Thanks!

There are no changes after using 'time sync;date'

[root@dhcp-9-57 vdo]# time sync;date

real	0m0.065s
user	0m0.000s
sys	0m0.006s
Wed Aug  8 14:24:03 CST 2018
[root@dhcp-9-57 vdo]# df -h|grep vdo
/dev/mapper/vdo_wei                              15G   33M   15G   1% /root/my_vdo/vdo
[root@dhcp-9-57 vdo]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_wei      10.0G      4.0G      6.0G  40%           99%

Comment 12 Dennis Keefe 2018-08-16 19:09:23 UTC
Please provide feedback to comment 11

Comment 14 Dennis Keefe 2018-08-20 16:08:26 UTC
Wei,  the bios stats didn't increment, which means the data either never reached VDO or hasn't reached it yet.  Can you run the same test, then
run the commands "umount /dev/mapper/vdo_wei; vdostats --verbose|egrep 'bios in read|bios out read|bios in write|bios out write'"?  Maybe unmounting the file system will force the data to be flushed.
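The requested check can be run as one sequence (a sketch; it assumes the volume name vdo_wei from this report and root privileges, and uses a single regex equivalent to the four alternatives above):

```shell
# Counters before: if "bios in write" never moves, no write bios
# ever reached the VDO device.
vdostats --verbose | egrep 'bios (in|out) (read|write)'
# Unmounting forces the filesystem to flush anything still dirty.
umount /dev/mapper/vdo_wei
# Counters after: a flush on umount should bump the write counters
# if the copied data ever belonged to this filesystem.
vdostats --verbose | egrep 'bios (in|out) (read|write)'
```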

Comment 16 Wei Wang 2018-08-21 03:52:40 UTC
I removed the pm_ack+ flag by mistake; could you please help restore it? Thanks!

Comment 17 Dennis Keefe 2018-08-21 13:23:02 UTC
Wei,

I'm not able to reproduce this issue on RHVH-4.2.  After creating a VDO volume, formatting, mounting, then scping an ISO to this volume the data is written to VDO and the counters increment.  

The version of RHVH-4.2 is RHVH-builds/RHVH-4.2-20180531.0 not the one from 20180525.  Can you try
this build?

Comment 18 Dennis Keefe 2018-08-21 15:04:43 UTC
Test results:

[root@rhvh-4 vdo]# df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]';scp 192.168.122.1:/VDO/Downloads/RHVH-4.2* ./;df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]'
/dev/mapper/vdo0                                 15G   33M   15G   1% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo0         15.0G      3.0G     12.0G  20%           99%
  bios in read                        : 1823
  bios in write                       : 3950698
  bios in discard                     : 0
RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso                                                                                                                                                                         100% 1090MB  39.4MB/s   00:27    
/dev/mapper/vdo0                                 15G  1.1G   14G   8% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo0         15.0G      4.1G     10.9G  27%           92%
  bios in read                        : 1829
  bios in write                       : 4229740
  bios in discard                     : 0

Packages:

[root@rhvh-4 vdo]# rpm -qa|grep cockpit
cockpit-ovirt-dashboard-0.11.24-1.el7ev.noarch
cockpit-bridge-165-3.el7.x86_64
cockpit-165-3.el7.x86_64
cockpit-ws-165-3.el7.x86_64
cockpit-storaged-165-3.el7.noarch
cockpit-dashboard-165-3.el7.x86_64
cockpit-system-165-3.el7.noarch
[root@rhvh-4 vdo]# rpm -qa|grep vdo
vdo-6.1.0.168-18.x86_64
kmod-kvdo-6.1.0.168-16.el7_5.x86_64

Comment 19 Wei Wang 2018-08-22 02:20:46 UTC
(In reply to Dennis Keefe from comment #17)
> Wei,
> 
> I'm not able to reproduce this issue on RHVH-4.2.  After creating a VDO
> volume, formatting, mounting, then scping an ISO to this volume the data is
> written to VDO and the counters increment.  
> 
> The version of RHVH-4.2 is RHVH-builds/RHVH-4.2-20180531.0 not the one from
> 20180525.  Can you try
> this build?

Dennis,
I can still reproduce this bug.

[root@dhcp-9-57 ~]# df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]';scp wangwei.8.174:/home/wangwei/isos/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./;df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]'
/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_wei      10.0G      4.0G      6.0G  40%           99%
  bios in read                        : 2243
  bios in write                       : 3950698
  bios in discard                     : 0
wangwei.8.174's password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  79.7MB/s   00:25    
/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_wei      10.0G      4.0G      6.0G  40%           99%
  bios in read                        : 2243
  bios in write                       : 3950698
  bios in discard                     : 0

[root@dhcp-9-57 ~]# rpm -qa|grep cockpit
cockpit-ovirt-dashboard-0.11.24-1.el7ev.noarch
cockpit-bridge-165-3.el7.x86_64
cockpit-165-3.el7.x86_64
cockpit-ws-165-3.el7.x86_64
cockpit-storaged-165-3.el7.noarch
cockpit-dashboard-165-3.el7.x86_64
cockpit-system-165-3.el7.noarch
[root@dhcp-9-57 ~]# rpm -qa|grep vdo
vdo-6.1.0.168-18.x86_64
kmod-kvdo-6.1.0.168-16.el7_5.x86_64

Comment 20 Dennis Keefe 2018-08-22 14:21:31 UTC
Wei,

Can you attach the logs for this system or provide me a detailed process to recreate this issue, maybe a video showing the process?  It would seem that the data is just not making it to the file system on top of VDO, otherwise the stats would be updated.   There must be a gap between what I'm doing from what you are doing.

Thank you.

Comment 21 Wei Wang 2018-08-24 01:07:01 UTC
(In reply to Dennis Keefe from comment #20)
> Wei,
> 
> Can you attach the logs for this system or provide me a detailed process to
> recreate this issue, maybe a video showing the process?  It would seem that
> the data is just not making it to the file system on top of VDO, otherwise
> the stats would be updated.   There must be a gap between what I'm doing
> from what you are doing.
> 
> Thank you.

Logs attached. I don't know whether it is related to the USB storage or not.

1. Attach the 30G USB (3.0) device to the host
2. Clear the USB storage via the dd command
3. #fdisk /dev/sdb, create one 10G partition
4. Create VDO storage with the cockpit UI (15G logical storage)

Comment 22 Wei Wang 2018-08-24 01:09:25 UTC
Created attachment 1478371 [details]
vdo_logs

Comment 23 Dennis Keefe 2018-08-24 17:03:06 UTC
Wei,

In the log messages I see that there is a VDO volume named vdo_wei which is started and there is a mount
of /dev/dm-13 to /root/my_vdo/vdo, but I can't see if there is a relationship between /dev/dm-13 and /dev/mapper/vdo_wei.

grep -i vdo messages
Aug 22 09:14:14 dhcp-9-57 systemd: Starting VDO volume services...
Aug 22 09:14:17 dhcp-9-57 systemd: Started VDO volume services.
Aug 22 09:58:18 dhcp-9-57 kernel: kvdo: modprobe: loaded version 6.1.0.168
Aug 22 09:58:21 dhcp-9-57 kernel: kvdo0:dmsetup: starting device 'vdo_wei' device instantiation 0 write policy auto
Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: device 'vdo_wei' started
Aug 22 09:58:22 dhcp-9-57 kernel: uds: kvdo0:dedupeQ: loading or rebuilding index: dev=/dev/sdb1 offset=4096 size=2781704192
Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: resuming device 'vdo_wei'
Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: device 'vdo_wei' resumed
Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:packerQ: compression is enabled
Aug 22 10:09:02 dhcp-9-57 journal: Mounted /dev/dm-13 (system) at /root/my_vdo/vdo on behalf of uid 0

There is an entry in the anaconda/journal.log file that might suggest dm-13 is a LVMThinSnapShotDevice and not a VDO volume or wasn't a VDO volume at this time. 

Aug 22 01:09:00 dhcp-9-57.nay.redhat.com blivet[1833]:                 LVMThinSnapShotDevice.readCurrentSize: path: /dev/mapper/rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1 ; sysfsPath: /sys/devices/virtual/block/dm-13 ; exists: True ;

I still need more information.  Could you provide the output of these commands?

"cat /proc/partitions;lsblk;dmsetup ls;dmsetup table;df -h;vdo status"

Thank you.
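The dm-13 / mapper-name relationship can also be resolved directly from sysfs (a sketch; dm-13 is the node from the log above, and the check is guarded in case the node doesn't exist on the host where it is run):

```shell
# Every device-mapper node exposes its mapper name in sysfs, so
# /dev/dm-N can be matched to /dev/mapper/<name> without dmsetup.
dm=dm-13
if [ -r "/sys/block/$dm/dm/name" ]; then
    echo "/dev/$dm is /dev/mapper/$(cat "/sys/block/$dm/dm/name")"
else
    echo "no device-mapper node $dm on this host"
fi
```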

Comment 24 Wei Wang 2018-08-27 01:59:53 UTC
(In reply to Dennis Keefe from comment #23)
> Wei,
> 
> In the log messages I see that there is a VDO volume named vdo_wei which is
> started and there is a mount
> of /dev/dm-13 to /root/my_vdo/vdo, but I can't see if there is a
> relationship between /dev/dm-13 and /dev/mapper/vdo_wei.
> 
> grep -i vdo messages
> Aug 22 09:14:14 dhcp-9-57 systemd: Starting VDO volume services...
> Aug 22 09:14:17 dhcp-9-57 systemd: Started VDO volume services.
> Aug 22 09:58:18 dhcp-9-57 kernel: kvdo: modprobe: loaded version 6.1.0.168
> Aug 22 09:58:21 dhcp-9-57 kernel: kvdo0:dmsetup: starting device 'vdo_wei'
> device instantiation 0 write policy auto
> Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: device 'vdo_wei' started
> Aug 22 09:58:22 dhcp-9-57 kernel: uds: kvdo0:dedupeQ: loading or rebuilding
> index: dev=/dev/sdb1 offset=4096 size=2781704192
> Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: resuming device 'vdo_wei'
> Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:dmsetup: device 'vdo_wei' resumed
> Aug 22 09:58:22 dhcp-9-57 kernel: kvdo0:packerQ: compression is enabled
> Aug 22 10:09:02 dhcp-9-57 journal: Mounted /dev/dm-13 (system) at
> /root/my_vdo/vdo on behalf of uid 0
> 
> There is an entry in the anaconda/journal.log file that might suggest dm-13
> is a LVMThinSnapShotDevice and not a VDO volume or wasn't a VDO volume at
> this time. 
> 
> Aug 22 01:09:00 dhcp-9-57.nay.redhat.com blivet[1833]:                
> LVMThinSnapShotDevice.readCurrentSize: path:
> /dev/mapper/rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1 ; sysfsPath:
> /sys/devices/virtual/block/dm-13 ; exists: True ;
> 
> I still need more information.  Could you provide the output of these
> commands?
> 
> "cat /proc/partitions;lsblk;dmsetup ls;dmsetup table;df -h;vdo status"
> 
> Thank you.

Dennis,

[root@dhcp-9-57 vdo]# cat /proc/partitions;lsblk;dmsetup ls;dmsetup table;df -h;vdo status
major minor  #blocks  name

   8        0  976762584 sda
   8        1    1048576 sda1
   8        2  975712256 sda2
  11        0    1048575 sr0
 253        0    8192000 dm-0
 253        1    1048576 dm-1
 253        2  861794304 dm-2
 253        3  861794304 dm-3
 253        4  833482752 dm-4
 253        5  861794304 dm-5
 253        6    2097152 dm-6
 253        7    8388608 dm-7
 253        8   15728640 dm-8
 253        9    1048576 dm-9
 253       10    1048576 dm-10
 253       11  833482752 dm-11
 253       12   10485760 dm-12
   8       16   30375936 sdb
   8       17   10485760 sdb1
 253       13   15728640 dm-13
NAME                                                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                      8:0    0 931.5G  0 disk 
├─sda1                                                   8:1    0     1G  0 part /boot
└─sda2                                                   8:2    0 930.5G  0 part 
  ├─rhvh_dhcp--9--57-swap                              253:0    0   7.8G  0 lvm  [SWAP]
  ├─rhvh_dhcp--9--57-pool00_tmeta                      253:1    0     1G  0 lvm  
  │ └─rhvh_dhcp--9--57-pool00-tpool                    253:3    0 821.9G  0 lvm  
  │   ├─rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1 253:4    0 794.9G  0 lvm  /
  │   ├─rhvh_dhcp--9--57-pool00                        253:5    0 821.9G  0 lvm  
  │   ├─rhvh_dhcp--9--57-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
  │   ├─rhvh_dhcp--9--57-var_log                       253:7    0     8G  0 lvm  /var/log
  │   ├─rhvh_dhcp--9--57-var                           253:8    0    15G  0 lvm  /var
  │   ├─rhvh_dhcp--9--57-tmp                           253:9    0     1G  0 lvm  /tmp
  │   ├─rhvh_dhcp--9--57-home                          253:10   0     1G  0 lvm  /home
  │   ├─rhvh_dhcp--9--57-root                          253:11   0 794.9G  0 lvm  
  │   └─rhvh_dhcp--9--57-var_crash                     253:12   0    10G  0 lvm  /var/crash
  └─rhvh_dhcp--9--57-pool00_tdata                      253:2    0 821.9G  0 lvm  
    └─rhvh_dhcp--9--57-pool00-tpool                    253:3    0 821.9G  0 lvm  
      ├─rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1 253:4    0 794.9G  0 lvm  /
      ├─rhvh_dhcp--9--57-pool00                        253:5    0 821.9G  0 lvm  
      ├─rhvh_dhcp--9--57-var_log_audit                 253:6    0     2G  0 lvm  /var/log/audit
      ├─rhvh_dhcp--9--57-var_log                       253:7    0     8G  0 lvm  /var/log
      ├─rhvh_dhcp--9--57-var                           253:8    0    15G  0 lvm  /var
      ├─rhvh_dhcp--9--57-tmp                           253:9    0     1G  0 lvm  /tmp
      ├─rhvh_dhcp--9--57-home                          253:10   0     1G  0 lvm  /home
      ├─rhvh_dhcp--9--57-root                          253:11   0 794.9G  0 lvm  
      └─rhvh_dhcp--9--57-var_crash                     253:12   0    10G  0 lvm  /var/crash
sdb                                                      8:16   1    29G  0 disk 
└─sdb1                                                   8:17   1    10G  0 part 
  └─vdo_wei                                            253:13   0    15G  0 vdo  /root/my_vdo/vdo
sr0                                                     11:0    1  1024M  0 rom  
rhvh_dhcp--9--57-tmp	(253:9)
rhvh_dhcp--9--57-var_crash	(253:12)
rhvh_dhcp--9--57-home	(253:10)
rhvh_dhcp--9--57-var	(253:8)
rhvh_dhcp--9--57-pool00	(253:5)
rhvh_dhcp--9--57-swap	(253:0)
rhvh_dhcp--9--57-root	(253:11)
rhvh_dhcp--9--57-var_log	(253:7)
rhvh_dhcp--9--57-var_log_audit	(253:6)
vdo_wei	(253:13)
rhvh_dhcp--9--57-pool00-tpool	(253:3)
rhvh_dhcp--9--57-pool00_tdata	(253:2)
rhvh_dhcp--9--57-pool00_tmeta	(253:1)
rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1	(253:4)
rhvh_dhcp--9--57-tmp: 0 2097152 thin 253:3 4
rhvh_dhcp--9--57-var_crash: 0 20971520 thin 253:3 9
rhvh_dhcp--9--57-home: 0 2097152 thin 253:3 5
rhvh_dhcp--9--57-var: 0 31457280 thin 253:3 3
rhvh_dhcp--9--57-pool00: 0 1723588608 linear 253:3 0
rhvh_dhcp--9--57-swap: 0 16384000 linear 8:2 2048
rhvh_dhcp--9--57-root: 0 1666965504 thin 253:3 6
rhvh_dhcp--9--57-var_log: 0 16777216 thin 253:3 2
rhvh_dhcp--9--57-var_log_audit: 0 4194304 thin 253:3 1
vdo_wei: 0 31457280 vdo /dev/sdb1 4096 disabled 0 32768 16380 on auto vdo_wei ack=1,bio=4,bioRotationInterval=64,cpu=2,hash=1,logical=1,physical=1
rhvh_dhcp--9--57-pool00-tpool: 0 1723588608 thin-pool 253:1 253:2 128 4039660 0 
rhvh_dhcp--9--57-pool00_tdata: 0 1723588608 linear 8:2 17246208
rhvh_dhcp--9--57-pool00_tmeta: 0 2097152 linear 8:2 1740834816
rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1: 0 1666965504 thin 253:3 8
Filesystem                                                  Size  Used Avail Use% Mounted on
/dev/mapper/rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180815.0+1  783G  3.9G  739G   1% /
devtmpfs                                                    7.7G     0  7.7G   0% /dev
tmpfs                                                       7.8G  4.0K  7.8G   1% /dev/shm
tmpfs                                                       7.8G   18M  7.7G   1% /run
tmpfs                                                       7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1                                                   976M  204M  706M  23% /boot
/dev/mapper/rhvh_dhcp--9--57-tmp                            976M  2.8M  906M   1% /tmp
/dev/mapper/rhvh_dhcp--9--57-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh_dhcp--9--57-var                             15G   42M   14G   1% /var
/dev/mapper/rhvh_dhcp--9--57-var_crash                      9.8G   37M  9.2G   1% /var/crash
/dev/mapper/rhvh_dhcp--9--57-var_log                        7.8G   40M  7.3G   1% /var/log
/dev/mapper/rhvh_dhcp--9--57-var_log_audit                  2.0G  6.1M  1.8G   1% /var/log/audit
tmpfs                                                       1.6G     0  1.6G   0% /run/user/0
/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo
VDO status:
  Date: '2018-08-27 09:48:22+08:00'
  Node: dhcp-9-57.nay.redhat.com
Kernel module:
  Loaded: true
  Name: kvdo
  Version information:
    kvdo version: 6.1.0.171
Configuration:
  File: /etc/vdoconf.yml
  Last modified: '2018-08-27 09:35:15'
VDOs:
  vdo_wei:
    Acknowledgement threads: 1
    Activate: enabled
    Bio rotation interval: 64
    Bio submission threads: 4
    Block map cache size: 128M
    Block map period: 16380
    Block size: 4096
    CPU-work threads: 2
    Compression: enabled
    Configured write policy: auto
    Deduplication: enabled
    Device mapper status: 0 31457280 vdo /dev/sdb1 albserver online cpu=2,bio=4,ack=1,bioRotationInterval=64
    Emulate 512 byte: disabled
    Hash zone threads: 1
    Index checkpoint frequency: 0
    Index memory setting: 0.25
    Index parallel factor: 0
    Index sparse: disabled
    Index status: online
    Logical size: 15G
    Logical threads: 1
    Physical size: 10G
    Physical threads: 1
    Read cache: disabled
    Read cache size: 0M
    Slab size: 2G
    Storage device: /dev/sdb1
    VDO statistics:
      /dev/mapper/vdo_wei:
        1K-blocks: 10485760
        1K-blocks available: 6266908
        1K-blocks used: 4218852
        512 byte emulation: false
        KVDO module bios used: 37286
        KVDO module bytes used: 426444608
        KVDO module peak bio count: 37574
        KVDO module peak bytes used: 426446480
        bios acknowledged discard: 0
        bios acknowledged flush: 15365
        bios acknowledged fua: 2
        bios acknowledged partial discard: 0
        bios acknowledged partial flush: 0
        bios acknowledged partial fua: 0
        bios acknowledged partial read: 0
        bios acknowledged partial write: 0
        bios acknowledged read: 2541
        bios acknowledged write: 3951216
        bios in discard: 0
        bios in flush: 15365
        bios in fua: 2
        bios in partial discard: 0
        bios in partial flush: 0
        bios in partial fua: 0
        bios in partial read: 0
        bios in partial write: 0
        bios in progress discard: 0
        bios in progress flush: 0
        bios in progress fua: 0
        bios in progress read: 0
        bios in progress write: 0
        bios in read: 2541
        bios in write: 3951216
        bios journal completed discard: 0
        bios journal completed flush: 0
        bios journal completed fua: 0
        bios journal completed read: 0
        bios journal completed write: 66515
        bios journal discard: 0
        bios journal flush: 66515
        bios journal fua: 66515
        bios journal read: 0
        bios journal write: 66515
        bios meta completed discard: 0
        bios meta completed flush: 0
        bios meta completed fua: 0
        bios meta completed read: 4904
        bios meta completed write: 70575
        bios meta discard: 0
        bios meta flush: 68317
        bios meta fua: 66516
        bios meta read: 4904
        bios meta write: 70575
        bios out completed discard: 0
        bios out completed flush: 0
        bios out completed fua: 0
        bios out completed read: 76
        bios out completed write: 1066
        bios out discard: 0
        bios out flush: 0
        bios out fua: 0
        bios out read: 76
        bios out write: 1066
        bios page cache completed discard: 0
        bios page cache completed flush: 0
        bios page cache completed fua: 0
        bios page cache completed read: 4843
        bios page cache completed write: 3428
        bios page cache discard: 0
        bios page cache flush: 1714
        bios page cache fua: 0
        bios page cache read: 4843
        bios page cache write: 3428
        block map cache pressure: 0
        block map cache size: 134217728
        block map clean pages: 1712
        block map dirty pages: 3131
        block map discard required: 0
        block map failed pages: 0
        block map failed reads: 0
        block map failed writes: 0
        block map fetch required: 4843
        block map flush count: 1714
        block map found in cache: 7432595
        block map free pages: 27925
        block map incoming pages: 0
        block map outgoing pages: 0
        block map pages loaded: 4843
        block map pages saved: 1714
        block map read count: 3937182
        block map read outgoing: 0
        block map reclaimed: 0
        block map wait for page: 436647
        block map write count: 3936903
        block size: 4096
        completed recovery count: 0
        compressed blocks written: 75
        compressed fragments in packer: 11
        compressed fragments written: 1050
        current VDO IO requests in progress: 11
        current dedupe queries: 0
        data blocks used: 52
        dedupe advice stale: 0
        dedupe advice timeouts: 0
        dedupe advice valid: 0
        entries indexed: 1064
        flush out: 15365
        instance: 0
        invalid advice PBN count: 0
        journal blocks batching: 0
        journal blocks committed: 66515
        journal blocks started: 66515
        journal blocks writing: 0
        journal blocks written: 66515
        journal commits requested count: 0
        journal disk full count: 0
        journal entries batching: 0
        journal entries committed: 7878829
        journal entries started: 7878829
        journal entries writing: 0
        journal entries written: 7878829
        logical blocks: 3932160
        logical blocks used: 3932160
        maximum VDO IO requests in progress: 566
        maximum dedupe queries: 311
        no space error count: 0
        operating mode: normal
        overhead blocks used: 1054661
        physical blocks: 2621440
        posts found: 0
        posts not found: 1064
        queries found: 0
        queries not found: 0
        read cache accesses: 0
        read cache data hits: 0
        read cache hits: 0
        read only error count: 0
        read-only recovery count: 0
        recovery progress (%): N/A
        reference blocks written: 0
        release version: 131337
        saving percent: 99
        slab count: 3
        slab journal blocked count: 0
        slab journal blocks written: 6
        slab journal disk full count: 0
        slab journal flush count: 0
        slab journal tail busy count: 0
        slab summary blocks written: 6
        slabs opened: 1
        slabs reopened: 0
        updates found: 1050
        updates not found: 0
        used percent: 40
        version: 26
        write amplification ratio: 0.02
        write policy: sync

I think maybe this is another bug: an error message about mounting /dev/dm-13 to /root/my_vdo/vdo is displayed after clicking the mount button. If that message is confirmed, it continues and /dev/sdb1 is mounted to /root/my_vdo/vdo successfully.

Comment 25 Dennis Keefe 2018-08-27 17:56:49 UTC
Wei,

In comment 19 you posted this output

"[root@dhcp-9-57 ~]# df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]';scp wangwei.8.174:/home/wangwei/isos/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./;df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]'"

The working directory is "~".  This is the home directory for root (/root).  The command tells scp to write the file to /root ("./")
not to /root/my_vdo/vdo.

The vdo volume is mounted at /root/my_vdo/vdo according to the df output

"/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo"

The file was not written to the VDO volume, it was written to the root partition.

Can you verify again that you are writing the file to the correct file system?

-----

The output you sent looks correct.  I'm not able to recreate this bug, and I believe that either the file is being written to the wrong location (not the VDO volume) or there is another issue not related to VDO.
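A quick way to rule out the wrong-location mistake is to check which filesystem the shell's working directory actually sits on before copying (a sketch; the mount point path is the one from this report):

```shell
# If the mount target for "." is "/" rather than /root/my_vdo/vdo,
# then "scp ... ./" writes to the root partition, not the VDO volume.
cd /root/my_vdo/vdo
df -h .
findmnt -T . -o TARGET,SOURCE
```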

Comment 26 Dennis Keefe 2018-08-27 22:20:59 UTC
Wei,

I believe I've been able to reproduce what you have been seeing, but this is not
a VDO bug.  

The issue I've been able to reproduce occurs when your shell's working directory is inside the path that you are mounting over.
If you copy any files to that path after the mount has completed, the files are written to the
root file system (or whichever device the directory originally lived on), not to the new device.

Here is my example:

Nothing in this directory
[root@rhvh-4 /]# cd /root/my_vdo/vdo
[root@rhvh-4 vdo]# ls -trlh
total 0

Copy file
[root@rhvh-4 vdo]#  scp dkeefe.122.1:/VDO/Downloads/RHVH-4.2* ./
RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso                                                                                                                    100% 1090MB  36.1MB/s   00:30    

List directory
[root@rhvh-4 vdo]# ls -trlh 
total 1.1G
-rw-r--r--. 1 root root 1.1G Aug 27 17:56 RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso

Change file name so later I can create another copy
[root@rhvh-4 vdo]# mv RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso RHVH.iso


[root@rhvh-4 vdo]# ls -tlrh
total 1.1G
-rw-r--r--. 1 root root 1.1G Aug 27 17:56 RHVH.iso

Now in Cockpit I configure the VDO volume and mount it to /root/my_vdo/vdo.

Check VDO stats before copying the file
[root@rhvh-4 vdo]# vdostats --verbose|egrep 'bios in [w r]'
  bios in read                        : 1629
  bios in write                       : 13418

Copy the file
[root@rhvh-4 vdo]#  scp dkeefe.122.1:/VDO/Downloads/RHVH-4.2* ./
RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso                                                                                                                    100% 1090MB  49.0MB/s   00:22   

Check VDO stats.  This is the same thing you saw: the file was copied, but there was no change in the VDO stats.
[root@rhvh-4 vdo]# vdostats --verbose|egrep 'bios in [w r]'
  bios in read                        : 1629
  bios in write                       : 13418

list the directory. There are two files
[root@rhvh-4 vdo]# ls -trlh
total 2.2G
-rw-r--r--. 1 root root 1.1G Aug 27 17:56 RHVH.iso
-rw-r--r--. 1 root root 1.1G Aug 27 17:59 RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso

Verify that the filesystem is mounted
[root@rhvh-4 vdo]# df -lh|grep vdo
/dev/mapper/vdo0                                100G   33M  100G   1% /root/my_vdo/vdo

Now back out of the directory
[root@rhvh-4 vdo]# cd ../

Go back into the newly mounted VDO volume
[root@rhvh-4 my_vdo]# cd vdo

List the directory.  No files exist.  
[root@rhvh-4 vdo]# ls -trlh
total 0

Copy the ISO again, but now to the VDO volume. 
[root@rhvh-4 vdo]#  scp dkeefe.122.1:/VDO/Downloads/RHVH-4.2* ./
RHVH-4.2-20180531.0-RHVH-x86_64-dvd1.iso                                                                                                                    100% 1090MB  42.7MB/s   00:25 

Check VDO stats and see that they have incremented.    
[root@rhvh-4 vdo]# vdostats --verbose|egrep 'bios in [w r]'
  bios in read                        : 1635
  bios in write                       : 292460

The first set of files copied to /root/my_vdo/vdo were saved to the root filesystem on /dev/sda.
The solution is to wait until you have mounted the VDO volume before entering that directory.

Let me know if you believe this is the issue.
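
Incidentally, the stale-working-directory behavior above doesn't require a mount (or root) to demonstrate: the shell's cwd is a handle to a directory inode, not a path, so swapping the directory out from under the shell by rename shows the same effect. A minimal sketch (/tmp/maskdemo is an illustrative path):

```shell
# Unprivileged simulation of the masking effect: the shell's working
# directory is an open handle to an inode, so after the directory is
# replaced at the same path (here by rename; a mount behaves the same),
# the shell keeps reading/writing the old, now-hidden directory.
set -e
rm -rf /tmp/maskdemo
mkdir -p /tmp/maskdemo/foo
cd /tmp/maskdemo/foo
echo data > file.iso                         # lands in the original foo
mv /tmp/maskdemo/foo /tmp/maskdemo/foo.orig  # stand-in for "mount over foo"
mkdir /tmp/maskdemo/foo                      # fresh, empty dir at the old path
ls                                           # still lists file.iso (old inode)
cd /tmp/maskdemo/foo                         # re-resolve, like "cd ../; cd vdo"
ls                                           # empty, as in the walkthrough
```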

Comment 27 Wei Wang 2018-08-28 02:25:10 UTC
(In reply to Dennis Keefe from comment #25)
> Wei,
> 
> In comment 19 you posted this output
> 
> "[root@dhcp-9-57 ~]# df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep
> 'bios in [r w d]';scp
> wangwei.8.174:/home/wangwei/isos/RHEL-7.3-20161019.0-Workstation-
> x86_64-dvd1.iso ./;df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep
> 'bios in [r w d]'"
> 
> The working directory is "~".  This is the home directory for root (/root). 
> The command tells scp to write the file to /root ("./")
> not to /root/my_vdo/vdo.
> 
> The vdo volume is mounted at /root/my_vdo/vdo according you the df output
> 
> "/dev/mapper/vdo_wei                                          15G   33M  
> 15G   1% /root/my_vdo/vdo"
> 
> The file was not written to the VDO volume, it was written to the root
> partition.
> 
> Can you verify again that you are writing the file to the correct file
> system?
> 
[root@dhcp-9-57 vdo]# df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]';scp wangwei.8.174:/home/wangwei/isos/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./;df -lh|grep vdo;vdostats --hum;vdostats --verbose|egrep 'bios in [r w d]'
/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_wei      10.0G      4.0G      6.0G  40%           99%
  bios in read                        : 2541
  bios in write                       : 3951216
  bios in discard                     : 0
The authenticity of host '10.66.8.174 (10.66.8.174)' can't be established.
ECDSA key fingerprint is SHA256:4zm67ja2qMwJnCgqkvkdGyE8KgaGLZeqkzU+fzAbBUY.
ECDSA key fingerprint is MD5:32:f2:fb:df:6e:04:e8:33:21:38:91:53:ff:62:9e:9f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.66.8.174' (ECDSA) to the list of known hosts.
wangwei.8.174's password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  79.4MB/s   00:26    
/dev/mapper/vdo_wei                                          15G   33M   15G   1% /root/my_vdo/vdo
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_wei      10.0G      4.0G      6.0G  40%           99%
  bios in read                        : 2541
  bios in write                       : 3951216
  bios in discard                     : 0

[root@dhcp-9-57 vdo]# rpm -qa|grep cockpit
cockpit-bridge-172-2.el7.x86_64
cockpit-storaged-172-2.el7.noarch
cockpit-172-2.el7.x86_64
cockpit-system-172-2.el7.noarch
cockpit-ws-172-2.el7.x86_64
cockpit-machines-ovirt-172-2.el7.noarch
cockpit-ovirt-dashboard-0.11.33-1.el7ev.noarch
cockpit-dashboard-172-2.el7.x86_64
[root@dhcp-9-57 vdo]# rpm -qa|grep vdo
vdo-6.1.0.168-18.x86_64
kmod-kvdo-6.1.0.171-17.el7_5.x86_64


Comment 28 Wei Wang 2018-08-28 02:55:58 UTC
(In reply to Dennis Keefe from comment #26)
> Wei,
>
> I believe I've been able to reproduce what you have been seeing, but this is
> not a VDO bug.
>
> [...]
>
> Let me know if you believe this is the issue.

I think the problem of the ISO going missing after backing out of the directory is the key issue in this bug.

Comment 29 Dennis Keefe 2018-08-28 12:06:10 UTC
Wei,

This issue can be reproduced with any mount point, so this should be closed as not a VDO bug.

Comment 30 Wei Wang 2018-08-29 01:03:21 UTC
(In reply to Dennis Keefe from comment #29)
> Wei,
> 
> This issue can be reproduced with any mount point, so this should be closed
> as not a VDO bug.

Dennis,
Any mount point? For example, an NFS mount point? If I can reproduce the same issue with an NFS mount point, then it is not a VDO bug. So I tried it as below.

[root@dhcp-9-57 mnt]# df -h
Filesystem                                                  Size  Used Avail Use% Mounted on
/dev/mapper/rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180827.0+1  783G  3.9G  739G   1% /
devtmpfs                                                    7.7G     0  7.7G   0% /dev
tmpfs                                                       7.8G  4.0K  7.8G   1% /dev/shm
tmpfs                                                       7.8G   26M  7.7G   1% /run
tmpfs                                                       7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/rhvh_dhcp--9--57-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh_dhcp--9--57-tmp                            976M  2.9M  906M   1% /tmp
/dev/sda1                                                   976M  204M  706M  23% /boot
/dev/mapper/rhvh_dhcp--9--57-var                             15G   52M   14G   1% /var
/dev/mapper/rhvh_dhcp--9--57-var_log                        7.8G   47M  7.3G   1% /var/log
/dev/mapper/rhvh_dhcp--9--57-var_log_audit                  2.0G  6.4M  1.8G   1% /var/log/audit
/dev/mapper/rhvh_dhcp--9--57-var_crash                      9.8G  140M  9.1G   2% /var/crash
tmpfs                                                       1.6G     0  1.6G   0% /run/user/0
10.66.8.174:/home/wangwei/nfs                               245G  124G  121G  51% /mnt
[root@dhcp-9-57 mnt]# mv RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso_1
[root@dhcp-9-57 mnt]# scp wangwei.8.174:/home/wangwei/isos/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./
wangwei.8.174's password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  34.5MB/s   01:00    
[root@dhcp-9-57 mnt]# df -h
Filesystem                                                  Size  Used Avail Use% Mounted on
/dev/mapper/rhvh_dhcp--9--57-rhvh--4.2.6.0--0.20180827.0+1  783G  3.9G  739G   1% /
devtmpfs                                                    7.7G     0  7.7G   0% /dev
tmpfs                                                       7.8G  4.0K  7.8G   1% /dev/shm
tmpfs                                                       7.8G   26M  7.7G   1% /run
tmpfs                                                       7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/rhvh_dhcp--9--57-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh_dhcp--9--57-tmp                            976M  2.9M  906M   1% /tmp
/dev/sda1                                                   976M  204M  706M  23% /boot
/dev/mapper/rhvh_dhcp--9--57-var                             15G   52M   14G   1% /var
/dev/mapper/rhvh_dhcp--9--57-var_log                        7.8G   47M  7.3G   1% /var/log
/dev/mapper/rhvh_dhcp--9--57-var_log_audit                  2.0G  6.4M  1.8G   1% /var/log/audit
/dev/mapper/rhvh_dhcp--9--57-var_crash                      9.8G  140M  9.1G   2% /var/crash
tmpfs                                                       1.6G     0  1.6G   0% /run/user/0
10.66.8.174:/home/wangwei/nfs                               245G  126G  119G  52% /mnt
[root@dhcp-9-57 mnt]# ll
total 4235504
-rw-r--r--. 1 root root 2168578048 Aug 29 08:52 RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso
-rw-r--r--. 1 root root 2168578048 Aug 29 08:49 RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso_1


But it works normally for the NFS mount point, so I think it is still related to VDO. Am I right?

Comment 31 Ryan Barry 2018-08-29 02:33:04 UTC
I think what Dennis means is that the following operation will always write to a path which is later masked:

ssh somehost
cd foo
# in another shell or Cockpit
mount /dev/bar ~/foo
# in the original shell
touch quux || scp ...

If you back out of the path (cd ../ && cd -), is the file still there? Or are you now in the mounted filesystem?

Comment 32 Dennis Keefe 2018-08-29 15:13:05 UTC
This issue is not related to a block device or VDO.  The same behavior can be recreated with non VDO storage devices.

Comment 33 Wei Wang 2018-08-30 02:05:48 UTC
Retested with a block device (USB). The copied file is gone; the same behavior can be recreated with non-VDO storage devices.

#fdisk /dev/sdc
#mkfs.xfs /dev/sdc1

[root@dhcp-9-57 ~]# mkdir -p foo
#mount /dev/sdc1 ~/foo
[root@dhcp-9-57 ~]# cd foo/
[root@dhcp-9-57 foo]# scp <username>@<server IP>:/file/path/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso ./
<username>@<server IP>'s password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  69.8MB/s   00:29    
[root@dhcp-9-57 foo]# ll
total 2117756
-rw-r--r--. 1 root root 2168578048 Aug 30 09:14 RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso
[root@dhcp-9-57 foo]# cd ..
[root@dhcp-9-57 ~]# cd foo
[root@dhcp-9-57 foo]# ll
total 0

Comment 34 Ryan Barry 2018-08-30 10:16:04 UTC
Perfect. Can you please retest VDO by copying to a child directory (the VDO mount) instead of ./ ?

Comment 35 Wei Wang 2018-08-31 01:49:31 UTC
(In reply to Ryan Barry from comment #34)
> Perfect. Can you please vdo by copying to a child dir (the vdo mount)
> instead of ./ ?

Ryan,
You mean like this? 
# scp <username>@<server IP>:/file/path/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso my_vdo/vdo/

If using the command above, the issue is gone.
[root@dhcp-9-57 ~]# scp <username>@<server IP>:/file/path/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso my_vdo/vdo/
<username>@<server IP>'s password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  21.5MB/s   01:36    
[root@dhcp-9-57 ~]# df -lh|grep vdo
/dev/mapper/vdo                                              15G  2.1G   13G  14% /root/my_vdo/vdo
[root@dhcp-9-57 ~]# vdostats --hum
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo          10.0G      6.0G      4.0G  59%           86%
[root@dhcp-9-57 ~]# mv my_vdo/vdo/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso my_vdo/vdo/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso_1
[root@dhcp-9-57 ~]# scp <username>@<server IP>:/file/path/RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso my_vdo/vdo/
<username>@<server IP>'s password: 
RHEL-7.3-20161019.0-Workstation-x86_64-dvd1.iso                                                                                                                              100% 2068MB  10.9MB/s   03:10    
[root@dhcp-9-57 ~]# df -lh|grep vdo
/dev/mapper/vdo                                              15G  4.1G   11G  28% /root/my_vdo/vdo
[root@dhcp-9-57 ~]# vdostats --hum
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo          10.0G      6.0G      4.0G  59%           86%

Comment 36 Ryan Barry 2018-08-31 01:57:12 UTC
That's exactly what I meant. This looks like NOTABUG.

Comment 37 Wei Wang 2018-08-31 02:05:22 UTC
(In reply to Ryan Barry from comment #36)
> That's exactly what I meant. This looks like NOTABUG

Ryan,
But if a customer uses ./ instead of /vdo/path/, this important VDO feature will appear to have a problem. Also, after going back to the previous path and then returning to /vdo/path/, their data will seem to be missing. I think that is not a good experience.

Comment 38 Ryan Barry 2018-08-31 02:08:31 UTC
Granted, but this is not a problem with VDO. Mount points masking paths is a problem as old as UNIX. If you mount a path over your working directory, that is the result, no matter what filesystem is involved.

Comment 39 Wei Wang 2018-08-31 02:25:57 UTC
(In reply to Ryan Barry from comment #38)
> Granted, but this is not a problem with VDO. Mount points masking paths is a
> problem as old as UNIX. If you mount a path over your working directory,
> that is the result, no matter what filesystem

Yes, it is not a problem with VDO or Cockpit. But should we give customers some hint to help them avoid this old problem?

Comment 40 Wei Wang 2018-08-31 02:30:55 UTC
If it is not necessary to warn the customer, this should be NOTABUG.

Comment 41 Ryan Barry 2018-08-31 14:23:01 UTC
I honestly don't know whether it's possible to warn, and this has never been done on RHEL.

The real problem here is that the shell holds an open handle to its working directory, so it never notices when a filesystem is mounted over it. It's likely that if you try `pwd` or something else after mounting over it, you'll get an odd error, since you're in a masked path. But we'd need to implement some kind of custom hook on *every* shell invocation which checks whether a filesystem has been mounted over the current working directory, and that would bring behavior significantly out of line with the platform and the way UNIX traditionally works.
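
For illustration only, such a hook could be sketched in bash like this (hypothetical, not something RHEL ships; it compares the inode the shell actually occupies with what a fresh lookup of $PWD resolves to):

```shell
# Hypothetical bash hook (illustration only): before each prompt, compare
# the device:inode of the directory the shell actually occupies (".") with
# what a fresh lookup of $PWD resolves to; a mismatch means something
# (e.g. a new mount) now masks the working directory.
check_masked_cwd() {
  local here there
  here=$(stat -c '%d:%i' . 2>/dev/null) || return 0
  there=$(stat -c '%d:%i' "$PWD" 2>/dev/null) || return 0
  if [ "$here" != "$there" ]; then
    echo "warning: $PWD is now backed by a different directory (masked by a mount?)" >&2
  fi
}
PROMPT_COMMAND=check_masked_cwd
```

The caveat above still applies: running a check on every prompt is out of line with how UNIX shells normally behave, so this is a sketch of feasibility, not a recommendation.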

Comment 43 Red Hat Bugzilla 2023-09-15 00:09:41 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days