+++ This bug was initially created as a clone of Bug #1583464 +++

Description of problem:
Created a VM using disks stored on a gluster volume and ran the sysbench test on it. This caused I/O errors, and the disk was remounted read-only.

# sysbench prepare --test=oltp --mysql-table-engine=innodb --mysql-password=pwd --oltp-table-size=500000000 --oltp-dist-type=gaussian

Errors in dmesg:
[ 2838.983763] blk_update_request: I/O error, dev vdb, sector 0
[ 2842.932506] blk_update_request: I/O error, dev vdb, sector 524722736
[ 2842.932577] Aborting journal on device vdb-8.
[ 2842.933882] EXT4-fs error (device vdb): ext4_journal_check_start:56: Detected aborted journal
[ 2842.934009] EXT4-fs (vdb): Remounting filesystem read-only

On the host's gluster logs:

mount log:
[2017-07-19 07:42:47.219501] W [MSGID: 114031] [client-rpc-fops.c:2938:client3_3_lookup_cbk] 0-glusterlocal1-client-0: remote operation failed. Path: /.shard/c37d9820-0fd8-4d8e-af67-a9a54e5a99af.843 (00000000-0000-0000-0000-000000000000) [No data available]
[2017-07-19 07:42:47.219587] E [MSGID: 133010] [shard.c:1725:shard_common_lookup_shards_cbk] 0-glusterlocal1-shard: Lookup on shard 843 failed. Base file gfid = c37d9820-0fd8-4d8e-af67-a9a54e5a99af [No data available]

brick log:
[2017-07-19 07:42:16.094979] E [MSGID: 113020] [posix.c:1361:posix_mknod] 0-glusterlocal1-posix: setting gfid on /rhgs/bricks/gv1/.shard/c37d9820-0fd8-4d8e-af67-a9a54e5a99af.842 failed
[2017-07-19 07:42:47.218982] E [MSGID: 113002] [posix.c:253:posix_lookup] 0-glusterlocal1-posix: buf->ia_gfid is null for /rhgs/bricks/gv1/.shard/c37d9820-0fd8-4d8e-af67-a9a54e5a99af.843 [No data available]

gluster vol info:

Volume Name: glusterlocal1
Type: Distribute
Volume ID: 3b0d4b90-10a4-4a91-80c1-27d051daf731
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.40.33:/rhgs/bricks/gv1
Options Reconfigured:
performance.strict-o-direct: on
storage.owner-gid: 107
storage.owner-uid: 107
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Version-Release number of selected component (if applicable):
glusterfs-3.8.4-18.4.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create an image on the gluster volume mount point using qemu-img:
   qemu-img create -f qcow2 -o preallocation=off /mnt/glusterlocal1/vm1boot.img 500G
2. Start the VM with the image created in Step 1 attached as an additional device.
3. Install MariaDB and sysbench. Configure the database to be on a filesystem backed by the device from Step 1.
4. Run sysbench prepare.

Actual results:
Fails to create data.

Additional info:
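For context on the failing paths above: with features.shard enabled, gluster stores a large file as pieces under /.shard named <base-gfid>.<index>. As a rough sketch (not part of the original report, and assuming the default features.shard-block-size of 64MiB, which this volume does not override), the shard index backing a given byte offset in the image can be computed like this:

```shell
#!/bin/sh
# Illustrative only: map a byte offset within a sharded file to the
# index of the /.shard/<base-gfid>.<index> piece that backs it.
# Assumes the default features.shard-block-size of 64MiB.
shard_block_size=$((64 * 1024 * 1024))

# An offset roughly 56.6 GB into the 500G image file.
offset=56572772352
shard_index=$((offset / shard_block_size))
echo "offset $offset falls in shard $shard_index"   # shard 843
```

Shard 843 is the piece whose lookup fails in both the mount and brick logs above.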
Tested with glusterfs-3.8.4-54.13 using the test steps mentioned in comment 0. No issues found.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:3523