| Summary: | Binary data at top of text file after copying to Gluster | ||
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | neil.garb |
| Component: | core | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED WORKSFORME | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | low | ||
| Version: | 3.1-alpha | CC: | gluster-bugs, lakshmipathi, vraman |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | fuse |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
neil.garb
2010-09-07 05:15:44 UTC
For certain PHP scripts that I am trying to copy onto my Gluster volume, whenever I copy them the resulting file on the Gluster (as seen by the client) has a bunch of binary data at the top of it which was not in the original. The corresponding file on the server's physical volume is fine (no binary data). I have reproduced this with the same file using both rsync and cp, and I have copied it to multiple subdirectories, but for the same file(s) this always happens.
It seems (after very superficial testing) that this happens to text files over a certain very small size (~5-6k).
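One way to confirm the corruption is to compare the first bytes and the checksum of the file as seen through the client mount against the copy on the backend brick. A minimal sketch; the client mount point /mnt/glusterfs-client is only an example, while /mnt/glusterfs is the brick directory from the server config further down:
---
# Run on the client: inspect the file through the FUSE mount.
bash# md5sum /mnt/glusterfs-client/User.php
bash# head -c 64 /mnt/glusterfs-client/User.php | xxd   # a clean PHP file starts with "<?php"
# Run on whichever server's brick holds the file: inspect the backend copy.
bash# md5sum /mnt/glusterfs/User.php
---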
I have three 64-bit Ubuntu 10.04 EC2 instances running GlusterFS 3.1.0 in RAID 0. Each server has one 1 TB EBS volume mounted, which I use as the share.
The client is a similar instance, also using GlusterFS-3.1.0, and has the share mounted using the glusterfs binary.
My client config is as follows:
---
volume server0-brick
type protocol/client
option transport-type tcp
option remote-host node1
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick
end-volume
volume server1-brick
type protocol/client
option transport-type tcp
option remote-host node2
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick
end-volume
volume server2-brick
type protocol/client
option transport-type tcp
option remote-host node3
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick
end-volume
volume distribute
type cluster/distribute
subvolumes server0-brick server1-brick server2-brick
end-volume
---
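With a volfile-based setup like this, the share is mounted by handing the client volfile to the glusterfs binary. A minimal sketch; the volfile path and mount point below are examples:
---
# Mount the distribute volume described above (example paths)
bash# glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs-client
# Confirm the mount
bash# mount | grep glusterfs
---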
My server configs, all identical, are as follows:
---
volume posix
type storage/posix
option directory /mnt/glusterfs
end-volume
volume locks
type features/posix-locks
# option mandatory on
subvolumes posix
end-volume
volume brick
type performance/io-threads
subvolumes locks
option thread-count 16 # default value is 1
end-volume
volume server
type protocol/server
option transport-type tcp
option transport.socket.listen-port 6996
subvolumes brick
option auth.addr.brick.allow *
end-volume
---
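Each server exports its brick by running glusterfsd against this volfile, listening on the configured port 6996. A minimal sketch; the volfile path below is an example:
---
# Start the brick export daemon on each server (example path)
bash# glusterfsd -f /etc/glusterfs/server.vol
# Verify it is listening on the configured port
bash# netstat -tlnp | grep 6996
---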
I have tried to run glusterfs-defrag with no luck.
Comments

Gluster developer:
Can you confirm the version by running the command below? The output will tell us the exact version of the process.

bash# glusterfs --version

neil.garb:
glusterfs 3.1.0git built on Sep 3 2010 13:55:55

Gluster developer:
Could you please post the output of the "file" command here?

client# file script.php
server# file script.php

neil.garb:
On the client:

bash# file User.php
User.php: DBase 3 data file (2079622220 records)

On the server:

bash# file User.php
User.php: PHP script text

This is a 52347-byte PHP script.

Gluster developer:
Could you please check this issue with the glusterfs-3.0.5 release? Because of the 3.1.0 alpha release activities, the git repository changes every day.

neil.garb:
Seems to have been a bug in my particular snapshot of 3.1. The bug doesn't exist in 3.0.5.

Gluster developer:
Hi Neil, can you try with http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/3.1/GlusterFS/glusterfs-3.1.0alpha.tar.gz and see if this behavior happens again?

Amar Tumballi:
Hi Neil, can you confirm the bug is fixed now? Please check with http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-3.1.0qa25.tar.gz. I will be closing the bug as WORKSFORME, as this is not happening for us at the moment. Please reopen if you still have the issue.

-Amar
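The retest requested above amounts to building and installing the QA tarball. A minimal sketch, assuming the standard autotools build of the GlusterFS source and the default install prefix:
---
bash# wget http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-3.1.0qa25.tar.gz
bash# tar xzf glusterfs-3.1.0qa25.tar.gz
bash# cd glusterfs-3.1.0qa25 && ./configure && make && make install
bash# glusterfs --version   # confirm the newly installed version
---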