Bug 1076625
| Summary: | file disappeared in a heterogeneous-architecture computer system (ARM and Intel) | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | yhfudev |
| Component: | cli | Assignee: | bugs <bugs> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.2 | CC: | bugs, gluster-bugs, ndevos, yhfudev |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | arm | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-07 13:50:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
yhfudev
2014-03-14 17:27:47 UTC
ARM can run in Big-Endian mode or Little-Endian mode; what endianness do your Pogoplugs have? Mixing Little-Endian (Intel) and Big-Endian will very likely cause some issues (see bug 951903). There is also a bug that involves 64-bit clients and 32-bit servers: mounting over glusterfs-fuse will return errors for some directories (those with more than 20 entries), while mounting over NFS does not have any issues (bug 1074023). If you access the mounted volume from a terminal window, do you get any error messages?

All of the systems are Little-Endian.
I ran the following command to detect the endianness (0 = big-endian, 1 = little-endian):

```
echo -n I | hexdump -o | awk '{ print substr($2,6,1); exit}'
```

and the output was 1 on all systems.
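As an aside, the same endianness check can be done without `hexdump`. A minimal Python sketch (not part of the original report) that detects the native byte order:

```python
import struct
import sys

# Pack a 16-bit integer in native byte order and inspect the first
# byte: 1 first means little-endian, 0 first means big-endian.
first_byte = struct.pack("=H", 1)[0]
detected = "little" if first_byte == 1 else "big"

# sys.byteorder reports the same information directly.
print(detected, sys.byteorder)
```

Running this on each machine in the cluster would confirm they all agree on byte order.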
An easier check is the `lscpu` command, `uname -m`, or /proc/cpuinfo. Also, please let us know:

- If you access the mounted volume from a terminal window, do you get any error messages?
- Do you have the same problem when mounting over NFS?
- Are the missing files available on any of the raw bricks?

At the moment, I expect that this is a duplicate of bug 1074023.

I changed my network a little recently, and I switched the x86 machine's OS to 32-bit i686 Arch Linux, so all of the systems involved now run 32-bit Arch Linux; the difference is that the Gluster server runs on an ARM CPU and the client runs on an x86 CPU. I have now also found I/O errors for some directories when I check the files from the Gluster client.

Here is the CPU info I gathered:

1. Gluster client (a VM):

```
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 2
model name      : QEMU Virtual CPU version 1.5.0
stepping        : 3
microcode       : 0x1
cpu MHz         : 2526.862
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm pni vmx cx16 hypervisor lahf_lm
bogomips        : 5055.29
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

$ lscpu
Architecture:          i686
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 2
Model name:            QEMU Virtual CPU version 1.5.0
Stepping:              3
CPU MHz:               2526.862
BogoMIPS:              5055.29
Virtualization:        VT-x
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K

$ uname -m
i686
$ uname -a
Linux home-fileserver 3.13.7-1-ARCH #1 SMP PREEMPT Mon Mar 24 19:50:04 CET 2014 i686 GNU/Linux
```

2. Gluster server(s) (ARM):

```
$ cat /proc/cpuinfo
Processor       : Feroceon 88FR131 rev 1 (v5l)
BogoMIPS        : 795.44
Features        : swp half thumb fastmult edsp
CPU implementer : 0x56
CPU architecture: 5TE
CPU variant     : 0x2
CPU part        : 0x131
CPU revision    : 1
Hardware        : Marvell OpenRD Ultimate Board
Revision        : 0000
Serial          : 0000000000000000

$ lscpu
Architecture:          armv5tel
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0

$ uname -m
armv5tel
$ uname -a
Linux home-pogoplug-v4-1 3.1.10-32-ARCH #1 PREEMPT Tue Feb 11 06:26:34 MST 2014 armv5tel GNU/Linux
```

And the other questions:

- If you access the mounted volume from a terminal window, do you get any error messages?

  Not quite sure about the question. I entered a directory under the mount point and got:

  ```
  $ ls -l
  ls: reading directory .: Input/output error
  total 0
  ```

- Do you have the same problem when mounting over NFS?

  I failed to mount the file system over NFS:

  ```
  $ mount -vv -o mountproto=udp,vers=3 -t nfs 192.168.2.7:filecache1t /media/test1/
  mount.nfs: timeout set for Wed Apr  9 23:49:56 2014
  mount.nfs: trying text-based options 'mountproto=udp,vers=3,addr=192.168.2.7,mountaddr=192.168.2.7'
  mount.nfs: prog 100003, trying vers=3, prot=6
  mount.nfs: trying 192.168.2.7 prog 100003 vers 3 prot TCP port 2049
  mount.nfs: prog 100005, trying vers=3, prot=17
  mount.nfs: portmap query failed: RPC: Program not registered
  mount.nfs: requested NFS version or transport protocol is not supported
  ```

- Are the missing files available on any of the raw bricks?

  Yes, all of the files exist on the raw bricks. I also mounted the volume on the server itself, using the same configuration file (e.g. /etc/glusterfs/datastore.vol), and I can access all of the files there too.

GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.
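For context on the 64-bit-client/32-bit-server readdir bug referenced above (bug 1074023): directory entries carry 64-bit resume offsets, and if those are squeezed through a 32-bit interface, distinct offsets can collide, which breaks listing once a directory has enough entries. The sketch below is purely illustrative of that failure mode; the constants and bit layout are invented for the example, not GlusterFS's actual offset encoding:

```python
# Hypothetical 64-bit directory offsets that differ only in their
# high 32 bits (an invented layout, for illustration only).
offsets = [(i << 32) | 0xDEADBEEF for i in range(25)]

# Truncating to 32 bits discards the high bits, so all 25 formerly
# distinct offsets collapse into a single value; a reader resuming
# from such an offset can loop or fail with an I/O error.
truncated = [off & 0xFFFFFFFF for off in offsets]
print(len(set(offsets)), len(set(truncated)))  # 25 before, 1 after
```

This matches the reported symptom that only directories with more than a handful of entries (i.e. those that need multiple readdir round trips) showed errors.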
This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will get closed automatically.

GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.