Red Hat Bugzilla – Attachment 922874 Details for Bug 1113609
create_vol.sh mounts the volume multiple times if it has more than one brick
Description: Log from create_vol.sh command.
Filename: create_vol.log
MIME Type: text/x-log
Creator: Daniel Horák
Created: 2014-07-31 10:44:19 UTC
Size: 11.93 KB
[10:22:36] ***
[10:22:36] *** create_vol: version 1.34
[10:22:36] ***
[10:22:36] DEBUG: date: Thu Jul 31 10:22:36 CEST 2014
[10:23:01] DEBUG: all nodes in storage pool: NODE1 NODE2 NODE3 NODE4
[10:23:01] DEBUG: nodes *not* spanned by new volume:
[10:23:01] *** Volume : HadoopVol1
[10:23:01] *** Nodes : NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4
[10:23:01] *** Volume mount : /mnt/glusterfs
[10:23:01] *** Brick mounts : /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick3, /mnt/brick3, /mnt/brick3, /mnt/brick3
[10:23:01] --- verifying consistent hadoop UIDs and GIDs across nodes...
[10:23:13] DEBUG: check_gids: Consistent GID across supplied nodes
[10:23:55] DEBUG: check_uids: Consistent UIDs across supplied nodes
[10:23:55] --- completed verifying hadoop UIDs and GIDs
[10:23:55] --- checking all nodes spanned by HadoopVol1...
[10:24:16] DEBUG: check_node on NODE1:
xfs brick mount setup correctly on NODE1 with 0 warnings
selinux on NODE1 is set to: disabled
selinux configured correctly on NODE1
iptables not running on NODE1
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE1
all required users present on NODE1 with 0 warnings
all required dirs present on NODE1 with 0 warnings
ambari-agent is running on NODE1 with 0 warnings
************
*** NODE1 is ready for Hadoop workloads
************
[10:24:36] DEBUG: check_node on NODE2:
xfs brick mount setup correctly on NODE2 with 0 warnings
selinux on NODE2 is set to: disabled
selinux configured correctly on NODE2
iptables not running on NODE2
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE2
all required users present on NODE2 with 0 warnings
all required dirs present on NODE2 with 0 warnings
ambari-agent is running on NODE2 with 0 warnings
************
*** NODE2 is ready for Hadoop workloads
************
[10:24:56] DEBUG: check_node on NODE3:
xfs brick mount setup correctly on NODE3 with 0 warnings
selinux on NODE3 is set to: disabled
selinux configured correctly on NODE3
iptables not running on NODE3
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE3
all required users present on NODE3 with 0 warnings
all required dirs present on NODE3 with 0 warnings
ambari-agent is running on NODE3 with 0 warnings
************
*** NODE3 is ready for Hadoop workloads
************
[10:25:15] DEBUG: check_node on NODE4:
xfs brick mount setup correctly on NODE4 with 0 warnings
selinux on NODE4 is set to: disabled
selinux configured correctly on NODE4
iptables not running on NODE4
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE4
all required users present on NODE4 with 0 warnings
all required dirs present on NODE4 with 0 warnings
ambari-agent is running on NODE4 with 0 warnings
************
*** NODE4 is ready for Hadoop workloads
************
[10:25:42] DEBUG: check_node on NODE1:
xfs brick mount setup correctly on NODE1 with 0 warnings
selinux on NODE1 is set to: disabled
selinux configured correctly on NODE1
iptables not running on NODE1
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE1
all required users present on NODE1 with 0 warnings
all required dirs present on NODE1 with 0 warnings
ambari-agent is running on NODE1 with 0 warnings
************
*** NODE1 is ready for Hadoop workloads
************
[10:26:02] DEBUG: check_node on NODE2:
xfs brick mount setup correctly on NODE2 with 0 warnings
selinux on NODE2 is set to: disabled
selinux configured correctly on NODE2
iptables not running on NODE2
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE2
all required users present on NODE2 with 0 warnings
all required dirs present on NODE2 with 0 warnings
ambari-agent is running on NODE2 with 0 warnings
************
*** NODE2 is ready for Hadoop workloads
************
[10:26:22] DEBUG: check_node on NODE3:
xfs brick mount setup correctly on NODE3 with 0 warnings
selinux on NODE3 is set to: disabled
selinux configured correctly on NODE3
iptables not running on NODE3
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE3
all required users present on NODE3 with 0 warnings
all required dirs present on NODE3 with 0 warnings
ambari-agent is running on NODE3 with 0 warnings
************
*** NODE3 is ready for Hadoop workloads
************
[10:26:41] DEBUG: check_node on NODE4:
xfs brick mount setup correctly on NODE4 with 0 warnings
selinux on NODE4 is set to: disabled
selinux configured correctly on NODE4
iptables not running on NODE4
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE4
all required users present on NODE4 with 0 warnings
all required dirs present on NODE4 with 0 warnings
ambari-agent is running on NODE4 with 0 warnings
************
*** NODE4 is ready for Hadoop workloads
************
[10:27:02] DEBUG: check_node on NODE1:
xfs brick mount setup correctly on NODE1 with 0 warnings
selinux on NODE1 is set to: disabled
selinux configured correctly on NODE1
iptables not running on NODE1
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE1
all required users present on NODE1 with 0 warnings
all required dirs present on NODE1 with 0 warnings
ambari-agent is running on NODE1 with 0 warnings
************
*** NODE1 is ready for Hadoop workloads
************
[10:27:23] DEBUG: check_node on NODE2:
xfs brick mount setup correctly on NODE2 with 0 warnings
selinux on NODE2 is set to: disabled
selinux configured correctly on NODE2
iptables not running on NODE2
NTP time-server 10.5.26.10 is acceptable
ntpd is running on NODE2
all required users present on NODE2 with 0 warnings
all required dirs present on NODE2 with 0 warnings
ambari-agent is running on NODE2 with 0 warnings
************
*** NODE2 is ready for Hadoop workloads
************
[10:27:42] DEBUG: check_node on NODE3:
xfs brick mount setup correctly on NODE3 with 0 warnings
selinux on NODE3 is set to: disabled
selinux configured correctly on NODE3
iptables not running on NODE3
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE3
all required users present on NODE3 with 0 warnings
all required dirs present on NODE3 with 0 warnings
ambari-agent is running on NODE3 with 0 warnings
************
*** NODE3 is ready for Hadoop workloads
************
[10:28:03] DEBUG: check_node on NODE4:
xfs brick mount setup correctly on NODE4 with 0 warnings
selinux on NODE4 is set to: disabled
selinux configured correctly on NODE4
iptables not running on NODE4
NTP time-server 10.34.255.7 is acceptable
ntpd is running on NODE4
all required users present on NODE4 with 0 warnings
all required dirs present on NODE4 with 0 warnings
ambari-agent is running on NODE4 with 0 warnings
************
*** NODE4 is ready for Hadoop workloads
************
[10:28:03] all nodes passed check for hadoop workloads
[10:28:03] --- creating the new HadoopVol1 volume...
[10:28:03] DEBUG: bricks: NODE1:/mnt/brick1/HadoopVol1 NODE2:/mnt/brick1/HadoopVol1 NODE3:/mnt/brick1/HadoopVol1 NODE4:/mnt/brick1/HadoopVol1 NODE1:/mnt/brick2/HadoopVol1 NODE2:/mnt/brick2/HadoopVol1 NODE3:/mnt/brick2/HadoopVol1 NODE4:/mnt/brick2/HadoopVol1 NODE1:/mnt/brick3/HadoopVol1 NODE2:/mnt/brick3/HadoopVol1 NODE3:/mnt/brick3/HadoopVol1 NODE4:/mnt/brick3/HadoopVol1
[10:28:27] DEBUG: gluster vol create: volume create: HadoopVol1: success: please start the volume to access data
[10:28:27] --- "HadoopVol1" created
[10:28:27] --- setting performance options on HadoopVol1...
[10:29:05] DEBUG: set_vol_perf: gluster volume set HadoopVol1 cluster.eager-lock on 2>&1; gluster volume set HadoopVol1 performance.quick-read off 2>&1; gluster volume set HadoopVol1 performance.stat-prefetch off 2>&1; successful
[10:29:05] --- performance options set
[10:29:05] --- starting the new HadoopVol1 volume...
[10:29:17] DEBUG: gluster vol start: volume start: HadoopVol1: success
[10:29:17] "HadoopVol1" started
[10:29:17] --- creating glusterfs-fuse mounts for HadoopVol1...
[10:29:20] DEBUG: glusterfs mount on NODE1:
[10:29:23] DEBUG: glusterfs mount on NODE2:
[10:29:24] DEBUG: glusterfs mount on NODE3:
[10:29:31] DEBUG: glusterfs mount on NODE4:
[10:29:32] DEBUG: glusterfs mount on NODE1:
[10:29:33] DEBUG: glusterfs mount on NODE2:
[10:29:33] DEBUG: glusterfs mount on NODE3:
[10:29:34] DEBUG: glusterfs mount on NODE4:
[10:29:35] DEBUG: glusterfs mount on NODE1:
[10:29:36] DEBUG: glusterfs mount on NODE2:
[10:29:37] DEBUG: glusterfs mount on NODE3:
[10:29:37] DEBUG: glusterfs mount on NODE4:
[10:29:37] --- created glusterfs-fuse mounts for HadoopVol1
[10:29:37] --- adding hadoop directories to nodes spanned by HadoopVol1...
[10:29:39] DEBUG: add_dirs -d /mnt/glusterfs/HadoopVol1:
/mnt/glusterfs/HadoopVol1/mapred created/updated with perms 0770
/mnt/glusterfs/HadoopVol1/mapred/system created/updated with perms 0755
/mnt/glusterfs/HadoopVol1/tmp created/updated with perms 1777
/mnt/glusterfs/HadoopVol1/user created/updated with perms 0755
/mnt/glusterfs/HadoopVol1/mr-history created/updated with perms 0755
/mnt/glusterfs/HadoopVol1/tmp/logs created/updated with perms 1777
/mnt/glusterfs/HadoopVol1/mr-history/tmp created/updated with perms 1777
/mnt/glusterfs/HadoopVol1/mr-history/done created/updated with perms 0770
/mnt/glusterfs/HadoopVol1/job-staging-yarn created/updated with perms 0770
/mnt/glusterfs/HadoopVol1/app-logs created/updated with perms 1777
/mnt/glusterfs/HadoopVol1/apps created/updated with perms 0775
/mnt/glusterfs/HadoopVol1/apps/webhcat created/updated with perms 0775
12 new Hadoop directories added/updated
[10:29:39] --- added hadoop directories to nodes spanned by HadoopVol1
[10:29:39] "HadoopVol1" created and started with no errors

# gluster volume info
Volume Name: HadoopVol1
Type: Distributed-Replicate
Volume ID: 2201f08f-67cc-4c23-9098-5a6405ff879c
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: NODE1:/mnt/brick1/HadoopVol1
Brick2: NODE2:/mnt/brick1/HadoopVol1
Brick3: NODE3:/mnt/brick1/HadoopVol1
Brick4: NODE4:/mnt/brick1/HadoopVol1
Brick5: NODE1:/mnt/brick2/HadoopVol1
Brick6: NODE2:/mnt/brick2/HadoopVol1
Brick7: NODE3:/mnt/brick2/HadoopVol1
Brick8: NODE4:/mnt/brick2/HadoopVol1
Brick9: NODE1:/mnt/brick3/HadoopVol1
Brick10: NODE2:/mnt/brick3/HadoopVol1
Brick11: NODE3:/mnt/brick3/HadoopVol1
Brick12: NODE4:/mnt/brick3/HadoopVol1
Options Reconfigured:
performance.stat-prefetch: off
performance.quick-read: off
cluster.eager-lock: on
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

# ./create_vol.sh -y --debug HadoopVol1 /mnt/glusterfs \
    NODE1:/mnt/brick1 \
    NODE2:/mnt/brick1 \
    NODE3:/mnt/brick1 \
    NODE4:/mnt/brick1 \
    NODE1:/mnt/brick2 \
    NODE2:/mnt/brick2 \
    NODE3:/mnt/brick2 \
    NODE4:/mnt/brick2 \
    NODE1:/mnt/brick3 \
    NODE2:/mnt/brick3 \
    NODE3:/mnt/brick3 \
    NODE4:/mnt/brick3

***
*** create_vol: version 1.34
***
DEBUG: all nodes in storage pool: NODE1 NODE2 NODE3 NODE4
DEBUG: nodes *not* spanned by new volume:

*** Volume : HadoopVol1
*** Nodes : NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4
*** Volume mount : /mnt/glusterfs
*** Brick mounts : /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick3, /mnt/brick3, /mnt/brick3, /mnt/brick3
<< truncated >>
DEBUG: gluster vol start: volume start: HadoopVol1: success
"HadoopVol1" started
--- creating glusterfs-fuse mounts for HadoopVol1...
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
--- created glusterfs-fuse mounts for HadoopVol1
<< truncated >>
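The repeated "DEBUG: glusterfs mount on NODE*" entries above are the symptom named in the bug title: the mount step runs once per brick argument rather than once per node, so each of the four nodes gets mounted three times. Below is a minimal sketch of the general idea behind a fix, not the actual create_vol.sh code; the variable names (BRICKS, VOLNAME, VOLMNT) and the ssh-based mount step are hypothetical illustrations.

#!/bin/bash
# Sketch only: derive the unique set of nodes from the node:brick arguments
# and create exactly one glusterfs-fuse mount per node.
VOLNAME=HadoopVol1                      # hypothetical example values
VOLMNT=/mnt/glusterfs
BRICKS="NODE1:/mnt/brick1 NODE2:/mnt/brick1 NODE1:/mnt/brick2 NODE2:/mnt/brick2"

# Keep each node only once, regardless of how many bricks it hosts.
NODES=$(for b in $BRICKS; do echo "${b%%:*}"; done | sort -u)

for node in $NODES; do
  # One fuse mount per node is enough: the glusterfs client talks to all
  # bricks of the volume through a single mount point.
  ssh "$node" "mkdir -p $VOLMNT/$VOLNAME && \
    mount -t glusterfs $node:/$VOLNAME $VOLMNT/$VOLNAME"
done

With the node list deduplicated like this, the log would show four mount lines (one per node) instead of twelve (one per brick).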