| Summary: | Warning message at glusterfs-server package while upgrading from RHGS 3.1.3 RHEL 6.8 to RHGS 3.2.0 RHEL 6.9 | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Karan Sandha <ksandha> |
| Component: | build | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED WORKSFORME | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | rhgs-3.2 | CC: | amukherj, atumball, ksandha, rcyriac, rhs-bugs, storage-qa-internal, tdesala |
| Target Milestone: | --- | Keywords: | Reopened, ZStream |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-11 10:05:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
Description
Karan Sandha
2016-11-18 11:25:17 UTC
Please retry the upgrade with the following command: `# yum --verbose --rpmverbosity=debug *.rpm`. Since the default debug level is 'info', it doesn't help identify which part of the scriptlet exactly failed.

Some info about this issue: this warning message appears when we try to update packages without stopping glusterd. As per the installation guide, we ask customers to stop all gluster processes before starting the update. More details can be found in https://bugzilla.redhat.com/show_bug.cgi?id=1378316.

Byreddy, I haven't started the glusterd services. It's a fresh setup with no volumes; I just installed 3.1.3 and upgraded to 3.8.4.5 without starting any services.

Milind, I'll follow the same steps mentioned in comment 2 and update the bug.

Thanks & regards
Karan Sandha

Created attachment 1222300 [details]
as per comment 2

Karan, it looks like the post-install scriptlet for glusterfs-server returned 0 this time, as per attachment 1222300 [details]. If the issue was as described by Byreddy in comment #2, then there's not much to be looked into. Otherwise, you need to reproduce the issue while running "yum upgrade" in debug mode.

Milind, I'll check again and report back to you.

Thanks & regards
Karan Sandha

As per comment 8, closing this bug; feel free to reopen if the issue reappears.

Reopening this BZ. I was able to reproduce this issue with the debug flag as per comment 2. I hit it while updating from RHEL 6.8 + 3.1.3 to RHEL 6.9 + 3.2.0.

    D: +++ h# 929 Header SHA1 digest: OK (2ff7e0bcfe854bea7372064bdafba9c45f1bdf8e)
    D: adding "glusterfs-server" to Name index.
    D: adding 137 entries to Basenames index.
    D: adding "System Environment/Daemons" to Group index.
    D: adding 83 entries to Requirename index.
    D: adding 3 entries to Providename index.
    D: adding 45 entries to Dirnames index.
    D: adding 83 entries to Requireversion index.
    D: adding 3 entries to Provideversion index.
    D: adding 1 entries to Installtid index.
    D: adding 1 entries to Sigmd5 index.
    D: adding "2ff7e0bcfe854bea7372064bdafba9c45f1bdf8e" to Sha1header index.
    D: adding 137 entries to Filedigests index.
    D: install: %post(glusterfs-server-3.8.4-15.el6rhs.x86_64) scriptlet start
    D: install: %post(glusterfs-server-3.8.4-15.el6rhs.x86_64) execv(/bin/sh) pid 8565
    + /sbin/chkconfig --add glusterd
    + '[' -f /var/log/glusterfs/.cmd_log_history ']'
    + '[' -d /etc/glusterd -a '!' -h /var/lib/glusterd ']'
    + '[' -d /var/lib/glusterd/vols ']'
    ++ find /var/lib/glusterd/vols -name '*.vol'
    + '[' -e /etc/ld.so.conf.d/glusterfs.conf ']'
    + pidof -c -o %PPID -x glusterd
    + '[' 0 -eq 0 ']'
    ++ pgrep -f gsyncd.py
    + kill -9
    + killall --wait glusterd
    + glusterd --xlator-option '*.upgrade=on' -N
    + rm -rf /var/run/glusterd.socket
    + /sbin/service glusterd start
    D: install: waitpid(8565) rc 8565 status 100 secs 3.219
    warning: %post(glusterfs-server-3.8.4-15.el6rhs.x86_64) scriptlet failed, exit status 1
    Non-fatal POSTIN scriptlet failure in rpm package glusterfs-server-3.8.4-15.el6rhs.x86_64
      Updating : mdadm-3.3.4-8.el6.x86_64 [########################################## ] 143/418
    XZDIO: 101 reads, 826836 total bytes in 0.079985 secs
      Updating : mdadm-3.3.4-8.el6.x86_64 143/418
    D: ========== +++ mdadm-3.3.4-8.el6 x86_64-linux 0x2
    D: Expected size: 356660 = lead(96)+sigs(1284)+pad(4)+data(355276)
    D: Actual size: 356660
    D: mdadm-3.3.4-8.el6.x86_64: Header V3 RSA/SHA256 Signature, key ID fd431d51: OK
    D: install: mdadm-3.3.4-8.el6 has 19 files, test = 0
    D: ========== Directories not explicitly included in package:
    D: 0 /etc/cron.d/
    D: 1 /etc/rc.d/init.d/
    D: 2 /etc/sysconfig/
    D: 3 /lib/udev/rules.d/
    D: 4 /sbin/
    D: 5 /usr/sbin/
    D: 6 /usr/share/doc/
    D: 8 /usr/share/man/man4/
    D: 9 /usr/share/man/man5/
    D: 10 /usr/share/man/man8/
    D: 11 /var/run/
    D: ==========
    D: fini 100600 1 ( 0, 0) 108 /etc/cron.d/raid-check;58b974be
    D: fini 100755 1 ( 0, 0) 2571 /etc/rc.d/init.d/mdmonitor;58b974be
    D: fini 100644 1 ( 0, 0) 2585 /etc/sysconfig/raid-check;58b974be
    D: fini 100644 1 ( 0, 0) 2751 /lib/udev/rules.d/65-md-incremental.rules;58b974be
    D: fini 100755 1 ( 0, 0) 479920 /sbin/mdadm;58b974be
    D: fini 100755 1 ( 0, 0) 230992 /sbin/mdmon;58b974be
    D: fini 100755 1 ( 0, 0) 3793 /usr/sbin/raid-check;58b974be
    D: fini 040755 2 ( 0, 0) 0 /usr/share/doc/mdadm-3.3.4
    D: fini 100644 1 ( 0, 0) 18092 /usr/share/doc/mdadm-3.3.4/COPYING;58b974be
    D: fini 100644 1 ( 0, 0) 14814 /usr/share/doc/mdadm-3.3.4/ChangeLog;58b974be
    D: fini 100644 1 ( 0, 0) 7191 /usr/share/doc/mdadm-3.3.4/TODO;58b974be
    D: fini 100644 1 ( 0, 0) 2687 /usr/share/doc/mdadm-3.3.4/mdadm.conf-example;58b974be
    D: fini 100644 1 ( 0, 0) 3565 /usr/share/doc/mdadm-3.3.4/mdcheck;58b974be
    D: fini 100644 1 ( 0, 0) 550 /usr/share/doc/mdadm-3.3.4/syslog-events;58b974be
    D: fini 100644 1 ( 0, 0) 13591 /usr/share/man/man4/md.4.gz;58b974be
    D: fini 100644 1 ( 0, 0) 6539 /usr/share/man/man5/mdadm.conf.5.gz;58b974be
    D: fini 100644 1 ( 0, 0) 31064 /usr/share/man/man8/mdadm.8.gz;58b974be
    D: fini 100644 1 ( 0, 0) 3183 /usr/share/man/man8/mdmon.8.gz;58b974be
    D: fini 040700 2 ( 0, 0) 0 /var/run/mdadm
    D: +++ h# 930 Header V3 RSA/SHA256 Signature, key ID fd431d51: OK

Created attachment 1259572 [details]
Yum update with debug flag complete output
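For reference, the attached output could have been captured with a command along these lines. This is only a sketch: the package directory and log path are hypothetical placeholders, not details taken from this bug.

```sh
# Hypothetical reproduction of the debug capture; adjust the package
# directory and log path to the actual environment.
cd /path/to/rhgs-3.2.0-rpms
yum --verbose --rpmverbosity=debug update *.rpm 2>&1 | tee /tmp/yum-upgrade-debug.log
```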
In this run, the PID of gsyncd.py wasn't found, so the argument to the kill -9 command was empty:

    + kill -9

-----

It looks like there wasn't any geo-replication active on the system, since gsyncd.py is a geo-replication artifact. Could you confirm whether geo-replication for any volume was running at the time the "yum update" was attempted?

(In reply to Milind Changire from comment #12)
> In this run, the PID of gsyncd.py wasn't found. So the argument to the kill
> -9 command was empty.
>
> + kill -9
>
> -----
>
> Looks like there wasn't any geo-replication active on the system since
> gsyncd.py is a geo-replication artifact. Could you confirm if
> geo-replication for any volume was running at the time the "yum update" was
> attempted

No volumes were present at the time of the upgrade. I had just created a 3.1.3 setup and, without creating any volume or running any services, performed the upgrade to 3.2.0.

There has not been much demand to have this fixed in a release over the last two years, and it has not appeared in upgrades in the last three releases. Closing as WORKSFORME.
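As a closing note on the failure mode discussed in comment 12: when `pgrep -f gsyncd.py` matches no process, `kill -9` is invoked with no PID argument and exits non-zero with a usage error. Whether that alone is what made the %post scriptlet report exit status 1 is not confirmed in this bug, but the pattern is easy to guard against. The following is a minimal sketch of such a guard; it is an illustration only, not the packaged glusterfs-server scriptlet.

```sh
#!/bin/sh
# Illustration only -- not the actual glusterfs-server %post scriptlet.
# An unguarded `kill -9 $(pgrep -f gsyncd.py)` fails when pgrep matches
# nothing, because kill is then called without any PID.
pids=$(pgrep -f gsyncd.py)
if [ -n "$pids" ]; then
    # Only invoke kill when there is actually something to terminate.
    kill -9 $pids
fi
```

With no matching process, the guarded version simply skips the kill step instead of tripping a non-zero exit.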