Bug 1596839

Summary: incorrect and inconsistent representation of time stamps in WA
Product: [Red Hat Storage] Red Hat Gluster Storage    Reporter: Martin Bukatovic <mbukatov>
Component: web-admin-tendrl-ui    Assignee: Neha Gupta <negupta>
Status: CLOSED WONTFIX    QA Contact: sds-qe-bugs
Severity: unspecified    Docs Contact:
Priority: unspecified
Version: rhgs-3.4    CC: gshanmug, nthomas, rhs-bugs, sankarshan
Target Milestone: ---    Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:    Environment:
Last Closed: 2019-05-08 20:25:44 UTC    Type: Bug
Regression: ---    Mount Type: ---
Documentation: ---    CRM:
Verified Versions:    Category: ---
oVirt Team: ---    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---    Target Upstream Version:
Embargoed:
Attachments:
Description Flags
screenshot 1: Import cluster screen with timestamp error highlighted none

Description Martin Bukatovic 2018-06-29 19:04:56 UTC
Description of problem
======================

Timestamps shown in various places of the WA are expected to be:

 * consistent, i.e. formatted in the same way everywhere
 * correct, i.e. matching the actual system time on the WA server machine

This is not the case.

Version-Release number of selected component
============================================

[root@mbukatov-usm1-server ~]# rpm -qa | grep tendrl | sort
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-5.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-4.el7rhgs.noarch

Steps to Reproduce
==================

1. prepare Gluster trusted storage pool and server for WA
2. install RHGS WA with tendrl-ansible
3. make sure it's afternoon right now, or tweak the time
   on all machines in the same way to make it so
4. run import (with profiling enabled)
5. check system time on WA server
6. perform some action which will create an alert (e.g. stop a volume)

Actual results
==============

Timestamps for messages are incorrect. For example, I see:

```
info
Running ImportCluster
29 Jun 2018 08:37:10 
```

while the system time on the WA machine, a few minutes after the import was
started, was:

```
[root@mbukatov-usm1-server ~]# date
Fri Jun 29 20:50:45 CEST 2018
```

This looks as if WA tried to use a 12-hour time format but dropped the "PM"
designator from the time representation.
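If that hypothesis holds, the bug reduces to a formatter that emits the
12-hour hour field without the AM/PM designator. The following is a minimal,
hypothetical JavaScript sketch (not actual tendrl-ui code) that reproduces
both the observed incorrect string and the expected 24-hour one:

```javascript
// Zero-pad a number to two digits.
function pad(n) { return String(n).padStart(2, '0'); }

const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];

// Suspected buggy formatter: 12-hour clock, AM/PM designator dropped.
function formatBuggy(d) {
  const h12 = d.getHours() % 12 || 12; // 20:xx becomes 08:xx, no "PM"
  return `${pad(d.getDate())} ${months[d.getMonth()]} ${d.getFullYear()} ` +
         `${pad(h12)}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}

// 24-hour formatter, matching what the alerts sidebar shows.
function format24(d) {
  return `${pad(d.getDate())} ${months[d.getMonth()]} ${d.getFullYear()} ` +
         `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}

const t = new Date(2018, 5, 29, 20, 37, 10); // 29 Jun 2018, 20:37:10
console.log(formatBuggy(t)); // "29 Jun 2018 08:37:10" -- the bad output
console.log(format24(t));    // "29 Jun 2018 20:37:10" -- the expected one
```

This illustrates why the difference between the Events timestamp and the
system clock is exactly 12 hours in the afternoon, which is also why step 3
of the reproducer requires an afternoon clock.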

Alerts correctly use a 24-hour time format, as can be seen in any alert, e.g.:

```
29 Jun 2018 20:41:11
```

Expected results
================

Both the status messages on the Task Details page and the alerts report the
time in the same way, conveying the time (as set on the WA server) correctly.

Additional info
===============

We need to compile a list of all timestamps shown in the WA and check each of
them as well.

This BZ *is not going to be verified* without this list and such an additional
check.

This was assigned to the ui component because it looks like a representation
problem. Feel free to reassign it to another component if the problem is
actually more severe and belongs to some backend component instead.

Comment 1 Martin Bukatovic 2018-06-29 19:10:33 UTC
Created attachment 1455567 [details]
screenshot 1: Import cluster screen with timestamp error highlighted

Attaching a screenshot of the task details page with the Alerts sidebar, where
we can see 3 different types of messages with timestamps:

 * Time Submitted shows the 24-hour time correctly
 * Alerts show the 24-hour time correctly
 * Events (the messages there) show the time incorrectly

Comment 2 Nishanth Thomas 2018-07-03 09:57:51 UTC
This seems to be a change with a bigger impact, affecting multiple components. Hence, as discussed in the triage, I am moving this bug out of 3.4.