Bug 1460697 - GenerateJournalMapping result is different for every call
Status: ON_QA
Product: Red Hat Storage Console
Classification: Red Hat
Component: Ceph Integration
Assigned To: Neha Gupta
QA Contact: sds-qe-bugs
Depends On:
Blocks:
Reported: 2017-06-12 08:25 EDT by Martin Kudlej
Modified: 2017-06-23 06:44 EDT (History)
9 users (show)

Type: Bug

Attachments: None
Description Martin Kudlej 2017-06-12 08:25:49 EDT
Description of problem:
The GenerateJournalMapping function returns a different device/journal list on every call, even though the disks remain the same:

$ lsblk 
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  20G  0 disk 
└─vda1 253:1    0  20G  0 part /
vdb    253:16   0  11G  0 disk     <--SSD
vdc    253:32   0  11G  0 disk     <--SSD
vdd    253:48   0  20G  0 disk 
vde    253:64   0  20G  0 disk 
vdf    253:80   0  20G  0 disk 
vdg    253:96   0  20G  0 disk 

With a 5 GB journal size, I would expect the only logical journal mapping to be:

vdd -> journal vdb1
vde -> journal vdb2
vdf -> journal vdc1
vdg -> journal vdc2
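A deterministic mapping like the one above can be sketched as follows. This is a minimal illustration only, not the actual Tendrl implementation; the function name and signature are hypothetical, while the disk names, SSD size (11 GB), and 5 GB journal size are taken from this report:

```python
def generate_journal_mapping(data_disks, ssds, ssd_size_gb, journal_gb=5):
    """Deterministically map data disks to SSD journal partitions.

    Sorting both input lists makes the result stable across calls,
    regardless of the order in which the caller enumerates devices.
    """
    slots_per_ssd = ssd_size_gb // journal_gb  # here: 11 // 5 = 2
    # Journal slots in a fixed order: vdb1, vdb2, vdc1, vdc2, ...
    slots = [f"{ssd}{n}" for ssd in sorted(ssds)
             for n in range(1, slots_per_ssd + 1)]
    if len(slots) < len(data_disks):
        raise ValueError("not enough journal slots for all data disks")
    return dict(zip(sorted(data_disks), slots))

mapping = generate_journal_mapping(
    ["vdd", "vde", "vdf", "vdg"], ["vdb", "vdc"], ssd_size_gb=11)
# -> {'vdd': 'vdb1', 'vde': 'vdb2', 'vdf': 'vdc1', 'vdg': 'vdc2'}
```

Because the mapping depends only on the (sorted) inputs, repeated calls with the same disks always produce the same result, which is the behavior expected of the real flow.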

GenerateJournalMapping results:

{
  "job_id": "e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "8f74a35a-11c3-4781-928a-6e91842077e9"
  },
  "created_at": "2017-06-12T10:56:35Z",
  "status_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/status",
  "messages_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/messages",
  "output_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/output"
}


{
  "job_id": "734df403-d131-48a5-bf1c-3ba1980817e8",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "a643be2e-0de7-4f25-87e9-2eb464c83f46"
  },
  "created_at": "2017-06-12T11:00:47Z",
  "status_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/status",
  "messages_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/messages",
  "output_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/output"
}

{
  "job_id": "119827e8-01b1-4b98-b396-c669128ae45a",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "56dc1886-7d34-48fa-bacc-7408086eb7cd"
  },
  "created_at": "2017-06-12T10:53:51Z",
  "status_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/status",
  "messages_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/messages",
  "output_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/output"
}

The node list result can be found here: https://bugzilla.redhat.com/show_bug.cgi?id=1460669#c3

Version-Release number of selected component (if applicable):
ceph-ansible-2.2.11-1.el7scon.noarch
ceph-installer-1.3.0-1.el7scon.noarch
etcd-3.1.7-1.el7.x86_64
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-alerting-3.0-alpha.3.el7scon.noarch
tendrl-api-3.0-alpha.4.el7scon.noarch
tendrl-api-doc-3.0-alpha.4.el7scon.noarch
tendrl-api-httpd-3.0-alpha.4.el7scon.noarch
tendrl-commons-3.0-alpha.9.el7scon.noarch
tendrl-dashboard-3.0-alpha.4.el7scon.noarch
tendrl-node-agent-3.0-alpha.9.el7scon.noarch
tendrl-node-monitoring-3.0-alpha.5.el7scon.noarch
tendrl-performance-monitoring-3.0-alpha.7.el7scon.noarch


How reproducible:
100%

Steps to Reproduce:
1. Start the wizard for creating a Ceph cluster in the UI and navigate from the "configuration" page back to the general page a couple of times.
2. Watch the results of the GenerateJournalMapping calls via the jobs API.

Actual results:
The GenerateJournalMapping result is different for every call.

Expected results:
The GenerateJournalMapping result should be the same for every call. The only exception is when the user explicitly requests the next journal configuration.
Comment 3 Shubhendu Tripathi 2017-06-12 09:07:27 EDT
Martin, can you share the setup details with Ankush?
We will try to debug the issue; it does not seem to be an issue with the backend. Rather, it could be related to the journal type defaulting back to `colocated` when returning to the same screen.
Comment 4 Shubhendu Tripathi 2017-06-13 04:09:29 EDT
I verified the backend changes, and with the same input details as above I always get the same journal mapping.

Requesting Ankush to check from the UI whether the input details passed by the UI change between requests.
Comment 5 Lubos Trilety 2017-06-15 06:38:04 EDT
(In reply to Shubhendu Tripathi from comment #4)
> I verified the backend changes and with same above input details, I always
> get the same journal mapping.
> 
> Request Ankush to check once from UI if input details passed from UI are not
> changed in every request.

Well, that's because the problem is not in the GetJournalMapping method itself. From what I have seen, the issue is in the inputs that go to the method: on the first run the method gets all disks correctly, on the second run it gets all disks except those where journals were located, and so on.
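The behavior described above can be modeled with a small sketch. This is purely illustrative of the suspected state drift, not the actual dashboard code (which is JavaScript); the function and variable names are hypothetical:

```python
# Illustrative model of the suspected UI bug: the disk list sent to
# GenerateJournalMapping shrinks on each call because disks already
# chosen as journals are filtered out, instead of the full original
# disk list being resent.
def buggy_next_request(previous_request, previous_mapping):
    used_as_journal = {d["journal"] for d in previous_mapping}
    # Bug: the new request drops the journal disks from the inputs,
    # so every call sees a different (smaller) set of devices.
    return [dev for dev in previous_request if dev not in used_as_journal]

request1 = ["/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdg"]
mapping1 = [{"device": "/dev/vdd", "journal": "/dev/vdg"},
            {"device": "/dev/vde", "journal": "/dev/vdf"}]
request2 = buggy_next_request(request1, mapping1)
# request2 == ["/dev/vdd", "/dev/vde"]: the backend now receives
# different inputs, so it returns a different mapping.
```

This matches the job dumps above: the first call covers four data disks, while later calls cover progressively different subsets, even though `lsblk` shows the disks have not changed.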
Comment 7 Ankush Behl 2017-06-20 06:01:10 EDT
Fixed with the version below [1], please verify.

[1] tendrl-dashboard-3.0-alpha.5.el7scon.noarch.rpm
