Bug 1460697 - GenerateJournalMapping result is different for every call
Summary: GenerateJournalMapping result is different for every call
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: Ceph Integration
Version: 3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Neha Gupta
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-12 12:25 UTC by Martin Kudlej
Modified: 2018-11-19 05:42 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:42:11 UTC
Embargoed:




Links
Red Hat Bugzilla 1460669 (CLOSED) - wrong dedicated journals - last updated 2021-09-20 06:26:34 UTC

Internal Links: 1460669

Description Martin Kudlej 2017-06-12 12:25:49 UTC
Description of problem:
The GenerateJournalMapping function returns a different device/journal list on every call.
The disks remain the same between calls:

$ lsblk 
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  20G  0 disk 
└─vda1 253:1    0  20G  0 part /
vdb    253:16   0  11G  0 disk     <--SSD
vdc    253:32   0  11G  0 disk     <--SSD
vdd    253:48   0  20G  0 disk 
vde    253:64   0  20G  0 disk 
vdf    253:80   0  20G  0 disk 
vdg    253:96   0  20G  0 disk 

With 5G journals, I expect only one logical choice for the journal mapping:

vdd -> journal vdb1
vde -> journal vdb2
vdf -> journal vdc1
vdg -> journal vdc2
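
As an aside, here is a minimal Python sketch of a deterministic mapping that produces exactly this result, assuming 5G journal partitions and two journal slots per 11G SSD. The function name and layout are illustrative only, not Tendrl's actual implementation:

JOURNAL_SIZE = 5 * 1024 ** 3  # 5 GiB per journal partition

def map_journals(data_disks, ssds, ssd_size):
    """Assign each data disk a journal partition on an SSD.

    Sorting both device lists first makes the result identical on
    every call, which is the behaviour expected in this report.
    """
    slots_per_ssd = ssd_size // JOURNAL_SIZE
    # Enumerate journal partitions in a fixed order: vdb1, vdb2, vdc1, ...
    slots = [(ssd, part)
             for ssd in sorted(ssds)
             for part in range(1, slots_per_ssd + 1)]
    return {disk: "%s%d" % (ssd, part)
            for disk, (ssd, part) in zip(sorted(data_disks), slots)}

print(map_journals(["/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdg"],
                   ["/dev/vdb", "/dev/vdc"],
                   ssd_size=11 * 1024 ** 3))
# {'/dev/vdd': '/dev/vdb1', '/dev/vde': '/dev/vdb2',
#  '/dev/vdf': '/dev/vdc1', '/dev/vdg': '/dev/vdc2'}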

GenerateJournalMapping results:

{
  "job_id": "e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdg",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdf",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "8f74a35a-11c3-4781-928a-6e91842077e9"
  },
  "created_at": "2017-06-12T10:56:35Z",
  "status_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/status",
  "messages_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/messages",
  "output_url": "/jobs/e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567/output"
}


{
  "job_id": "734df403-d131-48a5-bf1c-3ba1980817e8",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vde",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "a643be2e-0de7-4f25-87e9-2eb464c83f46"
  },
  "created_at": "2017-06-12T11:00:47Z",
  "status_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/status",
  "messages_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/messages",
  "output_url": "/jobs/734df403-d131-48a5-bf1c-3ba1980817e8/output"
}

{
  "job_id": "119827e8-01b1-4b98-b396-c669128ae45a",
  "status": "finished",
  "flow": "GenerateJournalMapping",
  "parameters": {
    "Cluster.node_configuration": {
      "2a25a352-33ed-4828-9e6e-66e45f694529": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          }
        ]
      },
      "cbed0efd-07e8-4c51-a017-2fe2bedc1bcf": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      },
      "b89caa49-b6f9-46cc-85cf-a698673359c1": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      },
      "1b3fc81a-9e2b-453f-b61f-cb8543eee1ce": {
        "storage_disks": [
          {
            "device": "/dev/vdd",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vde",
            "ssd": false,
            "journal": "/dev/vdb",
            "size": 21474836480
          },
          {
            "device": "/dev/vdf",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          },
          {
            "device": "/dev/vdg",
            "ssd": false,
            "journal": "/dev/vdc",
            "size": 21474836480
          }
        ]
      }
    },
    "TendrlContext.integration_id": "56dc1886-7d34-48fa-bacc-7408086eb7cd"
  },
  "created_at": "2017-06-12T10:53:51Z",
  "status_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/status",
  "messages_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/messages",
  "output_url": "/jobs/119827e8-01b1-4b98-b396-c669128ae45a/output"
}

The get node list result can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1460669#c3

Version-Release number of selected component (if applicable):
ceph-ansible-2.2.11-1.el7scon.noarch
ceph-installer-1.3.0-1.el7scon.noarch
etcd-3.1.7-1.el7.x86_64
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-alerting-3.0-alpha.3.el7scon.noarch
tendrl-api-3.0-alpha.4.el7scon.noarch
tendrl-api-doc-3.0-alpha.4.el7scon.noarch
tendrl-api-httpd-3.0-alpha.4.el7scon.noarch
tendrl-commons-3.0-alpha.9.el7scon.noarch
tendrl-dashboard-3.0-alpha.4.el7scon.noarch
tendrl-node-agent-3.0-alpha.9.el7scon.noarch
tendrl-node-monitoring-3.0-alpha.5.el7scon.noarch
tendrl-performance-monitoring-3.0-alpha.7.el7scon.noarch


How reproducible:
100%

Steps to Reproduce:
1. Start the wizard for creating a Ceph cluster in the UI and go from the "general" page to the "configuration" page and back a couple of times.
2. Watch the results of the GenerateJournalMapping calls via the jobs API.

Actual results:
The GenerateJournalMapping result is different for every call.

Expected results:
The GenerateJournalMapping result should be the same for every call. The only exception is when the user explicitly requests the next journal configuration.
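
A minimal sketch of this determinism check, assuming the job documents are fetched from a /jobs/<job_id> endpoint shaped like the JSON quoted above; the base URL is hypothetical and the two job IDs are the first two from the description:

import requests

BASE = "http://tendrl-api.example.com"  # hypothetical API host

def node_configuration(job_id):
    # The generated mapping sits under parameters."Cluster.node_configuration"
    # in the job documents quoted in the description.
    doc = requests.get("%s/jobs/%s" % (BASE, job_id)).json()
    return doc["parameters"]["Cluster.node_configuration"]

first = node_configuration("e2b74a4a-b514-4f7b-ba3e-a4f54bcb2567")
second = node_configuration("734df403-d131-48a5-bf1c-3ba1980817e8")
# The disks did not change between the two calls, so the mappings
# should match; on the affected build this assertion fails.
assert first == second, "GenerateJournalMapping is not deterministic"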

Comment 3 Shubhendu Tripathi 2017-06-12 13:07:27 UTC
Martin, can you share the setup details with Ankush?
We will try to debug the issue, though it does not seem to be a backend issue. Rather, it could be related to the journal type defaulting back to `colocated` when returning to the same screen.

Comment 4 Shubhendu Tripathi 2017-06-13 08:09:29 UTC
I verified the backend changes, and with the same input details as above I always get the same journal mapping.

Requesting Ankush to check from the UI whether the input details passed by the UI change between requests.

Comment 5 Lubos Trilety 2017-06-15 10:38:04 UTC
(In reply to Shubhendu Tripathi from comment #4)
> I verified the backend changes and with same above input details, I always
> get the same journal mapping.
> 
> Request Ankush to check once from UI if input details passed from UI are not
> changed in every request.

Well, that's because the problem is not in the GenerateJournalMapping method itself. From what I have seen, the issue is in the inputs that go to the method: on the first run the method gets all disks correctly, on the second run it gets all disks except those on which journals were placed, and so on.
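
To illustrate the suspected failure mode (illustrative Python, not actual Tendrl or UI code): if devices that were handed back as journals get filtered out of the next request, each call sees a smaller input and therefore returns a different mapping:

disks = ["/dev/vdd", "/dev/vde", "/dev/vdf", "/dev/vdg"]

def fake_mapping(devs):
    # Stand-in for the backend call; just pairs neighbouring devices.
    return {d: j for d, j in zip(devs[::2], devs[1::2])}

for run in range(1, 4):
    result = fake_mapping(disks)
    print("run %d: %s" % (run, result))
    # The bug: devices used as journals are dropped from the next request.
    disks = [d for d in disks if d not in result.values()]
# run 1: {'/dev/vdd': '/dev/vde', '/dev/vdf': '/dev/vdg'}
# run 2: {'/dev/vdd': '/dev/vdf'}
# run 3: {}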

Comment 7 Ankush Behl 2017-06-20 10:01:10 UTC
Fixed with the version below [1]; please verify.

[1] tendrl-dashboard-3.0-alpha.5.el7scon.noarch.rpm

Comment 13 Shubhendu Tripathi 2018-11-19 05:42:11 UTC
This product is EOL now.

