Bug 1294548 - [RFE] Enhancement: The cli could manage state
Summary: [RFE] Enhancement: The cli could manage state
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-28 23:51 UTC by Joe Julian
Modified: 2018-11-19 05:35 UTC
CC: 5 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:20:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links:
Github 572 (Last Updated: 2018-11-19 05:35:36 UTC)

Description Joe Julian 2015-12-28 23:51:35 UTC
Puppet, Salt, etc. all manage gluster state from a third-party standpoint, handling the steps necessary to change a volume, peer state, and so on. They do this by taking output from the cli, determining the current state, then passing change commands back in to the cli.
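For example (illustrative only), such a tool typically reads the current state with something like:

gluster --xml volume info myvolume

compares that to the desired state, and then feeds the appropriate add-brick, remove-brick, replace-brick or set commands back into the cli.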

This is something the cli could manage directly. For instance, a volume of:

Volume Name: myvolume
Type: Distribute
Volume ID: 1fb52916-7fbf-4ef3-99e3-ec06b5b6407a
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host1:/srv/gluster/myvolume/brick1
Brick2: host2:/srv/gluster/myvolume/brick1

A command like:

gluster volume state myvolume host1:/srv/gluster/myvolume/brick1 host2:/srv/gluster/myvolume/brick1 host3:/srv/gluster/myvolume/brick1

Would add a 3rd brick on host3.
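Today the closest imperative equivalent (shown only for comparison) would be:

gluster volume add-brick myvolume host3:/srv/gluster/myvolume/brick1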

Or:

gluster volume state myvolume replica 2 host1:/srv/gluster/myvolume/brick1 host3:/srv/gluster/myvolume/brick1 host2:/srv/gluster/myvolume/brick1 host4:/srv/gluster/myvolume/brick1

Would convert the distribute volume to a replicated volume. 
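Today that would roughly correspond to (a sketch only, assuming the usual add-brick behaviour of pairing the new bricks with the existing ones in order):

gluster volume add-brick myvolume replica 2 host3:/srv/gluster/myvolume/brick1 host4:/srv/gluster/myvolume/brick1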

If the bricks were listed in the wrong order, host1, host2, host3, host4, the cli should warn and abort unless some sort of switch or keyword is given that allows such far-reaching changes to complete. With that override option, however, the cli would replace-brick, migrating the data from host2 to host3, then add-brick with replica 2.

Optimally, the state management command would also accept volume options, e.g.:

gluster volume state myvolume host1:/srv/gluster/myvolume/brick1 host2:/srv/gluster/myvolume/brick1 option cluster.eager-lock=enable option performance.stat-prefetch=off
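For comparison, today those options would be applied separately with:

gluster volume set myvolume cluster.eager-lock enable
gluster volume set myvolume performance.stat-prefetch off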

Comment 1 Gaurav Kumar Garg 2015-12-29 07:37:27 UTC
(In reply to Joe Julian from comment #0)
> Puppet, Salt, etc. all manage gluster state from a third-party standpoint,
> handling the steps necessary to change a volume, peer state, and so on. They
> do this by taking output from the cli, determining the current state, then
> passing change commands back in to the cli.
> 
> This is something the cli could manage directly. For instance, a volume of:
> 
> Volume Name: myvolume
> Type: Distribute
> Volume ID: 1fb52916-7fbf-4ef3-99e3-ec06b5b6407a
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: host1:/srv/gluster/myvolume/brick1
> Brick2: host2:/srv/gluster/myvolume/brick1
> 
> A command like:
> 
> gluster volume state myvolume host1:/srv/gluster/myvolume/brick1
> host2:/srv/gluster/myvolume/brick1 host3:/srv/gluster/myvolume/brick1


You can do this with the existing GlusterFS code, just by replacing "state" with "add-brick". I didn't fully understand this RFE request; could you explain why a "state" command is needed here for adding a brick?



> Would add a 3rd brick on host3.
> 
> Or:
> 
> gluster volume state myvolume replica 2 host1:/srv/gluster/myvolume/brick1
> host3:/srv/gluster/myvolume/brick1 host2:/srv/gluster/myvolume/brick1
> host4:/srv/gluster/myvolume/brick1
> 
> Would convert the distribute volume to a replicated volume. 
> 
> If the bricks were listed in the wrong order, host1, host2, host3, host4,
> the cli should warn and abort unless some sort of switch or keyword is given
> that allows such far-reaching changes to complete. With that override option,
> however, the cli would replace-brick, migrating the data from host2 to host3,
> then add-brick with replica 2.
> 
> Optimally, the state management command would also accept volume options,
> e.g.:
> 
> gluster volume state myvolume host1:/srv/gluster/myvolume/brick1
> host2:/srv/gluster/myvolume/brick1 option cluster.eager-lock=enable option
> performance.stat-prefetch=off


If I am not wrong, I think you mean that GlusterFS should have a single "state" command intelligent enough to perform any operation based on the arguments passed: if we are adding one more brick, it can add that brick without the add-brick keyword being specified in the command; if we are configuring some option, it can be done without using the "gluster volume set" command; and so on?

Comment 2 Joe Julian 2015-12-30 03:57:22 UTC
Yes, intelligent state manipulation, whereby you specify the desired state and, if it's safe and possible to do so, glusterd or the cli manages the necessary changes to get the volume into that state.

Comment 3 James (purpleidea) 2015-12-31 05:41:54 UTC
I had a short chat with JoeJulian about this, and I think I know what he's getting at. If I were to rephrase his feature request in my language, what Joe really wants is something like puppet-gluster, except way more powerful, so that you "set the desired state" declaratively and something (puppet?) causes your cluster to converge on this state.

I actually want the same thing, which is why I started puppet-gluster; however, I realized long ago that the puppet engine and language aren't going to be able to accomplish that goal.

Joe's proposal of moving it into glusterd is actually very compelling, and approximately the right thing to do; however, I have an alternative idea which is functionally identical. I've actually mentioned the idea to a few different gluster devs in the past, but it isn't quite ready for prime time yet.

What is it? Well, I'm building a prototype for a next generation config mgmt engine and language. It might support a variant of the puppet language so that existing code could be roughly compatible or easily portable, but this hasn't been decided yet. It will depend on what the puppet folks have to say.

The engine part is the interesting part. I believe my engine is sufficiently powerful that in addition to solving traditional configuration management problems, we'll be able to "libify" it, so that traditional software (gluster, ceph, freeipa, etc...) can write their management layer in native config mgmt code, and not have to worry about getting that part right, since my engine will offer common patterns as easy to use primitives.

In effect, my system could produce a "glusterd" binary. I'm writing this prototype in golang, which fits very nicely with accomplishing this.

I'll probably have a full blog post about this with lots of code within a week or two, but no later than a month from now.

Hopefully this will be something the gluster team finds useful. It's not anywhere near being ready for production, so I'm not sure if this would fit the glusterd 2.0 timeline, but maybe it's enough to pique their interest and get them to devote some resources to my project in the hopes of later benefiting theirs. I'm also hoping that my design and model are more compelling and simpler to reason about than traditional glusterd code.

HTH,
James
@purpleidea

Comment 4 James (purpleidea) 2016-01-25 11:40:56 UTC
Forgot to post this right away, but here's what I was referring to in my previous comment:

https://ttboj.wordpress.com/2016/01/18/next-generation-configuration-mgmt/

My longer term goal is to libify mgmt so that you can write your management code in the DSL (or directly via a golang API) and let it compile to a binary that effectively replaces glusterd (or any other management "D").

More discussion to follow if anyone is interested.

Comment 6 Kaushal 2017-03-08 10:49:21 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

Comment 7 Vijay Bellur 2018-11-19 05:35:36 UTC
Migrated to github:

https://github.com/gluster/glusterfs/issues/583

Please follow the github issue for further updates on this bug.

