With repeated testing using the Samba ping_pong tool while simulating the outage of one or more subvolumes in an AFR setup, the number of locks was found to be unequal across the servers. This happened only after a prolonged ping_pong run; a much narrower reproducer for this bug is still needed.
Solving this requires client-side lock recovery, which is an instance of the broader lack of 'lock self-heal' in GlusterFS. Re-targeting to 3.1 as it involves quite a bit of code changes.
*** This bug has been marked as a duplicate of bug 960 ***