Bug 765264 - (GLUSTER-3532) use variable sized iobufs wherever possible
use variable sized iobufs wherever possible
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: x86_64 Linux
Priority: medium / Severity: low
Assigned To: Amar Tumballi
Raghavendra Bhat
Depends On:
Blocks: 817967
Reported: 2011-09-09 10:40 EDT by Amar Tumballi
Modified: 2015-12-01 11:45 EST (History)
2 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 14:00:28 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs-3.3.0qa43
Category: ---
oVirt Team: ---


Attachments: None
Description Amar Tumballi 2011-09-09 10:40:34 EDT
We still use fixed-size (128 kB) iobufs in several places in the codebase. Those call sites need to be converted to variable-sized iobufs too.

------

rpc/rpc-lib/src/rpc-clnt.c:        request_iob = iobuf_get (clnt->ctx->iobuf_pool);
rpc/rpc-lib/src/rpcsvc.c:        request_iob = iobuf_get (rpc->ctx->iobuf_pool);
rpc/rpc-lib/src/rpcsvc.c:        replyiob = iobuf_get (svc->ctx->iobuf_pool);
rpc/rpc-transport/rdma/src/rdma.c:                iobuf = iobuf_get (this->ctx->iobuf_pool);
rpc/rpc-transport/rdma/src/rdma.c:        iobuf = iobuf_get (peer->trans->ctx->iobuf_pool);
rpc/rpc-transport/rdma/src/rdma.c:                post->ctx.hdr_iobuf = iobuf_get (peer->trans->ctx->iobuf_pool);
rpc/rpc-transport/rdma/src/rdma.c:        iobuf = iobuf_get (peer->trans->ctx->iobuf_pool);
rpc/rpc-transport/socket/src/socket.c:                        iobuf = iobuf_get (this->ctx->iobuf_pool);
rpc/rpc-transport/socket/src/socket.c:                        iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/cluster/stripe/src/stripe.c:                                iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/mount/fuse/src/fuse-bridge.c:                iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/nfs/server/src/mount3.c:        iob = iobuf_get (ms->iobpool);
xlators/nfs/server/src/nfs3.c:        iob = iobuf_get (nfs3->iobpool);
xlators/performance/quick-read/src/quick-read.c:                                                iobuf = iobuf_get (iobuf_pool);
xlators/performance/write-behind/src/write-behind.c:                iobuf = iobuf_get (request->file->this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:                        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:                rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:                rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
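Every `iobuf_get()` call above draws a full fixed-size page no matter how small the request. As a back-of-the-envelope illustration of the cost, a few-hundred-byte RPC header still pins an entire 128 kB page. This is a minimal sketch, not GlusterFS code; the helper name is made up:

```c
#include <assert.h>
#include <stddef.h>

#define FIXED_IOBUF_SIZE (128 * 1024) /* the fixed iobuf size cited above */

/* Bytes committed but unused when a request of `size` bytes is served
 * from a fixed 128 kB iobuf. */
static size_t
fixed_iobuf_waste (size_t size)
{
        return size < FIXED_IOBUF_SIZE ? FIXED_IOBUF_SIZE - size : 0;
}
```

For a 200-byte RPC header this sketch reports 130,872 bytes of slack per buffer, which is the overhead that switching these call sites to variable-sized iobufs is meant to eliminate.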

------------------
Comment 1 Anand Avati 2012-02-20 03:46:33 EST
CHANGE: http://review.gluster.com/388 (iobuf: use 'iobuf_get2()' to get variable sized buffers) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Raghavendra Bhat 2012-05-24 06:14:21 EDT
The information below, taken from a statedump, shows that variable-sized iobufs are now allocated according to the requested size.




[purge.1]
purge.1.mem_base=0x7fdf97d9c000
purge.1.active_cnt=0
purge.1.passive_cnt=1024
purge.1.alloc_cnt=1810528
purge.1.max_active=70
purge.1.page_size=128

[purge.2]
purge.2.mem_base=0x7fdf97d54000
purge.2.active_cnt=0
purge.2.passive_cnt=512
purge.2.alloc_cnt=1596716
purge.2.max_active=72
purge.2.page_size=512

[purge.3]
purge.3.mem_base=0x7fdf97c54000
purge.3.active_cnt=0
purge.3.passive_cnt=512
purge.3.alloc_cnt=181137
purge.3.max_active=5
purge.3.page_size=2048

[arena.4]
arena.4.mem_base=0x7fdf96679000
arena.4.active_cnt=1
arena.4.passive_cnt=127
arena.4.alloc_cnt=345581
arena.4.max_active=84
arena.4.page_size=8192


[arena.4.active_iobuf.1]
arena.4.active_iobuf.1.ref=1
arena.4.active_iobuf.1.ptr=0x7fdf96763000

[purge.5]
purge.5.mem_base=0x7fdf96479000
purge.5.active_cnt=0
purge.5.passive_cnt=64
purge.5.alloc_cnt=362
purge.5.max_active=1
purge.5.page_size=32768

[arena.6]
arena.6.mem_base=0x7fdf8fad5000
arena.6.active_cnt=4
arena.6.passive_cnt=28
arena.6.alloc_cnt=397884
arena.6.max_active=32
arena.6.page_size=131072

[arena.6.active_iobuf.1]
arena.6.active_iobuf.1.ref=2
arena.6.active_iobuf.1.ptr=0x7fdf8fb15000

[arena.6.active_iobuf.2]
arena.6.active_iobuf.2.ref=1
arena.6.active_iobuf.2.ptr=0x7fdf8fd55000

[arena.6.active_iobuf.3]
arena.6.active_iobuf.3.ref=2
arena.6.active_iobuf.3.ptr=0x7fdf8fe95000

[arena.6.active_iobuf.4]
arena.6.active_iobuf.4.ref=1
arena.6.active_iobuf.4.ptr=0x7fdf8fb55000
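The arenas above hold pages of graded sizes (128 B up to 128 kB), and a request is presumably served from the smallest arena whose page_size covers it. A minimal sketch of that selection rule, using the page sizes visible in this statedump; the function is hypothetical and not the actual internals of iobuf_get2():

```c
#include <stddef.h>

/* Page sizes of the arenas shown in the statedump above. */
static const size_t arena_page_sizes[] = {
        128, 512, 2048, 8192, 32768, 131072
};

/* Pick the smallest arena page that can hold `size` bytes; returns 0
 * when the request exceeds the largest arena, in which case an
 * implementation would fall back to a standalone allocation. */
static size_t
pick_arena_page_size (size_t size)
{
        size_t n = sizeof (arena_page_sizes) / sizeof (arena_page_sizes[0]);
        for (size_t i = 0; i < n; i++)
                if (size <= arena_page_sizes[i])
                        return arena_page_sizes[i];
        return 0;
}
```

Under this rule a 100-byte request lands in the 128-byte arena and a 64 kB read response lands in the 128 kB arena, matching the per-arena alloc_cnt spread seen above.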


git grep iobuf_get  | grep -v libglusterfs | grep -v legacy | grep -v bdb
cli/src/cli.c:                iobuf = iobuf_get2 (this->ctx->iobuf_pool, xdr_size);
glusterfsd/src/glusterfsd-mgmt.c:        iob = iobuf_get2 (req->svc->ctx->iobuf_pool, xdr_size);
glusterfsd/src/glusterfsd-mgmt.c:                iobuf = iobuf_get2 (ctx->iobuf_pool, xdr_size);
rpc/rpc-lib/src/rpc-clnt.c:        request_iob = iobuf_get2 (clnt->ctx->iobuf_pool, (xdr_size + hdrsize));
rpc/rpc-lib/src/rpcsvc.c:        request_iob = iobuf_get2 (rpc->ctx->iobuf_pool, (xdr_size + payload));
rpc/rpc-lib/src/rpcsvc.c:        replyiob = iobuf_get2 (svc->ctx->iobuf_pool, (xdr_size + hdrlen));
rpc/rpc-transport/rdma/src/rdma.c:                iobuf = iobuf_get2 (this->ctx->iobuf_pool, size2);
rpc/rpc-transport/rdma/src/rdma.c:        iobuf = iobuf_get2 (peer->trans->ctx->iobuf_pool, bytes_in_post);
rpc/rpc-transport/rdma/src/rdma.c:                post->ctx.hdr_iobuf = iobuf_get2 (peer->trans->ctx->iobuf_pool,
rpc/rpc-transport/rdma/src/rdma.c:        iobuf = iobuf_get2 (peer->trans->ctx->iobuf_pool, size);
rpc/rpc-transport/socket/src/socket.c:                        iobuf = iobuf_get2 (this->ctx->iobuf_pool, size);
rpc/rpc-transport/socket/src/socket.c:                        iobuf = iobuf_get2 (this->ctx->iobuf_pool, size);
rpc/rpc-transport/socket/src/socket.c:                        iobuf = iobuf_get2 (this->ctx->iobuf_pool,
xlators/cluster/stripe/src/stripe.c:                                iobuf = iobuf_get2 (this->ctx->iobuf_pool,
xlators/mgmt/glusterd/src/glusterd-syncop.c:        iobuf = iobuf_get2 (rpc->ctx->iobuf_pool, req_size);
xlators/mgmt/glusterd/src/glusterd-utils.c:                iobuf = iobuf_get2 (this->ctx->iobuf_pool, req_size);
xlators/mgmt/glusterd/src/glusterd-utils.c:        iob = iobuf_get2 (req->svc->ctx->iobuf_pool, rsp_size);
xlators/mount/fuse/src/fuse-bridge.c:                iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/nfs/server/src/mount3.c:        iob = iobuf_get (ms->iobpool);
xlators/nfs/server/src/nfs3.c:        iob = iobuf_get (nfs3->iobpool);
xlators/nfs/server/src/nlm4.c:        iob = iobuf_get (nfs3->iobpool);
xlators/nfs/server/src/nlm4.c:        iobuf = iobuf_get (cs->nfs3state->iobpool);
xlators/performance/quick-read/src/quick-read.c:                        iobuf = iobuf_get (iobuf_pool);
xlators/performance/write-behind/src/write-behind.c:                iobuf = iobuf_get (request->file->this->ctx->iobuf_pool);
xlators/protocol/client/src/client.c:                iobuf = iobuf_get2 (this->ctx->iobuf_pool, xdr_size);
xlators/protocol/client/src/client3_1-fops.c:                iobuf = iobuf_get2 (this->ctx->iobuf_pool, xdr_size);
xlators/protocol/client/src/client3_1-fops.c:                        rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get2 (this->ctx->iobuf_pool, args->size);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get2 (this->ctx->iobuf_pool, 8 * GF_UNIT_KB);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get2 (this->ctx->iobuf_pool, 8 * GF_UNIT_KB);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get2 (this->ctx->iobuf_pool, 8 * GF_UNIT_KB);
xlators/protocol/client/src/client3_1-fops.c:        rsp_iobuf = iobuf_get2 (this->ctx->iobuf_pool, 8 * GF_UNIT_KB);
xlators/protocol/client/src/client3_1-fops.c:                rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/client/src/client3_1-fops.c:                rsp_iobuf = iobuf_get (this->ctx->iobuf_pool);
xlators/protocol/server/src/server.c:                iob = iobuf_get2 (req->svc->ctx->iobuf_pool, xdr_size);
xlators/storage/posix/src/posix.c:        iobuf = iobuf_get2 (this->ctx->iobuf_pool, size)
