ServiceManager
This section looks only at the Bn side of ServiceManager. Unlike the native side of other services, ServiceManager's native implementation is written purely in C and does not use the binder C++ framework at all.
The code lives in:
frameworks/base/cmds/servicemanager/
and consists of only a few files:
Android.mk bctest.c binder.c binder.h service_manager.c
The build produces /system/bin/servicemanager, which is started from init.rc:
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
    onrestart restart iss_daemon
The main function is in service_manager.c:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
As you can see, on startup the process first opens a channel to the binder driver via binder_open and stores all the relevant state (the driver file descriptor and the mmap info) in a binder_state struct:
struct binder_state
{
    int fd;
    void *mapped;
    unsigned mapsize;
};

struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    bs->fd = open("/dev/binder", O_RDWR);
    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    return bs;
}
Next, it registers itself as the context manager:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
It then enters the binder_loop loop and starts serving requests such as service registration and lookup.
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
As you can see, once inside the for loop the process simply keeps reading whatever other processes have sent it from the binder driver and hands the data to binder_parse for processing.
Recall that every binder data transfer is initiated on the client side with BC_TRANSACTION and arrives on the receiving side as the corresponding BR_TRANSACTION, so here we look only at BR_TRANSACTION, the most important case:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
        switch(cmd) {
        case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                binder_send_reply(bs, &reply, txn->data, res);
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        }
    }
    return r;
}
The code first pulls the cmd out of ptr (here, BR_TRANSACTION) and then reinterprets the data that follows as a binder_txn.
From studying the binder driver we know that binder data is transferred in a binder_write_read struct, whose write_buffer and read_buffer members point to data laid out as cmd + binder_transaction_data. So can a binder_transaction_data be cast directly to a binder_txn?
Let's compare the two definitions:
struct binder_transaction_data {
    union {
        __u32 handle;           /* target descriptor of command transaction */
        binder_uintptr_t ptr;   /* target descriptor of return transaction */
    } target;
    binder_uintptr_t cookie;    /* target object cookie */
    __u32 code;                 /* transaction command */
    __u32 flags;
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;    /* number of bytes of data */
    binder_size_t offsets_size; /* number of bytes of offsets */
    union {
        struct {
            binder_uintptr_t buffer;
            binder_uintptr_t offsets;
        } ptr;
        __u8 buf[8];
    } data;
};

struct binder_txn
{
    void *target;
    void *cookie;
    uint32_t code;
    uint32_t flags;
    uint32_t sender_pid;
    uint32_t sender_euid;
    uint32_t data_size;
    uint32_t offs_size;
    void *data;
    void *offs;
};