1. Introduction to Netlink

0x1: Basic Concepts

Netlink is a flexible and efficient communication mechanism for kernel-to-userspace, kernel-to-kernel, and userspace-to-userspace messaging. By wrapping the complex message copying and notification machinery behind the standard socket API, netlink presents a clean, uniform interface.

Netlink is an umbrella term for a family of interfaces, including the following protocol types:

  • Routing daemon (NETLINK_ROUTE)
  • 1-wire subsystem (NETLINK_W1)
  • User-mode socket protocols (NETLINK_USERSOCK)
  • Firewalling (NETLINK_FIREWALL)
  • Socket monitoring (NETLINK_INET_DIAG)
  • netfilter logging (NETLINK_NFLOG)
  • IPsec security policy (NETLINK_XFRM)
  • SELinux event notifications (NETLINK_SELINUX)
  • iSCSI subsystem (NETLINK_ISCSI)
  • Process auditing (NETLINK_AUDIT)
  • Forwarding information base (FIB) lookups (NETLINK_FIB_LOOKUP)
  • Netlink connector (NETLINK_CONNECTOR)
  • netfilter subsystem (NETLINK_NETFILTER)
  • IPv6 firewall (NETLINK_IP6_FW)
  • DECnet routing messages (NETLINK_DNRTMSG)
  • Kernel-to-userspace event notifications (NETLINK_KOBJECT_UEVENT)
  • Generic netlink (NETLINK_GENERIC)

Compared with message-based system calls (message queues, pipes), shared memory, ioctl, and the /proc filesystem, netlink has the following advantages:

  • To add a new netlink protocol, all a user needs to do is add one protocol type definition to include/linux/netlink.h, e.g. "#define NETLINK_MYTEST 17"; the kernel and userspace applications can then immediately exchange data over that protocol type through the socket API. A new system call, by contrast, requires registering a syscall; ioctl requires adding a device or file, which means a fair amount of code; and the /proc filesystem requires adding new files or directories under /proc, making the already crowded /proc even messier
  • Netlink is an asynchronous communication mechanism: messages passed between the kernel and userspace are held in socket buffer queues, so sending a message only appends it to the receiver's socket receive queue and does not have to wait for the receiver to consume it. System calls and ioctl are synchronous, so passing large amounts of data through them can hurt scheduling granularity
  • The kernel side of a netlink user can be implemented as a loadable module, and the userspace side and kernel side have no compile-time dependency on each other. A new system call, however, must be statically linked into the kernel, cannot be implemented in a module, and applications that use it depend on the kernel at build time
  • Netlink supports multicast: a kernel module or application can multicast a message to a netlink group, and every kernel module or application that has joined that group receives it. The kernel-to-userspace event notification mechanism uses exactly this feature, so any application interested in kernel events can receive them
  • With netlink, the kernel can initiate a session first (the channel is bidirectional), whereas system calls and ioctl can only be invoked by a user application
  • netlink uses the standard socket API, so it is easy to use

0x2: Basic Netlink Communication Flow

From the perspective of userspace-kernel interaction, the netlink communication flow looks like this:

  • The application hands the data to be sent to netlink via sendmsg(); netlink packs it into a netlink message, which costs one memory copy
  • Once the message is assembled, it is copied across the user/kernel boundary in one go via copy_from_user()/copy_to_user(); this boundary-crossing copy is the relatively expensive part of the system call
  • The kernel module then takes the packets out of netlink's buffer one by one and unpacks them; this can be done synchronously in the input callback or deferred and handled asynchronously

Relevant Link:

http://www.linuxfoundation.org/collaborate/workgroups/networking/netlink

2. Netlink Function API

0x1: User Space

A userspace application can use netlink sockets easily through the standard socket APIs: socket(), bind(), sendmsg(), recvmsg(), and close().

1. socket

socket(AF_NETLINK, SOCK_RAW, netlink_type)

1. The first argument is AF_NETLINK or PF_NETLINK; in Linux they are the same thing and simply state that a netlink socket is being created.
2. The second argument can be SOCK_RAW or SOCK_DGRAM.
3. The third argument selects the netlink protocol type:
#define NETLINK_ROUTE 0 /* Routing/device hook */
#define NETLINK_W1 1 /* 1-wire subsystem */
#define NETLINK_USERSOCK 2 /* Reserved for user mode socket protocols */
#define NETLINK_FIREWALL 3 /* Firewalling hook */
#define NETLINK_INET_DIAG 4 /* INET socket monitoring */
#define NETLINK_NFLOG 5 /* netfilter/iptables ULOG */
#define NETLINK_XFRM 6 /* ipsec */
#define NETLINK_SELINUX 7 /* SELinux event notifications */
#define NETLINK_ISCSI 8 /* Open-iSCSI */
#define NETLINK_AUDIT 9 /* auditing */
#define NETLINK_FIB_LOOKUP 10
#define NETLINK_CONNECTOR 11
#define NETLINK_NETFILTER 12 /* netfilter subsystem */
#define NETLINK_IP6_FW 13
#define NETLINK_DNRTMSG 14 /* DECnet routing messages */
#define NETLINK_KOBJECT_UEVENT 15 /* Kernel messages to userspace */
#define NETLINK_GENERIC 16 /* generic netlink: a general-purpose protocol type reserved for ordinary users, so it can be used directly without adding a new protocol type */

Among these, protocol number 11, NETLINK_CONNECTOR, is the API used to monitor system process behavior; it is the host process-event collection technique used by many HIDS products today.

Through NETLINK_CONNECTOR, the following process-related events can be received in real time:

  • PROC_EVENT_NONE
  • PROC_EVENT_FORK
  • PROC_EVENT_EXEC
  • PROC_EVENT_UID
  • PROC_EVENT_GID
  • PROC_EVENT_SID
  • PROC_EVENT_PTRACE
  • PROC_EVENT_COMM
  • PROC_EVENT_EXIT

For a concrete coding implementation, see the referenced article; a minimal C sketch of the subscription step is also shown below.
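
As a hedged illustration (assumptions: structure names come from <linux/connector.h> and <linux/cn_proc.h>, error handling is minimal, and root/CAP_NET_ADMIN is required), the sketch below opens a NETLINK_CONNECTOR socket, binds it to the CN_IDX_PROC multicast group, and sends a PROC_CN_MCAST_LISTEN control message to switch event delivery on:

/* Minimal sketch: subscribe to proc connector events (requires root / CAP_NET_ADMIN). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_nl sa = {0};
    sa.nl_family = AF_NETLINK;
    sa.nl_pid = getpid();
    sa.nl_groups = CN_IDX_PROC;            /* join the proc-event multicast group */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) { perror("bind"); return 1; }

    /* nlmsghdr + cn_msg + enum proc_cn_mcast_op, sent as one datagram */
    struct {
        struct nlmsghdr nl_hdr;
        struct cn_msg cn_msg;
        enum proc_cn_mcast_op op;
    } __attribute__((packed)) req;

    memset(&req, 0, sizeof(req));
    req.nl_hdr.nlmsg_len = sizeof(req);
    req.nl_hdr.nlmsg_type = NLMSG_DONE;
    req.nl_hdr.nlmsg_pid = getpid();
    req.cn_msg.id.idx = CN_IDX_PROC;
    req.cn_msg.id.val = CN_VAL_PROC;
    req.cn_msg.len = sizeof(enum proc_cn_mcast_op);
    req.op = PROC_CN_MCAST_LISTEN;         /* switch event delivery on */

    if (send(fd, &req, sizeof(req), 0) < 0) { perror("send"); return 1; }

    /* From here on, recv() returns nlmsghdr + cn_msg + struct proc_event payloads. */
    close(fd);
    return 0;
}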

2. bind

Each netlink protocol type can have up to 32 multicast groups, each represented by one bit of the 32-bit group mask. Netlink's multicast support means that a message sent to a group reaches every member with a single system call, which greatly reduces the number of system calls needed by applications that must deliver the same message to many receivers.

bind(fd, (struct sockaddr*)&nladdr, sizeof(struct sockaddr_nl));

bind() associates an open netlink socket with its local (source) netlink address. The netlink address structure is defined as follows:

struct sockaddr_nl
{
    /* nl_family must be set to AF_NETLINK or PF_NETLINK */
    sa_family_t nl_family;

    /* nl_pad is currently unused and must always be set to 0 */
    unsigned short nl_pad;

    /*
    nl_pid is the ID of the process receiving or sending the message:
    1. nl_pid == 0: the receiver is the kernel or a multicast group
    2. nl_pid != 0: nl_pid does not strictly have to be a process ID; it is just an identifier that
       distinguishes different senders and receivers, and users may set it as they see fit
    */
    __u32 nl_pid;

    /*
    nl_groups selects the multicast groups; bind() joins the calling process to the groups set here:
    1. nl_groups == 0: unicast only, the caller joins no multicast group
    2. nl_groups != 0: a bitmask of multicast groups to join
    */
    __u32 nl_groups;
};

Note that the nl_pid field of the address passed to bind() should normally be set to the calling process's own PID, which serves as the netlink socket's local address. However, when several threads of one process each use their own netlink socket, nl_pid can be set to a different unique value, for example:

pthread_self() << 16 | getpid();

In other words, nl_pid does not strictly have to be a process ID; it is simply an identifier used to distinguish different receivers and senders, and users may set it according to their own needs. A sketch of creating and binding such a socket is shown below.
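
The following is a minimal sketch of that setup, assuming nothing beyond what the text above describes: NETLINK_USERSOCK is used purely as an illustrative protocol type, the nl_pid value follows the thread-aware idiom just mentioned, and nl_groups joins multicast group 1 as an example.

/* Sketch: creating and binding a netlink socket from a multithreaded process. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int open_nl_socket(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);  /* protocol type chosen for illustration */
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    struct sockaddr_nl local;
    memset(&local, 0, sizeof(local));
    local.nl_family = AF_NETLINK;
    /* unique per-thread "port": thread id in the high bits, pid in the low bits */
    local.nl_pid = (pthread_self() << 16) | getpid();
    /* bit 0 set => join multicast group 1; 0 would mean unicast only */
    local.nl_groups = 1;

    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        close(fd);
        return -1;
    }
    return fd;
}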

For more on using the netlink API and its parameters, see this other article:
http://www.cnblogs.com/LittleHann/p/3867214.html
//search for: user_client.c (the userspace program)

3. sendmsg

/source/net/socket.c

/*
* BSD sendmsg interface
*/
SYSCALL_DEFINE3(sendmsg, int, fd, struct msghdr __user *, msg, unsigned, flags)
{
struct compat_msghdr __user *msg_compat = (struct compat_msghdr __user *)msg;
struct socket *sock;
struct sockaddr_storage address;
struct iovec iovstack[UIO_FASTIOV], *iov = iovstack;
unsigned char ctl[sizeof(struct cmsghdr) + 20] __attribute__ ((aligned(sizeof(__kernel_size_t))));
/* 20 is size of ipv6_pktinfo */
unsigned char *ctl_buf = ctl;
struct msghdr msg_sys;
int err, ctl_len, iov_size, total_len;
int fput_needed;

err = -EFAULT;
if (MSG_CMSG_COMPAT & flags)
{
if (get_compat_msghdr(&msg_sys, msg_compat))
return -EFAULT;
}
else
{
err = copy_msghdr_from_user(&msg_sys, msg);
if (err)
return err;
}

sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;

/* do not move before msg_sys is valid */
err = -EMSGSIZE;
if (msg_sys.msg_iovlen > UIO_MAXIOV)
goto out_put;

/* Check whether to allocate the iovec area */
err = -ENOMEM;
iov_size = msg_sys.msg_iovlen * sizeof(struct iovec);
if (msg_sys.msg_iovlen > UIO_FASTIOV)
{
iov = sock_kmalloc(sock->sk, iov_size, GFP_KERNEL);
if (!iov)
goto out_put;
}

/* This will also move the address data into kernel space */
if (MSG_CMSG_COMPAT & flags)
{
err = verify_compat_iovec(&msg_sys, iov, (struct sockaddr *)&address, VERIFY_READ);
}
else
err = verify_iovec(&msg_sys, iov, (struct sockaddr *)&address, VERIFY_READ);
if (err < 0)
goto out_freeiov;
total_len = err;

err = -ENOBUFS;

if (msg_sys.msg_controllen > INT_MAX)
goto out_freeiov;
ctl_len = msg_sys.msg_controllen;
if ((MSG_CMSG_COMPAT & flags) && ctl_len)
{
err = cmsghdr_from_user_compat_to_kern(&msg_sys, sock->sk, ctl, sizeof(ctl));
if (err)
goto out_freeiov;
ctl_buf = msg_sys.msg_control;
ctl_len = msg_sys.msg_controllen;
}
else if (ctl_len)
{
if (ctl_len > sizeof(ctl)) {
ctl_buf = sock_kmalloc(sock->sk, ctl_len, GFP_KERNEL);
if (ctl_buf == NULL)
goto out_freeiov;
}
err = -EFAULT;
/*
* Careful! Before this, msg_sys.msg_control contains a user pointer.
* Afterwards, it will be a kernel pointer. Thus the compiler-assisted
* checking falls down on this.
*/
if (copy_from_user(ctl_buf, (void __user *)msg_sys.msg_control, ctl_len))
goto out_freectl;
msg_sys.msg_control = ctl_buf;
}
msg_sys.msg_flags = flags;

if (sock->file->f_flags & O_NONBLOCK)
msg_sys.msg_flags |= MSG_DONTWAIT;
err = sock_sendmsg(sock, &msg_sys, total_len);

out_freectl:
if (ctl_buf != ctl)
sock_kfree_s(sock->sk, ctl_buf, ctl_len);
out_freeiov:
if (iov != iovstack)
sock_kfree_s(sock->sk, iov, iov_size);
out_put:
fput_light(sock->file, fput_needed);
out:
return err;
}

/source/net/socket.c

int sock_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
{
struct kiocb iocb;
struct sock_iocb siocb;
int ret;

init_sync_kiocb(&iocb, NULL);
iocb.private = &siocb;
/*
Call __sock_sendmsg() to perform the actual send; in this trace we follow the UDP datagram path.
*/
ret = __sock_sendmsg(&iocb, sock, msg, size);
if (-EIOCBQUEUED == ret)
ret = wait_on_sync_kiocb(&iocb);
return ret;
}

static inline int __sock_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t size)
{
struct sock_iocb *si = kiocb_to_siocb(iocb);
int err;

si->sock = sock;
si->scm = NULL;
si->msg = msg;
si->size = size;

err = security_socket_sendmsg(sock, msg, size);
if (err)
return err;

/*
const struct proto_ops inet_dgram_ops =
{
.family = PF_INET,
.owner = THIS_MODULE,
.release = inet_release,
.bind = inet_bind,
.connect = inet_dgram_connect,
.socketpair = sock_no_socketpair,
.accept = sock_no_accept,
.getname = inet_getname,
.poll = udp_poll,
.ioctl = inet_ioctl,
.listen = sock_no_listen,
.shutdown = inet_shutdown,
.setsockopt = sock_common_setsockopt,
.getsockopt = sock_common_getsockopt,
.sendmsg = inet_sendmsg,
.recvmsg = sock_common_recvmsg,
.mmap = sock_no_mmap,
.sendpage = inet_sendpage,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_sock_common_setsockopt,
.compat_getsockopt = compat_sock_common_getsockopt,
#endif
};
EXPORT_SYMBOL(inet_dgram_ops);
As the ops table shows, for an AF_INET datagram socket the .sendmsg hook resolves to inet_sendmsg().
We continue by following inet_sendmsg():
\linux-2.6.32.63\net\ipv4\af_inet.c
*/
return sock->ops->sendmsg(iocb, sock, msg, size);
}

\linux-2.6.32.63\net\ipv4\af_inet.c

int inet_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t size)
{
struct sock *sk = sock->sk;

/* We may need to bind the socket. */
if (!inet_sk(sk)->num && inet_autobind(sk))
return -EAGAIN;
/*
The INET socket layer invokes the protocol-specific sendmsg operation.
For a UDP send on an INET socket, the protocol-specific ops table is udp_prot:
linux-2.6.32.63\net\ipv4\udp.c
struct proto udp_prot =
{
.name = "UDP",
.owner = THIS_MODULE,
.close = udp_lib_close,
.connect = ip4_datagram_connect,
.disconnect = udp_disconnect,
.ioctl = udp_ioctl,
.destroy = udp_destroy_sock,
.setsockopt = udp_setsockopt,
.getsockopt = udp_getsockopt,
.sendmsg = udp_sendmsg,
.recvmsg = udp_recvmsg,
.sendpage = udp_sendpage,
.backlog_rcv = __udp_queue_rcv_skb,
.hash = udp_lib_hash,
.unhash = udp_lib_unhash,
.get_port = udp_v4_get_port,
.memory_allocated = &udp_memory_allocated,
.sysctl_mem = sysctl_udp_mem,
.sysctl_wmem = &sysctl_udp_wmem_min,
.sysctl_rmem = &sysctl_udp_rmem_min,
.obj_size = sizeof(struct udp_sock),
.slab_flags = SLAB_DESTROY_BY_RCU,
.h.udp_table = &udp_table,
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_udp_setsockopt,
.compat_getsockopt = compat_udp_getsockopt,
#endif
};
EXPORT_SYMBOL(udp_prot);
As the table shows, for UDP the flow enters udp_sendmsg() (the .sendmsg member points to udp_sendmsg()); we continue by following udp_sendmsg():
\linux-2.6.32.63\net\ipv4\udp.c
*/
return sk->sk_prot->sendmsg(iocb, sk, msg, size);
}
EXPORT_SYMBOL(inet_sendmsg);

From the data structures involved in sending a netlink message, we can see the send-side logic:

  • From the programmer's point of view, the only send-side system call is sendmsg(); each call simply takes an instance of struct msghdr
  • Every struct msghdr instance must reference an array of struct iovec entries, so the individual buffers are gathered into one scatter/gather vector and submitted for sending in a single call
  • Each buffer referenced by a struct iovec entry begins with a struct nlmsghdr message header, which is what makes multiplexing and demultiplexing of different message types on one socket possible; see the sketch below
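
To make the relationship concrete, here is a hedged user-space sketch that wraps a payload in a struct nlmsghdr using the NLMSG_* helper macros, points a struct iovec at it, and hands the struct msghdr to sendmsg() addressed to the kernel (nl_pid = 0); the payload and buffer size are placeholders.

/* Sketch: composing and sending one netlink message from userspace. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/netlink.h>

#define PAYLOAD_MAX 1024

int nl_send_to_kernel(int fd, const char *payload)
{
    struct sockaddr_nl dest;
    memset(&dest, 0, sizeof(dest));
    dest.nl_family = AF_NETLINK;
    dest.nl_pid = 0;            /* 0 = destination is the kernel */
    dest.nl_groups = 0;         /* unicast */

    /* allocate header + aligned payload area */
    struct nlmsghdr *nlh = malloc(NLMSG_SPACE(PAYLOAD_MAX));
    if (!nlh)
        return -1;
    memset(nlh, 0, NLMSG_SPACE(PAYLOAD_MAX));
    nlh->nlmsg_len = NLMSG_LENGTH(strlen(payload) + 1);  /* header + payload length */
    nlh->nlmsg_pid = getpid();                           /* source "address" */
    nlh->nlmsg_flags = 0;
    strcpy(NLMSG_DATA(nlh), payload);                    /* payload starts right after the header */

    /* one iovec entry per buffer; the buffer starts with the nlmsghdr */
    struct iovec iov = { .iov_base = nlh, .iov_len = nlh->nlmsg_len };
    struct msghdr msg = {
        .msg_name = &dest,
        .msg_namelen = sizeof(dest),
        .msg_iov = &iov,
        .msg_iovlen = 1,
    };

    int ret = sendmsg(fd, &msg, 0);
    free(nlh);
    return ret;
}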

4. recvmsg

/source/net/socket.c

/*
* BSD recvmsg interface
*/
SYSCALL_DEFINE3(recvmsg, int, fd, struct msghdr __user *, msg, unsigned int, flags)
{
struct compat_msghdr __user *msg_compat = (struct compat_msghdr __user *)msg;
struct socket *sock;
struct iovec iovstack[UIO_FASTIOV];
struct iovec *iov = iovstack;
struct msghdr msg_sys;
unsigned long cmsg_ptr;
int err, iov_size, total_len, len;
int fput_needed;

/* kernel mode address */
struct sockaddr_storage addr;

/* user mode address pointers */
struct sockaddr __user *uaddr;
int __user *uaddr_len;

if (MSG_CMSG_COMPAT & flags)
{
if (get_compat_msghdr(&msg_sys, msg_compat))
return -EFAULT;
}
else
{
err = copy_msghdr_from_user(&msg_sys, msg);
if (err)
return err;
}

sock = sockfd_lookup_light(fd, &err, &fput_needed);
if (!sock)
goto out;

err = -EMSGSIZE;
if (msg_sys.msg_iovlen > UIO_MAXIOV)
goto out_put;

/* Check whether to allocate the iovec area */
err = -ENOMEM;
iov_size = msg_sys.msg_iovlen * sizeof(struct iovec);
if (msg_sys.msg_iovlen > UIO_FASTIOV)
{
iov = sock_kmalloc(sock->sk, iov_size, GFP_KERNEL);
if (!iov)
goto out_put;
}

/* Save the user-mode address (verify_iovec will change the
* kernel msghdr to use the kernel address space)
*/
uaddr = (__force void __user *)msg_sys.msg_name;
uaddr_len = COMPAT_NAMELEN(msg);
if (MSG_CMSG_COMPAT & flags)
err = verify_compat_iovec(&msg_sys, iov, (struct sockaddr *)&addr, VERIFY_WRITE);
else
err = verify_iovec(&msg_sys, iov, (struct sockaddr *)&addr, VERIFY_WRITE);
if (err < 0)
goto out_freeiov;
total_len = err;

cmsg_ptr = (unsigned long)msg_sys.msg_control;
msg_sys.msg_flags = flags & (MSG_CMSG_CLOEXEC|MSG_CMSG_COMPAT);

/* We assume all kernel code knows the size of sockaddr_storage */
msg_sys.msg_namelen = 0;

if (sock->file->f_flags & O_NONBLOCK)
flags |= MSG_DONTWAIT;
err = sock_recvmsg(sock, &msg_sys, total_len, flags);
if (err < 0)
goto out_freeiov;
len = err;

if (uaddr != NULL)
{
err = move_addr_to_user((struct sockaddr *)&addr, msg_sys.msg_namelen, uaddr, uaddr_len);
if (err < 0)
goto out_freeiov;
}
err = __put_user((msg_sys.msg_flags & ~MSG_CMSG_COMPAT), COMPAT_FLAGS(msg));
if (err)
goto out_freeiov;
if (MSG_CMSG_COMPAT & flags)
err = __put_user((unsigned long)msg_sys.msg_control - cmsg_ptr, &msg_compat->msg_controllen);
else
err = __put_user((unsigned long)msg_sys.msg_control - cmsg_ptr, &msg->msg_controllen);
if (err)
goto out_freeiov;
err = len;

out_freeiov:
if (iov != iovstack)
sock_kfree_s(sock->sk, iov, iov_size);
out_put:
fput_light(sock->file, fput_needed);
out:
return err;
}
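
The user-space counterpart of this receive path is a recvmsg() call followed by walking the returned buffer with the NLMSG_OK/NLMSG_NEXT macros, since one datagram may carry several netlink messages back to back. A minimal sketch (assuming a text payload and an already-bound socket):

/* Sketch: receiving netlink messages in userspace and iterating over them. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/netlink.h>

#define RECV_BUF_SIZE 8192

int nl_recv_once(int fd)
{
    char buf[RECV_BUF_SIZE];
    struct sockaddr_nl src;
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg = {
        .msg_name = &src,
        .msg_namelen = sizeof(src),
        .msg_iov = &iov,
        .msg_iovlen = 1,
    };

    int len = recvmsg(fd, &msg, 0);
    if (len <= 0)
        return -1;

    /* one recvmsg() may return several netlink messages back to back */
    for (struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
         NLMSG_OK(nlh, len);
         nlh = NLMSG_NEXT(nlh, len)) {
        if (nlh->nlmsg_type == NLMSG_ERROR) {
            fprintf(stderr, "netlink error message received\n");
            continue;
        }
        if (nlh->nlmsg_type == NLMSG_DONE)   /* end of a multi-part dump */
            break;
        printf("payload: %s\n", (char *)NLMSG_DATA(nlh));  /* assumes a text payload */
    }
    return 0;
}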

0x2: Kernel Space

Using netlink inside the kernel requires a dedicated API, which is completely different from how userspace applications use netlink. If a user needs to add a new netlink protocol type, it has to be done by modifying linux/netlink.h.

That said, the current netlink implementation already includes the general-purpose protocol type NETLINK_GENERIC, which users can use directly without adding a new protocol type.

1. netlink_kernel_create

In the kernel, a netlink socket is created by calling netlink_kernel_create(). The classic prototype is shown below; note that the exact signature varies by kernel version (2.6.x kernels add the network namespace, multicast group count, mutex and module parameters, and 3.6+ kernels take a struct netlink_kernel_cfg instead), so check the headers of the kernel you are targeting.

struct sock *netlink_kernel_create(int unit, void (*input)(struct sock *sk, int len));

When sending a netlink message from inside the kernel, the source and destination addresses also have to be set. linux/netlink.h defines a macro, NETLINK_CB(), that accesses the netlink control block stored in the skb:

struct netlink_skb_parms
{
    /*
    Skb credentials:
    struct scm_creds
    {
        // pid is the sender's process ID, i.e. the source address; for the kernel it is 0
        u32 pid;
        kuid_t uid;
        kgid_t gid;
    };
    */
    struct scm_creds creds;

    /*
    portid is the receiver's process ID, i.e. the destination address; if the destination is a
    multicast group or the kernel, it is set to 0. dst_group is the destination group address;
    if the destination is a single process or the kernel, dst_group should be set to 0.
    */
    __u32 portid;
    __u32 dst_group;
    __u32 flags;
    struct sock *sk;
};

#define NETLINK_CB(skb) (*(struct netlink_skb_parms*)&((skb)->cb))

In the kernel, a module calls netlink_unicast() to send a unicast message:

int netlink_unicast(struct sock *sk, struct sk_buff *skb, u32 pid, int nonblock);
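
Putting netlink_kernel_create(), NETLINK_CB() and netlink_unicast() together, the following is a hedged sketch of a 2.6.32-era kernel module: it registers an input callback for the illustrative protocol number NETLINK_MYTEST (taken from the example in section 1) and echoes a reply back to whichever userspace pid sent it a message. Field and function signatures (NETLINK_CB(skb).pid, the six-argument netlink_kernel_create()) differ on newer kernels, so treat this as a sketch rather than version-accurate code.

/* Sketch (2.6.32-era API): kernel module that receives a netlink message and replies by unicast. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <net/netlink.h>
#include <net/net_namespace.h>
#include <net/sock.h>

#define NETLINK_MYTEST 17   /* illustrative protocol number, as in the text above */

static struct sock *nl_sk;

static void nl_input(struct sk_buff *skb)
{
    struct nlmsghdr *nlh = nlmsg_hdr(skb);
    u32 sender_pid = NETLINK_CB(skb).pid;        /* source address of the userspace sender */
    char *reply = "hello from kernel";
    int payload = strlen(reply) + 1;
    struct sk_buff *skb_out;
    struct nlmsghdr *out_nlh;

    printk(KERN_INFO "netlink msg from pid %u: %s\n", sender_pid, (char *)nlmsg_data(nlh));

    skb_out = nlmsg_new(payload, GFP_KERNEL);
    if (!skb_out)
        return;
    out_nlh = nlmsg_put(skb_out, 0, 0, NLMSG_DONE, payload, 0);
    memcpy(nlmsg_data(out_nlh), reply, payload);
    NETLINK_CB(skb_out).dst_group = 0;           /* unicast, not multicast */

    netlink_unicast(nl_sk, skb_out, sender_pid, MSG_DONTWAIT);
}

static int __init nl_test_init(void)
{
    nl_sk = netlink_kernel_create(&init_net, NETLINK_MYTEST, 0,
                                  nl_input, NULL, THIS_MODULE);
    return nl_sk ? 0 : -ENOMEM;
}

static void __exit nl_test_exit(void)
{
    netlink_kernel_release(nl_sk);
}

module_init(nl_test_init);
module_exit(nl_test_exit);
MODULE_LICENSE("GPL");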

2. kernel_recvmsg

/source/net/socket.c

int kernel_recvmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec, size_t num, size_t size, int flags)
{
mm_segment_t oldfs = get_fs();
int result;

set_fs(KERNEL_DS);
/*
* the following is safe, since for compiler definitions of kvec and
* iovec are identical, yielding the same in-core layout and alignment
*/
msg->msg_iov = (struct iovec *)vec, msg->msg_iovlen = num;
result = sock_recvmsg(sock, msg, size, flags);
set_fs(oldfs);
return result;
}

From the kernel's point of view, at this point the packet has already been copied into netlink's kernel-side buffer.
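
kernel_recvmsg() and kernel_sendmsg() are generic in-kernel helpers for reading from and writing to a struct socket using kernel-space buffers (struct kvec instead of struct iovec); they are not netlink-specific. A hedged usage sketch of the receive helper:

/* Sketch: reading from an in-kernel struct socket with kernel_recvmsg(). */
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <linux/string.h>

static int read_from_ksocket(struct socket *sock, void *buf, size_t buflen)
{
    struct msghdr msg;
    struct kvec vec;

    memset(&msg, 0, sizeof(msg));
    vec.iov_base = buf;          /* kernel-space buffer, hence kvec rather than iovec */
    vec.iov_len = buflen;

    /* returns the number of bytes received, or a negative error code */
    return kernel_recvmsg(sock, &msg, &vec, 1, buflen, MSG_DONTWAIT);
}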

3. kernel_sendmsg

/source/net/socket.c

int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec, size_t num, size_t size)
{
mm_segment_t oldfs = get_fs();
int result;

set_fs(KERNEL_DS);
/*
* the following is safe, since for compiler definitions of kvec and
* iovec are identical, yielding the same in-core layout and alignment
*/
msg->msg_iov = (struct iovec *)vec;
msg->msg_iovlen = num;
result = sock_sendmsg(sock, msg, size);
set_fs(oldfs);
return result;
}

Relevant Link:

http://www.cnblogs.com/iceocean/articles/1594195.html
http://blog.csdn.net/zcabcd123/article/details/8272423
http://www.opensource.apple.com/source/Heimdal/Heimdal-247.9/lib/roken/sendmsg.c
https://fossies.org/dox/glibc-2.21/sysdeps_2mach_2hurd_2sendmsg_8c_source.html
http://lxr.free-electrons.com/source/net/socket.c

3. Monitoring System Process Behavior with a NETLINK_CONNECTOR Socket

# -*- coding: utf-8 -*-

import socket
import os
import struct
import errno
from select import select
import psutil
from datetime import datetime
import pwd

##################################################################### utils #####################################################################
class BaseStruct(object):
    fields = ()

    def _fill_struct(self, data):
        for k, v in zip(self.fields, data):
            setattr(self, k, v)

class DictWrapper(dict):
    def __getattr__(self, attr):
        return self[attr]
##################################################################### utils #####################################################################

##################################################################### netlink #####################################################################
NETLINK_CONNECTOR = 11

NLMSG_NOOP = 0x1      # Nothing
NLMSG_ERROR = 0x2     # Error
NLMSG_DONE = 0x3      # End of a dump
NLMSG_OVERRUN = 0x4   # Data lost

# struct nlmsghdr
# {
#     __u32 nlmsg_len;    /* Length of message including header */
#     __u16 nlmsg_type;   /* Message content */
#     __u16 nlmsg_flags;  /* Additional flags */
#     __u32 nlmsg_seq;    /* Sequence number */
#     __u32 nlmsg_pid;    /* Sending process port ID */
# };
nlmsghdr = struct.Struct("=I2H2I")

def netlink_pack(_type, flags, msg):
    """
    Put a netlink header on a message.
    The msg parameter is assumed to be a pre-struct-packed data block.
    We don't care about seq for now.
    """
    _len = len(msg) + nlmsghdr.size
    seq = 0
    return nlmsghdr.pack(_len, _type, flags, seq, os.getpid()) + msg

def unpack_hdr(data):
    return DictWrapper(
        zip(("len", "type", "flags", "seq", "pid"),
            nlmsghdr.unpack(data[:nlmsghdr.size])))
##################################################################### netlink #####################################################################

##################################################################### connector #####################################################################
CN_IDX_PROC = 0x1
CN_VAL_PROC = 0x1

# struct cb_id {
#     __u32 idx;
#     __u32 val;
# };
#
# struct cn_msg {
#     struct cb_id id;
#
#     __u32 seq;
#     __u32 ack;
#
#     __u16 len;      /* Length of the following data */
#     __u16 flags;
#     __u8 data[];
# };
#
# The data member is left out of this declaration since it may be of
# varying length. This means that unpacking of a complete message will
# have to be incremental and done solely by the decoder of the
# innermost data (in my case pec_decode() in pec.py).
cn_msg = struct.Struct("=4I2H")

def pack_msg(cb_idx, cb_val, flags, data):
    """
    Pack a cn_msg struct with the passed in data.
    The data parameter is assumed to be a pre-struct-packed data block.
    We don't care about seq or ack for now.
    """
    seq = ack = 0
    _len = len(data)
    return cn_msg.pack(cb_idx, cb_val, seq, ack, _len, flags) + data

def unpack_msg(data):
    """
    Peel off netlink header and extract the message (including payload)
    from data. This will return a DictWrapper object.
    """
    data = data[:cn_msg.size]  # Slice off trailing data
    return DictWrapper(
        zip(("cb_idx", "cb_val", "seq", "ack", "len", "flags"),
            cn_msg.unpack(data)))
##################################################################### connector #####################################################################

############################################ Process ############################################
def getUidName(uid):
    try:
        euidname = pwd.getpwuid(uid).pw_name
        if euidname:
            return euidname
    except Exception, e:
        return ''

def check_file_exist(file_path):
    return os.path.isfile(file_path)

def get_target_processinfo_byid(pid):
    if not check_file_exist('/proc/%s/cmdline' % pid):
        return
    p_info = ''
    try:
        p_info = psutil.Process(pid)
        if not p_info:
            return
        pname = p_info.name()                      # process name
        pexe = p_info.exe()                        # path of the process binary
        pcmdline = ''
        pcmdline_list = p_info.cmdline()           # full command line of the process
        if pcmdline_list != None:
            for arg in pcmdline_list:
                pcmdline += ' ' + arg
        pstatus = p_info.status()                  # process state
        pcreate_time = p_info.create_time()        # process creation time
        puid = p_info.uids()[1]                    # effective uid (index assumed; elided in the original)
        pgid = p_info.gids()[1]                    # effective gid (index assumed; elided in the original)
        pmemory_percent = p_info.memory_percent()  # memory utilisation of the process
        pnum_threads = p_info.num_threads()        # number of threads in the process
        pcpu_percent = p_info.cpu_percent()        # CPU utilisation of the process

        puid_name = getUidName(puid)
        pgid_name = getUidName(pgid)

        return {
            'pid': pid,
            'pname': pname,
            'pexe': pexe,
            'pcmdline': pcmdline,
            'pstatus': pstatus,
            'pcreate_time': datetime.fromtimestamp(pcreate_time).strftime('%Y-%m-%d %H:%M:%S'),
            'puid': puid,
            'puid_name': puid_name,
            'pgid': pgid,
            'pgid_name': pgid_name,
            'pmemory_percent': pmemory_percent,
            'pcpu_percent': pcpu_percent,
            'pnum_threads': pnum_threads
        }
    except Exception, e:
        return
############################################ Process ############################################

##################################################################### pec #####################################################################
PROC_CN_MCAST_LISTEN = 0x1
PROC_CN_MCAST_IGNORE = 0x2

PROC_EVENT_NONE = 0x00000000
PROC_EVENT_FORK = 0x00000001
PROC_EVENT_EXEC = 0x00000002
PROC_EVENT_UID = 0x00000004
PROC_EVENT_GID = 0x00000040
PROC_EVENT_SID = 0x00000080
PROC_EVENT_PTRACE = 0x00000100
PROC_EVENT_COMM = 0x00000200
PROC_EVENT_EXIT = 0x80000000

process_events = {"PROC_EVENT_NONE": PROC_EVENT_NONE,
                  "PROC_EVENT_FORK": PROC_EVENT_FORK,
                  "PROC_EVENT_EXEC": PROC_EVENT_EXEC,
                  "PROC_EVENT_UID": PROC_EVENT_UID,
                  "PROC_EVENT_GID": PROC_EVENT_GID,
                  "PROC_EVENT_SID": PROC_EVENT_SID,
                  "PROC_EVENT_PTRACE": PROC_EVENT_PTRACE,
                  "PROC_EVENT_COMM": PROC_EVENT_COMM,
                  "PROC_EVENT_EXIT": PROC_EVENT_EXIT}

process_events_rev = dict(zip(process_events.values(),
                              process_events.keys()))

base_proc_event = struct.Struct("=2IL")

event_struct_map = {PROC_EVENT_NONE: struct.Struct("=I"),
                    PROC_EVENT_FORK: struct.Struct("=4I"),
                    PROC_EVENT_EXEC: struct.Struct("=2I"),
                    PROC_EVENT_UID: struct.Struct("=4I"),
                    PROC_EVENT_GID: struct.Struct("=4I"),
                    PROC_EVENT_SID: struct.Struct("=2I"),
                    PROC_EVENT_PTRACE: struct.Struct("=4I"),
                    PROC_EVENT_COMM: struct.Struct("=2I16s"),
                    PROC_EVENT_EXIT: struct.Struct("=4I")}

process_list = []

def pec_bind(s):
    """
    Bind a socket to the Process Event Connector.
    This will pass on any socket.error exception raised. The most
    common one will be EPERM since you need root privileges to
    bind to the connector.
    """
    s.bind((os.getpid(), CN_IDX_PROC))

def pec_control(s, listen=False):
    """
    Notify PEC if we want event notifications on this socket or not.
    """
    pec_ctrl_data = struct.Struct("=I")
    if listen:
        action = PROC_CN_MCAST_LISTEN
    else:
        action = PROC_CN_MCAST_IGNORE
    nl_msg = netlink_pack(
        NLMSG_DONE, 0, pack_msg(
            CN_IDX_PROC, CN_VAL_PROC, 0,
            pec_ctrl_data.pack(action)))
    s.send(nl_msg)

def pec_unpack(data):
    """
    Peel off the wrapping layers from the data. This will return
    a DictWrapper object.
    """
    nl_hdr = unpack_hdr(data)
    if nl_hdr.type != NLMSG_DONE:
        # Ignore all other types of messages
        return
    # Slice off header data and trailing data (if any)
    data = data[nlmsghdr.size:nl_hdr.len]
    #msg = connector.unpack_msg(data)
    # .. and away goes the connector_message, leaving just the payload
    data = data[cn_msg.size:]
    event = list(base_proc_event.unpack(data[:base_proc_event.size]))
    ev_data_struct = event_struct_map.get(event[0])
    event_data = ev_data_struct.unpack(
        data[base_proc_event.size:base_proc_event.size + ev_data_struct.size])

    fields = ["what", "cpu", "timestamp_ns"]
    if event[0] == PROC_EVENT_NONE:
        fields.append("err")
        event[0] = -1  # value elided in the original; -1 used as a marker
    elif event[0] == PROC_EVENT_FORK:
        fields += ["parent_pid", "parent_tgid", "child_pid", "child_tgid"]
    elif event[0] == PROC_EVENT_EXEC:
        fields += ["process_pid", "process_tgid"]
    elif event[0] == PROC_EVENT_UID:
        fields += ["process_pid", "process_tgid", "ruid", "rgid"]
    elif event[0] == PROC_EVENT_GID:
        fields += ["process_pid", "process_tgid", "euid", "egid"]
    elif event[0] == PROC_EVENT_SID:
        fields += ["process_pid", "process_tgid"]
    elif event[0] == PROC_EVENT_PTRACE:
        fields += ["process_pid", "process_tgid", "tracer_pid", "tracer_tgid"]
    elif event[0] == PROC_EVENT_COMM:
        fields += ["process_pid", "process_tgid", "comm"]
    elif event[0] == PROC_EVENT_EXIT:
        fields += ["process_pid", "process_tgid", "exit_code", "exit_signal"]

    return DictWrapper(zip(fields, tuple(event) + event_data))

def register_process(pid=None, process_name=None, events=(), action=None):
    """
    Register a callback for processes of a specific name or
    by pid. pec_loop() will call this callback for any processes
    matching.
    If no events is specified, all events related to
    that pid will call the callback. The action can be any callable.
    One argument will be passed to the callable, the PEC message,
    as returned by pec_unpack().
    """
    for x in events:
        if x not in process_events:
            raise Exception("No such process event: 0x%08x" % (int(x),))
    process_list.append({'pid': pid,
                         'process_name': process_name,
                         'events': events})

def pec_loop(plist=process_list):
    s = socket.socket(socket.AF_NETLINK,
                      socket.SOCK_DGRAM,
                      NETLINK_CONNECTOR)
    # Netlink sockets are connected with pid and message group mask,
    # message groups are for multicast protocols (like our process event
    # connector).
    try:
        pec_bind(s)
    except socket.error, (_errno, errmsg):
        if _errno == errno.EPERM:
            raise Exception("You don't have permission to bind to the "
                            "process event connector. Try sudo.")

    pec_control(s, listen=True)

    while True:
        (readable, w, e) = select([s], [], [])
        buf = readable[0].recv(256)  # receive buffer size assumed (proc events are small)
        event = pec_unpack(buf)
        if event is None:
            # not an NLMSG_DONE message, nothing to report
            continue
        event["what"] = process_events_rev.get(event.what)
        print event
##################################################################### pec #####################################################################

def printCmdlineInfo(pid, ppid, event):
    if pid:
        pid_info = get_target_processinfo_byid(pid)
        ppid_info = get_target_processinfo_byid(ppid)
        if pid_info and ppid_info:
            print dict(
                what=event["what"],
                cmdline=pid_info['pcmdline'],
                processId=pid_info['pid'],
                name=pid_info['pname'],
                Caption=pid_info['pexe'],
                ExecutablePath=pid_info['pcmdline'],
                SessionId=0,  # value elided in the original; 0 used as a placeholder
                processOwner_username=pid_info['puid_name'],
                processOwner_domainname=pid_info['pgid_name'],
                processOwnerSid=pid_info['puid'],
                ParentProcessId=ppid_info['pid'],
                ParentProcessName=ppid_info['pname'],
                ParentProcessCmdline=ppid_info['pcmdline']
            )

procInfo = {}

def filter_target_event(event):
    if event["what"] == 'PROC_EVENT_FORK':
        pid = event["child_tgid"]
        ppid = event["parent_tgid"]
        procInfo[pid] = ppid
        printCmdlineInfo(pid, ppid, event)
    elif event["what"] == 'PROC_EVENT_EXEC':
        pid = event["process_tgid"]
        if pid in procInfo.keys():
            ppid = procInfo[pid]
            printCmdlineInfo(pid, ppid, event)
    elif event["what"] == 'PROC_EVENT_EXIT':
        print "PROC_EVENT_EXIT: ", event
    elif event["what"] in ['PROC_EVENT_UID', 'PROC_EVENT_GID', 'PROC_EVENT_SID', 'PROC_EVENT_PTRACE']:
        ppid = None
        pid = event["process_tgid"]
        if pid in procInfo.keys():
            ppid = procInfo[pid]
        if ppid:
            printCmdlineInfo(pid, ppid, event)
        else:
            printCmdlineInfo(pid, pid, event)
    elif event["what"] == 'PROC_EVENT_COMM':
        print "PROC_EVENT_COMM: ", event

def start():
    # Create Netlink socket
    s = socket.socket(socket.AF_NETLINK,
                      socket.SOCK_DGRAM,
                      NETLINK_CONNECTOR)
    # Netlink sockets are connected with pid and message group mask,
    # message groups are for multicast protocols (like our process event
    # connector).
    try:
        s.bind((os.getpid(), CN_IDX_PROC))
    except socket.error as (_errno, errmsg):
        if _errno == errno.EPERM:
            print ("You don't have permission to bind to the "
                   "process event connector. Try sudo.")
            raise SystemExit()
        raise

    pec_control(s, listen=True)

    while True:
        (readable, w, e) = select([s], [], [])
        buf = readable[0].recv(256)  # receive buffer size assumed (proc events are small)
        event = pec_unpack(buf)
        if event is None:
            continue
        event["what"] = process_events_rev.get(event.what)
        filter_target_event(event)

    s.close()

if __name__ == "__main__":
    start()

0x1: Monitoring Process Creation

The fork.py test program is as follows:

# multiprocessing.py
import os

print 'Process (%s) start...' % os.getpid()
pid = os.fork()
if pid == 0:
    print 'I am child process (%s) and my parent is %s.' % (os.getpid(), os.getppid())
else:
    print 'I (%s) just created a child process (%s).' % (os.getpid(), pid)

When a new program is launched, both a fork and an exec event are observed.

For Python's multiprocessing library, which creates child processes purely via fork(), only a fork event is observed, with no exec event.

0x2: Monitoring Process uid/gid/sid Changes

Run the following command:

sudo ifconfig

0x3: Monitoring a Process Being Traced with ptrace

Run the following command:

strace who

Relevant Link:

https://github.com/LittleHann/proc_events/blob/master/proc_events/netlink.py
https://github.com/LittleHann/proc_events/blob/master/proc_events/connector.py
https://github.com/LittleHann/proc_events/blob/master/proc_events/pec.py
https://blog.csdn.net/yidunmarket/article/details/96742658
