Readers familiar with TCP programming probably know that, on both the server side and the client side, we have to take TCP's underlying sticky-packet/packet-splitting behavior into account whenever we read or send messages. This chapter begins with a brief introduction to the basics of TCP sticky packets and packet splitting, then simulates a case where ignoring them breaks the program, and finally uses a corrected example to explore how Netty solves the problem. If you are already familiar with TCP sticky packets and packet splitting, feel free to jump straight to the code walkthrough sections to see how Netty solves it. This chapter covers the following topics:

The basics of TCP sticky packets / packet splitting

A failure case that does not account for TCP sticky packets / packet splitting

Using Netty to solve the half-packet read problem

4.1    TCP Sticky Packets / Packet Splitting

TCP is a stream protocol. A stream is a run of data with no boundaries: think of water flowing in a river — it is one continuous body with no dividing lines in it. The TCP layer does not understand the meaning of the upper-layer business data; it divides the data into packets according to the actual state of the TCP buffers. So what the application regards as one complete packet may be split by TCP into several packets for sending, and several small packets may equally be bundled into one larger packet and sent together. This is the so-called TCP sticky-packet and packet-splitting problem.

4.1.1    The TCP Sticky-Packet/Packet-Splitting Problem Explained

We can illustrate the sticky-packet and packet-splitting problem with a diagram; the sticky-packet case is shown in Figure 4-1.

Suppose the client sends two packets, D1 and D2, to the server. Because the number of bytes the server reads in one pass is not deterministic, the following four situations may occur.
(1) The server reads the two packets in two separate passes, D1 and then D2; no sticking or splitting occurs.
(2) The server receives both packets in a single read, with D1 and D2 glued together; this is known as TCP packet sticking.
(3) The server reads the two packets in two passes: the first read returns the complete D1 plus part of D2, and the second read returns the remainder of D2; this is known as TCP packet splitting.
(4) The server reads the two packets in two passes: the first read returns part of D1, and the second read returns the remainder of D1 plus the whole of D2.
If the server's TCP receive window is very small while D1 and D2 are relatively large, a fifth situation is also possible: the server needs several reads to receive D1 and D2 completely, with the packets being split multiple times along the way.

4.1.2    Why TCP Sticky Packets / Packet Splitting Occur

The problem has three causes, listed below.

(1) The application writes a block of bytes larger than the socket send buffer.
(2) TCP segmentation is performed according to the MSS.
(3) The payload of an Ethernet frame exceeds the MTU, triggering IP fragmentation.

4.1.3    Strategies for Solving the Sticky-Packet Problem

Because the TCP layer cannot understand the upper-layer business data, it cannot guarantee at that level that packets will not be split or recombined; the problem can only be solved by the design of the upper-layer application protocol. Based on the mainstream industry protocols, the solutions can be summarized as follows.
(1) Fixed-length messages, e.g. every message is exactly 200 bytes, padded with spaces when shorter.
(2) A carriage return/line feed appended to the end of each packet as a delimiter, as the FTP protocol does.
(3) Splitting a message into a header and a body, where the header carries a field for the total message length (or the body length); a common design uses an int32 as the first header field to carry the total length (a minimal sketch of this strategy follows the list).
(4) More complex application-layer protocols.
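As a hedged illustration of strategy (3) — this is not code from the book's example, and the class and method names are assumptions made only for this sketch — the Java snippet below shows how a sender could prepend an int32 length field using Netty's ByteBuf, and how a receiver has to wait when only half a frame has arrived:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class LengthFieldFraming {

    /** Encode one frame as [ 4-byte body length ][ body bytes ]. */
    public static ByteBuf encode(byte[] body) {
        ByteBuf frame = Unpooled.buffer(4 + body.length);
        frame.writeInt(body.length);   // header: length of the body
        frame.writeBytes(body);        // body
        return frame;
    }

    /** Decode one frame if the buffer holds a complete one; otherwise return null and wait. */
    public static byte[] decode(ByteBuf in) {
        if (in.readableBytes() < 4) {
            return null;                   // the length field itself is not complete yet
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();         // half packet: roll back and wait for more data
            return null;
        }
        byte[] body = new byte[length];
        in.readBytes(body);
        return body;
    }
}

Netty's own LengthFieldPrepender and LengthFieldBasedFrameDecoder implement this idea generically, so in practice you rarely need to hand-roll it.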

With the basics of TCP sticky packets and packet splitting covered, let's now use real example programs to see how Netty's half-packet decoders solve the problem.

4.2    A Failure Case Caused by Ignoring TCP Sticky Packets

In the earlier time-server examples we stressed more than once that half-packet reads were not handled. That usually goes unnoticed in functional testing, but once the load goes up or large messages are sent, sticky-packet/packet-splitting problems appear. If the code does not account for them, decoding often becomes misaligned or simply fails and the program stops working. Below we take the code from Section 3.1, reproduce the failure, and then see how to use Netty's half-packet decoders correctly to solve the TCP sticky-packet/packet-splitting problem.

4.2.1    Modifying TimeServer
4-1    Netty time server: server side

1.TimeServer

package lqy2_nianbao_fault_82;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/**
 * @author lilinfeng
 * @date 2014-02-14
 * @version 1.0
 */
public class TimeServer {

    public void bind(int port) throws Exception {
        // Configure the server-side NIO thread groups
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, 1024)
                    .childHandler(new ChildChannelHandler());

            // Bind the port and wait synchronously for success
            ChannelFuture f = b.bind(port).sync();

            // Wait until the server socket is closed
            f.channel().closeFuture().sync();
        } finally {
            // Shut down gracefully, releasing the thread pools
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    private class ChildChannelHandler extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel arg0) throws Exception {
            arg0.pipeline().addLast(new TimeServerHandler());
        }
    }

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        int port = 8080;
        if (args != null && args.length > 0) {
            try {
                port = Integer.valueOf(args[0]);
            } catch (NumberFormatException e) {
                // Use the default value
            }
        }
        new TimeServer().bind(port);
    }
}

2.TimeServerHandler

package lqy2_nianbao_fault_82;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;

/**
 * @author lilinfeng
 * @date 2014-02-14
 * @version 1.0
 */
public class TimeServerHandler extends ChannelHandlerAdapter {

    private int counter;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg)
            throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] req = new byte[buf.readableBytes()];
        buf.readBytes(req);
        String body = new String(req, "UTF-8").substring(0, req.length
                - System.getProperty("line.separator").length());
        System.out.println("The time server receive order : " + body
                + " ; the counter is : " + ++counter);
        String currentTime = "QUERY TIME ORDER".equalsIgnoreCase(body) ? new java.util.Date(
                System.currentTimeMillis()).toString() : "BAD ORDER";
        currentTime = currentTime + System.getProperty("line.separator");
        ByteBuf resp = Unpooled.copiedBuffer(currentTime.getBytes());
        ctx.writeAndFlush(resp);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
    }
}

Each time a message is read, the counter is incremented and a response is sent to the client. By design, the total number of messages the server receives should equal the number the client sends, and each request, after the line separator is stripped, should read "QUERY TIME ORDER". Next, let's look at the changes on the client side.

3.TimeClient

package lqy2_nianbao_fault_82;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

/**
 * @author lilinfeng
 * @date 2014-02-14
 * @version 1.0
 */
public class TimeClient {

    public void connect(int port, String host) throws Exception {
        // Configure the client-side NIO thread group
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioSocketChannel.class)
                    .option(ChannelOption.TCP_NODELAY, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch)
                                throws Exception {
                            ch.pipeline().addLast(new TimeClientHandler());
                        }
                    });

            // Initiate the asynchronous connect operation
            ChannelFuture f = b.connect(host, port).sync();

            // Wait until the client link is closed
            f.channel().closeFuture().sync();
        } finally {
            // Shut down gracefully, releasing the NIO thread group
            group.shutdownGracefully();
        }
    }

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        int port = 8080;
        if (args != null && args.length > 0) {
            try {
                port = Integer.valueOf(args[0]);
            } catch (NumberFormatException e) {
                // Use the default value
            }
        }
        new TimeClient().connect(port, "127.0.0.1");
    }
}

4.TimeClientHandler

package lqy2_nianbao_fault_82;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;

import java.util.logging.Logger;

/**
 * @author lilinfeng
 * @date 2014-02-14
 * @version 1.0
 */
public class TimeClientHandler extends ChannelHandlerAdapter {

    private static final Logger logger = Logger
            .getLogger(TimeClientHandler.class.getName());

    private int counter;

    private byte[] req;

    /**
     * Creates a client-side handler.
     */
    public TimeClientHandler() {
        req = ("QUERY TIME ORDER" + System.getProperty("line.separator"))
                .getBytes();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ByteBuf message = null;
        for (int i = 0; i < 100; i++) {
            message = Unpooled.buffer(req.length);
            message.writeBytes(req);
            ctx.writeAndFlush(message);
        }
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg)
            throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] req = new byte[buf.readableBytes()];
        buf.readBytes(req);
        String body = new String(req, "UTF-8");
        System.out.println("Now is : " + body + " ; the counter is : "
                + ++counter);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Release resources
        logger.warning("Unexpected exception from downstream : "
                + cause.getMessage());
        ctx.close();
    }
}

The key change is in channelActive: once the link between client and server is established, the client sends 100 messages in a loop, flushing after each one so that every message is written to the Channel. By design the server should therefore receive 100 time-query requests. In channelRead, the client increments and prints its counter each time it receives a response from the server; by design it should print the server's system time 100 times. The next section shows whether the actual results match this design.

4.2.3    Test Results

Run the server and the client separately; the output is as follows.

Server side:

  1 The time server receive order : QUERY TIME ORDER
2 QUERY TIME ORDER
3 QUERY TIME ORDER
4 QUERY TIME ORDER
5 QUERY TIME ORDER
6 QUERY TIME ORDER
7 QUERY TIME ORDER
8 QUERY TIME ORDER
9 QUERY TIME ORDER
10 QUERY TIME ORDER
11 QUERY TIME ORDER
12 QUERY TIME ORDER
13 QUERY TIME ORDER
14 QUERY TIME ORDER
15 QUERY TIME ORDER
16 QUERY TIME ORDER
17 QUERY TIME ORDER
18 QUERY TIME ORDER
19 QUERY TIME ORDER
20 QUERY TIME ORDER
21 QUERY TIME ORDER
22 QUERY TIME ORDER
23 QUERY TIME ORDER
24 QUERY TIME ORDER
25 QUERY TIME ORDER
26 QUERY TIME ORDER
27 QUERY TIME ORDER
28 QUERY TIME ORDER
29 QUERY TIME ORDER
30 QUERY TIME ORDER
31 QUERY TIME ORDER
32 QUERY TIME ORDER
33 QUERY TIME ORDER
34 QUERY TIME ORDER
35 QUERY TIME ORDER
36 QUERY TIME ORDER
37 QUERY TIME ORDER
38 QUERY TIME ORDER
39 QUERY TIME ORDER
40 QUERY TIME ORDER
41 QUERY TIME ORDER
42 QUERY TIME ORDER
43 QUERY TIME ORDER
44 QUERY TIME ORDER
45 QUERY TIME ORDER
46 QUERY TIME ORDER
47 QUERY TIME ORDER
48 QUERY TIME ORDER
49 QUERY TIME ORDER
50 QUERY TIME ORDER
51 QUERY TIME ORDER
52 QUERY TIME ORDER
53 QUERY TIME ORDER
54 QUERY TIME ORDER
55 QUERY TIME ORDER
56 QUERY TIME ORDER
57 QUERY TIME ORD ; the counter is : 1
58 The time server receive order :
59 QUERY TIME ORDER
60 QUERY TIME ORDER
61 QUERY TIME ORDER
62 QUERY TIME ORDER
63 QUERY TIME ORDER
64 QUERY TIME ORDER
65 QUERY TIME ORDER
66 QUERY TIME ORDER
67 QUERY TIME ORDER
68 QUERY TIME ORDER
69 QUERY TIME ORDER
70 QUERY TIME ORDER
71 QUERY TIME ORDER
72 QUERY TIME ORDER
73 QUERY TIME ORDER
74 QUERY TIME ORDER
75 QUERY TIME ORDER
76 QUERY TIME ORDER
77 QUERY TIME ORDER
78 QUERY TIME ORDER
79 QUERY TIME ORDER
80 QUERY TIME ORDER
81 QUERY TIME ORDER
82 QUERY TIME ORDER
83 QUERY TIME ORDER
84 QUERY TIME ORDER
85 QUERY TIME ORDER
86 QUERY TIME ORDER
87 QUERY TIME ORDER
88 QUERY TIME ORDER
89 QUERY TIME ORDER
90 QUERY TIME ORDER
91 QUERY TIME ORDER
92 QUERY TIME ORDER
93 QUERY TIME ORDER
94 QUERY TIME ORDER
95 QUERY TIME ORDER
96 QUERY TIME ORDER
97 QUERY TIME ORDER
98 QUERY TIME ORDER
99 QUERY TIME ORDER
100 QUERY TIME ORDER
101 QUERY TIME ORDER ; the counter is : 2

The server output shows that it received only two messages: the first contains 57 "QUERY TIME ORDER" commands and the second contains 43, exactly 100 in total. What we expected was 100 messages, each containing a single "QUERY TIME ORDER" command. This shows that TCP packet sticking occurred.

The client output is as follows.

Now is : BAD ORDER
BAD ORDER
; the counter is : 1

By design the client should have received 100 messages, each carrying the current system time, but it actually received only one. This is not hard to understand: the server received only 2 request messages, so it sent only 2 responses, and since the requests did not match the query command it returned 2 "BAD ORDER" responses. The client, however, received a single message containing both BAD ORDER responses, which means the server's responses were glued together as well.
Because the example above does not account for TCP sticking and splitting, the program stops working correctly as soon as sticking occurs.


The following sections demonstrate how to solve the TCP sticky-packet problem with Netty's LineBasedFrameDecoder and StringDecoder.

4.3    Solving the TCP Sticky-Packet Problem with LineBasedFrameDecoder

To deal with the half-packet reads and writes caused by TCP sticking and splitting, Netty ships a number of codecs for handling half packets by default. Once you have mastered these classes, the TCP sticky-packet problem becomes trivial — you barely need to think about it — which is something other NIO frameworks and the raw JDK NIO API cannot match.
Below we develop a corrected version of the time server and walk through the actual code, so that you can quickly become familiar with the half-packet decoders.

4.3.1    A TimeServer That Handles TCP Sticky Packets
Let's look at the code first, and then go over the APIs of LineBasedFrameDecoder and StringDecoder.

4-3    Netty time server: server-side TimeServer

 1 package lqy3_nianbao_correct_88;
2
3 import io.netty.bootstrap.ServerBootstrap;
4 import io.netty.channel.ChannelFuture;
5 import io.netty.channel.ChannelInitializer;
6 import io.netty.channel.ChannelOption;
7 import io.netty.channel.EventLoopGroup;
8 import io.netty.channel.nio.NioEventLoopGroup;
9 import io.netty.channel.socket.SocketChannel;
10 import io.netty.channel.socket.nio.NioServerSocketChannel;
11 import io.netty.handler.codec.LineBasedFrameDecoder;
12 import io.netty.handler.codec.string.StringDecoder;
13
14 /**
15 * @author lilinfeng
16 * @date 2014-02-14
17 * @version 1.0
18 */
19 public class TimeServer {
20
21 public void bind(int port) throws Exception {
22 // Configure the server-side NIO thread groups
23 EventLoopGroup bossGroup = new NioEventLoopGroup();
24 EventLoopGroup workerGroup = new NioEventLoopGroup();
25 try {
26 ServerBootstrap b = new ServerBootstrap();
27 b.group(bossGroup, workerGroup)
28 .channel(NioServerSocketChannel.class)
29 .option(ChannelOption.SO_BACKLOG, 1024)
30 .childHandler(new ChildChannelHandler());
31 // Bind the port and wait synchronously for success
32 ChannelFuture f = b.bind(port).sync();
33
34 // Wait until the server socket is closed
35 f.channel().closeFuture().sync();
36 } finally {
37 // Shut down gracefully, releasing the thread pools
38 bossGroup.shutdownGracefully();
39 workerGroup.shutdownGracefully();
40 }
41 }
42
43 private class ChildChannelHandler extends ChannelInitializer<SocketChannel> {
44 @Override
45 protected void initChannel(SocketChannel arg0) throws Exception {
46 arg0.pipeline().addLast(new LineBasedFrameDecoder(1024));
47 arg0.pipeline().addLast(new StringDecoder());
48 arg0.pipeline().addLast(new TimeServerHandler());
49 }
50 }
51
52 /**
53 * @param args
54 * @throws Exception
55 */
56 public static void main(String[] args) throws Exception {
57 int port = 8080;
58 if (args != null && args.length > 0) {
59 try {
60 port = Integer.valueOf(args[0]);
61 } catch (NumberFormatException e) {
62 // Use the default value
63 }
64 }
65 new TimeServer().bind(port);
66 }
67 }

Focus on lines 45~47: two decoders are added in front of the original TimeServerHandler — first a LineBasedFrameDecoder and then a StringDecoder. What these two classes do is explained later; next, let's look at the changes to TimeServerHandler.

4-4    Netty time server: server-side TimeServerHandler

 1 package lqy3_nianbao_correct_88;
2
3 import io.netty.buffer.ByteBuf;
4 import io.netty.buffer.Unpooled;
5 import io.netty.channel.ChannelHandlerAdapter;
6 import io.netty.channel.ChannelHandlerContext;
7
8 /**
9 * @author lilinfeng
10 * @date 2014-02-14
11 * @version 1.0
12 */
13 public class TimeServerHandler extends ChannelHandlerAdapter {
14 private int counter;
15
16 @Override
17 public void channelRead(ChannelHandlerContext ctx, Object msg)
18 throws Exception {
19 String body = (String) msg;
20 System.out.println("The time server receive order : " + body
21 + " ; the counter is : " + ++counter);
22 String currentTime = "QUERY TIME ORDER".equalsIgnoreCase(body) ? new java.util.Date(
23 System.currentTimeMillis()).toString() : "BAD ORDER";
24 currentTime = currentTime + System.getProperty("line.separator");
25 ByteBuf resp = Unpooled.copiedBuffer(currentTime.getBytes());
26 ctx.writeAndFlush(resp);
27 }
28
29 @Override
30 public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
31 ctx.close();
32 }
33 }

Look at lines 19~21: the received msg is already the request message with the line separator stripped. There is no need to handle half-packet reads or to decode the request yourself, so the code is very concise. You may wonder whether this can really work; let's first make the corresponding changes on the client side, run the program and look at the results, and then reveal how it works.

4.3.2    A TimeClient That Handles TCP Sticky Packets

The client-side changes are just as simple; the code is shown below.

Netty time server: client-side TimeClient

 1 package lqy3_nianbao_correct_88;
2
3 import io.netty.bootstrap.Bootstrap;
4 import io.netty.channel.ChannelFuture;
5 import io.netty.channel.ChannelInitializer;
6 import io.netty.channel.ChannelOption;
7 import io.netty.channel.EventLoopGroup;
8 import io.netty.channel.nio.NioEventLoopGroup;
9 import io.netty.channel.socket.SocketChannel;
10 import io.netty.channel.socket.nio.NioSocketChannel;
11 import io.netty.handler.codec.LineBasedFrameDecoder;
12 import io.netty.handler.codec.string.StringDecoder;
13
14 /**
15 * @author lilinfeng
16 * @date 2014-02-14
17 * @version 1.0
18 */
19 public class TimeClient {
20 public void connect(int port, String host) throws Exception {
21 // Configure the client-side NIO thread group
22 EventLoopGroup group = new NioEventLoopGroup();
23 try {
24 Bootstrap b = new Bootstrap();
25 b.group(group).channel(NioSocketChannel.class)
26 .option(ChannelOption.TCP_NODELAY, true)
27 .handler(new ChannelInitializer<SocketChannel>() {
28 @Override
29 public void initChannel(SocketChannel ch)
30 throws Exception {
31 ch.pipeline().addLast(
32 new LineBasedFrameDecoder(1024));
33 ch.pipeline().addLast(new StringDecoder());
34 ch.pipeline().addLast(new TimeClientHandler());
35 }
36 });
37
38 // Initiate the asynchronous connect operation
39 ChannelFuture f = b.connect(host, port).sync();
40
41 // Wait until the client link is closed
42 f.channel().closeFuture().sync();
43 } finally {
44 // Shut down gracefully, releasing the NIO thread group
45 group.shutdownGracefully();
46 }
47 }
48
49 /**
50 * @param args
51 * @throws Exception
52 */
53 public static void main(String[] args) throws Exception {
54 int port = 8080;
55 if (args != null && args.length > 0) {
56 try {
57 port = Integer.valueOf(args[0]);
58 } catch (NumberFormatException e) {
59 // Use the default value
60 }
61 }
62 new TimeClient().connect(port, "127.0.0.1");
63 }
64 }

Lines 31~34 mirror the server side: a LineBasedFrameDecoder and a StringDecoder are added in front of TimeClientHandler. Next, let's look at the changes to TimeClientHandler.

Netty time server: client-side TimeClientHandler

 1 package lqy3_nianbao_correct_88;
2
3 import io.netty.buffer.ByteBuf;
4 import io.netty.buffer.Unpooled;
5 import io.netty.channel.ChannelHandlerAdapter;
6 import io.netty.channel.ChannelHandlerContext;
7
8 import java.util.logging.Logger;
9
10 /**
11 * @author lilinfeng
12 * @date 2014-02-14
13 * @version 1.0
14 */
15 public class TimeClientHandler extends ChannelHandlerAdapter {
16
17 private static final Logger logger = Logger
18 .getLogger(TimeClientHandler.class.getName());
19
20 private int counter;
21
22 private byte[] req;
23
24 /**
25 * Creates a client-side handler.
26 */
27 public TimeClientHandler() {
28 req = ("QUERY TIME ORDER" + System.getProperty("line.separator"))
29 .getBytes();
30 }
31
32 @Override
33 public void channelActive(ChannelHandlerContext ctx) {
34 ByteBuf message = null;
35 for (int i = 0; i < 100; i++) {
36 message = Unpooled.buffer(req.length);
37 message.writeBytes(req);
38 ctx.writeAndFlush(message);
39 }
40 }
41 @Override
42 public void channelRead(ChannelHandlerContext ctx, Object msg)
43 throws Exception {
44 String body = (String) msg;
45 System.out.println("Now is : " + body + " ; the counter is : "
46 + ++counter);
47 }
48
49 @Override
50 public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
51 // Release resources
52 logger.warning("Unexpected exception from downstream : "
53 + cause.getMessage());
54 ctx.close();
55 }
56 }

In lines 44~46 the msg received is already the response message decoded into a String, which is much more concise than the previous code.

In the next section we run the refactored time server and client to see whether they work as designed.

4.3.3    Running the Sticky-Packet-Aware Time Server

To simulate TCP sticking and half packets as far as possible, we use a simple stress test: once the link is established, the client sends 100 messages to the server in a row, and we then examine the server and client output.
Run TimeServer and TimeClient; the results are as follows.

The server output is as follows.

  1 The time server receive order : QUERY TIME ORDER ; the counter is : 1
2 The time server receive order : QUERY TIME ORDER ; the counter is : 2
3 The time server receive order : QUERY TIME ORDER ; the counter is : 3
4 The time server receive order : QUERY TIME ORDER ; the counter is : 4
5 The time server receive order : QUERY TIME ORDER ; the counter is : 5
6 The time server receive order : QUERY TIME ORDER ; the counter is : 6
7 The time server receive order : QUERY TIME ORDER ; the counter is : 7
8 The time server receive order : QUERY TIME ORDER ; the counter is : 8
9 The time server receive order : QUERY TIME ORDER ; the counter is : 9
10 The time server receive order : QUERY TIME ORDER ; the counter is : 10
11 The time server receive order : QUERY TIME ORDER ; the counter is : 11
12 The time server receive order : QUERY TIME ORDER ; the counter is : 12
13 The time server receive order : QUERY TIME ORDER ; the counter is : 13
14 The time server receive order : QUERY TIME ORDER ; the counter is : 14
15 The time server receive order : QUERY TIME ORDER ; the counter is : 15
16 The time server receive order : QUERY TIME ORDER ; the counter is : 16
17 The time server receive order : QUERY TIME ORDER ; the counter is : 17
18 The time server receive order : QUERY TIME ORDER ; the counter is : 18
19 The time server receive order : QUERY TIME ORDER ; the counter is : 19
20 The time server receive order : QUERY TIME ORDER ; the counter is : 20
21 The time server receive order : QUERY TIME ORDER ; the counter is : 21
22 The time server receive order : QUERY TIME ORDER ; the counter is : 22
23 The time server receive order : QUERY TIME ORDER ; the counter is : 23
24 The time server receive order : QUERY TIME ORDER ; the counter is : 24
25 The time server receive order : QUERY TIME ORDER ; the counter is : 25
26 The time server receive order : QUERY TIME ORDER ; the counter is : 26
27 The time server receive order : QUERY TIME ORDER ; the counter is : 27
28 The time server receive order : QUERY TIME ORDER ; the counter is : 28
29 The time server receive order : QUERY TIME ORDER ; the counter is : 29
30 The time server receive order : QUERY TIME ORDER ; the counter is : 30
31 The time server receive order : QUERY TIME ORDER ; the counter is : 31
32 The time server receive order : QUERY TIME ORDER ; the counter is : 32
33 The time server receive order : QUERY TIME ORDER ; the counter is : 33
34 The time server receive order : QUERY TIME ORDER ; the counter is : 34
35 The time server receive order : QUERY TIME ORDER ; the counter is : 35
36 The time server receive order : QUERY TIME ORDER ; the counter is : 36
37 The time server receive order : QUERY TIME ORDER ; the counter is : 37
38 The time server receive order : QUERY TIME ORDER ; the counter is : 38
39 The time server receive order : QUERY TIME ORDER ; the counter is : 39
40 The time server receive order : QUERY TIME ORDER ; the counter is : 40
41 The time server receive order : QUERY TIME ORDER ; the counter is : 41
42 The time server receive order : QUERY TIME ORDER ; the counter is : 42
43 The time server receive order : QUERY TIME ORDER ; the counter is : 43
44 The time server receive order : QUERY TIME ORDER ; the counter is : 44
45 The time server receive order : QUERY TIME ORDER ; the counter is : 45
46 The time server receive order : QUERY TIME ORDER ; the counter is : 46
47 The time server receive order : QUERY TIME ORDER ; the counter is : 47
48 The time server receive order : QUERY TIME ORDER ; the counter is : 48
49 The time server receive order : QUERY TIME ORDER ; the counter is : 49
50 The time server receive order : QUERY TIME ORDER ; the counter is : 50
51 The time server receive order : QUERY TIME ORDER ; the counter is : 51
52 The time server receive order : QUERY TIME ORDER ; the counter is : 52
53 The time server receive order : QUERY TIME ORDER ; the counter is : 53
54 The time server receive order : QUERY TIME ORDER ; the counter is : 54
55 The time server receive order : QUERY TIME ORDER ; the counter is : 55
56 The time server receive order : QUERY TIME ORDER ; the counter is : 56
57 The time server receive order : QUERY TIME ORDER ; the counter is : 57
58 The time server receive order : QUERY TIME ORDER ; the counter is : 58
59 The time server receive order : QUERY TIME ORDER ; the counter is : 59
60 The time server receive order : QUERY TIME ORDER ; the counter is : 60
61 The time server receive order : QUERY TIME ORDER ; the counter is : 61
62 The time server receive order : QUERY TIME ORDER ; the counter is : 62
63 The time server receive order : QUERY TIME ORDER ; the counter is : 63
64 The time server receive order : QUERY TIME ORDER ; the counter is : 64
65 The time server receive order : QUERY TIME ORDER ; the counter is : 65
66 The time server receive order : QUERY TIME ORDER ; the counter is : 66
67 The time server receive order : QUERY TIME ORDER ; the counter is : 67
68 The time server receive order : QUERY TIME ORDER ; the counter is : 68
69 The time server receive order : QUERY TIME ORDER ; the counter is : 69
70 The time server receive order : QUERY TIME ORDER ; the counter is : 70
71 The time server receive order : QUERY TIME ORDER ; the counter is : 71
72 The time server receive order : QUERY TIME ORDER ; the counter is : 72
73 The time server receive order : QUERY TIME ORDER ; the counter is : 73
74 The time server receive order : QUERY TIME ORDER ; the counter is : 74
75 The time server receive order : QUERY TIME ORDER ; the counter is : 75
76 The time server receive order : QUERY TIME ORDER ; the counter is : 76
77 The time server receive order : QUERY TIME ORDER ; the counter is : 77
78 The time server receive order : QUERY TIME ORDER ; the counter is : 78
79 The time server receive order : QUERY TIME ORDER ; the counter is : 79
80 The time server receive order : QUERY TIME ORDER ; the counter is : 80
81 The time server receive order : QUERY TIME ORDER ; the counter is : 81
82 The time server receive order : QUERY TIME ORDER ; the counter is : 82
83 The time server receive order : QUERY TIME ORDER ; the counter is : 83
84 The time server receive order : QUERY TIME ORDER ; the counter is : 84
85 The time server receive order : QUERY TIME ORDER ; the counter is : 85
86 The time server receive order : QUERY TIME ORDER ; the counter is : 86
87 The time server receive order : QUERY TIME ORDER ; the counter is : 87
88 The time server receive order : QUERY TIME ORDER ; the counter is : 88
89 The time server receive order : QUERY TIME ORDER ; the counter is : 89
90 The time server receive order : QUERY TIME ORDER ; the counter is : 90
91 The time server receive order : QUERY TIME ORDER ; the counter is : 91
92 The time server receive order : QUERY TIME ORDER ; the counter is : 92
93 The time server receive order : QUERY TIME ORDER ; the counter is : 93
94 The time server receive order : QUERY TIME ORDER ; the counter is : 94
95 The time server receive order : QUERY TIME ORDER ; the counter is : 95
96 The time server receive order : QUERY TIME ORDER ; the counter is : 96
97 The time server receive order : QUERY TIME ORDER ; the counter is : 97
98 The time server receive order : QUERY TIME ORDER ; the counter is : 98
99 The time server receive order : QUERY TIME ORDER ; the counter is : 99
100 The time server receive order : QUERY TIME ORDER ; the counter is : 100

The client output is as follows.

  1 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 1
2 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 2
3 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 3
4 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 4
5 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 5
6 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 6
7 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 7
8 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 8
9 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 9
10 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 10
11 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 11
12 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 12
13 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 13
14 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 14
15 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 15
16 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 16
17 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 17
18 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 18
19 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 19
20 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 20
21 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 21
22 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 22
23 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 23
24 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 24
25 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 25
26 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 26
27 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 27
28 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 28
29 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 29
30 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 30
31 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 31
32 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 32
33 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 33
34 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 34
35 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 35
36 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 36
37 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 37
38 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 38
39 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 39
40 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 40
41 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 41
42 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 42
43 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 43
44 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 44
45 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 45
46 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 46
47 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 47
48 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 48
49 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 49
50 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 50
51 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 51
52 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 52
53 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 53
54 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 54
55 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 55
56 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 56
57 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 57
58 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 58
59 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 59
60 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 60
61 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 61
62 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 62
63 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 63
64 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 64
65 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 65
66 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 66
67 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 67
68 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 68
69 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 69
70 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 70
71 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 71
72 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 72
73 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 73
74 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 74
75 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 75
76 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 76
77 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 77
78 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 78
79 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 79
80 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 80
81 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 81
82 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 82
83 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 83
84 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 84
85 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 85
86 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 86
87 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 87
88 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 88
89 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 89
90 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 90
91 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 91
92 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 92
93 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 93
94 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 94
95 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 95
96 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 96
97 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 97
98 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 98
99 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 99
100 Now is : Thu Oct 22 11:03:33 CST 2015 ; the counter is : 100

The results match the design exactly, showing that LineBasedFrameDecoder and StringDecoder successfully solve the half-packet read problem caused by TCP sticking. For the user it is enough to add the half-packet-capable handlers to the ChannelPipeline; no extra code is required, so they are very easy to use.

In the next section we analyze why adding LineBasedFrameDecoder and StringDecoder solves the half-packet (or multi-packet) read problem caused by TCP sticking.


4.3.4    How LineBasedFrameDecoder and StringDecoder Work

LineBasedFrameDecoder works by walking through the readable bytes in the ByteBuf and checking for "\n" or "\r\n". When it finds one, it uses that position as the end of a frame: the bytes from the reader index up to that position form one line. It is a decoder that uses the line separator as its end-of-frame marker; it supports decoding with or without the separator included in the frame, and it lets you configure a maximum line length. If no line separator is found after the maximum number of bytes has been read, it throws an exception and discards the bytes read so far.
StringDecoder is very simple: it converts the received object into a String and passes it on to the following handlers. The combination LineBasedFrameDecoder + StringDecoder is a line-oriented text decoder, designed precisely to handle TCP sticking and splitting.
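To make the scanning idea concrete, here is a deliberately simplified sketch of a line-based framer built on Netty 4.x's ByteToMessageDecoder. It is not Netty's actual LineBasedFrameDecoder implementation; the class name and the details (for example, it always strips the terminator) are assumptions made only for this illustration.

import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.TooLongFrameException;

public class SimpleLineFrameDecoder extends ByteToMessageDecoder {

    private final int maxLength;

    public SimpleLineFrameDecoder(int maxLength) {
        this.maxLength = maxLength;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int start = in.readerIndex();
        int end = in.writerIndex();
        for (int i = start; i < end; i++) {
            if (in.getByte(i) == '\n') {
                // Frame length excludes the terminator; tolerate a preceding '\r'.
                int length = i - start;
                if (length > 0 && in.getByte(i - 1) == '\r') {
                    length--;
                }
                out.add(in.readBytes(length));   // one complete line becomes one frame
                in.readerIndex(i + 1);           // skip past the '\n'
                return;                          // the framework calls decode() again if bytes remain
            }
        }
        // No terminator yet: keep accumulating, unless the line is already too long.
        if (end - start > maxLength) {
            in.skipBytes(end - start);
            throw new TooLongFrameException("frame length exceeds " + maxLength);
        }
    }
}

Because ByteToMessageDecoder accumulates incoming bytes and calls decode() repeatedly while readable bytes remain, a line split across several TCP reads is reassembled automatically, and several lines glued into one read are emitted as separate frames.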

A new question may come to mind: what if the messages are not terminated by a line separator at all, or are framed by a length field in the message header instead of CR/LF? Do you have to write your own half-packet decoder? The answer is no — Netty provides several decoders for TCP sticking/splitting to cover different needs, wired into the pipeline as sketched below.
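As a sketch of that flexibility — the decoder classes below are real Netty 4.x codecs, but the "$_" delimiter and the length-field layout are illustrative assumptions, not the protocol used in this chapter — a pipeline can be framed either by a custom delimiter or by a length field:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;
import io.netty.handler.codec.string.StringDecoder;

public class FramingInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Option 1: frames terminated by a custom delimiter, here "$_"
        ch.pipeline().addLast(new DelimiterBasedFrameDecoder(1024,
                Unpooled.copiedBuffer("$_".getBytes())));
        ch.pipeline().addLast(new StringDecoder());

        // Option 2 (use instead of option 1): a 4-byte length field before each message
        // ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(65536, 0, 4, 0, 4));
        // ch.pipeline().addLast(new LengthFieldPrepender(4));

        // business handlers follow...
    }
}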
Chapter 5 covers the delimiter-based decoders; because they are used so widely in real projects, a separate chapter is devoted to their usage and principles.

4.4    Summary

This chapter first explained TCP sticking and splitting and presented the common approaches to solving the problem. We then modified and tested the time server from Chapter 3, first to demonstrate the failure caused by ignoring TCP sticking/splitting, and then to show the fix: using LineBasedFrameDecoder + StringDecoder to handle TCP sticking and splitting.

The missing line break comes from the fact that the line separator was read in along with the data, so it never got to play its intended delimiting role.
