reference from:http://docs.jboss.org/netty/3.1/guide/html/start.html

This chapter tours the core constructs of Netty with simple examples to let you get started quickly. By the end of this chapter, you will be able to write a client and a server on top of Netty.

If you prefer a top-down approach to learning, you might want to start from Chapter 2, Architectural Overview, and then come back here.

1.1. Before Getting Started

The minimum requirements to run the examples introduced in this chapter are only two: the latest version of Netty and JDK 1.5 or above. The latest version of Netty is available on the project download page. To download the right version of the JDK, please refer to your preferred JDK vendor's web site.

Is that all? To tell the truth, you should find these two are enough to implement almost any type of protocol. If not, please feel free to contact the Netty project community and let us know what's missing.

Last but not least, please refer to the API reference whenever you want to know more about the classes introduced here. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community if there is any incorrect information, grammatical errors or typos, or if you have a good idea to improve the documentation.

1.2. Writing a Discard Server

The simplest protocol in the world is not 'Hello, World!' but DISCARD. It is a protocol which discards any received data without any response.

To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.

package org.jboss.netty.example.discard;

@ChannelPipelineCoverage("all")
public class DiscardServerHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Discard the received data silently.
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();

        Channel ch = e.getChannel();
        ch.close();
    }
}

ChannelPipelineCoverage annotates a handler type to tell whether a handler instance of the annotated type can be shared by more than one Channel (and its associated ChannelPipeline). DiscardServerHandler does not manage any stateful information, and therefore it is annotated with the value "all".

DiscardServerHandler extends SimpleChannelHandler, which is an implementation of ChannelHandler. SimpleChannelHandler provides various event handler methods that you can override. For now, it is enough to extend SimpleChannelHandler rather than implement the handler interfaces yourself.

We override the messageReceived event handler method here. This method is called with a MessageEvent, which contains the received data, whenever new data is received from a client. In this example, we ignore the received data by doing nothing to implement the DISCARD protocol.

The exceptionCaught event handler method is called with an ExceptionEvent when an exception is raised by Netty due to an I/O error, or by a handler implementation due to an exception thrown while processing events. In most cases, the caught exception should be logged and its associated channel should be closed here, although the implementation can differ depending on how you want to deal with an exceptional situation. For example, you might want to send a response message with an error code before closing the connection.

So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main method which starts the server with the DiscardServerHandler.

package org.jboss.netty.example.discard;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class DiscardServer {

    public static void main(String[] args) throws Exception {
        ChannelFactory factory =
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());

        ServerBootstrap bootstrap = new ServerBootstrap(factory);

        DiscardServerHandler handler = new DiscardServerHandler();
        ChannelPipeline pipeline = bootstrap.getPipeline();
        pipeline.addLast("handler", handler);

        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);

        bootstrap.bind(new InetSocketAddress(8080));
    }
}

ChannelFactory is a factory which creates and manages Channels and their related resources. It processes all I/O requests and performs I/O to generate ChannelEvents. Netty provides various ChannelFactory implementations. We are implementing a server-side application in this example, and therefore NioServerSocketChannelFactory was used. Another thing to note is that it does not create I/O threads by itself. It acquires threads from the thread pools you specified in the constructor, which gives you more control over how threads are managed in the environment where your application runs, such as an application server with a security manager.

ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly; however, please note that this is a tedious process and you do not need to do it in most cases.

Here, we add the DiscardServerHandler to the default ChannelPipeline. Whenever a new connection is accepted by the server, a new ChannelPipeline will be created for the newly accepted Channel, and all the ChannelHandlers added here will be added to the new ChannelPipeline. It's just like a shallow-copy operation; all Channels and their ChannelPipelines will share the same DiscardServerHandler instance.

You can also set parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set socket options such as tcpNoDelay and keepAlive. Please note that the "child." prefix was added to all options. It means the options will be applied to the accepted Channels instead of to the ServerSocketChannel itself. You could do the following to set the options of the ServerSocketChannel:

bootstrap.setOption("reuseAddress", true);

We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to port 8080 of all NICs (network interface cards) in the machine. You can call the bind method as many times as you want (with different bind addresses).

Congratulations! You've just finished your first server on top of Netty.

1.3. Looking into the Received Data

Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter "telnet localhost 8080" in the command line and type something.

However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.

We already know that MessageEvent is generated whenever data is received and the messageReceived handler method will be invoked. Let us put some code into the messageReceived method of the DiscardServerHandler:

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    while (buf.readable()) {
        System.out.println((char) buf.readByte());
    }
}

It is safe to assume that the message type in socket transports is always ChannelBuffer. A ChannelBuffer is a fundamental data structure which stores a sequence of bytes in Netty. It is similar to NIO ByteBuffer, but easier to use and more flexible. For example, Netty allows you to create a composite ChannelBuffer which combines multiple ChannelBuffers, reducing the number of unnecessary memory copies.

Although it resembles NIO ByteBuffer in many ways, it is highly recommended to refer to the API reference. Learning how to use ChannelBuffer correctly is a critical step in using Netty without difficulty.

If you run the telnet command again, you will see the server print what it has received.

The full source code of the discard server is located in the org.jboss.netty.example.discard package of the distribution.

1.4. Writing an Echo Server

So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHOprotocol, where any received data is sent back.

The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing the received data out to the console. Therefore, it is enough again to modify the messageReceived method:

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    Channel ch = e.getChannel();
    ch.write(e.getMessage());
}

A ChannelEvent object has a reference to its associated Channel. Here, the returned Channel represents the connection which received the MessageEvent. We can get the Channel and call its write method to write something back to the remote peer.

If you run the telnet command again, you will see the server sends back whatever you have sent to it.

The full source code of the echo server is located in the org.jboss.netty.example.echo package of the distribution.

1.5. Writing a Time Server

The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message, which contains a 32-bit integer, without receiving any requests, and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and how to close the connection on completion.

Because we are going to ignore any received data and instead send a message as soon as a connection is established, we cannot use the messageReceived method this time. Instead, we should override the channelConnected method. The following is the implementation:

package org.jboss.netty.example.time;

@ChannelPipelineCoverage("all")
public class TimeServerHandler extends SimpleChannelHandler {

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        Channel ch = e.getChannel();

        ChannelBuffer time = ChannelBuffers.buffer(4);
        time.writeInt((int) (System.currentTimeMillis() / 1000));

        ChannelFuture f = ch.write(time);

        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                Channel ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}

As explained, the channelConnected method will be invoked when a connection is established. Let us write a 32-bit integer that represents the current time in seconds here.
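Note that writeInt takes an int, while System.currentTimeMillis() returns a long of milliseconds, so the value has to be divided by 1000 and narrowed with an explicit cast. A minimal plain-Java sketch of that conversion (the class and helper names here are ours, not part of the example source):

```java
// Sketch: System.currentTimeMillis() returns a long (milliseconds),
// but ChannelBuffer.writeInt() takes an int, so the value must be
// narrowed to 32-bit UNIX time (seconds since the epoch) with a cast.
public class UnixTimeCast {
    static int toUnixTime(long millis) {
        return (int) (millis / 1000L); // seconds fit in 32 bits until 2038
    }

    public static void main(String[] args) {
        long millis = 1_000_000_000_123L;   // some instant, in milliseconds
        System.out.println(toUnixTime(millis)); // 1000000000
    }
}
```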

To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ChannelBuffer whose capacity is 4 bytes. The ChannelBuffers helper class is used to allocate a new buffer. Besides the buffer method, ChannelBuffers provides a lot of useful methods related to ChannelBuffer. For more information, please refer to the API reference.

On the other hand, it is a good idea to use static imports for ChannelBuffers:

import static org.jboss.netty.buffer.ChannelBuffers.*;
...
ChannelBuffer dynamicBuf = dynamicBuffer(256);
ChannelBuffer ordinaryBuf = buffer(1024);

As usual, we write the constructed message.

But wait, where's the flip? Didn't we use to call ByteBuffer.flip() before sending a message in NIO? ChannelBuffer does not have such a method because it has two pointers: one for read operations and the other for write operations. The writer index increases when you write something to a ChannelBuffer while the reader index does not change. The reader index and the writer index represent where the message starts and ends respectively.

In contrast, an NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method. You will be in trouble when you forget to flip the buffer, because nothing, or incorrect data, will be sent. Such an error does not happen in Netty because there is a different pointer for each operation type. You will find it makes your life much easier as you get used to it -- a life without flipping out!
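If you want to see the failure mode for yourself, no Netty is needed; a few lines of plain java.nio show that an unflipped ByteBuffer has nothing readable (the class and method names below are purely illustrative):

```java
import java.nio.ByteBuffer;

// Plain NIO: a ByteBuffer has a single position pointer, so after
// writing you must flip() before reading. Forgetting to flip leaves
// zero readable bytes between position and limit.
public class FlipDemo {
    static int readableWithoutFlip() {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(42);
        return buf.remaining();  // 0 -- position == limit, nothing to read
    }

    static int readableAfterFlip() {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(42);
        buf.flip();              // position -> 0, limit -> end of data
        return buf.remaining();  // 4
    }

    public static void main(String[] args) {
        System.out.println(readableWithoutFlip()); // 0
        System.out.println(readableAfterFlip());   // 4
    }
}
```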

Another point to note is that the write method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred. It means any requested operation might not have been performed yet, because all operations are asynchronous in Netty. For example, the following code might close the connection even before a message is sent:

Channel ch = ...;
ch.write(message);
ch.close();

Therefore, you need to call the close method after the ChannelFuture, which was returned by the write method, notifies you that the write operation has been done. Please note that close might not close the connection immediately either; it also returns a ChannelFuture.

How do we get notified when the write request is finished, then? It is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we created a new anonymous ChannelFutureListener which closes the Channel when the operation is done.

Alternatively, you could simplify the code using a pre-defined listener:

f.addListener(ChannelFutureListener.CLOSE);

1.6. Writing a Time Client

Unlike the DISCARD and ECHO servers, we need a client for the TIME protocol because a human cannot translate 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly, and learn how to write a client with Netty.

The biggest and only difference between a server and a client in Netty is that a different Bootstrap and ChannelFactory are required. Please take a look at the following code:

package org.jboss.netty.example.time;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);

        ChannelFactory factory =
            new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());

        ClientBootstrap bootstrap = new ClientBootstrap(factory);

        TimeClientHandler handler = new TimeClientHandler();
        bootstrap.getPipeline().addLast("handler", handler);

        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);

        bootstrap.connect(new InetSocketAddress(host, port));
    }
}

NioClientSocketChannelFactory, instead of NioServerSocketChannelFactory, was used to create a client-side Channel.

ClientBootstrap is a client-side counterpart of ServerBootstrap.

Please note that there's no "child." prefix. A client-side SocketChannel does not have a parent.

We should call the connect method instead of the bind method.

As you can see, it is not really different from the server-side startup. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human-readable format, print the translated time, and close the connection:

package org.jboss.netty.example.time;

import java.util.Date;

@ChannelPipelineCoverage("all")
public class TimeClientHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}

It looks very simple and does not look any different from the server-side example. However, this handler will sometimes refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.

1.7.  Dealing with a Stream-based Transport

1.7.1.  One Small Caveat of Socket Buffer

In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. It means, even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets:

+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+

Because of this general property of a stream-based protocol, there is a high chance of reading them in the following fragmented form in your application:

+----+-------+---+---+
| AB | CDEFG | H | I |
+----+-------+---+---+

Therefore, the receiving part, regardless of whether it is server-side or client-side, should defragment the received data into one or more meaningful frames that can be easily understood by the application logic. In the case of the example above, the received data should be framed like the following:

+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+
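This defragmentation can be simulated without any sockets. The following plain-Java sketch (the Reframer class is our own illustration, not Netty API) cumulates arbitrarily split fragments and cuts fixed 3-byte frames back out:

```java
import java.util.ArrayList;
import java.util.List;

// Simulating stream reassembly: the "wire" delivers an arbitrary
// split of the byte stream; the receiver cumulates whatever arrives
// and cuts complete fixed-size frames out of the cumulation buffer.
public class Reframer {
    static List<String> reframe(List<String> fragments, int frameSize) {
        StringBuilder cumulation = new StringBuilder();
        List<String> frames = new ArrayList<>();
        for (String fragment : fragments) {
            cumulation.append(fragment);               // cumulate arrival
            while (cumulation.length() >= frameSize) { // cut whole frames
                frames.add(cumulation.substring(0, frameSize));
                cumulation.delete(0, frameSize);
            }
        }
        return frames;
    }

    public static void main(String[] args) {
        // Sent as ABC / DEF / GHI, received as AB / CDEFG / H / I:
        List<String> received = List.of("AB", "CDEFG", "H", "I");
        System.out.println(reframe(received, 3)); // [ABC, DEF, GHI]
    }
}
```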

1.7.2.  The First Solution

Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.

The simplest solution is to create an internal cumulative buffer and wait until all 4 bytes are received into it. The following is the modified TimeClientHandler implementation that fixes the problem:

package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

import java.util.Date;

@ChannelPipelineCoverage("one")
public class TimeClientHandler extends SimpleChannelHandler {

    private final ChannelBuffer buf = dynamicBuffer();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer m = (ChannelBuffer) e.getMessage();
        buf.writeBytes(m);

        if (buf.readableBytes() >= 4) {
            long currentTimeMillis = buf.readInt() * 1000L;
            System.out.println(new Date(currentTimeMillis));
            e.getChannel().close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}

This time, "one" was used as the value of the ChannelPipelineCoverage annotation. This is because the new TimeClientHandler has to maintain the internal buffer and therefore cannot serve multiple Channels. If an instance of TimeClientHandler is shared by multiple Channels (and consequently multiple ChannelPipelines), the content of buf will be corrupted.

A dynamic buffer is a ChannelBuffer which increases its capacity on demand. It is very useful when you don't know the length of the message in advance.

First, all received data should be cumulated into buf.

Then, the handler must check whether buf has enough data (4 bytes in this example) before proceeding to the actual business logic. Otherwise, Netty will call the messageReceived method again when more data arrives, and eventually all 4 bytes will be cumulated.

There's another place that needs a fix. Do you remember that we added a TimeClientHandler instance to the default ChannelPipeline of the ClientBootstrap? It means the same TimeClientHandler instance is going to handle multiple Channels, and consequently the data will be corrupted. To create a new TimeClientHandler instance per Channel, we have to implement a ChannelPipelineFactory:

package org.jboss.netty.example.time;

public class TimeClientPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("handler", new TimeClientHandler());
        return pipeline;
    }
}

Now let us replace the following lines of TimeClient:

TimeClientHandler handler = new TimeClientHandler();
bootstrap.getPipeline().addLast("handler", handler);

with the following:

bootstrap.setPipelineFactory(new TimeClientPipelineFactory());

It might look somewhat complicated at first glance, and it is true that we don't need to introduce TimeClientPipelineFactory in this particular case, because TimeClient creates only one connection.

However, as your application gets more and more complex, you will almost always end up writing a ChannelPipelineFactory, which yields much more flexibility in pipeline configuration.

1.7.3.  The Second Solution

Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields such as a variable length field. Your ChannelHandler implementation will become unmaintainable very quickly.

As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore, you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:

  • TimeDecoder which deals with the fragmentation issue, and

  • the initial simple version of TimeClientHandler.

Fortunately, Netty provides an extensible class which helps you write the first one out of the box:

package org.jboss.netty.example.time;

public class TimeDecoder extends FrameDecoder {

    @Override
    protected Object decode(
            ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {

        if (buffer.readableBytes() < 4) {
            return null;
        }

        return buffer.readBytes(4);
    }
}

There's no ChannelPipelineCoverage annotation this time because FrameDecoder is already annotated with "one".

FrameDecoder calls the decode method with an internally maintained cumulative buffer whenever new data is received.

If null is returned, it means there is not enough data yet. FrameDecoder will call decode again when a sufficient amount of data arrives.

If a non-null value is returned, it means the decode method has decoded a message successfully. FrameDecoder will discard the read part of its internal cumulative buffer. Please remember that you don't need to decode multiple messages yourself; FrameDecoder will keep calling the decode method until it returns null.
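To see how this contract scales beyond fixed 4-byte messages, here is a plain java.nio analog of the decode loop for a hypothetical length-prefixed protocol. This is not Netty API (the class, method, and protocol are our own illustration), but it mirrors the return-null-until-a-whole-frame-arrives rule:

```java
import java.nio.ByteBuffer;

// Analog of the FrameDecoder contract: return null when the
// cumulation buffer does not yet hold a whole frame, otherwise
// consume exactly one frame and return it. The caller re-invokes
// decode until it gets null, like FrameDecoder does internally.
public class LengthPrefixedDecoder {
    static byte[] decode(ByteBuffer cumulation) {
        if (cumulation.remaining() < 4) {
            return null;                 // length prefix not arrived yet
        }
        cumulation.mark();
        int length = cumulation.getInt();
        if (cumulation.remaining() < length) {
            cumulation.reset();          // wait for the rest of the body
            return null;
        }
        byte[] frame = new byte[length];
        cumulation.get(frame);           // consume exactly one frame
        return frame;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(32);
        buf.putInt(2).put(new byte[] {1, 2}).putInt(1).put((byte) 3);
        buf.flip();
        byte[] frame;
        while ((frame = decode(buf)) != null) {
            System.out.println(frame.length); // prints 2, then 1
        }
    }
}
```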

If you are an adventurous person, you might want to try the ReplayingDecoder which simplifies the decoder even more. You will need to consult the API reference for more information though.

package org.jboss.netty.example.time;

public class TimeDecoder extends ReplayingDecoder<VoidEnum> {

    @Override
    protected Object decode(
            ChannelHandlerContext ctx, Channel channel,
            ChannelBuffer buffer, VoidEnum state) {
        return buffer.readBytes(4);
    }
}

Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic, unmaintainable handler implementation. Please refer to the following packages for more detailed examples:

  • org.jboss.netty.example.factorial for a binary protocol, and

  • org.jboss.netty.example.telnet for a text line-based protocol.

1.8.  Speaking in POJO instead of ChannelBuffer

All the examples we have reviewed so far used a ChannelBuffer as the primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ChannelBuffer.

The advantage of using a POJO in your ChannelHandler is obvious: your handler becomes more maintainable and reusable by separating the code which extracts information from a ChannelBuffer out of the handler. In the TIME client and server examples, we read only one 32-bit integer, so it is not a major issue to use ChannelBuffer directly. However, you will find the separation necessary as you implement a real-world protocol.

First, let us define a new type called UnixTime.

package org.jboss.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final int value;

    public UnixTime(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    @Override
    public String toString() {
        return new Date(value * 1000L).toString();
    }
}

We can now revise the TimeDecoder to return a UnixTime instead of a ChannelBuffer.

@Override
protected Object decode(
        ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
    if (buffer.readableBytes() < 4) {
        return null;
    }
    return new UnixTime(buffer.readInt());
}

FrameDecoder and ReplayingDecoder allow you to return an object of any type. If they were restricted to return only a ChannelBuffer, we would have to insert another ChannelHandler which transforms a ChannelBuffer into a UnixTime.

With the updated decoder, the TimeClientHandler does not use ChannelBuffer anymore:

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
UnixTime m = (UnixTime) e.getMessage();
System.out.println(m);
e.getChannel().close();
}

Much simpler and more elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:

@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    UnixTime time = new UnixTime((int) (System.currentTimeMillis() / 1000));
    ChannelFuture f = e.getChannel().write(time);
    f.addListener(ChannelFutureListener.CLOSE);
}

Now, the only missing piece is the ChannelHandler which translates a UnixTime back into a ChannelBuffer. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.

package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

@ChannelPipelineCoverage("all")
public class TimeEncoder extends SimpleChannelHandler {

    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) {
        UnixTime time = (UnixTime) e.getMessage();

        ChannelBuffer buf = buffer(4);
        buf.writeInt(time.getValue());

        Channels.write(ctx, e.getFuture(), buf);
    }
}

The ChannelPipelineCoverage value of an encoder is usually "all" because this encoder is stateless. Actually, most encoders are stateless.

An encoder overrides the writeRequested method to intercept a write request. Please note that the MessageEvent parameter here is the same type as the one specified in messageReceived, but it is interpreted differently. A ChannelEvent can be either an upstream or a downstream event depending on the direction in which the event flows. For instance, a MessageEvent can be an upstream event when called for messageReceived, or a downstream event when called for writeRequested. Please refer to the API reference to learn more about the difference between an upstream event and a downstream event.

Once done with transforming a POJO into a ChannelBuffer, you should forward the new buffer to the previous ChannelDownstreamHandler in the ChannelPipeline. Channels provides various helper methods which generate and send a ChannelEvent. In this example, the Channels.write(...) method creates a new MessageEvent and sends it to the previous ChannelDownstreamHandler in the ChannelPipeline.

On the other hand, it is a good idea to use static imports for Channels:

import static org.jboss.netty.channel.Channels.*;
...
ChannelPipeline pipeline = pipeline();
write(ctx, e.getFuture(), buf);
fireChannelDisconnected(ctx);

The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.

1.9.  Shutting Down Your Application

If you ran the TimeClient, you must have noticed that the application doesn't exit but just keeps running, doing nothing. Looking at the full stack trace, you will also find a couple of I/O threads running. To shut down the I/O threads and let the application exit gracefully, you need to release the resources allocated by the ChannelFactory.

The shutdown process of a typical network application is composed of the following three steps:

  1. Close all server sockets if there are any,

  2. Close all non-server sockets (i.e. client sockets and accepted sockets) if there are any, and

  3. Release all resources used by ChannelFactory.

To apply the three steps above to the TimeClient, TimeClient.main() could shut itself down gracefully by closing the only client connection and releasing all resources used by the ChannelFactory:

package org.jboss.netty.example.time;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ClientBootstrap bootstrap = ...;
        ...
        ChannelFuture future = bootstrap.connect(...);
        future.awaitUninterruptibly();

        if (!future.isSuccess()) {
            future.getCause().printStackTrace();
        }

        future.getChannel().getCloseFuture().awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}

The connect method of ClientBootstrap returns a ChannelFuture which notifies when a connection attempt succeeds or fails. It also has a reference to the Channel which is associated with the connection attempt.

Wait for the returned ChannelFuture to determine if the connection attempt was successful or not.

If it failed, we print the cause of the failure to learn why. The getCause() method of ChannelFuture will return the cause of the failure if the connection attempt was neither successful nor cancelled.

Now that the connection attempt is over, we need to wait until the connection is closed by waiting for the closeFuture of the Channel. Every Channel has its own closeFuture so that you are notified and can perform a certain action on closure.

Even if the connection attempt has failed, the closeFuture will still be notified, because the Channel is closed automatically when the connection attempt fails.

All connections have been closed at this point. The only task left is to release the resources being used by ChannelFactory. It is as simple as calling its releaseExternalResources() method. All resources including the NIO Selectors and thread pools will be shut down and terminated automatically.

Shutting down a client was pretty easy, but how about shutting down a server? You need to unbind from the port and close all open accepted connections. To do this, you need a data structure that keeps track of the list of active connections, and it's not a trivial task. Fortunately, there is a solution, ChannelGroup.

ChannelGroup is a special extension of the Java collections API which represents a set of open Channels. If a Channel is added to a ChannelGroup and that Channel is closed, it is removed from its ChannelGroup automatically. You can also perform an operation on all Channels in the same group. For instance, you can close all Channels in a ChannelGroup when you shut down your server.

To keep track of open sockets, you need to modify the TimeServerHandler to add a newly opened Channel to the global ChannelGroup, TimeServer.allChannels:

@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
    TimeServer.allChannels.add(e.getChannel());
}

Yes, ChannelGroup is thread-safe.

Now that the list of all active Channels is maintained automatically, shutting down a server is as easy as shutting down a client:

package org.jboss.netty.example.time;

public class TimeServer {

    static final ChannelGroup allChannels = new DefaultChannelGroup("time-server");

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ServerBootstrap bootstrap = ...;
        ...
        Channel channel = bootstrap.bind(...);
        allChannels.add(channel);

        waitForShutdownCommand();

        ChannelGroupFuture future = allChannels.close();
        future.awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}

DefaultChannelGroup requires the name of the group as a constructor parameter. The group name is solely used to distinguish one group from others.

The bind method of ServerBootstrap returns a server side Channel which is bound to the specified local address. Calling the close() method of the returned Channel will make the Channel unbind from the bound local address.

Any type of Channel can be added to a ChannelGroup, regardless of whether it is server-side, client-side, or accepted. Therefore, you can close the bound Channel along with the accepted Channels in one shot when the server shuts down.

waitForShutdownCommand() is an imaginary method that waits for a shutdown signal; for example, a message from a privileged client or a JVM shutdown hook.
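One possible implementation of waitForShutdownCommand() uses only the standard library: block on a CountDownLatch that is released either by a JVM shutdown hook or by any component (such as a handler for a privileged client) that requests shutdown. This is a hypothetical sketch, not part of Netty; the names ShutdownSignal and requestShutdown() are illustrative.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of waitForShutdownCommand() using only the JDK.
// The names ShutdownSignal and requestShutdown() are illustrative.
public class ShutdownSignal {

    private static final CountDownLatch shutdownLatch = new CountDownLatch(1);

    // Blocks the calling thread until a shutdown is requested, either by
    // the JVM shutting down (Ctrl-C / SIGTERM) or by requestShutdown().
    public static void waitForShutdownCommand() throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                shutdownLatch.countDown();
            }
        }));
        shutdownLatch.await();
    }

    // A handler for a privileged client could call this on receiving
    // a shutdown message.
    public static void requestShutdown() {
        shutdownLatch.countDown();
    }

    public static void main(String[] args) throws Exception {
        // Simulate a shutdown command arriving from another thread.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ignored) {
                }
                requestShutdown();
            }
        }).start();

        waitForShutdownCommand();
        System.out.println("shutting down");
    }
}
```

In TimeServer.main(), blocking on such a latch keeps the server running until either signal arrives, after which the ChannelGroup can be closed.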

You can perform the same operation on all Channels in the same ChannelGroup. In this case, closing the group means the bound server-side Channel is unbound and all accepted connections are closed asynchronously. close() returns a ChannelGroupFuture, which plays a role similar to that of ChannelFuture, so that you can be notified when all connections have been closed successfully.
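Instead of blocking with awaitUninterruptibly() as in the example above, you could register a listener on the returned future. The following fragment is a sketch assuming the Netty 3.x ChannelGroupFutureListener interface:

```
// Sketch: non-blocking alternative using a listener (Netty 3.x API assumed).
ChannelGroupFuture future = allChannels.close();
future.addListener(new ChannelGroupFutureListener() {
    public void operationComplete(ChannelGroupFuture future) {
        // All Channels in the group have finished closing; the remainder
        // of the shutdown can proceed from another thread.
        System.out.println("all channels closed: " + future.isCompleteSuccess());
    }
});
```

Blocking is usually simpler for a one-shot shutdown path, while a listener avoids tying up the calling thread.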

1.10. Summary

In this chapter, we took a quick tour of Netty and demonstrated how to write a fully working network application on top of it. Any questions you still have will be covered in the upcoming chapters and in the revised version of this chapter. Please also note that the community is always waiting for your questions and ideas to help you and to keep improving Netty based on your feedback.
