sofa-rpc is a high-performance RPC framework open-sourced by Ant Financial. This article is a code walkthrough of the service startup flow on the sofa-rpc provider side. Below is a simple diagram I drew of the basic relationships involved.

Below we trace through the flow in the sofa-rpc source code, taking BoltServer as the example.

    public static void main(String[] args) {
        ApplicationConfig application = new ApplicationConfig().setAppName("test-server");

        ServerConfig serverConfig = new ServerConfig()
            .setPort(22000)
            .setDaemon(false);

        ProviderConfig<HelloService> providerConfig = new ProviderConfig<HelloService>()
            .setInterfaceId(HelloService.class.getName())
            .setApplication(application)
            .setRef(new HelloServiceImpl())
            .setServer(serverConfig)
            .setRegister(false);

        ProviderConfig<EchoService> providerConfig2 = new ProviderConfig<EchoService>()
            .setInterfaceId(EchoService.class.getName())
            .setApplication(application)
            .setRef(new EchoServiceImpl())
            .setServer(serverConfig)
            .setRegister(false);

        providerConfig.export();
        providerConfig2.export();

        LOGGER.warn("started at pid {}", RpcRuntimeContext.PID);
    }

As you can see, sofa-rpc uses the ProviderConfig class to hold and initialize the provider-side configuration, and ProviderConfig also exposes export() as the entry point for starting the service.

    public synchronized void export() {
        if (providerBootstrap == null) {
            providerBootstrap = Bootstraps.from(this);
        }
        providerBootstrap.export();
    }
Based on the bootstrap type configured via setBootstrap() on ProviderConfig, Bootstraps.from(this) returns the corresponding bootstrap: either DefaultProviderBootstrap or DubboProviderBootstrap.

    /**
     * Export a service.
     *
     * @param providerConfig provider configuration
     * @param <T>            interface type
     * @return the provider bootstrap
     */
    public static <T> ProviderBootstrap<T> from(ProviderConfig<T> providerConfig) {
        String bootstrap = providerConfig.getBootstrap();
        if (StringUtils.isEmpty(bootstrap)) {
            // Not configured: fall back to the default provider bootstrap (DefaultProviderBootstrap)
            bootstrap = RpcConfigs.getStringValue(RpcOptions.DEFAULT_PROVIDER_BOOTSTRAP);
            providerConfig.setBootstrap(bootstrap);
        }
        ProviderBootstrap providerBootstrap = ExtensionLoaderFactory.getExtensionLoader(ProviderBootstrap.class)
            .getExtension(bootstrap, new Class[] { ProviderConfig.class }, new Object[] { providerConfig });
        return (ProviderBootstrap<T>) providerBootstrap;
    }

Both DefaultProviderBootstrap and DubboProviderBootstrap extend ProviderBootstrap.

DefaultProviderBootstrap is in turn extended by three classes, BoltProviderBootstrap, Http2ClearTextProviderBootstrap, and RestProviderBootstrap, which correspond to the three server types supported by sofa-rpc.
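
To make this mapping concrete, here is a minimal sketch of how the choice is driven purely by configuration. The protocol aliases ("bolt", "rest", "h2c") and the "dubbo" bootstrap alias are assumptions inferred from the class names above, not taken from the code being walked through.

    import com.alipay.sofa.rpc.config.ServerConfig;

    public class ProtocolChoiceSketch {
        public static void main(String[] args) {
            // Aliases below are assumptions inferred from the bootstrap/server class names.
            ServerConfig boltServer = new ServerConfig().setProtocol("bolt").setPort(22000);
            ServerConfig restServer = new ServerConfig().setProtocol("rest").setPort(8341);
            ServerConfig h2cServer  = new ServerConfig().setProtocol("h2c").setPort(12300);
            // A ProviderConfig bound to one of these ServerConfigs is exported by the matching
            // DefaultProviderBootstrap subclass; providerConfig.setBootstrap("dubbo") (alias
            // assumed) would switch it to DubboProviderBootstrap instead.
        }
    }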

Let's look at the service export source code in DefaultProviderBootstrap:

    @Override
    public void export() {
        if (providerConfig.getDelay() > 0) { // delayed export, in milliseconds
            Thread thread = factory.newThread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(providerConfig.getDelay());
                    } catch (Throwable ignore) { // NOPMD
                    }
                    doExport();
                }
            });
            thread.start();
        } else {
            doExport();
        }
    }

    private void doExport() {
        if (exported) {
            return;
        }

        // check parameters
        checkParameters();

        String appName = providerConfig.getAppName();

        // key is the protocol of the server, for concurrency safety
        Map<String, Boolean> hasExportedInCurrent = new ConcurrentHashMap<String, Boolean>();
        // register the processor to every server
        List<ServerConfig> serverConfigs = providerConfig.getServer();
        for (ServerConfig serverConfig : serverConfigs) {
            String protocol = serverConfig.getProtocol();
            String key = providerConfig.buildKey() + ":" + protocol;
            if (LOGGER.isInfoEnabled(appName)) {
                LOGGER.infoWithApp(appName, "Export provider config : {} with bean id {}", key, providerConfig.getId());
            }

            // note: the same interface and uniqueId may be exported on different servers
            AtomicInteger cnt = EXPORTED_KEYS.get(key); // counter
            if (cnt == null) { // not exported yet
                cnt = CommonUtils.putToConcurrentMap(EXPORTED_KEYS, key, new AtomicInteger(0));
            }
            int c = cnt.incrementAndGet();
            hasExportedInCurrent.put(serverConfig.getProtocol(), true);
            int maxProxyCount = providerConfig.getRepeatedExportLimit();
            if (maxProxyCount > 0) {
                if (c > maxProxyCount) {
                    decrementCounter(hasExportedInCurrent);
                    // exceeded the limit, throw directly
                    throw new SofaRpcRuntimeException("Duplicate provider config with key " + key
                        + " has been exported more than " + maxProxyCount + " times!"
                        + " Maybe it's wrong config, please check it."
                        + " Ignore this if you did that on purpose!");
                } else if (c > 1) {
                    if (LOGGER.isInfoEnabled(appName)) {
                        LOGGER.infoWithApp(appName, "Duplicate provider config with key {} has been exported!"
                            + " Maybe it's wrong config, please check it."
                            + " Ignore this if you did that on purpose!", key);
                    }
                }
            }
        }

        try {
            // build the invoker that handles incoming requests
            providerProxyInvoker = new ProviderProxyInvoker(providerConfig);
            // initialize the registries
            if (providerConfig.isRegister()) {
                List<RegistryConfig> registryConfigs = providerConfig.getRegistry();
                if (CommonUtils.isNotEmpty(registryConfigs)) {
                    for (RegistryConfig registryConfig : registryConfigs) {
                        RegistryFactory.getRegistry(registryConfig); // initialize the Registry in advance
                    }
                }
            }
            // register the processor to every server
            for (ServerConfig serverConfig : serverConfigs) {
                try {
                    // build the server
                    Server server = serverConfig.buildIfAbsent();
                    // register the serialization processor
                    server.registerProcessor(providerConfig, providerProxyInvoker);
                    if (serverConfig.isAutoStart()) {
                        // start the server
                        server.start();
                    }
                } catch (SofaRpcRuntimeException e) {
                    throw e;
                } catch (Exception e) {
                    LOGGER.errorWithApp(appName, "Catch exception when register processor to server: "
                        + serverConfig.getId(), e);
                }
            }

            // register with the registry
            providerConfig.setConfigListener(new ProviderAttributeListener());
            register();
        } catch (Exception e) {
            decrementCounter(hasExportedInCurrent);
            if (e instanceof SofaRpcRuntimeException) {
                throw (SofaRpcRuntimeException) e;
            } else {
                throw new SofaRpcRuntimeException("Build provider proxy error!", e);
            }
        }

        // cache some runtime data
        RpcRuntimeContext.cacheProviderConfig(this);
        exported = true;
    }
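
Two knobs show up in export()/doExport(): the delayed export (getDelay()) and the repeated-export counter (getRepeatedExportLimit()). Below is a minimal sketch of driving them from the provider configuration, reusing the classes from the first example; setDelay() and setRepeatedExportLimit() are assumed to be the setters matching those getters.

    // Fragment, assumed to live inside the main() of the first example.
    ProviderConfig<HelloService> delayed = new ProviderConfig<HelloService>()
        .setInterfaceId(HelloService.class.getName())
        .setRef(new HelloServiceImpl())
        .setServer(serverConfig)
        .setRegister(false);
    delayed.setDelay(5000);            // assumed setter: doExport() runs ~5s later on another thread
    delayed.setRepeatedExportLimit(1); // assumed setter: cap exports of the same interface:protocol key

    delayed.export();                  // returns immediately; the actual export is deferred
    // Exporting another ProviderConfig with the same buildKey() and protocol would push the
    // EXPORTED_KEYS counter past the limit and raise SofaRpcRuntimeException.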

In the code above, serverConfig.buildIfAbsent() builds the Server object. Inside buildIfAbsent() the server is obtained from ServerFactory: getServer() resolves the concrete Server implementation according to the ServerConfig and calls init() on it.

    /**
     * Build the server if it has not been built yet.
     *
     * @return the server
     */
    public synchronized Server buildIfAbsent() {
        if (server != null) {
            return server;
        }
        // check protocol + serialization in advance
        // ConfigValueHelper.check(ProtocolType.valueOf(getProtocol()),
        //     SerializationType.valueOf(getSerialization()));

        // obtain the server instance from the ServerFactory
        server = ServerFactory.getServer(this);
        return server;
    }
    /**
     * Initialize a Server instance.
     *
     * @param serverConfig server configuration
     * @return Server
     */
    public synchronized static Server getServer(ServerConfig serverConfig) {
        try {
            Server server = SERVER_MAP.get(Integer.toString(serverConfig.getPort()));
            if (server == null) {
                // resolve the network interface and port
                resolveServerConfig(serverConfig);

                ExtensionClass<Server> ext = ExtensionLoaderFactory.getExtensionLoader(Server.class)
                    .getExtensionClass(serverConfig.getProtocol());
                if (ext == null) {
                    throw ExceptionUtils.buildRuntime("server.protocol", serverConfig.getProtocol(),
                        "Unsupported protocol of server!");
                }
                server = ext.getExtInstance();
                // initialize the server
                server.init(serverConfig);
                SERVER_MAP.put(serverConfig.getPort() + "", server);
            }
            return server;
        } catch (SofaRpcRuntimeException e) {
            throw e;
        } catch (Throwable e) {
            throw new SofaRpcRuntimeException(e.getMessage(), e);
        }
    }
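
Because SERVER_MAP is keyed by the port, a single Server instance is shared by all ServerConfigs that resolve to the same port. A minimal sketch of the behavior this implies (protocol alias "bolt" assumed):

    import com.alipay.sofa.rpc.config.ServerConfig;
    import com.alipay.sofa.rpc.server.Server;

    public class ServerCacheSketch {
        public static void main(String[] args) {
            ServerConfig a = new ServerConfig().setProtocol("bolt").setPort(22000);
            ServerConfig b = new ServerConfig().setProtocol("bolt").setPort(22000);

            Server s1 = a.buildIfAbsent(); // creates the server and caches it under "22000"
            Server s2 = b.buildIfAbsent(); // cache hit inside ServerFactory
            System.out.println(s1 == s2);  // expected: true
        }
    }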

sofa-rpc provides three server types: BoltServer, RestServer, and AbstractHttpServer.

In BoltServer, the underlying communication is handled by RemotingServer, which is built on the sofa-bolt communication framework.

    /**
     * Bolt server
     */
    protected RemotingServer remotingServer;

    @Override
    public void start() {
        if (started) {
            return;
        }
        synchronized (this) {
            if (started) {
                return;
            }
            // create the Netty-based bolt RemotingServer
            remotingServer = initRemotingServer();
            try {
                if (remotingServer.start(serverConfig.getBoundHost())) {
                    if (LOGGER.isInfoEnabled()) {
                        LOGGER.info("Bolt server has been bind to {}:{}", serverConfig.getBoundHost(),
                            serverConfig.getPort());
                    }
                } else {
                    throw new SofaRpcRuntimeException("Failed to start bolt server, see more detail from bolt log.");
                }
                started = true;

                if (EventBus.isEnable(ServerStartedEvent.class)) {
                    EventBus.post(new ServerStartedEvent(serverConfig, bizThreadPool));
                }
            } catch (SofaRpcRuntimeException e) {
                throw e;
            } catch (Exception e) {
                throw new SofaRpcRuntimeException("Failed to start bolt server!", e);
            }
        }
    }
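
Note that start() also posts a ServerStartedEvent on sofa-rpc's internal EventBus. As a hedged sketch (assuming the EventBus.register(Class, Subscriber) API and the Subscriber#onEvent callback), a listener could be registered before export() like this:

    import com.alipay.sofa.rpc.event.Event;
    import com.alipay.sofa.rpc.event.EventBus;
    import com.alipay.sofa.rpc.event.ServerStartedEvent;
    import com.alipay.sofa.rpc.event.Subscriber;

    public class ServerStartedLogger {
        public static void install() {
            // Assumed API: the subscriber fires once remotingServer.start(...) succeeds
            // and EventBus.isEnable(ServerStartedEvent.class) is true.
            EventBus.register(ServerStartedEvent.class, new Subscriber() {
                @Override
                public void onEvent(Event event) {
                    ServerStartedEvent started = (ServerStartedEvent) event;
                    System.out.println("Provider server started: " + started);
                }
            });
        }
    }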

AbstractHttpServer provides the HTTP service; its underlying communication is implemented through the ServerTransport class.

    /**
     * Server transport layer
     */
    private ServerTransport serverTransport;

    @Override
    public void init(ServerConfig serverConfig) {
        this.serverConfig = serverConfig;
        this.serverTransportConfig = convertConfig(serverConfig);
        // create the business thread pool
        this.bizThreadPool = initThreadPool(serverConfig);
        // server-side handler
        this.serverHandler = new HttpServerHandler();

        // set default transport config
        this.serverTransportConfig.setContainer(container);
        this.serverTransportConfig.setServerHandler(serverHandler);
    }

    @Override
    public void start() {
        if (started) {
            return;
        }
        synchronized (this) {
            if (started) {
                return;
            }
            try {
                // create the business thread pool
                this.bizThreadPool = initThreadPool(serverConfig);
                this.serverHandler.setBizThreadPool(bizThreadPool);
                // build the transport (see the code below)
                serverTransport = ServerTransportFactory.getServerTransport(serverTransportConfig);
                started = serverTransport.start();

                if (started) {
                    if (EventBus.isEnable(ServerStartedEvent.class)) {
                        EventBus.post(new ServerStartedEvent(serverConfig, bizThreadPool));
                    }
                }
            } catch (SofaRpcRuntimeException e) {
                throw e;
            } catch (Exception e) {
                throw new SofaRpcRuntimeException("Failed to start HTTP/2 server!", e);
            }
        }
    }

ServerTransport is an abstract class; the concrete implementation is AbstractHttp2ServerTransport in the transport package.

    /**
     * Constructor
     *
     * @param transportConfig server transport configuration
     */
    protected AbstractHttp2ServerTransport(ServerTransportConfig transportConfig) {
        super(transportConfig);
    }

    @Override
    public boolean start() {
        if (serverBootstrap != null) {
            return true;
        }
        synchronized (this) {
            if (serverBootstrap != null) {
                return true;
            }
            boolean flag = false;
            SslContext sslCtx = SslContextBuilder.build();

            // Configure the server.
            EventLoopGroup bossGroup = NettyHelper.getServerBossEventLoopGroup(transportConfig); // still Netty underneath
            HttpServerHandler httpServerHandler = (HttpServerHandler) transportConfig.getServerHandler();
            bizGroup = NettyHelper.getServerBizEventLoopGroup(transportConfig, httpServerHandler.getBizThreadPool());

            serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup, bizGroup)
                .channel(transportConfig.isUseEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, transportConfig.getBacklog())
                .option(ChannelOption.SO_REUSEADDR, transportConfig.isReuseAddr())
                .option(ChannelOption.RCVBUF_ALLOCATOR, NettyHelper.getRecvByteBufAllocator())
                .option(ChannelOption.ALLOCATOR, NettyHelper.getByteBufAllocator())
                .childOption(ChannelOption.SO_KEEPALIVE, transportConfig.isKeepAlive())
                .childOption(ChannelOption.TCP_NODELAY, transportConfig.isTcpNoDelay())
                .childOption(ChannelOption.SO_RCVBUF, 8192 * 128)
                .childOption(ChannelOption.SO_SNDBUF, 8192 * 128)
                .handler(new LoggingHandler(LogLevel.DEBUG))
                .childOption(ChannelOption.ALLOCATOR, NettyHelper.getByteBufAllocator())
                .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(
                    transportConfig.getBufferMin(), transportConfig.getBufferMax()))
                .childHandler(new Http2ServerChannelInitializer(bizGroup, sslCtx,
                    httpServerHandler, transportConfig.getPayload()));

            // bind to all network interfaces or a specific one
            ChannelFuture future = serverBootstrap.bind(
                new InetSocketAddress(transportConfig.getHost(), transportConfig.getPort()));
            ChannelFuture channelFuture = future.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (future.isSuccess()) {
                        if (LOGGER.isInfoEnabled()) {
                            LOGGER.info("HTTP/2 Server bind to {}:{} success!",
                                transportConfig.getHost(), transportConfig.getPort());
                        }
                    } else {
                        LOGGER.error("HTTP/2 Server bind to {}:{} failed!",
                            transportConfig.getHost(), transportConfig.getPort());
                        stop();
                    }
                }
            });

            try {
                channelFuture.await();
                if (channelFuture.isSuccess()) {
                    flag = Boolean.TRUE;
                } else {
                    throw new SofaRpcRuntimeException("Server start fail!", future.cause());
                }
            } catch (InterruptedException e) {
                LOGGER.error(e.getMessage(), e);
            }
            return flag;
        }
    }

RestServer provides the REST service; the underlying communication implementation can be found in SofaNettyJaxrsServer.

    /**
     * Rest server
     */
    protected SofaNettyJaxrsServer httpServer;

    @Override
    public void init(ServerConfig serverConfig) {
        this.serverConfig = serverConfig;
        httpServer = buildServer();
    }

Here is the startup code in SofaNettyJaxrsServer:

    @Override
    public void start() {
        // CHANGE: add thread names
        boolean daemon = serverConfig.isDaemon();
        boolean isEpoll = serverConfig.isEpoll();
        NamedThreadFactory ioFactory = new NamedThreadFactory("SEV-REST-IO-" + port, daemon);
        NamedThreadFactory bizFactory = new NamedThreadFactory("SEV-REST-BIZ-" + port, daemon);
        eventLoopGroup = isEpoll ? new EpollEventLoopGroup(ioWorkerCount, ioFactory)
            : new NioEventLoopGroup(ioWorkerCount, ioFactory);
        eventExecutor = isEpoll ? new EpollEventLoopGroup(executorThreadCount, bizFactory)
            : new NioEventLoopGroup(executorThreadCount, bizFactory);

        // Configure the server.
        bootstrap = new ServerBootstrap()
            .group(eventLoopGroup)
            .channel(isEpoll ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
            .childHandler(createChannelInitializer())
            .option(ChannelOption.SO_BACKLOG, backlog)
            .childOption(ChannelOption.SO_KEEPALIVE, serverConfig.isKeepAlive()); // CHANGE: setKeepAlive

        for (Map.Entry<ChannelOption, Object> entry : channelOptions.entrySet()) {
            bootstrap.option(entry.getKey(), entry.getValue());
        }
        for (Map.Entry<ChannelOption, Object> entry : childChannelOptions.entrySet()) {
            bootstrap.childOption(entry.getKey(), entry.getValue());
        }

        final InetSocketAddress socketAddress;
        if (null == hostname || hostname.isEmpty()) {
            socketAddress = new InetSocketAddress(port);
        } else {
            socketAddress = new InetSocketAddress(hostname, port);
        }

        bootstrap.bind(socketAddress).syncUninterruptibly();
    }
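
To expose a provider through this RestServer, the service interface is typically annotated with JAX-RS annotations and the ServerConfig uses the "rest" protocol. A minimal sketch; the interface name, path, and parameter are made up for illustration:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // Hypothetical JAX-RS style interface served by the REST server.
    @Path("/hello")
    public interface HelloRestService {

        @GET
        @Path("/say/{name}")
        @Produces("application/json")
        String say(@PathParam("name") String name);
    }

The corresponding ServerConfig would then call setProtocol("rest") (alias assumed), so that ServerFactory resolves RestServer and its SofaNettyJaxrsServer for it.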

OK, that is the basic flow of starting a sofa-rpc server. We have only followed the high-level startup path here without digging into each feature in depth; with this as a foundation, we can go on to explore the concrete implementations in more detail.
