A big Druid pitfall under a config center: CPU hits 100%
After moving our Dubbo applications onto the config center, we found that each application's CPU would hit 100% after a while (roughly 12 hours). At first CPU usage sat around 2-5% with the app doing nothing; later it exhausted the CPU entirely.
I printed the JVM thread stacks and found the thread count had reached tens of thousands. The most common thread looked like this:
"com.alibaba.nacos.client.Worker.fixed-192.168.11.196_8848-1f3c60b6-3e28-44eb-9798-7f7eeeff6a8d" #138 daemon prio=5 os_prio=0 tid=0x0000000026c49000 nid=0xf60 waiting on condition [0x000000002aebe000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000079c3a15a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Our config center is Nacos, and judging from com.alibaba.nacos.client.Worker.fixed, this seemed related to Nacos. That turned out to be true: before we adopted Nacos, this problem simply did not exist. But not every application was affected; our gateway application, for instance, was fine.
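As a quick sanity check on a dump like this, the thread header lines can be grouped by a name prefix to see which pool is running away. A minimal, self-contained sketch (the sample dump lines and the crude "prefix = everything before the first dash" heuristic are made up for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

public class ThreadDumpCounter {

    // Group the thread header lines of a jstack dump by a crude name prefix
    // (everything before the first '-'). Good enough to spot a runaway pool.
    static Map<String, Integer> countByPrefix(String[] dumpLines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : dumpLines) {
            if (!line.startsWith("\"")) {
                continue; // only thread header lines start with a quoted name
            }
            String name = line.substring(1, line.indexOf('"', 1));
            String prefix = name.split("-")[0];
            counts.merge(prefix, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Fabricated sample lines standing in for a real thread dump.
        String[] sample = {
            "\"com.alibaba.nacos.client.Worker.fixed-192.168.11.196_8848-aaa\" #138 daemon prio=5",
            "\"com.alibaba.nacos.client.Worker.fixed-192.168.11.196_8848-bbb\" #139 daemon prio=5",
            "\"http-nio-8092-exec-1\" #30 daemon prio=5",
        };
        countByPrefix(sample).forEach((prefix, n) -> System.out.println(n + "  " + prefix));
    }
}
```

Run against a real dump, a single prefix dominating with thousands of entries points straight at the leaking pool.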
We had just upgraded spring-cloud-alibaba-dependencies from 0.2.1.RELEASE to 0.2.2.RELEASE. Could that be related? A compatibility problem with spring-cloud-dependencies? Or with spring-cloud-starter?
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-dependencies</artifactId>
    <version>Finchley.SR1</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-alibaba-dependencies</artifactId>
    <version>0.2..RELEASE</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter</artifactId>
    <version>2.0..RELEASE</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
But rolling back to the previous versions didn't help; the problem remained.
I wrote a fresh Spring Boot program: no problem. Then I layered our original setup back on, and the problem reappeared. Observation suggested something was wrong with the data sources.
I suspected a Nacos pitfall. What on earth is com.alibaba.nacos.client.Worker? A global search for the class name found nothing. Is it not a class, then? Is fixed a method? Couldn't find one either. So I downloaded the nacos 1.0.0 source. A global search for the class com.alibaba.nacos.client.Worker still found nothing. A full-text search for fixed turned up com/alibaba/nacos/client/config/impl/ServerListManager.java. Debugging locally, I found com.alibaba.nacos.client.config.impl.ClientWorker, and also the literal text com.alibaba.nacos.client.Worker: it is hardcoded! The cause of the problem was still unclear, though.
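In other words, the "class name" in the thread dump is just a string. Roughly sketched (a simplified assumption about how nacos-client names these workers, not its literal code; the factory name and values are invented):

```java
import java.util.concurrent.ThreadFactory;

public class NacosWorkerNaming {

    // Simplified sketch: the "class-name-looking" thread name is a hardcoded
    // string prefix concatenated with the server-list name, which is why no
    // class called com.alibaba.nacos.client.Worker exists anywhere.
    static ThreadFactory workerThreadFactory(String serverListName) {
        return runnable -> {
            Thread t = new Thread(runnable);
            t.setName("com.alibaba.nacos.client.Worker." + serverListName);
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) {
        Thread t = workerThreadFactory("fixed-192.168.11.196_8848").newThread(() -> {});
        System.out.println(t.getName());
        // prints: com.alibaba.nacos.client.Worker.fixed-192.168.11.196_8848
    }
}
```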
My gut told me it was ClientWorker that kept creating our "problem threads", but I didn't understand its mechanics. Blind debugging found no root cause and only left me more confused. I had to read the source.
Adding breakpoint after breakpoint, I landed on the safeNotifyListener method of com.alibaba.nacos.client.config.impl.CacheData. Its call to listener.receiveConfigInfo seemed to misbehave: execution reached point 1, but point 2 never ran. Strange. Had it fallen into an infinite loop?
private void safeNotifyListener(final String dataId, final String group, final String content,
                                final String md5, final ManagerListenerWrap listenerWrap) {
    final Listener listener = listenerWrap.listener;
    Runnable job = new Runnable() {
        public void run() {
            ClassLoader myClassLoader = Thread.currentThread().getContextClassLoader();
            ClassLoader appClassLoader = listener.getClass().getClassLoader();
            try {
                if (listener instanceof AbstractSharedListener) {
                    AbstractSharedListener adapter = (AbstractSharedListener) listener;
                    adapter.fillContext(dataId, group);
                    CacheData.LOGGER.info("[{}] [notify-context] dataId={}, group={}, md5={}",
                        new Object[]{CacheData.this.name, dataId, group, md5});
                }
                Thread.currentThread().setContextClassLoader(appClassLoader);
                ConfigResponse cr = new ConfigResponse();
                cr.setDataId(dataId);
                cr.setGroup(group);
                cr.setContent(content);
                CacheData.this.configFilterChainManager.doFilter((IConfigRequest) null, cr);
                String contentTmp = cr.getContent();
                listener.receiveConfigInfo(contentTmp); // 1: the error happens here!
                listenerWrap.lastCallMd5 = md5; // 2
                CacheData.LOGGER.info("[{}] [notify-ok] dataId={}, group={}, md5={}, listener={} ",
                    new Object[]{CacheData.this.name, dataId, group, md5, listener});
            } catch (NacosException var9) {
                CacheData.LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} errCode={} errMsg={}",
                    new Object[]{CacheData.this.name, dataId, group, md5, listener, var9.getErrCode(), var9.getErrMsg()});
            } catch (Throwable var10) { // 3: the exception lands here, but it is never actually printed!!
                CacheData.LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} tx={}",
                    new Object[]{CacheData.this.name, dataId, group, md5, listener, var10.getCause()});
            } finally { // 4
                Thread.currentThread().setContextClassLoader(myClassLoader);
            }
        }
    };
    long startNotify = System.currentTimeMillis();
    try {
        if (null != listener.getExecutor()) {
            listener.getExecutor().execute(job);
        } else {
            job.run();
        }
    } catch (Throwable var12) {
        LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} throwable={}",
            new Object[]{this.name, dataId, group, md5, listener, var12.getCause()});
    }
    long finishNotify = System.currentTimeMillis();
    LOGGER.info("[{}] [notify-listener] time cost={}ms in ClientWorker, dataId={}, group={}, md5={}, listener={} ",
        new Object[]{this.name, finishNotify - startNotify, dataId, group, md5, listener});
}
It turned out not to be a loop at all. I had not set breakpoints at points 3 and 4, which is why I mistook it for an infinite loop. In fact, point 1 throws, execution jumps straight to point 3, and that is why point 2 never runs. Breakpoints at 3 and 4 revealed the cause! listener.receiveConfigInfo is an interface method, so its implementation isn't fixed; stepping in with F7 showed the actual implementation is com.alibaba.nacos.api.config.listener.Listener#receiveConfigInfo. Following it further, execution reached NacosContextRefresher.this.applicationContext.publishEvent(new RefreshEvent(this, (Object)null, "Refresh Nacos config")); and the next line never seemed to execute. But stepping all the way into publishEvent would be far too tedious.
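Why the exception caught at point 3 never shows up can be reproduced in miniature: that catch block logs only t.getCause(), so an exception with no cause logs as "null", and the real message and stack trace are lost. A hypothetical stand-in, not Nacos code:

```java
public class SwallowedCauseDemo {
    public static void main(String[] args) {
        try {
            // Stand-in for the failure thrown at point 1.
            throw new IllegalStateException("Unable to set value for property url");
        } catch (Throwable t) {
            // Mirrors the catch block at point 3: only t.getCause() is logged.
            // This exception has no cause, so the log line carries no message
            // and no stack trace from the real failure.
            System.out.println("[notify-error] tx=" + t.getCause());
        }
    }
}
```

This prints `[notify-error] tx=null`, which is exactly why the error stayed invisible until I put breakpoints inside the catch block.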
At point 3, I finally got the cause of the problem:
Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true t = {ConfigurationPropertiesBindException@} "org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
beanType = {Class@} "class com.alibaba.druid.pool.DruidDataSource"
annotation = {$Proxy30@} "@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)"
beanName = "writeDataSource"
resourceDescription = null
relatedCauses = null
detailMessage = "Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true"
cause = {BindException@} "org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
target = {Bindable@} "[Bindable@7edd0fc7 type = com.alibaba.druid.pool.DruidDataSource, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)]]"
property = {ConfigurationProperty@} "[ConfigurationProperty@3794965c name = spring.datasource.write.url, value = '${MYSQL_WRITE_URL:jdbc:mysql://192.168.11.200:3418/elppmdb}?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false', origin = "spring.datasource.write.url" from property source "bootstrapProperties"]"
name = {ConfigurationPropertyName@} "spring.datasource.write"
detailMessage = "Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
cause = {IllegalStateException@} "java.lang.IllegalStateException: Unable to set value for property url"
detailMessage = "Unable to set value for property url"
cause = {InvocationTargetException@} "java.lang.reflect.InvocationTargetException"
stackTrace = {StackTraceElement[]@}
suppressedExceptions = {Collections$UnmodifiableRandomAccessList@} size =
stackTrace = {StackTraceElement[]@}
suppressedExceptions = {Collections$UnmodifiableRandomAccessList@} size =
stackTrace = {StackTraceElement[]@}
suppressedExceptions = {Collections$UnmodifiableRandomAccessList@} size =
CacheData.this.name = "fixed-192.168.11.196_8848-1f3c60b6-3e28-44eb-9798-7f7eeeff6a8d"
The root cause is clearly: Unable to set value for property url. Could the configuration stored in the Nacos config center be wrong?
spring:
  application:
    name: erdp_discuss_app
  profile:
    active: ${PROFILE_ACTIVE:local}
  cloud:
    nacos:
      discovery:
        server-addr: ${NACOS_SERVER_ADDR:192.168.11.196:}
  profiles:
    active: ${PROFILE_ACTIVE:local}
  jackson:
    time-zone: GMT+
    date-format: yyyy-MM-dd HH:mm:ss
    serialization:
      write_dates_as_timestamps: false
  datasource:
    write:
      url: jdbc:mysql://192.168.11.200:3418/elppmdb?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false
      username: elppm_user
      password: elppm
      driver-class-name: com.mysql.jdbc.Driver
      max-active:
      initial-size:
      min-idle:
      max-wait:
      time-between-eviction-runs-millis:
      min-evictable-idle-time-millis:
      validation-query: SELECT 'x' FROM DUAL
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # monitoring/statistics filters; without them the monitoring UI cannot track SQL; 'wall' is the firewall filter
      filters: stat,wall,log4j
      # connectionProperties turns on the mergeSql feature and slow-SQL logging
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=
      # merge the monitoring data of multiple DruidDataSources
      #spring.datasource.useGlobalDataSourceStat=true
    read:
      url: ${MYSQL_READ_URL:jdbc:mysql://192.168.11.200:3418/elppmdb}?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false
      username: ${MYSQL_DB_USERNAME:elppm_user}
      password: ${MYSQL_DB_PASSWORD:elppm}
      driver-class-name: com.mysql.jdbc.Driver
      max-active:
      initial-size:
      min-idle:
      max-wait:
      time-between-eviction-runs-millis:
      min-evictable-idle-time-millis:
      validation-query: SELECT 'x' FROM DUAL
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # monitoring/statistics filters; without them the monitoring UI cannot track SQL; 'wall' is the firewall filter
      filters: stat,wall,log4j
      # connectionProperties turns on the mergeSql feature and slow-SQL logging
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=
      # merge the monitoring data of multiple DruidDataSources
      #spring.datasource.useGlobalDataSourceStat=true
  redis:
    host: ${APP_REDIS_HOST:192.168.11.200}
    password: ${APP_REDIS_PW:Redis!} #password: # Login password of the redis server.
    port: ${APP_REDIS_PORT:}
    database:
    timeout:
    pool:
      max-active: # maximum number of connections
      max-idle: # maximum number of idle connections
      max-wait: - # maximum wait time
      min-idle: # initial number of connections
  http:
    multipart:
      enabled: true # Enable support of multi-part uploads.
      file-size-threshold: 4KB # Threshold after which files will be written to disk. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.
      location: /tmp # Intermediate location of uploaded files.
      max-file-size: 5Mb # Max file size. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.
      max-request-size: 50Mb # Max request size. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.
  data:
    mongodb:
      uri: mongodb://${MONGODB_SERV_URL:192.168.11.200:27117/docs}
mybatis:
  typeAliasesPackage: com.lkk.ppm.discuss.domain
  configLocation: classpath:mybatis-config.xml
  mapperLocations: classpath:mybatis/*.xml
server:
  port: ${SERVER_PORT:8092}
erdpapp:
  basepath: ${RDP_BASE_PATH:/eRDPServer}
  logcharset: ${RDP_LOG_CHARSET:UTF-8}
As you can see, our data sources are configured for read/write splitting; this is not the vanilla single-datasource setup.
Our code configuration is:
package com.lkk.platform.common.service.datasource;

import com.alibaba.druid.pool.DruidDataSource;
import com.baomidou.mybatisplus.entity.GlobalConfiguration;
import com.baomidou.mybatisplus.enums.DBType;
import com.baomidou.mybatisplus.enums.IdType;
import com.baomidou.mybatisplus.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.spring.MybatisSqlSessionFactoryBean;
import com.lkk.platform.common.exception.SystemException;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.boot.autoconfigure.MybatisProperties;
import org.mybatis.spring.boot.autoconfigure.SpringBootVFS;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;

import javax.sql.DataSource;

/**
 * Data source configuration.
 *
 * @author luo
 */
@Configuration
public class DatasourceConfig {

    @Autowired
    Environment env;

    /**
     * Write data source druid data source.
     *
     * @return the druid data source
     */
    @Primary
    @Bean(name = "writeDataSource")
    @ConfigurationProperties("spring.datasource.write")
    public DruidDataSource writeDataSource() {
        System.out.println("+++++++++++++++++" + env.getProperty("spring.datasource.write.url"));
        return new DruidDataSource();
    }

    /**
     * Read data source druid data source.
     *
     * @return the druid data source
     */
    @Bean(name = "readDataSource")
    @ConfigurationProperties("spring.datasource.read")
    public DruidDataSource readDataSource() {
        return new DruidDataSource();
    }

    /**
     * Dynamic data source data source.
     *
     * @return the data source
     */
    @Bean(name = "dynamicDataSource")
    public DataSource dynamicDataSource() {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        dynamicDataSource.setWriteDataSource(writeDataSource());
        dynamicDataSource.setReadDataSource(readDataSource());
        return dynamicDataSource;
    }

    /**
     * Dynamic transaction manager data source transaction manager.
     *
     * @param dynamicDataSource the dynamic data source
     * @return the data source transaction manager
     */
    @Bean(name = "dynamicTransactionManager")
    public DataSourceTransactionManager dynamicTransactionManager(@Qualifier("dynamicDataSource") DataSource dynamicDataSource) {
        return new DynamicDataSourceTransactionManager(dynamicDataSource);
    }

    /**
     * Dynamic sql session factory sql session factory.
     *
     * @param dynamicDataSource the dynamic data source
     * @param properties the properties
     * @return the sql session factory
     */
    @Bean
    @ConfigurationProperties(prefix = MybatisProperties.MYBATIS_PREFIX)
    public SqlSessionFactory dynamicSqlSessionFactory(
            @Qualifier("dynamicDataSource") DataSource dynamicDataSource,
            MybatisProperties properties) {
        // final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        // sessionFactory.setDataSource(dynamicDataSource);
        // sessionFactory.setVfs(SpringBootVFS.class);
        // sessionFactory.setTypeAliasesPackage(properties.getTypeAliasesPackage());
        // sessionFactory.setConfigLocation(new DefaultResourceLoader().getResource(properties.getConfigLocation()));
        // sessionFactory.setMapperLocations(properties.resolveMapperLocations());
        final MybatisSqlSessionFactoryBean sessionFactory = new MybatisSqlSessionFactoryBean();
        sessionFactory.setDataSource(dynamicDataSource);
        sessionFactory.setVfs(SpringBootVFS.class);
        if (StringUtils.hasLength(properties.getTypeAliasesPackage())) {
            sessionFactory.setTypeAliasesPackage(properties.getTypeAliasesPackage());
        }
        if (StringUtils.hasText(properties.getConfigLocation())) {
            sessionFactory.setConfigLocation(new DefaultResourceLoader().getResource(properties.getConfigLocation()));
        }
        if (!ObjectUtils.isEmpty(properties.resolveMapperLocations())) {
            sessionFactory.setMapperLocations(properties.resolveMapperLocations());
        }
        // pagination plugin
        PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
        paginationInterceptor.setLocalPage(true);
        sessionFactory.setPlugins(new Interceptor[]{
            paginationInterceptor
        });
        // global configuration
        GlobalConfiguration globalConfig = new GlobalConfiguration();
        globalConfig.setDbType(DBType.MYSQL.getDb());
        globalConfig.setIdType(IdType.INPUT.getKey());
        globalConfig.setDbColumnUnderline(true);
        // hot reload
        //globalConfig.setRefresh(true);
        // auto fill
        globalConfig.setMetaObjectHandler(new MyMetaObjectHandler());
        sessionFactory.setGlobalConfig(globalConfig);
        try {
            return sessionFactory.getObject();
        } catch (Exception e) {
            throw new SystemException(e);
        }
    }
}
The root cause was found, but I still didn't know which line threw it. Like the moon reflected in the water: seemingly within reach, actually far away. Fine, let's step into DruidDataSource and set a few breakpoints!
After commenting out and adjusting some code, I hit this error:
detailMessage = "Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true"
cause = {BindException@} "org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
target = {Bindable@} "[Bindable@36eeef8b type = com.alibaba.druid.pool.DruidDataSource, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)]]"
property = {ConfigurationProperty@} "[ConfigurationProperty@49384031 name = spring.datasource.write.username, value = '${MYSQL_DB_USERNAME:elppm_user}', origin = "spring.datasource.write.username" from property source "bootstrapProperties"]"
name = {ConfigurationPropertyName@} "spring.datasource.write"
detailMessage = "Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
cause = {IllegalStateException@} "java.lang.IllegalStateException: Unable to set value for property username"
detailMessage = "Unable to set value for property username"
cause = {InvocationTargetException@} "java.lang.reflect.InvocationTargetException"
stackTrace = {StackTraceElement[]@}
suppressedExceptions = {Collections$UnmodifiableRandomAccessList@} size =
stackTrace = {StackTraceElement[]@}
suppressedExceptions = {Collections$UnmodifiableRandomAccessList@} size =
This error still didn't solve the problem. But this time I was smarter: by expanding cause after cause, I found an InvocationTargetException, though that finding by itself wasn't much use.
Debugging DruidDataSource, I looked for setUsername and didn't find it. In the parent class DruidAbstractDataSource, there it was:
public void setUsername(String username) {
if (this.inited) {
throw new UnsupportedOperationException();
} else {
this.username = username;
}
}
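To see why this breaks under a config center, the old setter semantics can be boiled down to a tiny stand-in class (invented for illustration; this is not Druid's actual class, just the same guard logic):

```java
public class OldDruidSetterSketch {
    // Minimal stand-in for the pre-1.1.x DruidAbstractDataSource setter
    // semantics: once the pool is inited, ANY setUsername call throws,
    // even when the value has not changed at all.
    private boolean inited;
    private String username;

    void init() {
        this.inited = true;
    }

    void setUsername(String username) {
        if (this.inited) {
            throw new UnsupportedOperationException();
        }
        this.username = username;
    }

    public static void main(String[] args) {
        OldDruidSetterSketch ds = new OldDruidSetterSketch();
        ds.setUsername("elppm_user"); // initial bind before init: fine
        ds.init();                    // pool is used; inited becomes true
        try {
            ds.setUsername("elppm_user"); // rebind with the SAME value: throws
        } catch (UnsupportedOperationException e) {
            System.out.println("rebind after init throws, even for an unchanged value");
        }
    }
}
```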
So that's it: the inited flag causes it. I added a breakpoint, and sure enough: once the pool has been init-ed, calling setUsername again throws! The thread stack looked like this:
setUsername:, DruidAbstractDataSource (com.alibaba.druid.pool)
invoke:-, GeneratedMethodAccessor275 (sun.reflect)
invoke:, DelegatingMethodAccessorImpl (sun.reflect)
invoke:, Method (java.lang.reflect)
setValue:, JavaBeanBinder$BeanProperty (org.springframework.boot.context.properties.bind)
bind:, JavaBeanBinder (org.springframework.boot.context.properties.bind)
bind:, JavaBeanBinder (org.springframework.boot.context.properties.bind)
bind:, JavaBeanBinder (org.springframework.boot.context.properties.bind)
lambda$null$:, Binder (org.springframework.boot.context.properties.bind)
apply:-, (org.springframework.boot.context.properties.bind.Binder$$Lambda$)
accept:, ReferencePipeline$$ (java.util.stream)
tryAdvance:, ArrayList$ArrayListSpliterator (java.util)
forEachWithCancel:, ReferencePipeline (java.util.stream)
copyIntoWithCancel:, AbstractPipeline (java.util.stream)
copyInto:, AbstractPipeline (java.util.stream)
wrapAndCopyInto:, AbstractPipeline (java.util.stream)
evaluateSequential:, FindOps$FindOp (java.util.stream)
evaluate:, AbstractPipeline (java.util.stream)
findFirst:, ReferencePipeline (java.util.stream)
lambda$bindBean$:, Binder (org.springframework.boot.context.properties.bind)
get:-, (org.springframework.boot.context.properties.bind.Binder$$Lambda$)
withIncreasedDepth:, Binder$Context (org.springframework.boot.context.properties.bind)
withBean:, Binder$Context (org.springframework.boot.context.properties.bind)
access$:, Binder$Context (org.springframework.boot.context.properties.bind)
bindBean:, Binder (org.springframework.boot.context.properties.bind)
bindObject:, Binder (org.springframework.boot.context.properties.bind)
bind:, Binder (org.springframework.boot.context.properties.bind)
bind:, Binder (org.springframework.boot.context.properties.bind)
bind:, Binder (org.springframework.boot.context.properties.bind)
bind:, ConfigurationPropertiesBinder (org.springframework.boot.context.properties)
bind:, ConfigurationPropertiesBindingPostProcessor (org.springframework.boot.context.properties)
postProcessBeforeInitialization:, ConfigurationPropertiesBindingPostProcessor (org.springframework.boot.context.properties)
applyBeanPostProcessorsBeforeInitialization:, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
initializeBean:, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
initializeBean:, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
rebind:, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
rebind:, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
onApplicationEvent:, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
onApplicationEvent:, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
doInvokeListener:, SimpleApplicationEventMulticaster (org.springframework.context.event)
invokeListener:, SimpleApplicationEventMulticaster (org.springframework.context.event)
multicastEvent:, SimpleApplicationEventMulticaster (org.springframework.context.event)
publishEvent:, AbstractApplicationContext (org.springframework.context.support)
publishEvent:, AbstractApplicationContext (org.springframework.context.support)
refresh:, ContextRefresher (org.springframework.cloud.context.refresh)
handle:, RefreshEventListener (org.springframework.cloud.endpoint.event)
invoke:-, GeneratedMethodAccessor286 (sun.reflect)
invoke:, DelegatingMethodAccessorImpl (sun.reflect)
invoke:, Method (java.lang.reflect)
doInvoke:, ApplicationListenerMethodAdapter (org.springframework.context.event)
processEvent:, ApplicationListenerMethodAdapter (org.springframework.context.event)
onApplicationEvent:, ApplicationListenerMethodAdapter (org.springframework.context.event)
doInvokeListener:, SimpleApplicationEventMulticaster (org.springframework.context.event)
invokeListener:, SimpleApplicationEventMulticaster (org.springframework.context.event)
multicastEvent:, SimpleApplicationEventMulticaster (org.springframework.context.event)
publishEvent:, AbstractApplicationContext (org.springframework.context.support)
publishEvent:, AbstractApplicationContext (org.springframework.context.support)
receiveConfigInfo:, NacosContextRefresher$ (org.springframework.cloud.alibaba.nacos.refresh)
run:, CacheData$ (com.alibaba.nacos.client.config.impl)
safeNotifyListener:, CacheData (com.alibaba.nacos.client.config.impl)
checkListenerMd5:, CacheData (com.alibaba.nacos.client.config.impl)
run:, ClientWorker$LongPollingRunnable (com.alibaba.nacos.client.config.impl)
runWorker:, ThreadPoolExecutor (java.util.concurrent)
run:, ThreadPoolExecutor$Worker (java.util.concurrent)
run:, Thread (java.lang)
You can see publishEvent in there, and this is certainly the NacosContextRefresher.this.applicationContext.publishEvent call from earlier!! (What a deep stack!) Looking closely, there is a rebind method, which struck me as odd (it later turned out the problem was triggered exactly there!!).
So when does the first init happen? Debugging showed it is the return sessionFactory.getObject(); in the dynamicSqlSessionFactory method.
I still didn't understand what was going on. How could this be fixed??
Oh my God!!!
A round of web searches turned up nothing! Nobody seemed to have hit the same problem. Am I really the only person running into this? Am I the only person using Nacos???
I looked at https://blog.csdn.net/qq_37859539/article/details/81592954 , https://blog.csdn.net/JAdroid/article/details/80490679 and https://github.com/alibaba/druid/issues/2507 ; could it be a filter problem?
I checked the https://github.com/alibaba/nacos issues: nothing there either.
A pit this big can't have trapped just one person! It smelled like a version problem. Druid had been upgraded from 1.0.26 to 1.1.5, so I looked at the 1.1.5 setUsername method again:
public void setUsername(String username) {
if (!StringUtils.equals(this.username, username)) {
if (this.inited) {
throw new UnsupportedOperationException();
} else {
this.username = username;
}
}
}
Suddenly everything was clear!
So that's it: this equality check is indispensable! Nacos fetches the configuration every 30 s and then rebinds, but with druid-1.0.26 the rebind throws, and the error is never printed! Damn. druid-1.1.5 no longer has this problem! Checking the other setters showed they all gained the same !StringUtils.equals guard.
Upgrading was the right move! Nacos was never at fault: re-fetching the config every 30 s and rebinding is perfectly correct logic. The big pit was Druid's!!
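The before/after difference can be simulated end to end with one more small stand-in class (using Objects.equals where Druid uses its own StringUtils.equals; the class and values are invented for illustration):

```java
import java.util.Objects;

public class NewDruidSetterSketch {
    // Stand-in for the 1.1.x setter semantics: an unchanged value is a
    // no-op, so the periodic config-center rebind with identical values
    // no longer throws. A genuinely CHANGED value after init still does.
    private boolean inited;
    private String username;

    void init() {
        this.inited = true;
    }

    void setUsername(String username) {
        if (!Objects.equals(this.username, username)) {
            if (this.inited) {
                throw new UnsupportedOperationException();
            }
            this.username = username;
        }
    }

    public static void main(String[] args) {
        NewDruidSetterSketch ds = new NewDruidSetterSketch();
        ds.setUsername("elppm_user");
        ds.init();
        // Simulate a few periodic rebinds with the same config value:
        // with the equality guard, nothing throws.
        for (int i = 0; i < 3; i++) {
            ds.setUsername("elppm_user");
        }
        System.out.println("unchanged rebinds are no-ops");
    }
}
```

This is exactly why the problem vanished after the Druid upgrade without touching Nacos at all.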