Some async logging setups are configured to never drop events even when the queue is full, so under high concurrency the calling method ends up doing the append synchronously and response times degrade.

Besides centralizing log output, the point of this little tool is to act as a buffer at high-concurrency moments.

I first implemented this with Kafka, and the enqueue speed was disappointing: honestly, slower than writing to my local mechanical disk. Maybe I was using it wrong. My guess is that Kafka is so heavily optimized for sequential writes that on an SSD it actually came out slower than on the spinning disk.

Then I reimplemented it with Redis. Because of how Redis connections work, a pipeline is a must to avoid the cost of acquiring a connection per command, and with it the enqueue throughput is quite good.
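For a sense of what pipelining buys, here is a minimal standalone sketch of the usual Jedis pipeline pattern (host, port and key here are placeholders; the appender below instead drives a Pipeline over a raw Client so it can hold one long-lived connection):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Pipeline;

    public class PipelineSketch {
        public static void main(String[] args) {
            // Placeholder host/port; point these at a real Redis instance.
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                Pipeline pipeline = jedis.pipelined();
                for (int i = 0; i < 1000; i++) {
                    // Each command is buffered client-side; no round trip per LPUSH.
                    pipeline.lpush("demo-queue", "log line " + i);
                }
                // One flush sends the whole batch and drains the replies.
                pipeline.sync();
            }
        }
    }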

You could fairly call all of this nonsense: it adds program complexity and operations cost, which is hardly scientific. The "proper" answer is a distributed filesystem built on servers with SSD arrays, mounted onto the application servers!

If the pipeline has not yet accumulated enough to trigger a write-out (53 args by default) and no new log entries happen to arrive, it could sit waiting to flush indefinitely. So the appender runs a scheduled thread whose flush period can be adjusted as needed.

(The earlier attempt is written up separately: logback KafkaAppender, writing to a Kafka queue for centralized log output.)

Complete POM for the producer side:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>

        <groupId>org.lzw</groupId>
        <artifactId>logqueue</artifactId>
        <version>1.0-SNAPSHOT</version>

        <dependencies>

            <dependency>
                <groupId>redis.clients</groupId>
                <artifactId>jedis</artifactId>
                <version>2.9.0</version>
                <scope>provided</scope>
            </dependency>

            <dependency>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-core</artifactId>
                <version>1.2.3</version>
                <scope>provided</scope>
            </dependency>

            <dependency>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-api</artifactId>
                <version>1.7.25</version>
                <scope>provided</scope>
            </dependency>

        </dependencies>

        <distributionManagement>

            <repository>
                <id>maven-releases</id>
                <name>maven-releases</name>
                <url>http://192.168.91.137:8081/repository/maven-releases/</url>
            </repository>

            <snapshotRepository>
                <id>maven-snapshots</id>
                <name>maven-snapshots</name>
                <url>http://192.168.91.137:8081/repository/maven-snapshots/</url>
            </snapshotRepository>

        </distributionManagement>

        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>1.8</source>
                        <target>1.8</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>

    </project>
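All three dependencies are deliberately provided: the application that uses the appender already has logback and slf4j, and declares jedis itself (see the basic-info-api pom.xml further down), so the logqueue jar stays slim and version conflicts stay visible.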

There is just one class, RedisAppender.java:

    package org.lzw.log.appender;

    import ch.qos.logback.core.AppenderBase;
    import ch.qos.logback.core.Layout;
    import ch.qos.logback.core.status.ErrorStatus;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import redis.clients.jedis.Client;
    import redis.clients.jedis.Pipeline;

    import java.io.IOException;
    import java.io.StringReader;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ScheduledThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    /**
     * User: laizhenwei
     */
    public class RedisAppender<E> extends AppenderBase<E> {

        // Fallback logger: the "local" logger is wired to an async rolling-file
        // appender in logback.xml, so failures here still reach local disk.
        private static final Logger LOGGER = LoggerFactory.getLogger("local");
        private static ScheduledThreadPoolExecutor exec = new ScheduledThreadPoolExecutor(1);

        protected Layout<E> layout;
        private Pipeline pipeline;
        private Client client;
        private String queueKey;
        private String redisProperties;

        @Override
        public void start() {
            int errors = 0;
            if (this.layout == null) {
                this.addStatus(new ErrorStatus("No layout set for the appender named \"" + this.name + "\".", this));
                ++errors;
            }

            LOGGER.info("Starting RedisAppender...");
            final Properties properties = new Properties();
            try {
                // redisProperties arrives as "key=value" lines from logback.xml
                properties.load(new StringReader(redisProperties));
                pipeline = new Pipeline();
                client = new Client(properties.get("host").toString(), Integer.parseInt(properties.get("port").toString()));
                pipeline.setClient(client);
            } catch (Exception exception) {
                ++errors;
                LOGGER.warn("Failed to connect to Redis; falling back to the local log:", exception);
            }
            if (queueKey == null) {
                ++errors;
                System.out.println("queueKey is not configured");
            } else {
                System.out.println("Logs will be pushed to the queue with key [" + queueKey + "]!");
            }

            if (errors == 0) {
                super.start();
                // Periodic flush so buffered entries get written out even when no
                // new log events arrive; adjust the period to taste.
                exec.scheduleAtFixedRate(this::sync, 5, 5, TimeUnit.SECONDS);
            }
        }

        @Override
        public void stop() {
            super.stop();
            if (pipeline != null) {
                pipeline.sync();
                try {
                    pipeline.close();
                } catch (IOException e) {
                    LOGGER.warn("Stopping RedisAppender...", e);
                }
            }
            LOGGER.info("Stopping RedisAppender...");
        }

        @Override
        protected void append(E event) {
            String msg = layout.doLayout(event);
            this.lpush(msg);
        }

        private void lpush(String msg) {
            try {
                pipeline.lpush(queueKey, msg);
            } catch (Exception ex) {
                LOGGER.warn(String.join(":", "Failed to push to the redis queue!", msg), ex);
            }
        }

        private void sync() {
            try {
                pipeline.sync();
            } catch (Exception ex) {
                // Flush failed: salvage whatever the client buffered and log it locally.
                List<Object> datas = client.getAll();
                datas.forEach(d -> LOGGER.warn(String.join(":", "Failed to push to the redis queue! Logging locally!", d.toString())));
            }
        }

        public String getQueueKey() {
            return queueKey;
        }

        public void setQueueKey(String queueKey) {
            this.queueKey = queueKey;
        }

        public void setLayout(Layout<E> layout) {
            this.layout = layout;
        }

        public String getRedisProperties() {
            return redisProperties;
        }

        public void setRedisProperties(String redisProperties) {
            this.redisProperties = redisProperties;
        }
    }
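One caveat on this implementation: a Jedis Pipeline is not thread-safe. AppenderBase.doAppend() is synchronized, and on the producer side the appender is additionally wrapped in an AsyncAppender (see logback.xml below), so a single worker thread performs the lpush calls; the scheduled flush thread can still race against an in-flight lpush, though. In my testing this was good enough, but a stricter version would synchronize lpush() and sync() on the pipeline.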

That's it. Publish it to Nexus.
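With the distributionManagement section in the POM above, publishing is just a plain mvn clean deploy.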

The consumer's application-test.yml:

    spring:
      application:
        name: logconsumer
      profiles:
        # which config profile to load: dev (development), prod (production), qa (testing)
        active: test

    logKey:
      basic-info-api: basic-info-api

    redisParam:
      host: 192.168.1.207
      port: 6379
      pool:
        maxIdle: 20
        maxTotal: 200
        maxWaitMillis: -1
        testOnBorrow: false
        testOnReturn: false
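One detail: maxWaitMillis: -1 tells the Jedis pool to block indefinitely when all connections are busy, rather than failing after a timeout.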

logback-test.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration debug="true">
        <contextName>logback</contextName>
        <property name="LOG_HOME" value="/logconsumer"/>

        <!-- basicInfoApi -->
        <appender name="basicInfoApiAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOG_HOME}/basic-info-api/logback.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME}/basic-info-api/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
                <!--<fileNamePattern>${LOG_HOME}/e9/e9-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
                <!-- days of log history to keep -->
                <MaxHistory>30</MaxHistory>
                <!-- roll to a new file at this size -->
                <MaxFileSize>50MB</MaxFileSize>
                <totalSizeCap>10GB</totalSizeCap>
            </rollingPolicy>
            <encoder>
                <pattern>%msg%n</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>

        <!-- async output for basicInfoApi -->
        <appender name="basicInfoApiAasyncFile" class="ch.qos.logback.classic.AsyncAppender">
            <discardingThreshold>0</discardingThreshold>
            <queueSize>2048</queueSize>
            <appender-ref ref="basicInfoApiAppender"/>
        </appender>

        <!-- package path of the basicInfoApi consumer -->
        <logger name="org.lzw.logconsumer.consumer.BasicInfoApiConsumer" level="INFO" additivity="false">
            <appender-ref ref="basicInfoApiAasyncFile"/>
        </logger>

        <!-- ############################## divider ############################## -->

        <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOG_HOME}/logconsumer/logback.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${LOG_HOME}/logconsumer/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
                <!--<fileNamePattern>${LOG_HOME}/front/front-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
                <!-- days of log history to keep -->
                <MaxHistory>30</MaxHistory>
                <!-- roll to a new file at this size -->
                <MaxFileSize>50MB</MaxFileSize>
                <totalSizeCap>1GB</totalSizeCap>
            </rollingPolicy>

            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>

        <root level="warn">
            <appender-ref ref="file" />
        </root>

    </configuration>
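Note the encoder pattern for the per-application appender is just %msg%n: every entry arriving from Redis was already fully formatted by the producer's layout, so the consumer writes it through unchanged.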
Startup class LogconsumerApplication.java:
    package org.lzw.logconsumer; // root package, inferred from the consumer logger name in logback-test.xml

    import org.lzw.logconsumer.consumer.BasicInfoApiConsumer;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.stereotype.Component;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    /**
     * User: laizhenwei
     */
    @SpringBootApplication
    public class LogconsumerApplication {

        public static void main(String[] args) {
            SpringApplication.run(LogconsumerApplication.class, args);
        }

        @Component
        public static class ConsumersStartup implements CommandLineRunner {

            ExecutorService executorService = Executors.newCachedThreadPool();

            @Autowired
            private BasicInfoApiConsumer basicInfoApiConsumer;

            @Override
            public void run(String... strings) {
                executorService.execute(() -> basicInfoApiConsumer.writeLog());
            }
        }
    }
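writeLog() blocks forever on the queue, so the CommandLineRunner hands it off to a cached thread pool instead of running it inline and stalling application startup.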
RedisService.java:
    package org.lzw.logconsumer.service; // exact subpackage not shown in the original; anything under org.lzw.logconsumer is component-scanned

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    import javax.annotation.PostConstruct;

    @Component
    public class RedisService {

        Logger logger = LoggerFactory.getLogger(this.getClass());

        @Value("${redisParam.host}")
        private String host;

        @Value("${redisParam.port}")
        private Integer port;

        @Value("${redisParam.pool.maxIdle}")
        private Integer maxIdle;

        @Value("${redisParam.pool.maxTotal}")
        private Integer maxTotal;

        @Value("${redisParam.pool.maxWaitMillis}")
        private Integer maxWaitMillis;

        @Value("${redisParam.pool.testOnBorrow}")
        private Boolean testOnBorrow;

        @Value("${redisParam.pool.testOnReturn}")
        private Boolean testOnReturn;

        private static JedisPoolConfig config = new JedisPoolConfig();
        private static JedisPool pool;

        @PostConstruct
        public void init() {
            config.setMaxIdle(maxIdle);
            config.setMaxTotal(maxTotal);
            config.setMaxWaitMillis(maxWaitMillis);
            config.setTestOnBorrow(testOnBorrow);
            config.setTestOnReturn(testOnReturn);
            pool = new JedisPool(config, host, port);
        }

        public String brpop(int timeOut, String key) {
            Jedis jedis = null;
            try {
                jedis = pool.getResource();
                // BRPOP returns a [key, value] pair; index 1 is the popped element.
                return jedis.brpop(timeOut, key).get(1);
            } catch (Exception ex) {
                logger.warn("redis consume failed", ex);
                return "redis consume failed";
            } finally {
                if (jedis != null)
                    jedis.close();
            }
        }

    }
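The producer LPUSHes onto the head of the list and this consumer BRPOPs from the tail, so the Redis list behaves as a FIFO queue; a timeout of 0 makes BRPOP block until an element arrives.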
BasicInfoApiConsumer.java:
    package org.lzw.logconsumer.consumer;

    import org.lzw.logconsumer.service.RedisService; // matches the assumed package above
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;

    /**
     * Log consumer
     */
    @Component
    public class BasicInfoApiConsumer {

        private Logger logger = LoggerFactory.getLogger(this.getClass());

        @Value("${logKey.basic-info-api}")
        private String logKey;

        @Autowired
        private RedisService redisService;

        public void writeLog() {
            // Block on the queue forever; each popped entry goes through this
            // class's logger, which logback routes to the rolling file.
            while (true) {
                logger.info(redisService.brpop(0, logKey));
            }
        }

    }

Grab any application to try it out; here I'll use basic-info-api.

pom.xml (added dependencies):

    <dependency>
        <groupId>org.lzw</groupId>
        <artifactId>logqueue</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>

    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>2.9.0</version>
    </dependency>
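jedis has to be declared explicitly here because the logqueue POM marks it provided; the appender expects the host application to bring its own client.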

logback.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration debug="true">
        <contextName>logback</contextName>
        <property name="log.path" value="/home/logs/basic-info-api/logback.log"/>

        <appender name="redisAppender" class="org.lzw.log.appender.RedisAppender">
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>warn</level>
            </filter>
            <queueKey>basic-info-api</queueKey>
            <redisProperties>
                host=192.168.1.207
                port=6379
            </redisProperties>
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            </layout>
        </appender>

        <appender name="localAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>warn</level>
            </filter>
            <file>${log.path}</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
                <fileNamePattern>${log.path}.%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>
                <!-- days of log history to keep -->
                <MaxHistory>30</MaxHistory>
                <!-- roll to a new file at this size -->
                <MaxFileSize>50MB</MaxFileSize>
                <totalSizeCap>10GB</totalSizeCap>
            </rollingPolicy>

            <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
                <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>

        <appender name="asyncLocal" class="ch.qos.logback.classic.AsyncAppender">
            <!-- never drop events: by default, once the queue is 80% full,
                 TRACE/DEBUG/INFO events are discarded -->
            <discardingThreshold>0</discardingThreshold>
            <queueSize>2048</queueSize>
            <appender-ref ref="localAppender"/>
        </appender>

        <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>debug</level>
            </filter>
            <encoder>
                <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n
                </pattern>
            </encoder>
        </appender>

        <!-- if the redis queue is unreachable, log locally -->
        <logger name="local" additivity="false">
            <appender-ref ref="asyncLocal"/>
        </logger>

        <appender name="asyncRedisAppender" class="ch.qos.logback.classic.AsyncAppender">
            <!-- never drop events: by default, once the queue is 80% full,
                 TRACE/DEBUG/INFO events are discarded -->
            <discardingThreshold>0</discardingThreshold>
            <queueSize>2048</queueSize>
            <appender-ref ref="redisAppender"/>
        </appender>

        <root level="warn">
            <appender-ref ref="asyncRedisAppender"/>
        </root>
        <logger name="org.springframework.session.web.http.SessionRepositoryFilter" level="error"/>
        <logger name="org.springframework.scheduling" level="error"/>
        <logger name="org.apache.catalina.util.LifecycleBase" level="error"/>
        <logger name="org.springframework.amqp" level="warn"/>
    </configuration>
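Two wiring details worth noting: the "local" logger is exactly what RedisAppender obtains via LoggerFactory.getLogger("local"), so when a push or flush fails those entries land in the async local file appender; and additivity="false" keeps them from bubbling up to the root logger, which would send them straight back into the Redis appender.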

Now write a long log message:

    import lombok.extern.slf4j.Slf4j;
    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
    import org.springframework.stereotype.Component;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    @Slf4j
    @EnableEurekaClient
    @EnableCircuitBreaker
    @SpringBootApplication
    public class BasicInfoApiApplication {
        public static void main(String[] args) {
            SpringApplication.run(BasicInfoApiApplication.class, args);
        }

        @Component
        public static class ConsumersStartup implements CommandLineRunner {

            ExecutorService executorService = Executors.newCachedThreadPool();

            // A long message: the same "--endpoints=..." segment repeated many times.
            String msg = "--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379," +
                    "https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                    "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,h" +
                    "ttps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,ht" +
                    "tps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,http" +
                    "s://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://" +
                    "192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.1" +
                    "68.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91." +
                    "139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:23" +
                    "79,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,http" +
                    "s://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192" +
                    ".168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.9" +
                    "1.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140" +
                    ":2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379-" +
                    "-endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoi" +
                    "nts=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=h" +
                    "ttps://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https:/" +
                    "/192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192." +
                    "168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91" +
                    ".138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379";

            @Override
            public void run(String... strings) {
                long begin = System.nanoTime();
                for (int i = 0; i < 10000; i++)
                    executorService.execute(() -> log.warn(msg));
                executorService.shutdown();
                // Busy-wait until every append task finishes, then print elapsed ms.
                for (;;) {
                    if (executorService.isTerminated()) {
                        System.out.println((System.nanoTime() - begin) / 1000000);
                        break;
                    }
                }
            }
        }
    }

Output: 1328 ms. That is just the time for the messages to get into the queue, and over a second still feels slow. Also, enqueued does not mean flushed to disk: the consumer program keeps draining the queue for a good while afterwards.
