Some asynchronous appenders are configured never to discard log events even when their queue is full; under high concurrency the append call then blocks, the request method effectively logs synchronously, and responses slow down.

Besides centralizing log output, the point of writing this thing is to provide a buffer at high-concurrency moments.

I first implemented this with Kafka, and the enqueue speed was underwhelming -- honestly slower than my writes to a local spinning disk... I don't know whether I was doing it wrong, but presumably because Kafka is so heavily optimized for sequential writes, on an SSD it was somehow even slower than on the HDD.

I then reimplemented it with Redis. Because of how Redis connections work, a pipeline is required to amortize the cost of acquiring a connection per command; with pipelining, the enqueue throughput is very good.
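
For illustration, here is roughly what pipelining buys (a minimal sketch, not the plugin itself; the host, port, key, and count are placeholders): commands are buffered client-side and flushed in one batch, so N pushes cost roughly one network round trip instead of N.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineDemo {
    public static void main(String[] args) {
        // Placeholders: point these at a real Redis instance.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            Pipeline pipeline = jedis.pipelined();
            for (int i = 0; i < 1000; i++) {
                // Buffered locally; nothing is sent to Redis yet.
                pipeline.lpush("demo-queue", "log line " + i);
            }
            // One flush sends the whole batch and reads all replies.
            pipeline.sync();
        }
    }
}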

Yes, you could call all this nonsense -- it adds program complexity and operational cost, which is hardly scientific; the proper way would be to build a distributed file system on a cluster of servers with SSD arrays and mount it on the application servers!

Because the pipeline may not have reached the size that triggers a write (53 args by default), if no new log entries happen to arrive, buffered entries could sit waiting forever. So the class runs a scheduled thread, and the flush period can be adjusted as needed.
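
The flush mechanism in isolation looks like this (a sketch under the same Jedis 2.9.0 API; flushPeriodSeconds is a hypothetical knob -- in the appender below the period is currently hard-coded at 5 seconds, and host/port are placeholders):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PeriodicFlushSketch {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379); // placeholder host/port
        Pipeline pipeline = jedis.pipelined();
        int flushPeriodSeconds = 5; // hypothetical parameter
        ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();
        // Even if no new log entries arrive to fill the pipeline,
        // buffered commands are pushed out at least once per period.
        flusher.scheduleAtFixedRate(pipeline::sync, flushPeriodSeconds, flushPeriodSeconds, TimeUnit.SECONDS);
        pipeline.lpush("demo-queue", "a stray entry"); // reaches Redis by the next sync()
        // Runs until the process is killed; a real appender would shut the executor down in stop().
    }
}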

My earlier take on this: logback KafkaAppender, writing into a Kafka queue for centralized log output.

Complete POM of the producer

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.lzw</groupId>
    <artifactId>logqueue</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.2.3</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <distributionManagement>
        <repository>
            <id>maven-releases</id>
            <name>maven-releases</name>
            <url>http://192.168.91.137:8081/repository/maven-releases/</url>
        </repository>
        <snapshotRepository>
            <id>maven-snapshots</id>
            <name>maven-snapshots</name>
            <url>http://192.168.91.137:8081/repository/maven-snapshots/</url>
        </snapshotRepository>
    </distributionManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

There is only one class, RedisAppender.java

package org.lzw.log.appender;

import ch.qos.logback.core.AppenderBase;
import ch.qos.logback.core.Layout;
import ch.qos.logback.core.status.ErrorStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Client;
import redis.clients.jedis.Pipeline;

import java.io.IOException;
import java.io.StringReader;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * User: laizhenwei
 */
public class RedisAppender<E> extends AppenderBase<E> {

    // Events that cannot reach Redis are routed to the "local" logger (see logback.xml).
    private static final Logger LOGGER = LoggerFactory.getLogger("local");
    private static ScheduledThreadPoolExecutor exec = new ScheduledThreadPoolExecutor(1);

    protected Layout<E> layout;
    private Pipeline pipeline;
    private Client client;
    private String queueKey;
    private String redisProperties;

    @Override
    public void start() {
        int errors = 0;
        if (this.layout == null) {
            this.addStatus(new ErrorStatus("No layout set for the appender named \"" + this.name + "\".", this));
            ++errors;
        }
        LOGGER.info("Starting RedisAppender...");
        final Properties properties = new Properties();
        try {
            properties.load(new StringReader(redisProperties));
            pipeline = new Pipeline();
            client = new Client(properties.get("host").toString(), Integer.parseInt(properties.get("port").toString()));
            pipeline.setClient(client);
        } catch (Exception exception) {
            ++errors;
            LOGGER.warn("Failed to connect to Redis, logging locally instead:", exception);
        }
        if (queueKey == null) {
            ++errors;
            System.out.println("queueKey is not configured");
        } else {
            System.out.println("Logs will go into the queue with key [" + queueKey + "]!");
        }
        if (errors == 0) {
            super.start();
            // Flush periodically so entries never sit in the pipeline indefinitely.
            exec.scheduleAtFixedRate(this::sync, 5, 5, TimeUnit.SECONDS);
        }
    }

    @Override
    public void stop() {
        super.stop();
        pipeline.sync();
        try {
            pipeline.close();
        } catch (IOException e) {
            LOGGER.warn("Stopping RedisAppender...", e);
        }
        LOGGER.info("Stopping RedisAppender...");
    }

    @Override
    protected void append(E event) {
        String msg = layout.doLayout(event);
        this.lpush(msg);
    }

    private void lpush(String msg) {
        try {
            pipeline.lpush(queueKey, msg);
        } catch (Exception ex) {
            LOGGER.warn(String.join(":", "Failed to push to Redis queue!", msg), ex);
        }
    }

    private void sync() {
        try {
            pipeline.sync();
        } catch (Exception ex) {
            // Flush failed: dump whatever the client buffered to the local log.
            List<Object> datas = client.getAll();
            datas.forEach(d -> LOGGER.warn(String.join(":", "Failed to push to Redis queue! Logging locally!", d.toString())));
        }
    }

    public String getQueueKey() {
        return queueKey;
    }

    public void setQueueKey(String queueKey) {
        this.queueKey = queueKey;
    }

    public void setLayout(Layout<E> layout) {
        this.layout = layout;
    }

    public String getRedisProperties() {
        return redisProperties;
    }

    public void setRedisProperties(String redisProperties) {
        this.redisProperties = redisProperties;
    }
}
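
One note on the wiring in start(): constructing a bare Pipeline and attaching a hand-built Client works, but it leans on Jedis internals. The conventional route (same Jedis 2.9.0 API; behavior should be equivalent -- a sketch, with the same host/port/key values used elsewhere in this post) lets Jedis create the Client itself:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineWiring {
    public static void main(String[] args) {
        // Same values the appender parses out of redisProperties.
        Jedis jedis = new Jedis("192.168.1.207", 6379);
        // Jedis builds and manages the underlying Client internally.
        Pipeline pipeline = jedis.pipelined();
        pipeline.lpush("basic-info-api", "smoke-test entry");
        pipeline.sync();
    }
}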

Done -- publish it to Nexus.

Consumer application-test.yml

spring:
  application:
    name: logconsumer
  profiles:
    # which profile to load: dev (development), prod (production), qa (testing)
    active: test

logKey:
  basic-info-api: basic-info-api

redisParam:
  host: 192.168.1.207
  port: 6379
  pool:
    maxIdle: 20
    maxTotal: 200
    maxWaitMillis: -1
    testOnBorrow: false
    testOnReturn: false

logback-test.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <contextName>logback</contextName>
    <property name="LOG_HOME" value="/logconsumer"/>

    <!-- basicInfoApi -->
    <appender name="basicInfoApiAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/basic-info-api/logback.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/basic-info-api/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/e9/e9-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll over to a new file at this size -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- async output for basicInfoApi -->
    <appender name="basicInfoApiAsyncFile" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="basicInfoApiAppender"/>
    </appender>

    <!-- package of the basicInfoApi consumer -->
    <logger name="org.lzw.logconsumer.consumer.BasicInfoApiConsumer" level="INFO" additivity="false">
        <appender-ref ref="basicInfoApiAsyncFile"/>
    </logger>

    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/logconsumer/logback.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/logconsumer/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/front/front-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll over to a new file at this size -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <root level="warn">
        <appender-ref ref="file"/>
    </root>
</configuration>
Startup class LogconsumerApplication.java

import org.lzw.logconsumer.consumer.BasicInfoApiConsumer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Component;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * User: laizhenwei
 */
@SpringBootApplication
public class LogconsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(LogconsumerApplication.class, args);
    }

    @Component
    public static class ConsumersStartup implements CommandLineRunner {

        ExecutorService executorService = Executors.newCachedThreadPool();

        @Autowired
        private BasicInfoApiConsumer basicInfoApiConsumer;

        @Override
        public void run(String... strings) {
            // Consume on a background thread so application startup is not blocked.
            executorService.execute(() -> basicInfoApiConsumer.writeLog());
        }
    }
}
RedisService.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

import javax.annotation.PostConstruct;

@Component
public class RedisService {

    Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${redisParam.host}")
    private String host;
    @Value("${redisParam.port}")
    private Integer port;
    @Value("${redisParam.pool.maxIdle}")
    private Integer maxIdle;
    @Value("${redisParam.pool.maxTotal}")
    private Integer maxTotal;
    @Value("${redisParam.pool.maxWaitMillis}")
    private Integer maxWaitMillis;
    @Value("${redisParam.pool.testOnBorrow}")
    private Boolean testOnBorrow;
    @Value("${redisParam.pool.testOnReturn}")
    private Boolean testOnReturn;

    private static JedisPoolConfig config = new JedisPoolConfig();
    private static JedisPool pool;

    @PostConstruct
    public void init() {
        config.setMaxIdle(maxIdle);
        config.setMaxTotal(maxTotal);
        config.setMaxWaitMillis(maxWaitMillis);
        config.setTestOnBorrow(testOnBorrow);
        config.setTestOnReturn(testOnReturn);
        pool = new JedisPool(config, host, port);
    }

    /**
     * Blocking right-pop: waits up to timeOut seconds (0 = forever) for an entry.
     */
    public String brpop(int timeOut, String key) {
        Jedis jedis = null;
        try {
            jedis = pool.getResource();
            // brpop returns [key, value]; index 1 is the popped value.
            return jedis.brpop(timeOut, key).get(1);
        } catch (Exception ex) {
            logger.warn("Redis consume error", ex);
            return "Redis consume error";
        } finally {
            if (jedis != null)
                jedis.close();
        }
    }
}
BasicInfoApiConsumer.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

/**
 * Log consumer
 */
@Component
public class BasicInfoApiConsumer {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${logKey.basic-info-api}")
    private String logKey;

    @Autowired
    private RedisService redisService;

    public void writeLog() {
        // Block on the Redis queue forever; each popped entry goes to the local file appender.
        while (true) {
            logger.info(redisService.brpop(0, logKey));
        }
    }
}

Grab any application to try it out; here I use basic-info-api.

pom.xml

        <dependency>
            <groupId>org.lzw</groupId>
            <artifactId>logqueue</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
        </dependency>

logback.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <contextName>logback</contextName>
    <property name="log.path" value="/home/logs/basic-info-api/logback.log"/>

    <appender name="redisAppender" class="org.lzw.log.appender.RedisAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>warn</level>
        </filter>
        <queueKey>basic-info-api</queueKey>
        <redisProperties>
            host=192.168.1.207
            port=6379
        </redisProperties>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
        </layout>
    </appender>

    <appender name="localAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>warn</level>
        </filter>
        <file>${log.path}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${log.path}.%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>
            <!-- days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll over to a new file at this size -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="asyncLocal" class="ch.qos.logback.classic.AsyncAppender">
        <!-- Never drop logs. By default, once the queue is 80% full, TRACE/DEBUG/INFO events are discarded. -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="localAppender"/>
    </appender>

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>debug</level>
        </filter>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- If the Redis queue is unreachable, fall back to the local file -->
    <logger name="local" additivity="false">
        <appender-ref ref="asyncLocal"/>
    </logger>

    <appender name="asyncRedisAppender" class="ch.qos.logback.classic.AsyncAppender">
        <!-- Never drop logs. By default, once the queue is 80% full, TRACE/DEBUG/INFO events are discarded. -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="redisAppender"/>
    </appender>

    <root level="warn">
        <appender-ref ref="asyncRedisAppender"/>
    </root>

    <logger name="org.springframework.session.web.http.SessionRepositoryFilter" level="error"/>
    <logger name="org.springframework.scheduling" level="error"/>
    <logger name="org.apache.catalina.util.LifecycleBase" level="error"/>
    <logger name="org.springframework.amqp" level="warn"/>
</configuration>

Write a long log entry

@Slf4j
@EnableEurekaClient
@EnableCircuitBreaker
@SpringBootApplication
public class BasicInfoApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(BasicInfoApiApplication.class, args);
    }

    @Component
    public static class ConsumersStartup implements CommandLineRunner {

        ExecutorService executorService = Executors.newCachedThreadPool();

        String msg = "--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379," +
"https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
"https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,h" +
"ttps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,ht" +
"tps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,http" +
"s://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://" +
"192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.1" +
"68.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91." +
"139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:23" +
"79,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,http" +
"s://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192" +
".168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.9" +
"1.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140" +
":2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379-" +
"-endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoi" +
"nts=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=h" +
"ttps://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https:/" +
"/192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192." +
"168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91" +
".138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379";

        @Override
        public void run(String... strings) {
            long begin = System.nanoTime();
            for (int i = 0; i < 10000; i++)
                executorService.execute(() -> log.warn(msg));
            executorService.shutdown();
            // Spin until the pool has drained, then print elapsed milliseconds.
            for (;;) {
                if (executorService.isTerminated()) {
                    System.out.println((System.nanoTime() - begin) / 1000000);
                    break;
                }
            }
        }
    }
}

Output: 1328 milliseconds. That is just the time for 10,000 entries to enter the queue -- over a second, which still feels slow. And at that point nothing had fully reached the disk: the consumer program was still busily draining the queue.
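
As an aside on the measurement itself: the busy-wait loop above spins a core until the pool terminates. awaitTermination gives the same number without the spin (a sketch; the log call is stubbed out, and the one-minute timeout is an arbitrary choice):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EnqueueTiming {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        long begin = System.nanoTime();
        for (int i = 0; i < 10000; i++)
            pool.execute(() -> { /* log.warn(msg) in the real test */ });
        pool.shutdown();
        // Blocks until all submitted tasks finish (or the timeout), no spinning required.
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println((System.nanoTime() - begin) / 1000000 + " ms");
    }
}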
