To reduce disk I/O on the application servers, and to centralize logs on one machine where ELK can conveniently collect them, I decided to build a jar that lets the applications ship their logs to a central place.

Searching around online, the only thing I found was a project someone had put on GitHub:

Address: https://github.com/johnmpage/logback-kafka

I originally planned to just reference that jar and be done with it, but it could not be resolved from the pom, so I downloaded the source, copied the code over, and modified it myself.

First, install Kafka. Being a supremely lazy programmer, my installation method of choice is Docker.

Start the ZooKeeper container:

docker run -d --name zookeeper --net=host  -t wurstmeister/zookeeper

Start the Kafka container:

docker run --name kafka -d -e HOST_IP=192.168.1.7 --net=host -v /usr/local/docker/kafka/conf/server.properties:/opt/kafka_2.-1.0./config/server.properties  -v /etc/localtime:/etc/localtime:ro -e KAFKA_ADVERTISED_PORT= -e KAFKA_BROKER_ID= -t wurstmeister/kafka

You need to adjust the ZooKeeper settings in Kafka's server.properties (mounted into the container above).

The config file is as follows; a quick broker-connectivity check in Java follows it.

listeners=PLAINTEXT://192.168.1.7:9092
delete.topic.enable=true
advertised.listeners=PLAINTEXT://192.168.1.7:9092
num.network.threads=
num.io.threads=
socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=
log.dirs=/kafka/kafka-logs-92cfb0bbd88c
num.partitions=
num.recovery.threads.per.data.dir=
offsets.topic.replication.factor=
transaction.state.log.replication.factor=
transaction.state.log.min.isr=
log.retention.hours=
log.retention.bytes=
log.segment.bytes=
log.retention.check.interval.ms=
zookeeper.connect=192.168.1.7:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=
group.initial.rebalance.delay.ms=
version=1.0.
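
Before wiring anything up, it can be worth confirming the broker really is reachable on 192.168.1.7:9092. This is a minimal sketch (not part of the original post) using the AdminClient that ships with kafka-clients; the address is simply the one used throughout this article:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;
import java.util.Set;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same broker address as in server.properties above
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.7:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Throws if the broker cannot be reached
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("Broker reachable, topics: " + topics);
        }
    }
}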

With Kafka up, create a new Spring Boot project. First, the application that consumes the queue.

The pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lzw</groupId>
<artifactId>kafkalog</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>kafkalog</name>
<description>Demo project for Spring Boot</description>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.0.M6</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</project>

Project structure

KafkaConfig

package com.lzw.kafkalog.config;

// Imports restored for completeness; the Areceiver/Breceiver packages are assumed
// from the logger names (com.lzw.kafkalog.a / com.lzw.kafkalog.b) used in logback-test.xml.
import com.lzw.kafkalog.a.Areceiver;
import com.lzw.kafkalog.b.Breceiver;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

import java.util.HashMap;
import java.util.Map;

/**
 * Created by laizhenwei on 2017/11/28
 */
@Configuration
@EnableKafka
public class KafkaConfig {

    @Value("${spring.kafka.consumer.bootstrap-servers}")
    private String consumerBootstrapServers;

    @Value("${spring.kafka.producer.bootstrap-servers}")
    private String producerBootstrapServers;

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, consumerBootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "foo");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean
    public Areceiver areceiver() {
        return new Areceiver();
    }

    @Bean
    public Breceiver breceiver() {
        return new Breceiver();
    }
}
KafkaAdminConfig

package com.lzw.kafkalog.config;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

import java.util.HashMap;
import java.util.Map;

/**
 * Created by laizhenwei on 2017/11/28
 */
@Configuration
public class KafkaAdminConfig {

    @Value("${spring.kafka.producer.bootstrap-servers}")
    private String producerBootstrapServers;

    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, producerBootstrapServers);
        return new KafkaAdmin(configs);
    }

    /**
     * Create topic A with 1 partition
     * @return
     */
    @Bean
    public NewTopic a() {
        return new NewTopic("A", 1, (short) 1);
    }

    /**
     * Create topic B with 1 partition
     * @return
     */
    @Bean
    public NewTopic b() {
        return new NewTopic("B", 1, (short) 1);
    }
}

The consumer for topic B

package com.lzw.kafkalog.b;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;

/**
 * Created by laizhenwei on 2017/11/28
 */
public class Breceiver {
    Logger logger = LoggerFactory.getLogger(this.getClass());

    @KafkaListener(topics = {"B"})
    public void listen(ConsumerRecord data) {
        logger.info(data.value().toString());
    }
}
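
KafkaConfig above also registers an Areceiver bean whose source is not shown in the post. Presumably it mirrors Breceiver, listens on topic A, and writes into the com.lzw.kafkalog.a logger that the logback config routes to a.log. A sketch under that assumption:

package com.lzw.kafkalog.a;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;

// Hypothetical counterpart of Breceiver, assumed to listen on topic A
public class Areceiver {
    Logger logger = LoggerFactory.getLogger(this.getClass());

    @KafkaListener(topics = {"A"})
    public void listen(ConsumerRecord data) {
        logger.info(data.value().toString());
    }
}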

application.yml

spring:
  kafka:
    consumer:
      bootstrap-servers: 192.168.1.7:9092
    producer:
      bootstrap-servers: 192.168.1.7:9092

logback-test.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <contextName>logback</contextName>
    <property name="LOG_HOME" value="F:/log" />

    <appender name="aAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/a/a.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/a/a-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/a/a-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- number of days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>100MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="bAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/b/b.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/b/b-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/b/b-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- number of days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>100MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <!-- asynchronous output -->
    <appender name="aAsyncFile" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="aAppender" />
    </appender>
    <logger name="com.lzw.kafkalog.a" level="INFO" additivity="false">
        <appender-ref ref="aAsyncFile" />
    </logger>

    <!-- asynchronous output -->
    <appender name="bAsyncFile" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="bAppender" />
    </appender>
    <logger name="com.lzw.kafkalog.b" level="INFO" additivity="false">
        <appender-ref ref="bAsyncFile" />
    </logger>
</configuration>

Next, the application that produces the log messages. The key part (red-boxed in the original screenshots) is the custom KafkaAppender.

Here is that source. I originally wanted to build some fault tolerance into it, but it turned out not to work; I will explain why later.

package com.lzw.project_b.kafka;

import ch.qos.logback.core.AppenderBase;
import ch.qos.logback.core.Layout;
import ch.qos.logback.core.status.ErrorStatus;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.StringReader;
import java.util.Properties;

public class KafkaAppender<E> extends AppenderBase<E> {

    protected Layout<E> layout;
    // The "local" logger is routed to a local file appender by logback-test.xml
    private static final Logger LOGGER = LoggerFactory.getLogger("local");
    private boolean logToLocal = false;
    private String kafkaProducerProperties;
    private String topic;
    private KafkaProducer<String, String> producer;

    @Override
    public void start() {
        int errors = 0;
        if (this.layout == null) {
            this.addStatus(new ErrorStatus("No layout set for the appender named \"" + this.name + "\".", this));
            ++errors;
        }
        if (errors == 0) {
            super.start();
        }
        LOGGER.info("Starting KafkaAppender...");
        final Properties properties = new Properties();
        try {
            properties.load(new StringReader(kafkaProducerProperties));
            producer = new KafkaProducer<>(properties);
        } catch (Exception exception) {
            System.out.println("KafkaAppender: Exception initializing Producer. " + exception + " : " + exception.getMessage());
        }
        System.out.println("KafkaAppender: Producer initialized: " + producer);
        if (topic == null) {
            System.out.println("KafkaAppender requires a topic. Add this to the appender configuration.");
        } else {
            System.out.println("KafkaAppender will publish messages to the '" + topic + "' topic.");
        }
        LOGGER.info("kafkaProducerProperties = {}", kafkaProducerProperties);
        LOGGER.info("Kafka Producer Properties = {}", properties);
        if (logToLocal) {
            LOGGER.info("KafkaAppender: kafkaProducerProperties = '" + kafkaProducerProperties + "'.");
            LOGGER.info("KafkaAppender: properties = '" + properties + "'.");
        }
    }

    @Override
    public void stop() {
        super.stop();
        LOGGER.info("Stopping KafkaAppender...");
        producer.close();
    }

    @Override
    protected void append(E event) {
        /**
         * The original source converted the event to JSON here with a Formatter class
         */
        String msg = layout.doLayout(event);
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topic, msg);
        producer.send(producerRecord);
    }

    public String getTopic() {
        return topic;
    }

    public void setTopic(String topic) {
        this.topic = topic;
    }

    public boolean getLogToLocal() {
        return logToLocal;
    }

    public void setLogToLocal(String logToLocal) {
        if (Boolean.valueOf(logToLocal)) {
            this.logToLocal = true;
        }
    }

    public void setLayout(Layout<E> layout) {
        this.layout = layout;
    }

    public String getKafkaProducerProperties() {
        return kafkaProducerProperties;
    }

    public void setKafkaProducerProperties(String kafkaProducerProperties) {
        this.kafkaProducerProperties = kafkaProducerProperties;
    }
}
LogService just logs a long chunk of junk text:
package com.lzw.project_b.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

/**
 * Created by laizhenwei on 2017/12/1
 */
@Component
public class LogService {

    Logger logger = LoggerFactory.getLogger(this.getClass());

    private static final String msg = "asdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfa" +
"sdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdf" +
"sadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfa" +
"sdfsadfasdfsadfasdfsaasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsa" +
"dfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadf" +
"asdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfas" +
"dfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsa" +
"dfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfas" +
"dfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadf" +
"sdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfa" +
"sdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsadfasdfsa"; public void dolog() {
logger.info(msg, new RuntimeException(msg));
} }
KafkaLogController is just a boring endpoint that fires off the log calls and records how long enqueuing takes:
package com.lzw.project_b.controller;

import com.lzw.project_b.service.LogService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * Created by laizhenwei on 2017/11/29
 */
@RestController
@RequestMapping(path = "/kafka")
public class KafkaLogController {

    @Autowired
    private LogService logService;

    @GetMapping(path = "/aa")
    public void aa() {
        long begin = System.nanoTime();
        for (int i = 0; i < 100000; i++) {
            logService.dolog();
        }
        long end = System.nanoTime();
        // elapsed time in milliseconds
        System.out.println((end - begin) / 1000000);
    }
}

Start both applications and send a request.

Then check the elapsed time printed by the controller.

The producer's logback-test.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <appender name="KAFKA" class="com.lzw.project_b.kafka.KafkaAppender">
        <topic>B</topic>
        <kafkaProducerProperties>
            bootstrap.servers=192.168.1.7:9092
            retries=0
            value.serializer=org.apache.kafka.common.serialization.StringSerializer
            key.serializer=org.apache.kafka.common.serialization.StringSerializer
            <!--reconnect.backoff.ms=1-->
            producer.type=async
            request.required.acks=0
            <!--acks=0-->
            <!--producer.type=async -->
            <!--request.required.acks=1 -->
            <!--queue.buffering.max.ms=20000 -->
            <!--queue.buffering.max.messages=1000-->
            <!--queue.enqueue.timeout.ms = -1 -->
            <!--batch.num.messages=8-->
            <!--metadata.fetch.timeout.ms=3000-->
            <!--producer.type=sync-->
            <!--request.required.acks=1-->
            <!--reconnect.backoff.ms=3000-->
            <!--retry.backoff.ms=3000-->
        </kafkaProducerProperties>
        <logToLocal>true</logToLocal>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
        </layout>
    </appender>

    <!-- time-based rolling output to a local file -->
    <appender name="localAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>F:/localLog/b/b.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>F:/localLog/b/b-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>
            <!-- number of days to keep log files -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>200MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="asyncLocal" class="ch.qos.logback.classic.AsyncAppender">
        <!-- do not drop logs; by default, once the queue is 80% full, TRACE/DEBUG/INFO events are discarded -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="localAppender"/>
    </appender>

    <!-- fall back to local logging in case the Kafka queue is unreachable -->
    <logger name="local" additivity="false">
        <appender-ref ref="asyncLocal"/>
    </logger>

    <!--<appender name="asyncKafka" class="ch.qos.logback.classic.AsyncAppender">-->
        <!--&lt;!&ndash; do not drop logs; by default, once the queue is 80% full, TRACE/DEBUG/INFO events are discarded &ndash;&gt;-->
        <!--<discardingThreshold>0</discardingThreshold>-->
        <!--<queueSize>2048</queueSize>-->
        <!--<appender-ref ref="KAFKA"/>-->
    <!--</appender>-->

    <root level="INFO">
        <appender-ref ref="KAFKA"/>
    </root>
</configuration>

As for why I did not keep the JSON Formatter from the original source: converting each event to JSON takes extra time and hurts performance. The original used json-simple; I swapped it for Gson, which was much faster, but there is still a noticeable cost. If the logs really need to end up as JSON,

I would rather do that conversion inside ELK than spend application time on it.
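
For reference, a minimal sketch of what the Gson-based conversion looked like in my experiment (a reconstruction under assumptions, not the original Formatter code): turn the interesting ILoggingEvent fields into a map and serialize it with Gson before handing the string to the producer.

import ch.qos.logback.classic.spi.ILoggingEvent;
import com.google.gson.Gson;

import java.util.HashMap;
import java.util.Map;

public class GsonEventFormatter {

    private static final Gson GSON = new Gson();

    // Hypothetical helper: serialize only the fields we care about instead of the whole event
    public static String toJson(ILoggingEvent event) {
        Map<String, Object> fields = new HashMap<>();
        fields.put("timestamp", event.getTimeStamp());
        fields.put("level", event.getLevel().toString());
        fields.put("thread", event.getThreadName());
        fields.put("logger", event.getLoggerName());
        fields.put("message", event.getFormattedMessage());
        return GSON.toJson(fields);
    }
}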

On the producer side I used the most extreme "one way" (fire-and-forget) mode: it gives the highest throughput, but there is no way to know whether a message actually made it onto the queue.
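
With the current producer API, that fire-and-forget mode essentially comes down to acks=0 and ignoring the send result. A sketch of the equivalent properties (my assumed mapping of the legacy producer.type/request.required.acks settings used in the XML above):

import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class OneWayProducerProps {
    public static Properties oneWay() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.7:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "0");    // do not wait for any broker acknowledgement
        props.put(ProducerConfig.RETRIES_CONFIG, "0"); // no retries
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}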

Note that in the producer, Logback must log synchronously (the KAFKA appender attached directly, not wrapped in an AsyncAppender) for the measurement to objectively reflect the cost of enqueuing.

Summary

Fault tolerance: I tried to write some fault-tolerance code into the producer so that whenever the connection to Kafka is down, or the queue cannot be written to, messages are logged locally instead. But when I shut Kafka down to test it, the producer simply blocked and kept reconnecting; the application was essentially dead.

I looked for a long time and never found a way to turn the reconnecting off.
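
For what it is worth, here is the direction I was attempting, sketched as an assumption rather than a proven fix: cap how long the producer may block with max.block.ms and use the async send callback to divert failed records to the local logger. The producer still buffers and reconnects internally, so this does not fully solve the blocking problem I ran into.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;

public class FallbackSender {

    // Same "local" logger that logback-test.xml routes to the local file appender
    private static final Logger LOCAL = LoggerFactory.getLogger("local");

    private final KafkaProducer<String, String> producer;
    private final String topic;

    public FallbackSender(String topic) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.7:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Cap how long send() may block waiting for metadata / buffer space
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "2000");
        props.put(ProducerConfig.RETRIES_CONFIG, "0");
        this.producer = new KafkaProducer<>(props);
        this.topic = topic;
    }

    public void send(String msg) {
        try {
            // Async send: the callback fires when the broker acks or the send fails
            producer.send(new ProducerRecord<>(topic, msg), (metadata, exception) -> {
                if (exception != null) {
                    LOCAL.info(msg); // Kafka unavailable: keep the message locally
                }
            });
        } catch (Exception e) {
            // send() itself can throw, e.g. when max.block.ms is exceeded
            LOCAL.info(msg);
        }
    }
}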

Flexibility: compared with a Redis queue, Kafka is rather awkward here (in my scenario I also have to keep the Kafka queue itself available; the performance gain is small and the maintenance cost goes up).

Performance: I tested on both an SSD and a mechanical disk. Because Kafka understands spinning disks well and is heavily optimized for sequential writes, it performed roughly 30% better on the mechanical disk than on the SSD in my tests. Low cost seems to be the selling point.

Enqueue performance is not that high; in fact it does not beat writing directly to the local disk (remember that after enqueuing, the consumer still has to write to disk, and the queue itself is persisted to disk as well, so everything is written twice).

Developer experience: the Java client is said to be one of the better-maintained ones.

Bottom line: it does not suit my use case, and I have not used it in depth. In the end I chose Redis as the queue.

I also could not find a way to turn off Kafka's persistence, so everything hits the disk twice, and in some scenarios the logs simply do not need to be loss-proof. A Redis queue is much more flexible: when the queue cannot be written to, I can fall back to the local disk. Redis enqueues and dequeues fast, memory pressure stays low, CPU usage is modest, and in my opinion, when the data is not critically important, the cost is even lower than Kafka's while the performance is a qualitative step up.
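
As an illustration of that flexibility (a hypothetical sketch using Jedis, not the RedisAppender I eventually wrote): push each message onto a Redis list and drop to the local logger whenever Redis is unreachable.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Jedis;

public class RedisLogPusher {

    private static final Logger LOCAL = LoggerFactory.getLogger("local");

    private final String host;
    private final int port;
    private final String key;

    public RedisLogPusher(String host, int port, String key) {
        this.host = host;
        this.port = port;
        this.key = key;
    }

    public void push(String msg) {
        try (Jedis jedis = new Jedis(host, port)) {
            // RPUSH appends the message to the list acting as the log buffer queue
            jedis.rpush(key, msg);
        } catch (Exception e) {
            // Redis not reachable: fall back to the local file appender
            LOCAL.info(msg);
        }
    }
}

In a real appender you would of course pool connections (for example with JedisPool) instead of opening one per message.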
