A custom logback RedisAppender plugin: buffering logs in a Redis queue for centralized log output.
Some of our asynchronous appenders are configured never to discard events even when the queue is full, so under high concurrency the logging call degrades to synchronous execution and responses slow down.
Besides centralizing log output, this plugin is meant to act as a buffer at high-traffic moments.
I first tried Kafka, but the enqueue speed was disappointing: slower, believe it or not, than my writes to a local mechanical disk. Maybe my approach was wrong; and since Kafka is presumably optimized heavily for sequential writes, writing on an SSD was somehow even slower than on the mechanical disk.
I then reimplemented it with Redis. Because of how Redis connections work, a pipeline is required to amortize the cost of acquiring connections, and with it the enqueue throughput is very good.
(Nonsense, you might say: this adds program complexity and operations cost, completely unscientific; the true way is to build a distributed file system from servers with SSD arrays and mount it on the application servers!)
One caveat: if the pipeline has not yet reached the size that triggers a write-out (by default, 53 commands) and no new logs happen to arrive, buffered entries could wait indefinitely. The appender therefore runs a scheduled thread that flushes the pipeline periodically; the flush period can be changed as needed. A minimal sketch of the idea follows.
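Here is that sketch, self-contained, assuming Jedis 2.9 and hypothetical host/key names; the real appender further down wraps the same pattern in logback plumbing:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import redis.clients.jedis.Client;
import redis.clients.jedis.Pipeline;

// Sketch only: push log lines through a pipeline and force a sync every 5 seconds
// so buffered commands cannot wait forever when traffic stops.
public class PipelineFlushSketch {
    public static void main(String[] args) throws Exception {
        Client client = new Client("192.168.1.207", 6379); // hypothetical host
        Pipeline pipeline = new Pipeline();
        pipeline.setClient(client);

        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        // Periodic flush: without this, entries below the pipeline's write-out
        // threshold would sit in the client buffer until more logs arrive.
        exec.scheduleAtFixedRate(pipeline::sync, 5, 5, TimeUnit.SECONDS);

        pipeline.lpush("demo-log-key", "a buffered log line"); // buffered, not yet guaranteed sent
        Thread.sleep(6000); // after ~5s the scheduled sync pushes it out
        exec.shutdown();
        client.close();
    }
}
```

With a pipeline, commands accumulate in the client's output buffer and are only guaranteed to reach Redis when sync() runs (or the buffer fills), which is exactly why the scheduled flush matters.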
(See also my earlier logback KafkaAppender, which writes log events to a Kafka queue for centralized output.)
The complete POM for the producer:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.lzw</groupId>
    <artifactId>logqueue</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.2.3</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <distributionManagement>
        <repository>
            <id>maven-releases</id>
            <name>maven-releases</name>
            <url>http://192.168.91.137:8081/repository/maven-releases/</url>
        </repository>
        <snapshotRepository>
            <id>maven-snapshots</id>
            <name>maven-snapshots</name>
            <url>http://192.168.91.137:8081/repository/maven-snapshots/</url>
        </snapshotRepository>
    </distributionManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```
There is only one class, RedisAppender.java:
```java
package org.lzw.log.appender;

import ch.qos.logback.core.AppenderBase;
import ch.qos.logback.core.Layout;
import ch.qos.logback.core.status.ErrorStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Client;
import redis.clients.jedis.Pipeline;

import java.io.IOException;
import java.io.StringReader;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * User: laizhenwei
 */
public class RedisAppender<E> extends AppenderBase<E> {

    // Fallback: the logger named "local" is wired (in logback.xml) to a local
    // file appender, so push failures never recurse back into Redis.
    private static final Logger LOGGER = LoggerFactory.getLogger("local");
    private static ScheduledThreadPoolExecutor exec = new ScheduledThreadPoolExecutor(1);

    protected Layout<E> layout;
    private Pipeline pipeline;
    private Client client;
    private String queueKey;
    private String redisProperties;

    @Override
    public void start() {
        int errors = 0;
        if (this.layout == null) {
            this.addStatus(new ErrorStatus("No layout set for the appender named \"" + this.name + "\".", this));
            ++errors;
        }
        LOGGER.info("Starting RedisAppender...");
        final Properties properties = new Properties();
        try {
            properties.load(new StringReader(redisProperties));
            pipeline = new Pipeline();
            client = new Client(properties.get("host").toString(), Integer.parseInt(properties.get("port").toString()));
            pipeline.setClient(client);
        } catch (Exception exception) {
            ++errors;
            LOGGER.warn("Failed to initialize the Redis pipeline; logging locally instead:", exception);
        }
        if (queueKey == null) {
            ++errors;
            this.addStatus(new ErrorStatus("No queueKey set for the appender named \"" + this.name + "\".", this));
        } else {
            System.out.println("Logs will be pushed to the queue with key: [" + queueKey + "]!");
        }
        if (errors == 0) {
            super.start();
            // Periodic flush so buffered commands cannot wait forever (see above).
            exec.scheduleAtFixedRate(this::sync, 5, 5, TimeUnit.SECONDS);
        }
    }

    @Override
    public void stop() {
        super.stop();
        if (pipeline == null)
            return;
        pipeline.sync();
        try {
            pipeline.close();
        } catch (IOException e) {
            LOGGER.warn("Failed to close the pipeline while stopping RedisAppender.", e);
        }
        LOGGER.info("Stopping RedisAppender...");
    }

    @Override
    protected void append(E event) {
        String msg = layout.doLayout(event);
        this.lpush(msg);
    }

    // Synchronized together with sync(): the scheduled flush thread and the
    // appending thread share one (non-thread-safe) pipeline.
    private synchronized void lpush(String msg) {
        try {
            pipeline.lpush(queueKey, msg);
        } catch (Exception ex) {
            LOGGER.warn(String.join(":", "Failed to push to the Redis queue!", msg), ex);
        }
    }

    private synchronized void sync() {
        try {
            pipeline.sync();
        } catch (Exception ex) {
            // Salvage whatever the client still has buffered and log it locally.
            List<Object> datas = client.getAll();
            datas.forEach(d -> LOGGER.warn(String.join(":", "Failed to push to the Redis queue! Logging locally!", d.toString())));
        }
    }

    public String getQueueKey() {
        return queueKey;
    }

    public void setQueueKey(String queueKey) {
        this.queueKey = queueKey;
    }

    public void setLayout(Layout<E> layout) {
        this.layout = layout;
    }

    public String getRedisProperties() {
        return redisProperties;
    }

    public void setRedisProperties(String redisProperties) {
        this.redisProperties = redisProperties;
    }
}
```
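Logback's Joran configurator instantiates the appender and calls a setter for each nested element, so `<queueKey>` and `<redisProperties>` in the XML shown later map straight onto `setQueueKey` and `setRedisProperties`. The `redisProperties` body is plain `java.util.Properties` text; a standalone sketch of the parsing done in `start()`, with the same hypothetical host:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Demonstrates the parsing in RedisAppender.start(): the multi-line element
// body from logback.xml is read as a java.util.Properties document.
public class RedisPropertiesParsingDemo {
    public static void main(String[] args) throws IOException {
        String redisProperties = "host=192.168.1.207\nport=6379"; // same format as the XML body
        Properties properties = new Properties();
        properties.load(new StringReader(redisProperties));
        System.out.println(properties.getProperty("host")); // 192.168.1.207
        System.out.println(properties.getProperty("port")); // 6379
    }
}
```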
Done. Run `mvn deploy` to publish it to Nexus (the distributionManagement section above points at the repositories).
Now the consumer. application-test.yml:
```yaml
spring:
  application:
    name: logconsumer
  profiles:
    # which profile to load: dev (development), prod (production), qa (testing)
    active: test

logKey:
  basic-info-api: basic-info-api

redisParam:
  host: 192.168.1.207
  port: 6379
  pool:
    maxIdle: 20
    maxTotal: 200
    maxWaitMillis: -1
    testOnBorrow: false
    testOnReturn: false
```
logback-test.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <contextName>logback</contextName>
    <property name="LOG_HOME" value="/logconsumer"/>

    <!-- basicInfoApi -->
    <appender name="basicInfoApiAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/basic-info-api/logback.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/basic-info-api/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/e9/e9-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- days of log history to keep -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <!-- %msg only: each message already carries the producer's full layout -->
            <pattern>%msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- async output for basicInfoApi -->
    <appender name="basicInfoApiAsyncFile" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="basicInfoApiAppender"/>
    </appender>

    <!-- package of the basicInfoApi consumer -->
    <logger name="org.lzw.logconsumer.consumer.BasicInfoApiConsumer" level="INFO" additivity="false">
        <appender-ref ref="basicInfoApiAsyncFile"/>
    </logger>

    <!-- ############################## separator ############################################ -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/logconsumer/logback.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/logconsumer/logback-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!--<fileNamePattern>${LOG_HOME}/front/front-%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>-->
            <!-- days of log history to keep -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <root level="warn">
        <appender-ref ref="file"/>
    </root>
</configuration>
```
The startup class, LogconsumerApplication.java:
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Component;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * User: laizhenwei
 */
@SpringBootApplication
public class LogconsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(LogconsumerApplication.class, args);
    }

    @Component
    public static class ConsumersStartup implements CommandLineRunner {

        ExecutorService executorService = Executors.newCachedThreadPool();

        @Autowired
        private BasicInfoApiConsumer basicInfoApiConsumer;

        @Override
        public void run(String... strings) {
            // The consumer loop blocks forever, so run it off the startup thread.
            executorService.execute(() -> basicInfoApiConsumer.writeLog());
        }
    }
}
```
RedisService.java:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

import javax.annotation.PostConstruct;

@Component
public class RedisService {

    Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${redisParam.host}")
    private String host;
    @Value("${redisParam.port}")
    private Integer port;
    @Value("${redisParam.pool.maxIdle}")
    private Integer maxIdle;
    @Value("${redisParam.pool.maxTotal}")
    private Integer maxTotal;
    @Value("${redisParam.pool.maxWaitMillis}")
    private Integer maxWaitMillis;
    @Value("${redisParam.pool.testOnBorrow}")
    private Boolean testOnBorrow;
    @Value("${redisParam.pool.testOnReturn}")
    private Boolean testOnReturn;

    private static JedisPoolConfig config = new JedisPoolConfig();
    private static JedisPool pool;

    @PostConstruct
    public void init() {
        config.setMaxIdle(maxIdle);
        config.setMaxTotal(maxTotal);
        config.setMaxWaitMillis(maxWaitMillis);
        config.setTestOnBorrow(testOnBorrow);
        config.setTestOnReturn(testOnReturn);
        pool = new JedisPool(config, host, port);
    }

    /**
     * Blocking right-pop. BRPOP returns a [key, value] pair, so get(1) is the value;
     * a timeout of 0 blocks until an element arrives.
     */
    public String brpop(int timeOut, String key) {
        Jedis jedis = null;
        try {
            jedis = pool.getResource();
            return jedis.brpop(timeOut, key).get(1);
        } catch (Exception ex) {
            logger.warn("Redis consume error", ex);
            return "Redis consume error";
        } finally {
            if (jedis != null)
                jedis.close();
        }
    }
}
```
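For reference, a standalone sketch of the BRPOP semantics the service relies on, using a hypothetical key name and the same Jedis 2.9 API:

```java
import java.util.List;

import redis.clients.jedis.Jedis;

// Standalone BRPOP demo (hypothetical host/key, Jedis 2.9).
public class BrpopDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("192.168.1.207", 6379)) {
            jedis.lpush("demo-log-key", "hello");                 // producer side: LPUSH on the left
            List<String> popped = jedis.brpop(0, "demo-log-key"); // consumer side: BRPOP on the right
            // BRPOP returns the key it popped from and the value, in that order.
            System.out.println(popped.get(0) + " -> " + popped.get(1)); // demo-log-key -> hello
        }
    }
}
```

LPUSH on one end and BRPOP on the other makes the Redis list behave as a FIFO queue, which is exactly how the appender and consumer pair up.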
BasicInfoApiConsumer.java:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

/**
 * Log consumer: pops entries off the Redis queue and writes them
 * through the local basicInfoApiAsyncFile appender.
 */
@Component
public class BasicInfoApiConsumer {

    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${logKey.basic-info-api}")
    private String logKey;

    @Autowired
    private RedisService redisService;

    public void writeLog() {
        while (true) {
            // timeout 0: block until the next log entry arrives
            logger.info(redisService.brpop(0, logKey));
        }
    }
}
```
Now take any application and try it out; here I use basic-info-api.
pom.xml:
```xml
<dependency>
    <groupId>org.lzw</groupId>
    <artifactId>logqueue</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>
```
logback.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">
    <contextName>logback</contextName>
    <property name="log.path" value="/home/logs/basic-info-api/logback.log"/>

    <appender name="redisAppender" class="org.lzw.log.appender.RedisAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>warn</level>
        </filter>
        <queueKey>basic-info-api</queueKey>
        <redisProperties>
            host=192.168.1.207
            port=6379
        </redisProperties>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
        </layout>
    </appender>

    <appender name="localAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>warn</level>
        </filter>
        <file>${log.path}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${log.path}.%d{yyyy-MM-dd}.%i.tar.gz</fileNamePattern>
            <!-- days of log history to keep -->
            <MaxHistory>30</MaxHistory>
            <!-- roll to a new file once this size is reached -->
            <MaxFileSize>50MB</MaxFileSize>
            <totalSizeCap>10GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="asyncLocal" class="ch.qos.logback.classic.AsyncAppender">
        <!-- never drop events; by default, once the queue is 80% full,
             TRACE/DEBUG/INFO events are discarded -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="localAppender"/>
    </appender>

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>debug</level>
        </filter>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- if the Redis queue is unreachable, log to the local file instead -->
    <logger name="local" additivity="false">
        <appender-ref ref="asyncLocal"/>
    </logger>

    <appender name="asyncRedisAppender" class="ch.qos.logback.classic.AsyncAppender">
        <!-- never drop events; by default, once the queue is 80% full,
             TRACE/DEBUG/INFO events are discarded -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="redisAppender"/>
    </appender>

    <root level="warn">
        <appender-ref ref="asyncRedisAppender"/>
    </root>

    <logger name="org.springframework.session.web.http.SessionRepositoryFilter" level="error"/>
    <logger name="org.springframework.scheduling" level="error"/>
    <logger name="org.apache.catalina.util.LifecycleBase" level="error"/>
    <logger name="org.springframework.amqp" level="warn"/>
</configuration>
```
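One detail worth spelling out: the fallback path works purely by logger name. RedisAppender obtains `LoggerFactory.getLogger("local")`, and the `<logger name="local" additivity="false">` rule above routes that logger to asyncLocal (additivity off keeps it away from the Redis-backed root). A minimal illustration, assuming this logback.xml is on the classpath:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: anything logged through the logger named "local" is routed by the
// <logger name="local"> rule to the asyncLocal file appender, never to Redis.
public class LocalFallbackDemo {
    private static final Logger LOCAL = LoggerFactory.getLogger("local");

    public static void main(String[] args) {
        LOCAL.warn("this line goes to the local rolling file, not to the Redis queue");
    }
}
```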
Write a long log message to exercise it:
```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.stereotype.Component;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

@Slf4j
@EnableEurekaClient
@EnableCircuitBreaker
@SpringBootApplication
public class BasicInfoApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(BasicInfoApiApplication.class, args);
    }

    @Component
    public static class ConsumersStartup implements CommandLineRunner {

        ExecutorService executorService = Executors.newCachedThreadPool();

        // A deliberately long payload: the same three etcd endpoints repeated many times.
        String msg = "--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379," +
                "https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379," +
                "https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,h" +
                "ttps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,ht" +
                "tps://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,http" +
                "s://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://" +
                "192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.1" +
                "68.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91." +
                "139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:23" +
                "79,https://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,http" +
                "s://192.168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192" +
                ".168.91.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.9" +
                "1.140:2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140" +
                ":2379--endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379-" +
                "-endpoints=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoi" +
                "nts=https://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=h" +
                "ttps://192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https:/" +
                "/192.168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192." +
                "168.91.138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379--endpoints=https://192.168.91" +
                ".138:2379,https://192.168.91.139:2379,https://192.168.91.140:2379";

        @Override
        public void run(String... strings) throws InterruptedException {
            long begin = System.nanoTime();
            for (int i = 0; i < 10000; i++)
                executorService.execute(() -> log.warn(msg));
            executorService.shutdown();
            // Block until all 10000 warn() calls have run, then print elapsed milliseconds.
            executorService.awaitTermination(10, TimeUnit.MINUTES);
            System.out.println((System.nanoTime() - begin) / 1000000);
        }
    }
}
```
Output: 1328 milliseconds. That is just the time for the 10000 messages to enter the queue, over a second, which still feels slow. And the logs are not fully on disk at that point either; the consumer program is still busily draining the queue.
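To see how far behind the consumer is at any moment, one can check the queue length; a quick standalone check (hypothetical, not part of the project):

```java
import redis.clients.jedis.Jedis;

// Quick backlog check: LLEN reports how many log entries are still queued.
public class QueueBacklogCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("192.168.1.207", 6379)) {
            System.out.println("pending log entries: " + jedis.llen("basic-info-api"));
        }
    }
}
```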