Project Walkthrough

Most Java projects today use MyBatis, so I put together a Spring Boot + MyBatis-Plus + Sharding-JDBC database/table sharding example to share.

If you are using Spring Boot + JPA instead, see this article: https://www.cnblogs.com/owenma/p/11364624.html

I won't repeat the rest of the framework setup here; let's go straight to the code.

Preparing the Data

Prepare two databases and create the tables below in each of them. The table-creation SQL is as follows:

DROP TABLE IF EXISTS `t_user_0`;
CREATE TABLE `t_user_0` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `order_id` bigint(20) DEFAULT '0' COMMENT 'order id',
  `user_id` bigint(20) DEFAULT '0' COMMENT 'user id',
  `user_name` varchar(32) DEFAULT NULL COMMENT 'user name',
  `pass_word` varchar(32) DEFAULT NULL COMMENT 'password',
  `nick_name` varchar(32) DEFAULT NULL COMMENT 'nickname',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

DROP TABLE IF EXISTS `t_user_1`;
CREATE TABLE `t_user_1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `order_id` bigint(20) DEFAULT '0' COMMENT 'order id',
  `user_id` bigint(20) DEFAULT '0' COMMENT 'user id',
  `user_name` varchar(32) DEFAULT NULL COMMENT 'user name',
  `pass_word` varchar(32) DEFAULT NULL COMMENT 'password',
  `nick_name` varchar(32) DEFAULT NULL COMMENT 'nickname',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

POM Configuration

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.dangdang</groupId>
        <artifactId>sharding-jdbc-core</artifactId>
        <version>1.5.4</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.1.3</version>
    </dependency>
    <dependency>
        <groupId>commons-dbcp</groupId>
        <artifactId>commons-dbcp</artifactId>
        <version>1.4</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.44</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- mybatis-plus begin -->
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatisplus-spring-boot-starter</artifactId>
        <version>${mybatisplus-spring-boot-starter.version}</version>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus</artifactId>
        <version>${mybatisplus.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.velocity</groupId>
        <artifactId>velocity</artifactId>
        <version>${velocity.version}</version>
    </dependency>
    <!-- mybatis-plus end -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.51</version>
    </dependency>
</dependencies>

application.properties Configuration

spring.devtools.remote.restart.enabled=false

spring.jdbc1.type=com.alibaba.druid.pool.DruidDataSource
spring.jdbc1.driverClassName=com.mysql.jdbc.Driver
spring.jdbc1.url=jdbc:mysql://localhost:3306/mazhq?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8
spring.jdbc1.username=root
spring.jdbc1.password=123456
spring.jdbc1.connectionProperties=config.decrypt=true;druid.stat.slowSqlMillis=3000;druid.stat.logSlowSql=true;druid.stat.mergeSql=true
spring.jdbc1.filters=stat
spring.jdbc1.maxActive=100
spring.jdbc1.initialSize=1
spring.jdbc1.maxWait=15000
spring.jdbc1.minIdle=1
spring.jdbc1.timeBetweenEvictionRunsMillis=30000
spring.jdbc1.minEvictableIdleTimeMillis=180000
spring.jdbc1.validationQuery=SELECT 'x'
spring.jdbc1.testWhileIdle=true
spring.jdbc1.testOnBorrow=false
spring.jdbc1.testOnReturn=false
spring.jdbc1.poolPreparedStatements=false
spring.jdbc1.maxPoolPreparedStatementPerConnectionSize=20
spring.jdbc1.removeAbandoned=true
spring.jdbc1.removeAbandonedTimeout=600
spring.jdbc1.logAbandoned=false
spring.jdbc1.connectionInitSqls=

spring.jdbc2.type=com.alibaba.druid.pool.DruidDataSource
spring.jdbc2.driverClassName=com.mysql.jdbc.Driver
spring.jdbc2.url=jdbc:mysql://localhost:3306/liugh?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8
spring.jdbc2.username=root
spring.jdbc2.password=123456
spring.jdbc2.connectionProperties=config.decrypt=true;druid.stat.slowSqlMillis=3000;druid.stat.logSlowSql=true;druid.stat.mergeSql=true
spring.jdbc2.filters=stat
spring.jdbc2.maxActive=100
spring.jdbc2.initialSize=1
spring.jdbc2.maxWait=15000
spring.jdbc2.minIdle=1
spring.jdbc2.timeBetweenEvictionRunsMillis=30000
spring.jdbc2.minEvictableIdleTimeMillis=180000
spring.jdbc2.validationQuery=SELECT 'x'
spring.jdbc2.testWhileIdle=true
spring.jdbc2.testOnBorrow=false
spring.jdbc2.testOnReturn=false
spring.jdbc2.poolPreparedStatements=false
spring.jdbc2.maxPoolPreparedStatementPerConnectionSize=20
spring.jdbc2.removeAbandoned=true
spring.jdbc2.removeAbandonedTimeout=600
spring.jdbc2.logAbandoned=false
spring.jdbc2.connectionInitSqls=

mybatis-plus.mapper-locations=classpath:/com/mazhq/web/mapper/xml/*Mapper.xml
mybatis-plus.type-aliases-package=com.mazhq.web.entity
# id-type: 1 = database auto-increment, 2 = user-supplied id, 3 = globally unique id (IdWorker), 4 = globally unique id (UUID)
mybatis-plus.global-config.id-type=3
mybatis-plus.global-config.db-column-underline=true
mybatis-plus.global-config.refresh-mapper=true
mybatis-plus.configuration.map-underscore-to-camel-case=true
# global switch for the MyBatis cache
mybatis-plus.configuration.cache-enabled=true
# switch for lazy loading
mybatis-plus.configuration.lazy-loading-enabled=true
# when enabled, lazily loading one property loads all properties of the object; otherwise properties are loaded on demand
mybatis-plus.configuration.multiple-result-sets-enabled=true
# print SQL statements (for debugging)
mybatis-plus.configuration.log-impl=org.apache.ibatis.logging.stdout.StdOutImpl

The sharding setup comes down to a few key pieces of configuration:

 1. How many data sources there are (two here: dataSource0 and dataSource1)

@Data
@ConfigurationProperties(prefix = "spring.jdbc1")
public class ShardDataSource1 {
    private String driverClassName;
    private String url;
    private String username;
    private String password;
    private String filters;
    private int maxActive;
    private int initialSize;
    private int maxWait;
    private int minIdle;
    private int timeBetweenEvictionRunsMillis;
    private int minEvictableIdleTimeMillis;
    private String validationQuery;
    private boolean testWhileIdle;
    private boolean testOnBorrow;
    private boolean testOnReturn;
    private boolean poolPreparedStatements;
    private int maxPoolPreparedStatementPerConnectionSize;
    private boolean removeAbandoned;
    private int removeAbandonedTimeout;
    private boolean logAbandoned;
    private List<String> connectionInitSqls;
    private String connectionProperties;
}
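The configuration class for the second data source is not shown in the original article; presumably it simply mirrors ShardDataSource1 with the spring.jdbc2 prefix. A minimal sketch under that assumption:

/**
 * Mirrors ShardDataSource1 but binds the spring.jdbc2.* properties
 * (assumed; the original article does not show this class).
 */
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;

import java.util.List;

@Data
@ConfigurationProperties(prefix = "spring.jdbc2")
public class ShardDataSource2 {
    private String driverClassName;
    private String url;
    private String username;
    private String password;
    private String filters;
    private int maxActive;
    private int initialSize;
    private int maxWait;
    private int minIdle;
    private int timeBetweenEvictionRunsMillis;
    private int minEvictableIdleTimeMillis;
    private String validationQuery;
    private boolean testWhileIdle;
    private boolean testOnBorrow;
    private boolean testOnReturn;
    private boolean poolPreparedStatements;
    private int maxPoolPreparedStatementPerConnectionSize;
    private boolean removeAbandoned;
    private int removeAbandonedTimeout;
    private boolean logAbandoned;
    private List<String> connectionInitSqls;
    private String connectionProperties;
}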

  2. Which column the data is sharded across databases by, and the database sharding algorithm

/**
 * Database sharding algorithm: routes by user_id % 2.
 *
 * @author mazhq
 * @date 2019/8/7 17:23
 */
public class DataBaseShardingAlgorithm implements SingleKeyDatabaseShardingAlgorithm<Long> {

    @Override
    public String doEqualSharding(Collection<String> databaseNames, ShardingValue<Long> shardingValue) {
        // pick the data source whose name suffix equals shardingValue % 2
        for (String each : databaseNames) {
            if (each.endsWith(Long.parseLong(shardingValue.getValue().toString()) % 2 + "")) {
                return each;
            }
        }
        throw new IllegalArgumentException();
    }

    @Override
    public Collection<String> doInSharding(Collection<String> databaseNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(databaseNames.size());
        for (Long value : shardingValue.getValues()) {
            for (String databaseName : databaseNames) {
                if (databaseName.endsWith(value % 2 + "")) {
                    result.add(databaseName);
                }
            }
        }
        return result;
    }

    @Override
    public Collection<String> doBetweenSharding(Collection<String> databaseNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(databaseNames.size());
        Range<Long> range = (Range<Long>) shardingValue.getValueRange();
        for (Long i = range.lowerEndpoint(); i <= range.upperEndpoint(); i++) {
            for (String each : databaseNames) {
                if (each.endsWith(i % 2 + "")) {
                    result.add(each);
                }
            }
        }
        return result;
    }
}

  3. Which column the data is sharded across tables by, and the table sharding algorithm

/**
 * @author mazhq
 * @Title: TableShardingAlgorithm
 * @date 2019/8/12 16:40
 */
public class TableShardingAlgorithm implements SingleKeyTableShardingAlgorithm<Long> {

    @Override
    public String doEqualSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        for (String each : tableNames) {
            if (each.endsWith(shardingValue.getValue() % 2 + "")) {
                return each;
            }
        }
        throw new IllegalArgumentException();
    }

    @Override
    public Collection<String> doInSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(tableNames.size());
        for (Long value : shardingValue.getValues()) {
            for (String tableName : tableNames) {
                if (tableName.endsWith(value % 2 + "")) {
                    result.add(tableName);
                }
            }
        }
        return result;
    }

    @Override
    public Collection<String> doBetweenSharding(Collection<String> tableNames, ShardingValue<Long> shardingValue) {
        Collection<String> result = new LinkedHashSet<>(tableNames.size());
        Range<Long> range = (Range<Long>) shardingValue.getValueRange();
        for (Long i = range.lowerEndpoint(); i <= range.upperEndpoint(); i++) {
            for (String each : tableNames) {
                if (each.endsWith(i % 2 + "")) {
                    result.add(each);
                }
            }
        }
        return result;
    }
}
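To make the two modulo-2 rules concrete, here is a small standalone sketch (not part of the project; the class and method names are made up) that mimics how a row is routed: user_id decides the data source suffix and order_id decides the table suffix.

import java.util.Arrays;
import java.util.List;

// Standalone illustration of the routing logic above; names are invented for the demo.
public class RoutingDemo {

    // pick the name whose suffix equals value % 2, same idea as doEqualSharding
    static String route(List<String> names, long value) {
        return names.stream()
                .filter(name -> name.endsWith(String.valueOf(value % 2)))
                .findFirst()
                .orElseThrow(IllegalArgumentException::new);
    }

    public static void main(String[] args) {
        List<String> databases = Arrays.asList("dataSource0", "dataSource1");
        List<String> tables = Arrays.asList("t_user_0", "t_user_1");

        long userId = 7L;   // database sharding column
        long orderId = 42L; // table sharding column

        // user_id = 7   -> 7 % 2 = 1  -> dataSource1
        // order_id = 42 -> 42 % 2 = 0 -> t_user_0
        System.out.println(route(databases, userId) + "." + route(tables, orderId));
        // prints: dataSource1.t_user_0
    }
}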

  4. The logical table name of each table, all of its physical table names, and wiring everything together

/**
 * @author mazhq
 * @Title: DataSourceConfig
 * @date 2019/8/7 17:05
 */
@Configuration
@EnableTransactionManagement
@ConditionalOnClass(DruidDataSource.class)
@EnableConfigurationProperties({ShardDataSource1.class, ShardDataSource2.class})
public class DataSourceConfig {

    @Autowired
    private ShardDataSource1 dataSource1;
    @Autowired
    private ShardDataSource2 dataSource2;

    /**
     * Configure data source 0. Data source names should follow a consistent pattern,
     * which makes the database sharding rule easier to configure.
     * @return
     */
    private DataSource db1() throws SQLException {
        return this.getDB1(dataSource1);
    }

    /**
     * Configure data source 1. Data source names should follow a consistent pattern,
     * which makes the database sharding rule easier to configure.
     * @return
     */
    private DataSource db2() throws SQLException {
        return this.getDB2(dataSource2);
    }

    /**
     * Configure the data source rule, i.e. hand the data sources over to Sharding-JDBC.
     * A default data source can be set; tables without a database sharding rule fall back to it.
     * @return
     */
    @Bean
    public DataSourceRule dataSourceRule() throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("dataSource0", this.db1());
        dataSourceMap.put("dataSource1", this.db2());
        return new DataSourceRule(dataSourceMap, "dataSource0");
    }

    /**
     * Configure the database and table sharding strategies; the concrete strategies are the classes implemented above.
     * @param dataSourceRule
     * @return
     */
    @Bean
    public ShardingRule shardingRule(@Qualifier("dataSourceRule") DataSourceRule dataSourceRule) {
        // concrete sharding rule: logical table t_user maps to physical tables t_user_0 and t_user_1
        TableRule orderTableRule = TableRule.builder("t_user")
                .actualTables(Arrays.asList("t_user_0", "t_user_1"))
                .tableShardingStrategy(new TableShardingStrategy("order_id", new TableShardingAlgorithm()))
                .dataSourceRule(dataSourceRule)
                .build();
        // Binding-table rule: queries use the primary table's strategy to compute the routed data source,
        // so tables bound together must follow the same sharding rule; this can improve efficiency to some extent.
        List<BindingTableRule> bindingTableRuleList = new ArrayList<BindingTableRule>();
        bindingTableRuleList.add(new BindingTableRule(Arrays.asList(orderTableRule)));
        return ShardingRule.builder().dataSourceRule(dataSourceRule)
                .tableRules(Arrays.asList(orderTableRule))
                .bindingTableRules(bindingTableRuleList)
                .databaseShardingStrategy(new DatabaseShardingStrategy("user_id", new DataBaseShardingAlgorithm()))
                .tableShardingStrategy(new TableShardingStrategy("order_id", new TableShardingAlgorithm()))
                .build();
    }

    /**
     * Create the Sharding-JDBC DataSource; MybatisAutoConfiguration will use this data source.
     * @param shardingRule
     * @return
     * @throws SQLException
     */
    @Bean
    public DataSource shardingDataSource(@Qualifier("shardingRule") ShardingRule shardingRule) throws SQLException {
        return ShardingDataSourceFactory.createDataSource(shardingRule);
    }

    private DruidDataSource getDB1(ShardDataSource1 shardDataSource1) throws SQLException {
        DruidDataSource ds = new DruidDataSource();
        ds.setDriverClassName(shardDataSource1.getDriverClassName());
        ds.setUrl(shardDataSource1.getUrl());
        ds.setUsername(shardDataSource1.getUsername());
        ds.setPassword(shardDataSource1.getPassword());
        ds.setFilters(shardDataSource1.getFilters());
        ds.setMaxActive(shardDataSource1.getMaxActive());
        ds.setInitialSize(shardDataSource1.getInitialSize());
        ds.setMaxWait(shardDataSource1.getMaxWait());
        ds.setMinIdle(shardDataSource1.getMinIdle());
        ds.setTimeBetweenEvictionRunsMillis(shardDataSource1.getTimeBetweenEvictionRunsMillis());
        ds.setMinEvictableIdleTimeMillis(shardDataSource1.getMinEvictableIdleTimeMillis());
        ds.setValidationQuery(shardDataSource1.getValidationQuery());
        ds.setTestWhileIdle(shardDataSource1.isTestWhileIdle());
        ds.setTestOnBorrow(shardDataSource1.isTestOnBorrow());
        ds.setTestOnReturn(shardDataSource1.isTestOnReturn());
        ds.setPoolPreparedStatements(shardDataSource1.isPoolPreparedStatements());
        ds.setMaxPoolPreparedStatementPerConnectionSize(
                shardDataSource1.getMaxPoolPreparedStatementPerConnectionSize());
        ds.setRemoveAbandoned(shardDataSource1.isRemoveAbandoned());
        ds.setRemoveAbandonedTimeout(shardDataSource1.getRemoveAbandonedTimeout());
        ds.setLogAbandoned(shardDataSource1.isLogAbandoned());
        ds.setConnectionInitSqls(shardDataSource1.getConnectionInitSqls());
        ds.setConnectionProperties(shardDataSource1.getConnectionProperties());
        return ds;
    }

    private DruidDataSource getDB2(ShardDataSource2 shardDataSource2) throws SQLException {
        DruidDataSource ds = new DruidDataSource();
        ds.setDriverClassName(shardDataSource2.getDriverClassName());
        ds.setUrl(shardDataSource2.getUrl());
        ds.setUsername(shardDataSource2.getUsername());
        ds.setPassword(shardDataSource2.getPassword());
        ds.setFilters(shardDataSource2.getFilters());
        ds.setMaxActive(shardDataSource2.getMaxActive());
        ds.setInitialSize(shardDataSource2.getInitialSize());
        ds.setMaxWait(shardDataSource2.getMaxWait());
        ds.setMinIdle(shardDataSource2.getMinIdle());
        ds.setTimeBetweenEvictionRunsMillis(shardDataSource2.getTimeBetweenEvictionRunsMillis());
        ds.setMinEvictableIdleTimeMillis(shardDataSource2.getMinEvictableIdleTimeMillis());
        ds.setValidationQuery(shardDataSource2.getValidationQuery());
        ds.setTestWhileIdle(shardDataSource2.isTestWhileIdle());
        ds.setTestOnBorrow(shardDataSource2.isTestOnBorrow());
        ds.setTestOnReturn(shardDataSource2.isTestOnReturn());
        ds.setPoolPreparedStatements(shardDataSource2.isPoolPreparedStatements());
        ds.setMaxPoolPreparedStatementPerConnectionSize(
                shardDataSource2.getMaxPoolPreparedStatementPerConnectionSize());
        ds.setRemoveAbandoned(shardDataSource2.isRemoveAbandoned());
        ds.setRemoveAbandonedTimeout(shardDataSource2.getRemoveAbandonedTimeout());
        ds.setLogAbandoned(shardDataSource2.isLogAbandoned());
        ds.setConnectionInitSqls(shardDataSource2.getConnectionInitSqls());
        ds.setConnectionProperties(shardDataSource2.getConnectionProperties());
        return ds;
    }
}
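Once the shardingDataSource bean exists, MyBatis (and any other JDBC client) only ever sees the logical table t_user; Sharding-JDBC rewrites and routes the SQL to the physical tables. Here is a minimal sketch of querying it with plain JDBC; the component, injection point, and SQL are illustrative and not part of the original project:

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Illustrative component, not from the original article.
@Component
public class ShardingQueryDemo {

    @Autowired
    private DataSource shardingDataSource; // the bean created by ShardingDataSourceFactory

    public String findUserName(long userId, long orderId) throws SQLException {
        // The SQL uses the logical table t_user; because both sharding columns are supplied,
        // Sharding-JDBC routes the statement to exactly one physical table,
        // e.g. dataSource1.t_user_0 for userId=7, orderId=42.
        String sql = "SELECT user_name FROM t_user WHERE user_id = ? AND order_id = ?";
        try (Connection conn = shardingDataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, userId);
            ps.setLong(2, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}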

  

Testing the API

Entity layer

User.java

@Data
@TableName("t_user")
public class User extends Model<User> {

    private static final long serialVersionUID = 1L;

    /**
     * primary key id
     */
    @TableId(value = "id", type = IdType.AUTO)
    private Long id;
    /**
     * order id
     */
    @TableField("order_id")
    private Long orderId;
    /**
     * user id
     */
    @TableField("user_id")
    private Long userId;
    /**
     * user name
     */
    @TableField("user_name")
    private String userName;
    /**
     * password
     */
    @TableField("pass_word")
    private String passWord;
    /**
     * nickname
     */
    @TableField("nick_name")
    private String nickName;

    @Override
    protected Serializable pkVal() {
        return this.id;
    }

    @Override
    public String toString() {
        return "User{" +
                "id=" + id +
                ", orderId=" + orderId +
                ", userId=" + userId +
                ", userName=" + userName +
                ", passWord=" + passWord +
                ", nickName=" + nickName +
                "}";
    }
}

Mapper layer

UserMapper.xml (named to match the *Mapper.xml pattern in mybatis-plus.mapper-locations)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.mazhq.web.mapper.UserMapper">

    <!-- Generic query result mapping -->
    <resultMap id="BaseResultMap" type="com.mazhq.web.entity.User">
        <id column="id" property="id" />
        <result column="order_id" property="orderId" />
        <result column="user_id" property="userId" />
        <result column="user_name" property="userName" />
        <result column="pass_word" property="passWord" />
        <result column="nick_name" property="nickName" />
    </resultMap>

    <!-- Generic query column list -->
    <sql id="Base_Column_List">
        id, order_id AS orderId, user_id AS userId, user_name AS userName, pass_word AS passWord, nick_name AS nickName
    </sql>

</mapper>

UserMapper.java

/**
 * <p>
 * Mapper interface
 * </p>
 *
 * @author mazhq123
 * @since 2019-08-20
 */
public interface UserMapper extends BaseMapper<User> {
}

Service layer

IUserService.java

/**
 * <p>
 * Service interface
 * </p>
 *
 * @author mazhq123
 * @since 2019-08-20
 */
public interface IUserService extends IService<User> {
}

UserServiceImpl.java

/**
 * <p>
 * Service implementation
 * </p>
 *
 * @author mazhq123
 * @since 2019-08-20
 */
@Service
public class UserServiceImpl extends ServiceImpl<UserMapper, User> implements IUserService {
}

Controller layer

UserController.java

/**
 * <p>
 * Front controller
 * </p>
 * @author mazhq123
 * @since 2019-08-20
 */
@RestController
@RequestMapping("/web/user")
public class UserController {

    @Autowired
    IUserService userService;

    @RequestMapping("/save")
    public String save() {
        User user2 = new User();
        for (int i = 0; i < 40; i++) {
            user2.setId((long) i);
            user2.setUserId((long) i);
            Random r = new Random();
            user2.setOrderId((long) r.nextInt(100));
            user2.setNickName("owenma" + i);
            user2.setPassWord("password" + i);
            user2.setUserName("userName" + i);
            userService.insert(user2);
        }
        return "success";
    }

    @RequestMapping("/findAll")
    public String findAll() {
        Wrapper<User> userWrapper = new Wrapper<User>() {
            @Override
            public String getSqlSegment() {
                return "order by order_id desc";
            }
        };
        return JSONObject.toJSONString(userService.selectList(userWrapper));
    }
}

How to test:

 Insert data first: http://localhost:8080/web/user/save

 Then query: http://localhost:8080/web/user/findAll

Given the rules above, rows with an even user_id should land in dataSource0 (the mazhq database) and rows with an odd user_id in dataSource1 (liugh); within each database, even order_id values go to t_user_0 and odd ones to t_user_1.
