HikariCP Connection Pool

Official HikariCP documentation: https://github.com/brettwooldridge/HikariCP

Maven dependency

Java 8 is generally used.

Maven repository address:

https://mvnrepository.com/artifact/com.zaxxer/HikariCP/3.4.5

```xml
<dependency>
  <groupId>com.zaxxer</groupId>
  <artifactId>HikariCP</artifactId>
  <version>3.4.2</version>
</dependency>
```

Obtaining a connection with hard-coded configuration (the official example style)

```java
public class HikariTest {

    @Test
    public void hikariTest() throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai");
        config.setUsername("root");
        config.setPassword("123456");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "300");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");

        HikariDataSource dataSource = new HikariDataSource(config);

        Connection connection = dataSource.getConnection();
        System.out.println(connection);
        connection.close();
    }
}
```

Because no SLF4J logging binding is configured, this prints a binding load-failure message.
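One way to make that message go away is to put an SLF4J binding on the classpath. As a sketch, the dependency below adds `slf4j-simple` (the choice of binding and the version are just illustrative; `logback-classic` works equally well):

```xml
<!-- illustrative addition: any SLF4J binding on the classpath will do -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.30</version>
</dependency>
```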

The hard-coded configuration above also sets the prepared-statement options that are passed through to the MySQL driver:

- cachePrepStmts enables the prepared-statement cache

- prepStmtCacheSize sets the cache to 300 statements per connection

- prepStmtCacheSqlLimit sets the longest SQL string the driver will cache to 2048 characters

Hikari also supports reading its configuration from a file.

However, the official explanation of the driver class name is confusing — I fiddled with it for a long time without success.

Then I found this blog post: http://zetcode.com/articles/hikaricp/

The point: for MySQL you don't need to configure a driver class at all; just set the JDBC URL.

hikari.properties

```properties
jdbcUrl = jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai
dataSource.user = root
dataSource.password = 123456
# enable the prepared-statement cache
dataSource.cachePrepStmts = true
# number of prepared statements cached per connection
dataSource.prepStmtCacheSize = 256
# longest SQL string the driver will cache
dataSource.prepStmtCacheSqlLimit = 512
```

Connection test

```java
@Test
public void hikariTest2() throws Exception {
    final String configureFile = "src/main/resources/hikari.properties";
    HikariConfig configure = new HikariConfig(configureFile);
    HikariDataSource dataSource = new HikariDataSource(configure);
    Connection connection = dataSource.getConnection();
    System.out.println(connection);
    connection.close();
}
```
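Incidentally, the file passed to `HikariConfig(String)` is ordinary `java.util.Properties` syntax; the constructor essentially loads the file and applies the entries, passing `dataSource.*` keys through to the driver. A minimal JDK-only sketch of that parsing step (the `load` helper and the inline file content are hypothetical, just to show the format):

```java
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

// Sketch: hikari.properties is plain java.util.Properties syntax.
public class HikariPropsSketch {

    // Load a properties file from any Reader (HikariConfig does essentially
    // this before applying the values to itself).
    static Properties load(Reader source) throws Exception {
        Properties props = new Properties();
        props.load(source);
        return props;
    }

    public static void main(String[] args) throws Exception {
        String file =
                "jdbcUrl = jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai\n" +
                "dataSource.user = root\n" +
                "dataSource.cachePrepStmts = true\n";
        Properties props = load(new StringReader(file));
        // spaces around '=' are trimmed by Properties.load
        System.out.println(props.getProperty("dataSource.user")); // prints root
    }
}
```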

Wrapping it in a utility class

```java
public class JdbcHikariUtil {

    private static final DataSource dataSource =
            new HikariDataSource(new HikariConfig("src/main/resources/hikari.properties"));

    public static Connection getConnection() {
        try {
            return dataSource.getConnection();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }
}
```

Now let's build one utility class that covers every connection pool plus plain JDBC.

First, the Maven dependencies:

```xml
<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>8.0.19</version>
</dependency>

<!-- https://mvnrepository.com/artifact/junit/junit -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.13</version>
  <scope>test</scope>
</dependency>

<!-- https://mvnrepository.com/artifact/com.mchange/c3p0 -->
<dependency>
  <groupId>com.mchange</groupId>
  <artifactId>c3p0</artifactId>
  <version>0.9.5.5</version>
</dependency>

<!-- https://mvnrepository.com/artifact/commons-dbutils/commons-dbutils -->
<dependency>
  <groupId>commons-dbutils</groupId>
  <artifactId>commons-dbutils</artifactId>
  <version>1.7</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-dbcp2 -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-dbcp2</artifactId>
  <version>2.7.0</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-pool2 -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-pool2</artifactId>
  <version>2.8.0</version>
</dependency>

<!-- https://mvnrepository.com/artifact/com.alibaba/druid -->
<dependency>
  <groupId>com.alibaba</groupId>
  <artifactId>druid</artifactId>
  <version>1.1.22</version>
</dependency>

<dependency>
  <groupId>com.zaxxer</groupId>
  <artifactId>HikariCP</artifactId>
  <version>3.4.2</version>
</dependency>
```

Next, the configuration for each pool.

Plain JDBC  jdbc.properties

```properties
driverClass = com.mysql.cj.jdbc.Driver
url = jdbc:mysql://localhost:3306/jdbc_db?serverTimezone=Asia/Shanghai
user = root
password = 123456
```

C3P0  c3p0-config.xml

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<c3p0-config>
  <!-- a custom named configuration -->
  <named-config name="c3p0_xml_config">

    <!-- the four basic connection settings -->
    <property name="driverClass">com.mysql.cj.jdbc.Driver</property>
    <!-- localhost can be omitted: jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai -->
    <property name="jdbcUrl">jdbc:mysql://localhost:3306/jdbc_db?serverTimezone=Asia/Shanghai</property>
    <property name="user">root</property>
    <property name="password">123456</property>

    <!-- pool management settings -->

    <!-- how many connections to acquire at a time when the pool runs out -->
    <property name="acquireIncrement">5</property>
    <!-- number of connections created at startup -->
    <property name="initialPoolSize">10</property>
    <!-- minimum number of connections kept in the pool -->
    <property name="minPoolSize">10</property>
    <!-- maximum number of connections; the pool never grows beyond this -->
    <property name="maxPoolSize">100</property>
    <!-- maximum number of cached PreparedStatements pool-wide -->
    <property name="maxStatements">50</property>
    <!-- maximum number of cached PreparedStatements per connection -->
    <property name="maxStatementsPerConnection">2</property>
  </named-config>
</c3p0-config>
```

DBCP  dbcp.properties

```properties
driverClassName = com.mysql.cj.jdbc.Driver
url = jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai
username = root
password = 123456
```

Druid  druid.properties — the keys are essentially the same as DBCP's:

```properties
driverClassName = com.mysql.cj.jdbc.Driver
url = jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai
username = root
password = 123456
```

Hikari  hikari.properties

```properties
jdbcUrl = jdbc:mysql:///jdbc_db?serverTimezone=Asia/Shanghai
dataSource.user = root
dataSource.password = 123456

# enable the prepared-statement cache
dataSource.cachePrepStmts = true
# number of prepared statements cached per connection
dataSource.prepStmtCacheSize = 256
# longest SQL string the driver will cache
dataSource.prepStmtCacheSqlLimit = 512
```

The complete utility class

```java
package cn.dai.util;

import com.alibaba.druid.pool.DruidDataSourceFactory;
import com.mchange.v2.c3p0.ComboPooledDataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.apache.commons.dbcp2.BasicDataSourceFactory;

import javax.sql.DataSource;
import java.io.InputStream;
import java.sql.*;
import java.util.Properties;

/**
 * @author ArkD42
 * @file Jdbc
 * @create 2020 - 04 - 24 - 22:04
 */
public class CompleteJdbcUtils {

    private CompleteJdbcUtils() {}

    //private static String driverClass;
    private static String url;
    private static String user;
    private static String password;

    private static DataSource dataSourceFromDBCP;
    private static DataSource dataSourceFromDruid;
    private static final DataSource dataSourceFromC3P0 =
            new ComboPooledDataSource("c3p0_xml_config");
    private static final DataSource dataSourceFromHikari =
            new HikariDataSource(new HikariConfig("src/main/resources/hikari.properties"));

    static {
        InputStream originalJdbcStream = CompleteJdbcUtils.class.getClassLoader().getResourceAsStream("jdbc.properties");
        Properties originalJdbcProperties = new Properties();

        InputStream dbcpStream = CompleteJdbcUtils.class.getClassLoader().getResourceAsStream("dbcp.properties");
        Properties dbcpProperties = new Properties();

        InputStream druidStream = CompleteJdbcUtils.class.getClassLoader().getResourceAsStream("druid.properties");
        Properties druidProperties = new Properties();
        try {
            originalJdbcProperties.load(originalJdbcStream);
            //driverClass = originalJdbcProperties.getProperty("driverClass");
            url = originalJdbcProperties.getProperty("url");
            user = originalJdbcProperties.getProperty("user");
            password = originalJdbcProperties.getProperty("password");
            //Class.forName(driverClass); // not needed with a JDBC 4+ driver

            dbcpProperties.load(dbcpStream);
            dataSourceFromDBCP = BasicDataSourceFactory.createDataSource(dbcpProperties);
            druidProperties.load(druidStream);
            dataSourceFromDruid = DruidDataSourceFactory.createDataSource(druidProperties);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // plain JDBC
    public static Connection getConnectionByOriginalJdbc() {
        try {
            return DriverManager.getConnection(url, user, password);
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }

    // C3P0
    public static Connection getConnectionByC3P0() {
        try {
            return dataSourceFromC3P0.getConnection();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }

    // DBCP
    public static Connection getConnectionByDBCP() {
        try {
            return dataSourceFromDBCP.getConnection();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }

    // Druid
    public static Connection getConnectionByDruid() {
        try {
            return dataSourceFromDruid.getConnection();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }

    // Hikari
    public static Connection getConnectionByHikari() {
        try {
            return dataSourceFromHikari.getConnection();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
        return null;
    }

    // release resources in reverse order of creation
    public static void releaseResource(Connection connection, PreparedStatement preparedStatement, ResultSet resultSet) {
        try {
            if (resultSet != null) resultSet.close();
            if (preparedStatement != null) preparedStatement.close();
            if (connection != null) connection.close();
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
    }
}
```
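releaseResource closes the three resources in reverse order of creation: ResultSet, then PreparedStatement, then Connection. Since Java 7, try-with-resources does the same thing automatically, closing declared resources in reverse declaration order. A stand-in demonstration of that close order, with no real JDBC objects involved (the resource names are just labels):

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates that try-with-resources closes resources in reverse
// declaration order -- the same order releaseResource uses by hand.
public class CloseOrderDemo {
    static List<String> closed = new ArrayList<>();

    // a stand-in resource that records its name when closed
    static AutoCloseable resource(String name) {
        return () -> closed.add(name);
    }

    public static List<String> run() throws Exception {
        closed.clear();
        try (AutoCloseable connection = resource("connection");
             AutoCloseable statement = resource("statement");
             AutoCloseable resultSet = resource("resultSet")) {
            // use the resources...
        } // closed automatically here, last-declared first
        return closed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints [resultSet, statement, connection]
    }
}
```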

Testing it:

```java
@Test
public void te5() throws SQLException {
    Connection connectionByOriginalJdbc = CompleteJdbcUtils.getConnectionByOriginalJdbc();
    Connection connectionByC3P0 = CompleteJdbcUtils.getConnectionByC3P0();
    Connection connectionByDBCP = CompleteJdbcUtils.getConnectionByDBCP();
    Connection connectionByDruid = CompleteJdbcUtils.getConnectionByDruid();
    Connection connectionByHikari = CompleteJdbcUtils.getConnectionByHikari();

    Connection[] connections = new Connection[]{
            connectionByOriginalJdbc,
            connectionByC3P0,
            connectionByDBCP,
            connectionByDruid,
            connectionByHikari
    };

    for (Connection connection : connections) {
        System.out.println(connection);
        connection.close();
    }
}
```
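One detail worth knowing when closing pooled connections: calling close() on a connection obtained from a pool normally returns it to the pool rather than closing the physical connection. A toy illustration of the wrapping technique — a dynamic proxy whose close() pushes the connection back onto an idle queue — follows; this is not HikariCP's actual implementation, just the general idea:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.ArrayDeque;
import java.util.Deque;

// Toy pool: hands out proxies whose close() returns the physical
// connection to an idle queue instead of really closing it.
public class PoolProxyDemo {
    final Deque<Connection> idle = new ArrayDeque<>();

    Connection wrap(Connection physical) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, methodArgs) -> {
                    if ("close".equals(method.getName())) {
                        idle.push(physical); // back to the pool, not closed
                        return null;
                    }
                    return method.invoke(physical, methodArgs);
                });
    }

    // a do-nothing stand-in for a physical connection, for the demo only
    static Connection dummyPhysicalConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, methodArgs) -> null);
    }

    public static void main(String[] args) throws Exception {
        PoolProxyDemo pool = new PoolProxyDemo();
        Connection connection = pool.wrap(dummyPhysicalConnection());
        connection.close();                   // returns to the pool...
        System.out.println(pool.idle.size()); // ...prints 1
    }
}
```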

Some common helper methods to go with the complete utility class:

- insert/update/delete — update

- query for a single result — queryOne

- query for a list of results — queryToList

- parameter injection for prepared statements

```java
// obtain a PreparedStatement
public static PreparedStatement getPreparedStatement(Connection connection, String sql) {
    try {
        return connection.prepareStatement(sql);
    } catch (SQLException sqlException) {
        sqlException.printStackTrace();
    }
    return null;
}

// parameter injection
public static void argumentsInject(PreparedStatement preparedStatement, Object[] args) {
    for (int i = 0; i < args.length; i++) {
        try {
            preparedStatement.setObject(i + 1, args[i]); // JDBC parameters are 1-based
        } catch (SQLException sqlException) {
            sqlException.printStackTrace();
        }
    }
}

// convert a date string such as "1987-09-01" to java.sql.Date
public static java.sql.Date parseToSqlDate(String patternTime) {
    SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd");
    java.util.Date date = null;
    try {
        date = simpleDateFormat.parse(patternTime);
    } catch (ParseException e) {
        e.printStackTrace();
    }
    return new java.sql.Date(date.getTime());
}

// insert/update/delete
public static int update(Connection connection, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    try {
        return preparedStatement.executeUpdate();
    } catch (SQLException sqlException) {
        sqlException.printStackTrace();
    } finally {
        releaseResource(connection, preparedStatement, null);
    }
    return 0;
}

// query a list of beans via reflection
public static <T> List<T> queryToList(Connection connection, Class<T> tClass, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    ResultSet resultSet = null;
    try {
        resultSet = preparedStatement.executeQuery();
        ResultSetMetaData metaData = resultSet.getMetaData();
        int columnCount = metaData.getColumnCount();
        List<T> tList = new ArrayList<T>();
        while (resultSet.next()) {
            T t = tClass.newInstance();
            for (int i = 0; i < columnCount; i++) {
                Object columnValue = resultSet.getObject(i + 1);
                String columnLabel = metaData.getColumnLabel(i + 1);
                Field field = tClass.getDeclaredField(columnLabel); // column label must match the field name
                field.setAccessible(true);
                field.set(t, columnValue);
            }
            tList.add(t);
        }
        return tList;
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        releaseResource(connection, preparedStatement, resultSet);
    }
    return null;
}

// query a list of rows as Maps
public static List<Map<String, Object>> queryToList(Connection connection, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    ResultSet resultSet = null;
    try {
        resultSet = preparedStatement.executeQuery();
        ResultSetMetaData metaData = resultSet.getMetaData();
        int columnCount = metaData.getColumnCount();
        List<Map<String, Object>> mapList = new ArrayList<Map<String, Object>>();
        while (resultSet.next()) {
            Map<String, Object> row = new HashMap<String, Object>();
            for (int i = 0; i < columnCount; i++) {
                String columnLabel = metaData.getColumnLabel(i + 1);
                Object columnValue = resultSet.getObject(i + 1);
                row.put(columnLabel, columnValue);
            }
            mapList.add(row);
        }
        return mapList;
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        releaseResource(connection, preparedStatement, resultSet);
    }
    return null;
}

// query a single bean via reflection
public static <T> T queryOne(Connection connection, Class<T> tClass, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    ResultSet resultSet = null;
    try {
        resultSet = preparedStatement.executeQuery();
        ResultSetMetaData metaData = resultSet.getMetaData();
        int columnCount = metaData.getColumnCount();
        if (resultSet.next()) {
            T t = tClass.newInstance();
            for (int i = 0; i < columnCount; i++) {
                Object columnValue = resultSet.getObject(i + 1);
                String columnLabel = metaData.getColumnLabel(i + 1);
                Field field = tClass.getDeclaredField(columnLabel);
                field.setAccessible(true);
                field.set(t, columnValue);
            }
            return t;
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        releaseResource(connection, preparedStatement, resultSet);
    }
    return null;
}

// query a single row as a Map
public static Map<String, Object> queryOne(Connection connection, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    ResultSet resultSet = null;
    try {
        resultSet = preparedStatement.executeQuery();
        ResultSetMetaData metaData = resultSet.getMetaData();
        int columnCount = metaData.getColumnCount();
        if (resultSet.next()) {
            Map<String, Object> map = new HashMap<String, Object>();
            for (int i = 0; i < columnCount; i++) {
                Object columnValue = resultSet.getObject(i + 1);
                String columnLabel = metaData.getColumnLabel(i + 1);
                map.put(columnLabel, columnValue);
            }
            return map;
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        releaseResource(connection, preparedStatement, resultSet);
    }
    return null;
}

// generic method for single values, e.g. aggregate function results
public static <E> E getValue(Connection connection, String sql, Object[] args) {
    PreparedStatement preparedStatement = getPreparedStatement(connection, sql);
    if (args != null) argumentsInject(preparedStatement, args);
    ResultSet resultSet = null;
    try {
        resultSet = preparedStatement.executeQuery();
        if (resultSet.next()) return (E) resultSet.getObject(1);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        CompleteJdbcUtils.releaseResource(connection, preparedStatement, resultSet);
    }
    return null;
}
```
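The core trick in queryOne/queryToList is matching each column label to a bean field of the same name via reflection. Isolated as a JDK-only sketch below — the `User` bean and `mapRow` helper are hypothetical, with a plain Map standing in for one ResultSet row — this also shows why the helpers require column labels to match field names exactly (aliasing in SQL, e.g. `SELECT user_name AS name`, is the usual workaround):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Sketch of the reflection-based row-to-bean mapping used by the helpers.
public class RowMapSketch {
    public static class User {
        public int id;      // fields public here only to keep the sketch short
        public String name;
    }

    // copy each column (label -> value) onto the field with the same name
    static <T> T mapRow(Class<T> tClass, Map<String, Object> row) throws Exception {
        T t = tClass.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, Object> column : row.entrySet()) {
            Field field = tClass.getDeclaredField(column.getKey());
            field.setAccessible(true);
            field.set(t, column.getValue());
        }
        return t;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1);
        row.put("name", "ark");
        User user = mapRow(User.class, row);
        System.out.println(user.id + " " + user.name); // prints 1 ark
    }
}
```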
