HBase Benchmarking
Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation
It returns:
[root@node1 /]# hbase org.apache.hadoop.hbase.PerformanceEvaluation
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
Usage: java org.apache.hadoop.hbase.PerformanceEvaluation \
  <OPTIONS> [-D<property=value>]* <command> <nclients>

Options:
 nomapred        Run multiple clients using threads (rather than use mapreduce)
 rows            Rows each client runs. Default: 1048576
 size            Total size in GiB. Mutually exclusive with --rows. Default: 1.0.
 sampleRate      Execute test on a sample of total rows. Only supported by randomRead. Default: 1.0
 traceRate       Enable HTrace spans. Initiate tracing every N rows. Default: 0
 table           Alternate table name. Default: 'TestTable'
 multiGet        If >0, when doing RandomRead, perform multiple gets instead of single gets. Default: 0
 compress        Compression type to use (GZ, LZO, ...). Default: 'NONE'
 flushCommits    Used to determine if the test should flush the table. Default: false
 writeToWAL      Set writeToWAL on puts. Default: True
 autoFlush       Set autoFlush on htable. Default: False
 oneCon          all the threads share the same connection. Default: False
 presplit        Create presplit table. If a table with same name exists, it'll be deleted and recreated (instead of verifying count of its existing regions). Recommended for accurate perf analysis (see guide). Default: disabled
 inmemory        Tries to keep the HFiles of the CF inmemory as far as possible. Not guaranteed that reads are always served from memory. Default: false
 usetags         Writes tags along with KVs. Use with HFile V3. Default: false
 numoftags       Specify the no of tags that would be needed. This works only if usetags is true.
 filterAll       Helps to filter out all the rows on the server side there by not returning any thing back to the client. Helps to check the server side performance. Uses FilterAllFilter internally.
 latency         Set to report operation latencies. Default: False
 bloomFilter     Bloom filter type, one of [NONE, ROW, ROWCOL]
 blockEncoding   Block encoding to use. Value should be one of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE]. Default: NONE
 valueSize       Pass value size to use: Default: 1024
 valueRandom     Set if we should vary value size between 0 and 'valueSize'; set on read for stats on size: Default: Not set.
 valueZipf       Set if we should vary value size between 0 and 'valueSize' in zipf form: Default: Not set.
 period          Report every 'period' rows: Default: opts.perClientRunRows / 10
 multiGet        Batch gets together into groups of N. Only supported by randomRead. Default: disabled
 addColumns      Adds columns to scans/gets explicitly. Default: true
 replicas        Enable region replica testing. Defaults: 1.
 splitPolicy     Specify a custom RegionSplitPolicy for the table.
 randomSleep     Do a random sleep before each get between 0 and entered value. Defaults: 0
 columns         Columns to write per row. Default: 1
 caching         Scan caching to use. Default: 30

 Note: -D properties will be applied to the conf used.
 For example: -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.task.timeout=60000

Command:
 append          Append on each row; clients overlap on keyspace so some concurrent operations
 checkAndDelete  CheckAndDelete on each row; clients overlap on keyspace so some concurrent operations
 checkAndMutate  CheckAndMutate on each row; clients overlap on keyspace so some concurrent operations
 checkAndPut     CheckAndPut on each row; clients overlap on keyspace so some concurrent operations
 filterScan      Run scan test using a filter to find a specific row based on it's value (make sure to use --rows=20)
 increment       Increment on each row; clients overlap on keyspace so some concurrent operations
 randomRead      Run random read test
 randomSeekScan  Run random seek and scan 100 test
 randomWrite     Run random write test
 scan            Run scan test (read every row)
 scanRange10     Run random seek scan with both start and stop row (max 10 rows)
 scanRange100    Run random seek scan with both start and stop row (max 100 rows)
 scanRange1000   Run random seek scan with both start and stop row (max 1000 rows)
 scanRange10000  Run random seek scan with both start and stop row (max 10000 rows)
 sequentialRead  Run sequential read test
 sequentialWrite Run sequential write test

Args:
 nclients        Integer. Required. Total number of clients (and HRegionServers) running: 1 <= value <= 500

Examples:
 To run a single client doing the default 1M sequentialWrites:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
 To run 10 clients doing increments over ten rows:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 --nomapred increment 10
Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows= --presplit= sequentialWrite
It returns:
……
INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node3/
INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid22929.hprof ...
INFO zookeeper.ClientCnxn: Session establishment complete on server node5/, sessionid =
INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node4/
INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node4/
INFO zookeeper.ClientCnxn: Session establishment complete on server node5/, sessionid =
Heap dump file created [ bytes in 0.962 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 22929"...
Killed
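The OutOfMemoryError above is thrown by the PE client JVM itself, not by the cluster. One workaround, sketched here on the assumption of a stock bin/hbase script (which reads the HBASE_HEAPSIZE environment variable to size the client heap), is to give the client more memory before rerunning; the argument values are illustrative:

export HBASE_HEAPSIZE=4096   # client JVM heap in MB; an illustrative value
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred sequentialWrite 10

The route taken below is the other option: first examine what is consuming memory on the node, then shrink the workload itself.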
To analyze memory usage, run the free command, which returns:
[root@node1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:       65398900    13711168    26692112      115096    24995620    50890860
Swap:      29200380           0    29200380
Row 1, Mem:
total: total physical memory. 65398900 KB / 1024 ≈ 63866 MB ≈ 62 GB, i.e., a nominal 64 GB of RAM.
total = used + free + buff/cache
available = free + buff/cache (the reclaimable part)
buff: write I/O buffering
cache: read I/O caching
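The identity total = used + free + buff/cache can be checked directly against this output. A minimal awk sketch, assuming the procps-ng column layout shown above (values in KB):

free | awk '/^Mem:/ { print "total:", $2, " used+free+buff/cache:", $3 + $4 + $6 }'

For the numbers above, 13711168 + 26692112 + 24995620 = 65398900 KB, which matches total exactly.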
To see which processes are using that memory, run ps aux. It returns the following (excerpted to the processes relevant to the big-data stack):
[root@node1 ~]# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          ?      Ss   Mar06   :  /usr/lib/systemd/systemd --switched
apache        ?      S    Apr08   :  /usr/sbin/httpd -DFOREGROUND
ntp           ?      Ss   Mar06   :  /usr/sbin/ntpd -u ntp:ntp -g
clouder+      ?      Ssl  Mar11   :  /usr/java/jdk1.7.0_67-cloudera/bin/
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      S    Mar20   :  /bin/bash /usr/lib64/cmf/service/hb
flume         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Xm
flume         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hive          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Xm
hive          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
oozie         ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -D
spark         ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -cp
oozie         ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
spark         ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
mysql         ?      Ss   Mar06   :  /bin/sh /usr/bin/mysqld_safe --base
mysql         ?      Sl   Mar06   :  /usr/libexec/mysqld --basedir=/usr
hue           ?      S    Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
hue           ?      Sl   Mar12   :  python2.7 /opt/cloudera/parcels/CDH
hue           ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hue           ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      S    Mar20   :  /bin/bash /usr/lib64/cmf/service/hb
rpc           ?      Ss   Mar13   :  /sbin/rpcbind -w
hue           ?      Sl   Mar13   :  /usr/sbin/httpd -f /run/cloudera-sc
hdfs          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
hdfs          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      S        :   :
mapred        ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
mapred        ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
yarn          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
yarn          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
The RSS column here is the physical memory usage:
VSZ: virtual memory used by the process
RSS: physical memory used by the process (resident set size)
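Building on this, RSS can be aggregated per user to get a rough per-account memory footprint. A sketch, assuming RSS is the sixth column of ps aux and is reported in KB:

ps aux | awk 'NR > 1 { rss[$1] += $6 } END { for (u in rss) printf "%-10s %10.1f MB\n", u, rss[u] / 1024 }' | sort -k2 -rn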
Run the command:
ps aux --sort -rss
This sorts the processes by physical memory usage in descending order and returns the following (excerpt):
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
clouder+      ?      Ssl  Mar11   :  /usr/java/jdk1.7.0_67-cloudera/bin/
hive          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Xm
hdfs          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
oozie         ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -D
root          pts/   Sl+      :   :  /usr/java/jdk1.8.0_131/bin/java -cp
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
mapred        ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
yarn          ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -Dp
flume         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Xm
hbase         ?      Sl   Mar20   :  /usr/java/jdk1.8.0_131/bin/java -Dp
spark         ?      Sl   Mar12   :  /usr/java/jdk1.8.0_131/bin/java -cp
gnome-i+      ?      Sl   Mar06   :  gnome-shell --mode=initial-setup
mysql         ?      Sl   Mar06   :  /usr/libexec/mysqld --basedir=/usr
hue           ?      Sl   Mar12   :  python2.7 /opt/cloudera/parcels/CDH
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/gnome-initial-setup
root          ?      Ssl  Mar07   :  python2.7 /usr/lib64/cmf/agent/buil
root          ?      S<l  Mar07   :  /root/vpnserver/vpnserver execsvc
root          ?      Sl   Mar07   :  python2.7 /usr/lib64/cmf/agent/buil
root          tty1   Ssl+ Mar06   :  /usr/bin/Xorg : -background none -
polkitd       ?      Ssl  Mar06   :  /usr/lib/polkit-1/polkitd --no-debu
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/gnome-settings-daemon
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/goa-daemon
root          ?      Ssl  Mar06   :  /usr/bin/python -Es /usr/sbin/tuned
root          ?      Ss   Mar07   :  /usr/lib64/cmf/agent/build/env/bin/
root          ?      Ssl  Mar06   :  /usr/sbin/libvirtd
geoclue       ?      Ssl  Mar06   :  /usr/libexec/geoclue -t
root          ?      S    Mar07   :  python2.7 /usr/lib64/cmf/agent/buil
root          ?      Ssl  Mar06   :  /usr/sbin/NetworkManager --no-daemo
hive          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/caribou
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/ibus-x11 --kill-daemon
gnome-i+      ?      Ssl  Mar06   :  /usr/bin/gnome-session --autostart
root          ?      Ss   Mar06   :  /usr/lib/systemd/systemd-journald
colord        ?      Ssl  Mar06   :  /usr/libexec/colord
hue           ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
mapred        ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hue           ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
spark         ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
flume         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
hdfs          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
oozie         ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
hbase         ?      Sl   Mar20   :  python2.7 /usr/lib64/cmf/agent/buil
yarn          ?      Sl   Mar12   :  python2.7 /usr/lib64/cmf/agent/buil
gnome-i+      ?      Sl   Mar06   :  ibus-daemon --xim --panel disable
root          ?      Ssl  Mar06   :  /usr/lib/udisks2/udisksd --no-debug
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/mission-control-5
root          ?      Ss   Mar11   :  /usr/sbin/httpd -DFOREGROUND
root          pts/   S+       :   :  python /opt/cloudera/parcels/CLABS_
root          ?      Ss       :   :  sshd: root@notty
root          ?      Ssl  Mar06   :  /usr/libexec/packagekitd
root          ?      Ssl  Mar06   :  /usr/sbin/rsyslogd -n
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/goa-identity-service
root          ?      Ssl  Mar06   :  /usr/libexec/upowerd
root          ?      S<Ls Mar06   :  /usr/sbin/iscsid
root          ?      Ss   Mar06   :  /usr/lib/systemd/systemd --switched
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/ibus-dconf
root          ?      Ss       :   :  sshd: root@pts/
root          ?      Ss       :   :  sshd: root@pts/
root          ?      Ss       :   :  sshd: root@pts/
root          ?      Ss       :   :  sshd: root@pts/
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
root          ?      Sl   Mar06   :  gdm-session-worker [pam/gdm-launch-
hue           ?      Sl   Mar13   :  /usr/sbin/httpd -f /run/cloudera-sc
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
root          ?      Ssl  Mar06   :  /usr/sbin/ModemManager
gnome-i+      ?      Sl   Mar06   :  gnome-keyring-daemon --unlock
hue           ?      Sl   Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
root          ?      Ss   Mar06   :  /usr/sbin/abrtd -d -s
hue           ?      S    Mar12   :  /usr/sbin/httpd -f /run/cloudera-sc
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/gvfsd
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/gvfs-afc-volume-monito
gnome-i+      ?      Sl   Mar06   :  /usr/libexec/gvfs-udisks2-volume-mo
root          ?      Ssl  Mar06   :  /usr/lib64/realmd/realmd
root          ?      Ss   Mar06   :  /usr/bin/abrt-watch-log -F BUG: WAR
root          ?      Ss   Mar06   :  /usr/bin/abrt-watch-log -F Backtrac
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr08   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
apache        ?      S    Apr07   :  /usr/sbin/httpd -DFOREGROUND
Run the jps command to list the running JVM processes:
[root@node1 ~]# jps
Bootstrap
SqlLine
HistoryServer
Main
RESTServer
HMaster
ResourceManager
RunJar
Application
Jps
JobHistoryServer
NameNode
Run jmap -heap 31262 to inspect the heap usage of the NameNode process inside its JVM. It returns:
[root@node1 ~]# jmap -heap 31262
Attaching to process ID 31262, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.131-b11

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio         =
   MaxHeapFreeRatio         =
   MaxHeapSize              =  (.0MB)
   NewSize                  =  (.3125MB)
   MaxNewSize               =  (.3125MB)
   OldSize                  =  (.6875MB)
   NewRatio                 =
   SurvivorRatio            =
   MetaspaceSize            =  (.796875MB)
   CompressedClassSpaceSize =  (.0MB)
   MaxMetaspaceSize         =  MB
   G1HeapRegionSize         =  (.0MB)

Heap Usage:
New Generation (Eden + Survivor Space):
   capacity =  (.8125MB)
   used     =  (.81820678710938MB)
   free     =  (.9942932128906MB)
   9.913490201890799% used
Eden Space:
   capacity =  (.3125MB)
   used     =  (.02981567382812MB)
   free     =  (.2826843261719MB)
   10.988596731597243% used
From Space:
   capacity =  (.5MB)
   used     =  (.78839111328125MB)
   free     =  (.71160888671875MB)
   1.3101766397664836% used
To Space:
   capacity =  (.5MB)
   used     =  (.0MB)
   free     =  (.5MB)
   0.0% used
concurrent mark-sweep generation:
   capacity =  (.6875MB)
   used     =  (.6791000366211MB)
   free     =  (.008399963379MB)
   2.661567829955683% used

interned Strings occupying  bytes.
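Rather than hard-coding the PID, it can be resolved on the fly, since jps prints one "PID ClassName" pair per line. A convenience sketch:

jmap -heap $(jps | awk '$2 == "NameNode" { print $1 }')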
To reduce the resources consumed, adjust the parameters of the previous test and run:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000 --presplit= sequentialWrite 10
This test runs PE in non-MapReduce mode (--nomapred), i.e., it spawns client threads instead of a MapReduce job. The command is sequentialWrite, a sequential-write test, and the trailing 10 means 10 client threads are started; --rows=1000 means each thread writes 1000 rows. presplit is the number of regions the test table is pre-split into. Always set it when benchmarking: otherwise all reads and writes land on a single region, which severely distorts the results. All PE output goes straight to the log file, whose location depends on the HBase configuration. When the run finishes, PE prints the latency statistics for each thread separately. For example, here is the result for one thread:
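Pre-splitting is not PE-specific: the same effect can be achieved when creating any table from the hbase shell. A minimal sketch, with illustrative table, column-family, and split-point names:

create 't1', 'f1', SPLITS => ['row1000', 'row2000', 'row3000']

With PE itself, --presplit=N simply creates TestTable with N regions up front.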
INFO hbase.PerformanceEvaluation: Latency (us) : mean=56.74, min=8.00, max=347.00, stdDev=84.51, 95th=283.00, 99th=305.98, 99.9th=346.99, 99.99th=347.00, 99.999th=347.00
INFO hbase.PerformanceEvaluation: Num measures (latency) :
INFO hbase.PerformanceEvaluation: Mean = 56.74 Min = 8.00 Max = 347.00 StdDev = 84.51 50th = 25.00 75th = 35.75 95th = 283.00 99th = 305.98 99.9th = 346.99 99.99th = 347.00 99.999th = 347.00
INFO hbase.PerformanceEvaluation: ValueSize (bytes) : mean=0.00, min=0.00, max=0.00, stdDev=0.00, 95th=0.00, 99th=0.00, 99.9th=0.00, 99.99th=0.00, 99.999th=0.00
INFO hbase.PerformanceEvaluation: Num measures (ValueSize):
INFO hbase.PerformanceEvaluation: Mean = 0.00 Min = 0.00 Max = 0.00 StdDev = 0.00 50th = 0.00 75th = 0.00 95th = 0.00 99th = 0.00 99.9th = 0.00 99.99th = 0.00 99.999th = 0.00
INFO hbase.PerformanceEvaluation: Test : SequentialWriteTest, Thread : TestClient-
INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c336b
as well as the following summary:
INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c3368
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.94 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO zookeeper.ZooKeeper: Session: 0x1696fc9820c3368 closed
INFO zookeeper.ClientCnxn: EventThread shut down
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (3.03 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.97 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.93 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.94 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.87 MB/s)
INFO hbase.PerformanceEvaluation: Finished TestClient- rows
INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Summary of timings (ms): [, , , , , , , , , ]
INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Min: 314ms Max: 342ms Avg: 328ms
INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3696fc9821c31b9
INFO zookeeper.ZooKeeper: Session: 0x3696fc9821c31b9 closed
INFO zookeeper.ClientCnxn: EventThread shut down
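These per-thread figures are internally consistent: assuming PE's default valueSize of 1024 bytes, each client wrote about 1000 x 1 KB ≈ 0.98 MB, and over the reported 314-342 ms per thread that works out to roughly 2.9-3.1 MB/s, matching the 2.87-3.03 MB/s printed above. Across the 10 clients, the aggregate write throughput is therefore on the order of 30 MB/s.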