Hadoop Benchmarking (Part 1)
Testing matters greatly for verifying a system's correctness and analyzing its performance, yet it is easy to neglect. To understand the system more fully, find its bottlenecks, and improve its performance, I plan to start from testing and work through Hadoop's main benchmarking tools.
TestDFSIO
TestDFSIO measures HDFS I/O performance. It uses a MapReduce job to perform reads and writes concurrently: each map task reads or writes one file, the map output carries the statistics collected for that file, and the reduce task aggregates the statistics and produces a summary.
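The overall workflow, sketched below with this cluster's jar name (each step is walked through in detail afterwards): the write pass must run first, because the read pass consumes the files the write pass created, and -clean removes everything at the end.

# Write pass: create 10 files of 1000 MB each (must run before -read)
hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
# Read pass: read the same 10 files back
hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
# Clean up: remove /benchmarks/TestDFSIO from HDFS
hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -clean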
The NameNode address is 10.*.*.131:7180.
Run hadoop version; its output includes the path where the Hadoop jar files live.
Change to that directory and run hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar, which returns the following:
An example program must be given as the first argument. Valid program names are:
  DFSCIOTest: Distributed i/o benchmark of libhdfs.
  DistributedFSCheck: Distributed checkup of the file system consistency.
  MRReliabilityTest: A program that tests the reliability of the MR framework by injecting faults/failures
  TestDFSIO: Distributed i/o benchmark.
  dfsthroughput: measure hdfs throughput
  filebench: Benchmark SequenceFile(Input|Output)Format (block, record compressed and uncompressed), Text(Input|Output)Format (compressed and uncompressed)
  loadgen: Generic map/reduce load generator
  mapredtest: A map/reduce test check.
  minicluster: Single process HDFS and MR cluster.
  mrbench: A map/reduce benchmark that can create many small jobs
  nnbench: A benchmark that stresses the namenode.
  testarrayfile: A test for flat files of binary key/value pairs.
  testbigmapoutput: A map/reduce program that works on a very big non-splittable file and does identity map/reduce
  testfilesystem: A test for FileSystem read/write.
  testmapredsort: A map/reduce program that validates the map-reduce framework's sort.
  testrpc: A test for rpc.
  testsequencefile: A test for flat files of binary key value pairs.
  testsequencefileinputformat: A test for sequence file input format.
  testsetfile: A test for flat files of binary key/value pairs.
  testtextinputformat: A test for text input format.
  threadedmapbench: A map/reduce benchmark that compares the performance of maps with multiple spills over maps with 1 spill
Run: hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
This returns:
INFO fs.TestDFSIO: TestDFSIO.1.7
INFO fs.TestDFSIO: nrFiles = 10
INFO fs.TestDFSIO: nrBytes (MB) = 1000.0
INFO fs.TestDFSIO: bufferSize = …
INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
INFO fs.TestDFSIO: creating control file: … bytes, 10 files
java.io.IOException: Permission denied: user=root, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
An error: java.io.IOException: Permission denied: user=root, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x — the root user has no write permission on the HDFS root directory, which is owned by hdfs.
Run su hdfs to switch to the hdfs user.
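Alternatively (a sketch, assuming the hdfs superuser account exists, as it does on CDH clusters), you can submit the job via sudo without changing shells, or pre-create the benchmark directory and hand it to root:

# Option 1: submit the job as the hdfs superuser directly
sudo -u hdfs hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
# Option 2: pre-create /benchmarks owned by root, then rerun as root
sudo -u hdfs hdfs dfs -mkdir -p /benchmarks
sudo -u hdfs hdfs dfs -chown root /benchmarks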
Run the same command again: hadoop jar hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
This returns:
INFO fs.TestDFSIO: TestDFSIO.1.7
INFO fs.TestDFSIO: nrFiles = 10
INFO fs.TestDFSIO: nrBytes (MB) = 1000.0
INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
INFO fs.TestDFSIO: creating control file: … bytes, 10 files
INFO client.RMProxy: Connecting to ResourceManager at node1/…
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0002
INFO impl.YarnClientImpl: Submitted application application_1552358721447_0002
INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0002/
INFO mapreduce.Job: Running job: job_1552358721447_0002
INFO mapreduce.Job: Job job_1552358721447_0002 completed successfully
INFO mapreduce.Job: Counters: … (File System, Job, Map-Reduce Framework, Shuffle Errors and File Input/Output Format counters elided)
java.io.FileNotFoundException: TestDFSIO_results.log (Permission denied)
Another error: java.io.FileNotFoundException: TestDFSIO_results.log (Permission denied)
This happens because the hdfs user cannot write to the current local directory, where TestDFSIO tries to create its results log; see the comments under https://blog.csdn.net/qq_15547319/article/details/53543587.
Solution: create a directory ** (mkdir **), grant the hdfs user access to it (sudo chmod -R 777 **), cd into it, and run hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -write -nrFiles 10 -fileSize 1000 again, which returns:
INFO fs.TestDFSIO: TestDFSIO.1.7
INFO fs.TestDFSIO: nrFiles = 10
INFO fs.TestDFSIO: nrBytes (MB) = 1000.0
INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0006
INFO impl.YarnClientImpl: Submitted application application_1552358721447_0006
INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0006/
INFO mapreduce.Job: Job job_1552358721447_0006 completed successfully
INFO mapreduce.Job: Counters: … (counter values elided)
INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
INFO fs.TestDFSIO: Number of files: 10
INFO fs.TestDFSIO: Total MBytes processed: 10000.0
INFO fs.TestDFSIO: Throughput mb/sec: 114.77630098937172
INFO fs.TestDFSIO: Average IO rate mb/sec: 115.29634094238281
INFO fs.TestDFSIO: IO rate std deviation: 7.880011777295818
INFO fs.TestDFSIO: Test exec time sec: 27.05
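TestDFSIO also appends this summary to a local file, TestDFSIO_results.log, in the current working directory — the very file the earlier run could not create. It accumulates results across runs and can be inspected afterwards:

# view the accumulated local results log
cat TestDFSIO_results.log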
Once the test command completes, a directory holding the generated test files appears in the Hadoop File System, as shown below:
together with a series of small control files:
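The same layout can be inspected from the shell; a sketch, assuming TestDFSIO's default directory names under /benchmarks/TestDFSIO:

hdfs dfs -ls /benchmarks/TestDFSIO              # io_control, io_data, io_write, ...
hdfs dfs -ls /benchmarks/TestDFSIO/io_control   # one small control file per map task
hdfs dfs -du -h /benchmarks/TestDFSIO/io_data   # the ten 1000 MB data files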
Downloading one of these small files locally shows a file size of 1 KB.
Opening it with Notepad++ shows the following content:
It is not human-readable text.
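That is expected: the control files are Hadoop SequenceFiles — binary key/value pairs naming each test file and its size — not plain text. hdfs dfs -text can decode one; the file name below follows TestDFSIO's naming convention and is an assumption:

# decode a SequenceFile control file to readable key/value text
hdfs dfs -text /benchmarks/TestDFSIO/io_control/in_file_test_io_0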
Run: hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
This returns:
INFO fs.TestDFSIO: TestDFSIO.1.7
INFO fs.TestDFSIO: nrFiles = 10
INFO fs.TestDFSIO: nrBytes (MB) = 1000.0
INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0007
INFO impl.YarnClientImpl: Submitted application application_1552358721447_0007
INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0007/
INFO mapreduce.Job: Job job_1552358721447_0007 completed successfully
INFO mapreduce.Job: Counters: … (counter values elided)
INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
INFO fs.TestDFSIO: Number of files: 10
INFO fs.TestDFSIO: Total MBytes processed: 10000.0
INFO fs.TestDFSIO: Throughput mb/sec: 897.4243919949744
INFO fs.TestDFSIO: Average IO rate mb/sec: 898.6844482421875
INFO fs.TestDFSIO: IO rate std deviation: 33.68623587810037
INFO fs.TestDFSIO: Test exec time sec: 19.035
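For reading these numbers (my understanding of how TestDFSIO 1.7 reports, treat it as an assumption): if map task i moves S_i MB in T_i seconds, over N files in total, then

Throughput mb/sec      = sum(S_i) / sum(T_i)
Average IO rate mb/sec = (1/N) * sum(S_i / T_i)

The large gap between the write pass (~115 MB/s) and the read pass (~897 MB/s) is plausible: each written block is replicated three times across the network, while reads can be served from local replicas and the OS page cache.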
Run: hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar TestDFSIO -clean
This returns:
INFO fs.TestDFSIO: TestDFSIO.1.7
INFO fs.TestDFSIO: nrBytes (MB) = 1.0
INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
INFO fs.TestDFSIO: Cleaning up test files
and the TestDFSIO directory is removed from the Hadoop File System.
nnbench
nnbench is a load test for the NameNode: it generates a large number of HDFS-related requests, putting the NameNode under heavy pressure. It can simulate creating, opening/reading, renaming, and deleting files on HDFS.
The nnbench options are:
NameNode Benchmark 0.4
Usage: nnbench <options>
Options:
  -operation <Available operations are create_write open_read rename delete. This option is mandatory>
   * NOTE: The open_read, rename and delete operations assume that the files they operate on are already available. The create_write operation must be run before running the other operations.
  -maps <number of maps. default is …. This is not mandatory>
  -reduces <number of reduces. default is …. This is not mandatory>
  -startTime <time to start, given in seconds from the epoch. default is current time + 2 mins. This is not mandatory>
  -blockSize <Block size in bytes. default is …. This is not mandatory>
  -bytesToWrite <Bytes to write. default is …. This is not mandatory>
  -bytesPerChecksum <Bytes per checksum for the files. default is …. This is not mandatory>
  -numberOfFiles <number of files to create. default is …. This is not mandatory>
  -replicationFactorPerFile <Replication factor for the files. default is …. This is not mandatory>
  -baseDir <base DFS path. default is /becnhmarks/NNBench. This is not mandatory>
  -readFileAfterOpen <true or false. if true, it reads the file and reports the average time to read. This is valid with the open_read operation. default is false. This is not mandatory>
  -help: Display the help statement
To create 1000 files using 12 mappers and 6 reducers, run: hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar nnbench -operation create_write -maps 12 -reduces 6 -blockSize 1 -bytesToWrite 0 -numberOfFiles 1000 -replicationFactorPerFile 3 -readFileAfterOpen true -baseDir /benchmarks/NNBench
This returns:
NameNode Benchmark 0.4
INFO hdfs.NNBench: Test Inputs:
INFO hdfs.NNBench: Test Operation: create_write
INFO hdfs.NNBench: Number of maps: 12
INFO hdfs.NNBench: Number of reduces: 6
INFO hdfs.NNBench: Block Size: 1
INFO hdfs.NNBench: Bytes to write: 0
INFO hdfs.NNBench: Number of files: 1000
INFO hdfs.NNBench: Replication factor: 3
INFO hdfs.NNBench: Base dir: /benchmarks/NNBench
INFO hdfs.NNBench: Read file after open: true
INFO hdfs.NNBench: Deleting data directory
INFO hdfs.NNBench: Creating control files
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0009
INFO impl.YarnClientImpl: Submitted application application_1552358721447_0009
INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0009/
INFO mapreduce.Job: Job job_1552358721447_0009 completed successfully
INFO mapreduce.Job: Counters: … (counter values elided)
INFO hdfs.NNBench: -------------- NNBench --------------
INFO hdfs.NNBench: Version: NameNode Benchmark 0.4
INFO hdfs.NNBench: Test Operation: create_write
INFO hdfs.NNBench: Maps to run: 12
INFO hdfs.NNBench: Reduces to run: 6
INFO hdfs.NNBench: Number of files: 1000
INFO hdfs.NNBench: Replication factor: 3
INFO hdfs.NNBench: # maps that missed the barrier: …
INFO hdfs.NNBench: # exceptions: …
INFO hdfs.NNBench: TPS: Create/Write/Close: …
INFO hdfs.NNBench: Avg exec time (ms): Create/Write/Close: 0.0
INFO hdfs.NNBench: Avg Lat (ms): Create/Write: NaN
INFO hdfs.NNBench: Avg Lat (ms): Close: NaN
INFO hdfs.NNBench: RAW DATA: Longest Map Time (ms): 0.0
INFO hdfs.NNBench: RAW DATA: Late maps: …
INFO hdfs.NNBench: RAW DATA: # of exceptions: …
After the job finishes, its details can be viewed at http://*.*.*.*:19888/jobhistory/job/job_1552358721447_0009:
An NNBench directory is also created in the Hadoop File System to hold the files produced by the job:
Under /benchmarks/NNBench/control, the metadata of one file, NNBench_Controlfile_0, shows that it is stored on three nodes:
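The same placement information is available from the command line with fsck, using the file shown above:

# report blocks and the DataNodes holding each replica
hdfs fsck /benchmarks/NNBench/control/NNBench_Controlfile_0 -files -blocks -locations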
Downloaded and opened with Notepad++, its content is likewise unreadable binary:
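With create_write done, the remaining nnbench operations can be exercised against the same base directory. A sketch reusing this cluster's jar path and parameters (per the usage notes above, open_read, rename and delete assume the files created earlier still exist):

hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar nnbench -operation open_read -maps 12 -reduces 6 -numberOfFiles 1000 -readFileAfterOpen true -baseDir /benchmarks/NNBench
hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar nnbench -operation rename -maps 12 -reduces 6 -numberOfFiles 1000 -baseDir /benchmarks/NNBench
hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar nnbench -operation delete -maps 12 -reduces 6 -numberOfFiles 1000 -baseDir /benchmarks/NNBench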
mrbench
mrbench runs a small job many times over, to check whether small jobs run repeatably and efficiently on the cluster. Its usage is:
Usage: mrbench
  [-baseDir <base DFS path>]
  [-numRuns <number of times to run the job, default is …>]
  [-maps <number of maps>]
  [-reduces <number of reduces>]
  [-inputLines <number of input lines to generate, default is …>]
  [-inputType <type of input to generate, one of ascending (default), descending, random>]
  [-verbose]
Run: hadoop jar ../jars/hadoop-test-2.6.0-mr1-cdh5.16.1.jar mrbench -numRuns 50
This returns:
……
INFO mapred.MRBench: Running job …: input=hdfs://node1:8020/benchmarks/MRBench/mr_input output=hdfs://node1:8020/benchmarks/MRBench/mr_output/output_299739316
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0059
INFO impl.YarnClientImpl: Submitted application application_1552358721447_0059
INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0059/
INFO mapreduce.Job: Job job_1552358721447_0059 completed successfully
INFO mapreduce.Job: Counters: … (counter values elided)
DataLines  Maps  Reduces  AvgTime (milliseconds)
…          …     …        …
The last line reports the average job completion time — about 15 seconds here.
Open http://*.*.*.*:8088/cluster to see the jobs that were run:
Corresponding directories were also created in the Hadoop File System, but they are empty:
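To finish with a clean cluster, the leftover benchmark directories can be removed by hand (a sketch; TestDFSIO's own -clean was shown earlier, and local logs such as TestDFSIO_results.log remain in the working directory):

# run as a user with write access to /benchmarks (e.g. hdfs)
hdfs dfs -rm -r /benchmarks/NNBench /benchmarks/MRBench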