SparkBench Installation and Quick Start

Welcome to follow me on GitHub~

This guide assumes that Spark 2.x is already working properly and that you have access to the system where it is installed.

System environment

CentOS 7.7.1908
Ambari-Spark 2.3.2
spark-bench_2.3.0_0.4.0-RELEASE

Installation steps

Download the package

Get the latest release package from the download page.
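For reference, release archives are published on the project's GitHub releases page. Below is a sketch of fetching one from the command line; the repository URL is my assumption (the CODAIT/spark-bench project), and the release tag placeholder must be replaced with a real tag from that page:

cd /opt/soft
# Assumed URL -- confirm the repository and tag on the spark-bench releases page.
wget https://github.com/CODAIT/spark-bench/releases/download/<release-tag>/spark-bench_2.3.0_0.4.0-RELEASE.tgz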

Extract the package

Next, extract the package on the system:

[root@master opt]# pwd
/opt
[root@master opt]# ll soft/
total 273792
drwxr-xr-x. 2 root root 164 Nov 5 14:51 ambari2.7.3.0+hdp3.1.0.0
drwxr-xr-x 4 root root 36 Nov 6 10:57 etc
-rw-r--r--. 1 root root 194151339 Nov 4 10:37 jdk-8u231-linux-x64.tar.gz
-rw-r--r-- 1 root root 90472 Nov 4 15:25 libtirpc-0.2.4-0.16.el7.i686.rpm
-rw-r--r-- 1 root root 93252 Nov 4 15:25 libtirpc-devel-0.2.4-0.16.el7.i686.rpm
-rw-r--r--. 1 root root 26024 Nov 4 14:19 mysql80-community-release-el7-3.noarch.rpm
-rw-r--r--. 1 root root 846263 Nov 4 14:27 mysql-connector-java-5.1.24.jar
-rw-r--r-- 1 root root 85142775 Nov 13 10:50 spark-bench_2.3.0_0.4.0-RELEASE.tgz
[root@master opt]# tar -zxf soft/spark-bench_2.3.0_0.4.0-RELEASE.tgz -C /opt/
[root@master opt]# ll
total 4
drwxr-xr-x. 2 root root 42 Nov 5 14:29 os
drwxr-xr-x. 4 root root 4096 Nov 13 11:13 soft
drwxr-xr-x 5 2000 2000 61 Mar 23 2018 spark-bench_2.3.0_0.4.0-RELEASE

Setting the environment variables

There are two ways to set the Spark home and master variables needed to run the examples.

Setting bash environment variables

The first way is to set bash environment variables. It is described here for completeness only; this walkthrough does not use it to set the environment variables.

In the bin folder of the installation directory there is a spark-bench-env.sh.template file. Edit it to set two environment variables:

1. SPARK_HOME: the full installation path of your Spark cluster or standalone Spark.

2. SPARK_MASTER_HOST: the same value you would pass as --master to spark-submit. Depending on your environment this could be local[2] for local mode, yarn for a YARN cluster, or an IP address and port.

You can either set these environment variables in your bash profile, or uncomment the corresponding lines in spark-bench-env.sh.template and fill them in. Note that after editing you must rename spark-bench-env.sh.template to spark-bench-env.sh.
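A minimal sketch of this method, using the installation path from above:

cd /opt/spark-bench_2.3.0_0.4.0-RELEASE/bin
# Work on a copy so the template is preserved; the launcher scripts
# expect a file named spark-bench-env.sh.
cp spark-bench-env.sh.template spark-bench-env.sh
vi spark-bench-env.sh   # uncomment and fill in SPARK_HOME and SPARK_MASTER_HOST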

The modified spark-bench-env.sh file looks like this:

#!/bin/bash

# ############################################################### #
# PLEASE SET THE FOLLOWING VARIABLES TO REFLECT YOUR ENVIRONMENT #
# ############################################################### #

# set this to the directory where Spark is installed in your environment, for example: /opt/spark-spark-2.1.0-bin-hadoop2.6
export SPARK_HOME=/usr/hdp/3.1.0.0-78/spark2/

# set this to the master for your environment, such as local[2], yarn, 10.29.0.3, etc.
export SPARK_MASTER_HOST=local[*]

Modifying the example config file to include the environment info [recommended]

For example, the original contents of minimal-example.conf look like this:

[root@master examples]# pwd
/opt/spark-bench_2.3.0_0.4.0-RELEASE/examples
[root@master examples]# cat minimal-example.conf
spark-bench = {
  spark-submit-config = [{
    workload-suites = [
      {
        descr = "One run of SparkPi and that's it!"
        benchmark-output = "console"
        workloads = [
          {
            name = "sparkpi"
            slices = 10
          }
        ]
      }
    ]
  }]
}

After adding the spark-home and master keys, it becomes:

spark-bench = {
  spark-home = "/usr/hdp/current/spark2-client"
  // export SPARK_MASTER_HOST=yarn // the master is now passed via spark-args instead
  spark-submit-config = [{
    spark-args = {
      master = "local[*]" // or whatever the correct master is for your environment
    }
    workload-suites = [
      {
        descr = "One run of SparkPi and that's it!"
        benchmark-output = "console"
        workloads = [
          {
            name = "sparkpi"
            slices = 10
          }
        ]
      }
    ]
  }]
}
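Building on this example, a suite can also be repeated several times via a suite-level repeat key. Note that repeat as used in this sketch is my assumption about an optional workload-suite setting; verify it against the spark-bench version you installed:

cat > examples/sparkpi-repeat.conf <<'EOF'
spark-bench = {
  spark-home = "/usr/hdp/current/spark2-client"
  spark-submit-config = [{
    spark-args = {
      master = "local[*]"
    }
    workload-suites = [
      {
        descr = "Five runs of SparkPi"
        repeat = 5 // assumed optional key, defaults to 1
        benchmark-output = "console"
        workloads = [
          {
            name = "sparkpi"
            slices = 10
          }
        ]
      }
    ]
  }]
}
EOF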

Note: the official documentation recommends the second method, but in newer versions the two approaches can be combined. For example, set SPARK_HOME in spark-bench-env.sh and add only the master setting to minimal-example.conf; this may be more convenient.
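A sketch of that combined setup, assuming the paths used earlier:

cd /opt/spark-bench_2.3.0_0.4.0-RELEASE
# 1) Pin SPARK_HOME once in the env script (created from the template if missing).
cp -n bin/spark-bench-env.sh.template bin/spark-bench-env.sh
echo 'export SPARK_HOME=/usr/hdp/3.1.0.0-78/spark2/' >> bin/spark-bench-env.sh
# 2) Then each conf file only needs to carry the master:
#      spark-args = { master = "local[*]" }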

Running the examples

Running minimal-example.conf with local[*]

[root@master spark-bench_2.3.0_0.4.0-RELEASE]# ./bin/spark-bench.sh ./examples/minimal-example.conf
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
19/11/12 22:54:58 INFO CLIKickoff$: args received: {"spark-bench":{"spark-submit-config":[{"spark-args":{"master":"local[*]"},"workload-suites":[{"benchmark-output":"console","descr":"One run of SparkPi and that's it!","workloads":[{"name":"sparkpi","slices":10}]}]}]}}
19/11/12 22:54:58 INFO SparkContext: Running Spark version 2.3.2.3.1.0.0-78
19/11/12 22:54:58 INFO SparkContext: Submitted application: com.ibm.sparktc.sparkbench.cli.CLIKickoff
19/11/12 22:54:58 INFO SecurityManager: Changing view acls to: root
19/11/12 22:54:58 INFO SecurityManager: Changing modify acls to: root
19/11/12 22:54:58 INFO SecurityManager: Changing view acls groups to:
19/11/12 22:54:58 INFO SecurityManager: Changing modify acls groups to:
19/11/12 22:54:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
19/11/12 22:54:58 INFO Utils: Successfully started service 'sparkDriver' on port 44042.
19/11/12 22:54:58 INFO SparkEnv: Registering MapOutputTracker
19/11/12 22:54:58 INFO SparkEnv: Registering BlockManagerMaster
19/11/12 22:54:58 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/11/12 22:54:58 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/11/12 22:54:58 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-49b14697-c68a-41e2-9d75-6bcb853f9082
19/11/12 22:54:58 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
19/11/12 22:54:59 INFO SparkEnv: Registering OutputCommitCoordinator
19/11/12 22:54:59 INFO log: Logging initialized @3319ms
19/11/12 22:54:59 INFO Server: jetty-9.3.z-SNAPSHOT, build timestamp: 2018-06-05T13:11:56-04:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
19/11/12 22:54:59 INFO Server: Started @3427ms
19/11/12 22:54:59 INFO AbstractConnector: Started ServerConnector@72bca894{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/11/12 22:54:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@320e400{/jobs,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7bdf6bb7{/jobs/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1bc53649{/jobs/job,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@475b7792{/jobs/job/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@751e664e{/stages,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@160c3ec1{/stages/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@182b435b{/stages/stage,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7577b641{/stages/stage/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3704122f{/stages/pool,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3153ddfc{/stages/pool/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@60afd40d{/storage,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@28a2a3e7{/storage/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3f2049b6{/storage/rdd,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@10b3df93{/storage/rdd/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@ea27e34{/environment,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@33a2499c{/environment/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@e72dba7{/executors,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@33c2bd{/executors/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1dfd5f51{/executors/threadDump,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3c321bdb{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@24855019{/static,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5a2f016d{/,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1a38ba58{/api,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1deb2c43{/jobs/job/kill,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3bb9efbc{/stages/stage/kill,null,AVAILABLE,@Spark}
19/11/12 22:54:59 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://master.org.cn:4040
19/11/12 22:54:59 INFO SparkContext: Added JAR file:/opt/spark-bench_2.3.0_0.4.0-RELEASE/lib/spark-bench-2.3.0_0.4.0-RELEASE.jar at spark://master.org.cn:44042/jars/spark-bench-2.3.0_0.4.0-RELEASE.jar with timestamp 1573617299281
19/11/12 22:54:59 INFO Executor: Starting executor ID driver on host localhost
19/11/12 22:54:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46637.
19/11/12 22:54:59 INFO NettyBlockTransferService: Server created on master.org.cn:46637
19/11/12 22:54:59 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/11/12 22:54:59 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, master.org.cn, 46637, None)
19/11/12 22:54:59 INFO BlockManagerMasterEndpoint: Registering block manager master.org.cn:46637 with 413.9 MB RAM, BlockManagerId(driver, master.org.cn, 46637, None)
19/11/12 22:54:59 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, master.org.cn, 46637, None)
19/11/12 22:54:59 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, master.org.cn, 46637, None)
19/11/12 22:54:59 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3c89bb12{/metrics/json,null,AVAILABLE,@Spark}
19/11/12 22:55:01 INFO EventLoggingListener: Logging events to hdfs:/spark2-history/local-1573617299355
19/11/12 22:55:02 INFO SparkContext: Starting job: reduce at SparkPi.scala:58
19/11/12 22:55:02 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:58) with 10 output partitions
19/11/12 22:55:02 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:58)
19/11/12 22:55:02 INFO DAGScheduler: Parents of final stage: List()
19/11/12 22:55:02 INFO DAGScheduler: Missing parents: List()
19/11/12 22:55:02 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:54), which has no missing parents
19/11/12 22:55:02 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1960.0 B, free 413.9 MB)
19/11/12 22:55:02 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1271.0 B, free 413.9 MB)
19/11/12 22:55:02 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on master.org.cn:46637 (size: 1271.0 B, free: 413.9 MB)
19/11/12 22:55:02 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039
19/11/12 22:55:02 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:54) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
19/11/12 22:55:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
19/11/12 22:55:02 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:02 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
19/11/12 22:55:02 INFO Executor: Fetching spark://master.org.cn:44042/jars/spark-bench-2.3.0_0.4.0-RELEASE.jar with timestamp 1573617299281
19/11/12 22:55:02 INFO TransportClientFactory: Successfully created connection to master.org.cn/192.168.0.219:44042 after 38 ms (0 ms spent in bootstraps)
19/11/12 22:55:02 INFO Utils: Fetching spark://master.org.cn:44042/jars/spark-bench-2.3.0_0.4.0-RELEASE.jar to /tmp/spark-4460cfe3-6a98-45b2-bf86-01387519d92d/userFiles-a94f5012-b587-4564-80ea-ed23b6f3507b/fetchFileTemp7487710686530246224.tmp
19/11/12 22:55:03 INFO Executor: Adding file:/tmp/spark-4460cfe3-6a98-45b2-bf86-01387519d92d/userFiles-a94f5012-b587-4564-80ea-ed23b6f3507b/spark-bench-2.3.0_0.4.0-RELEASE.jar to class loader
19/11/12 22:55:03 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 867 bytes result sent to driver
19/11/12 22:55:03 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:03 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
19/11/12 22:55:03 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 824 bytes result sent to driver
19/11/12 22:55:03 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, executor driver, partition 2, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:03 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
19/11/12 22:55:04 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 867 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1190 ms on localhost (executor driver) (1/10)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 121 ms on localhost (executor driver) (2/10)
19/11/12 22:55:04 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, executor driver, partition 3, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
19/11/12 22:55:04 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 824 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, executor driver, partition 4, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 93 ms on localhost (executor driver) (3/10)
19/11/12 22:55:04 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
19/11/12 22:55:04 INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 824 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, executor driver, partition 5, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 13 ms on localhost (executor driver) (4/10)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 39 ms on localhost (executor driver) (5/10)
19/11/12 22:55:04 INFO Executor: Running task 5.0 in stage 0.0 (TID 5)
19/11/12 22:55:04 INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 867 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, executor driver, partition 6, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 40 ms on localhost (executor driver) (6/10)
19/11/12 22:55:04 INFO Executor: Running task 6.0 in stage 0.0 (TID 6)
19/11/12 22:55:04 INFO Executor: Finished task 6.0 in stage 0.0 (TID 6). 824 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, executor driver, partition 7, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 10 ms on localhost (executor driver) (7/10)
19/11/12 22:55:04 INFO Executor: Running task 7.0 in stage 0.0 (TID 7)
19/11/12 22:55:04 INFO Executor: Finished task 7.0 in stage 0.0 (TID 7). 824 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, executor driver, partition 8, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 26 ms on localhost (executor driver) (8/10)
19/11/12 22:55:04 INFO Executor: Running task 8.0 in stage 0.0 (TID 8)
19/11/12 22:55:04 INFO Executor: Finished task 8.0 in stage 0.0 (TID 8). 824 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, executor driver, partition 9, PROCESS_LOCAL, 7853 bytes)
19/11/12 22:55:04 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 20 ms on localhost (executor driver) (9/10)
19/11/12 22:55:04 INFO Executor: Running task 9.0 in stage 0.0 (TID 9)
19/11/12 22:55:04 INFO Executor: Finished task 9.0 in stage 0.0 (TID 9). 781 bytes result sent to driver
19/11/12 22:55:04 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 62 ms on localhost (executor driver) (10/10)
19/11/12 22:55:04 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:58) finished in 1.724 s
19/11/12 22:55:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/11/12 22:55:04 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:58, took 1.895388 s
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 15
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 16
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 14
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 7
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 3
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 4
19/11/12 22:55:05 INFO BlockManagerInfo: Removed broadcast_0_piece0 on master.org.cn:46637 in memory (size: 1271.0 B, free: 413.9 MB)
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 17
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 9
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 24
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 18
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 11
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 21
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 5
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 19
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 12
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 22
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 8
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 0
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 23
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 20
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 6
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 13
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 10
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 1
19/11/12 22:55:05 INFO ContextCleaner: Cleaned accumulator 2
19/11/12 22:55:06 INFO SharedState: loading hive config file: file:/etc/spark2/3.1.0.0-78/0/hive-site.xml
19/11/12 22:55:06 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/apps/spark/warehouse').
19/11/12 22:55:06 INFO SharedState: Warehouse path is '/apps/spark/warehouse'.
19/11/12 22:55:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@17884d{/SQL,null,AVAILABLE,@Spark}
19/11/12 22:55:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@68e24e7{/SQL/json,null,AVAILABLE,@Spark}
19/11/12 22:55:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@211a9647{/SQL/execution,null,AVAILABLE,@Spark}
19/11/12 22:55:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@340c5fb6{/SQL/execution/json,null,AVAILABLE,@Spark}
19/11/12 22:55:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@c262f2f{/static/sql,null,AVAILABLE,@Spark}
19/11/12 22:55:07 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/11/12 22:55:08 INFO SuiteKickoff$: One run of SparkPi and that's it!
19/11/12 22:55:10 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
19/11/12 22:55:12 INFO CodeGenerator: Code generated in 724.650089 ms
19/11/12 22:55:12 INFO CodeGenerator: Code generated in 119.823016 ms
19/11/12 22:55:13 INFO CodeGenerator: Code generated in 47.312121 ms
19/11/12 22:55:14 INFO SparkContext: Starting job: show at SparkFuncs.scala:108
19/11/12 22:55:14 INFO DAGScheduler: Got job 1 (show at SparkFuncs.scala:108) with 1 output partitions
19/11/12 22:55:14 INFO DAGScheduler: Final stage: ResultStage 1 (show at SparkFuncs.scala:108)
19/11/12 22:55:14 INFO DAGScheduler: Parents of final stage: List()
19/11/12 22:55:14 INFO DAGScheduler: Missing parents: List()
19/11/12 22:55:14 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[8] at show at SparkFuncs.scala:108), which has no missing parents
19/11/12 22:55:14 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.2 KB, free 413.9 MB)
19/11/12 22:55:14 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.6 KB, free 413.9 MB)
19/11/12 22:55:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on master.org.cn:46637 (size: 4.6 KB, free: 413.9 MB)
19/11/12 22:55:14 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039
19/11/12 22:55:14 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[8] at show at SparkFuncs.scala:108) (first 15 tasks are for partitions Vector(0))
19/11/12 22:55:14 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
19/11/12 22:55:14 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 10, localhost, executor driver, partition 0, PROCESS_LOCAL, 8302 bytes)
19/11/12 22:55:14 INFO Executor: Running task 0.0 in stage 1.0 (TID 10)
19/11/12 22:55:14 INFO Executor: Finished task 0.0 in stage 1.0 (TID 10). 2006 bytes result sent to driver
19/11/12 22:55:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 10) in 38 ms on localhost (executor driver) (1/1)
19/11/12 22:55:14 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
19/11/12 22:55:14 INFO DAGScheduler: ResultStage 1 (show at SparkFuncs.scala:108) finished in 0.119 s
19/11/12 22:55:14 INFO DAGScheduler: Job 1 finished: show at SparkFuncs.scala:108, took 0.125345 s
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-----------------------------------+------------------------+------------------+------------------------+-------------------+--------------------------------+--------------------+
| name| timestamp|total_runtime| pi_approximate|input|output|saveMode|slices|run|spark.sql.warehouse.dir|spark.history.kerberos.keytab|spark.io.compression.lz4.blockSize|spark.executor.extraJavaOptions|spark.driver.host|spark.history.fs.logDirectory|spark.sql.autoBroadcastJoinThreshold|spark.eventLog.enabled|spark.driver.port|spark.driver.extraLibraryPath|spark.yarn.queue| spark.jars|spark.sql.orc.filterPushdown|spark.shuffle.unsafe.file.output.buffer|spark.yarn.historyServer.address| spark.app.name|spark.sql.hive.metastore.jars|spark.history.kerberos.principal|spark.unsafe.sorter.spill.reader.buffer.size|spark.history.fs.cleaner.enabled|spark.shuffle.io.serverThreads|spark.executor.id|spark.sql.hive.convertMetastoreOrc|spark.submit.deployMode|spark.master|spark.history.fs.cleaner.interval|spark.history.fs.cleaner.maxAge|spark.history.provider|spark.executor.extraLibraryPath|spark.shuffle.file.buffer| spark.eventLog.dir|spark.history.ui.port|spark.sql.statistics.fallBackToHdfs|spark.shuffle.io.backLog|spark.sql.orc.impl|spark.history.store.path| spark.app.id|spark.sql.hive.metastore.version| description|
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-----------------------------------+------------------------+------------------+------------------------+-------------------+--------------------------------+--------------------+
|sparkpi|1573617302076| 2168769711|3.1438591438591437| | | error| 10| 0| /apps/spark/wareh...| none| 128kb| -XX:+UseNUMA| master.org.cn| hdfs:///spark2-hi...| 26214400| true| 44042| /usr/hdp/current/...| default|file:/opt/spark-b...| true| 5m| master.org.cn:18081|com.ibm.sparktc.s...| /usr/hdp/current/...| none| 1m| true| 128| driver| true| client| local[*]| 7d| 90d| org.apache.spark....| /usr/hdp/current/...| 1m|hdfs:///spark2-hi...| 18081| true| 8192| native| /var/lib/spark2/s...|local-1573617299355| 3.0|One run of SparkP...|
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-----------------------------------+------------------------+------------------+------------------------+-------------------+--------------------------------+--------------------+
19/11/12 22:55:15 INFO SparkContext: Invoking stop() from shutdown hook
19/11/12 22:55:15 INFO AbstractConnector: Stopped Spark@72bca894{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/11/12 22:55:15 INFO SparkUI: Stopped Spark web UI at http://master.org.cn:4040
19/11/12 22:55:15 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/11/12 22:55:15 INFO MemoryStore: MemoryStore cleared
19/11/12 22:55:15 INFO BlockManager: BlockManager stopped
19/11/12 22:55:15 INFO BlockManagerMaster: BlockManagerMaster stopped
19/11/12 22:55:15 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/11/12 22:55:15 INFO SparkContext: Successfully stopped SparkContext
19/11/12 22:55:15 INFO ShutdownHookManager: Shutdown hook called
19/11/12 22:55:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-4460cfe3-6a98-45b2-bf86-01387519d92d
19/11/12 22:55:15 INFO ShutdownHookManager: Deleting directory /tmp/spark-bf2afab1-014b-4a2e-a43b-565cd57f4a69
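In the result row above, pi_approximate is the computed value of pi and total_runtime is reported in nanoseconds (2168769711 ns, about 2.2 s, consistent with the job time in the log). To keep results rather than print them, benchmark-output can point at a file path instead of "console", with spark-bench deriving the format from the extension. A sketch, with the output path being my example:

# Switch the suite output from the console to a CSV file.
sed -i 's|benchmark-output = "console"|benchmark-output = "file:///tmp/sparkpi-results.csv"|' examples/minimal-example.conf
./bin/spark-bench.sh examples/minimal-example.conf
# Spark writes the CSV as a directory of part files.
cat /tmp/sparkpi-results.csv/part-*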

Running minimal-example.conf on YARN

To run the minimal-example.conf example on YARN, use the hdfs user; otherwise the job fails with an error about missing write permission on the HDFS file system:
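The failure happens because spark-submit stages its files under /user/<username>/.sparkStaging on HDFS (visible in the log below), and root has no HDFS home directory by default. As an alternative to switching users, root can be given one once; a sketch, assuming a non-Kerberized cluster like this one:

# Run as the hdfs superuser: give root a writable HDFS home directory.
su - hdfs -c "hdfs dfs -mkdir -p /user/root"
su - hdfs -c "hdfs dfs -chown root:hdfs /user/root"
# After this, spark-bench.sh can be launched as root with master = "yarn".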

[root@master spark-bench_2.3.0_0.4.0-RELEASE]# su hdfs ./bin/spark-bench.sh examples/minimal-example.conf
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
19/11/13 00:48:37 INFO CLIKickoff$: args received: {"spark-bench":{"spark-submit-config":[{"spark-args":{"master":"yarn"},"workload-suites":[{"benchmark-output":"console","descr":"One run of SparkPi and that's it!","workloads":[{"name":"sparkpi","slices":10}]}]}]}}
19/11/13 00:48:37 INFO SparkContext: Running Spark version 2.3.2.3.1.0.0-78
19/11/13 00:48:37 INFO SparkContext: Submitted application: com.ibm.sparktc.sparkbench.cli.CLIKickoff
19/11/13 00:48:37 INFO SecurityManager: Changing view acls to: hdfs
19/11/13 00:48:37 INFO SecurityManager: Changing modify acls to: hdfs
19/11/13 00:48:37 INFO SecurityManager: Changing view acls groups to:
19/11/13 00:48:37 INFO SecurityManager: Changing modify acls groups to:
19/11/13 00:48:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs); groups with view permissions: Set(); users with modify permissions: Set(hdfs); groups with modify permissions: Set()
19/11/13 00:48:38 INFO Utils: Successfully started service 'sparkDriver' on port 34335.
19/11/13 00:48:38 INFO SparkEnv: Registering MapOutputTracker
19/11/13 00:48:38 INFO SparkEnv: Registering BlockManagerMaster
19/11/13 00:48:38 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/11/13 00:48:38 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/11/13 00:48:38 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-90606671-8b78-4659-86fc-d25ad1c026d2
19/11/13 00:48:38 INFO MemoryStore: MemoryStore started with capacity 413.9 MB
19/11/13 00:48:38 INFO SparkEnv: Registering OutputCommitCoordinator
19/11/13 00:48:38 INFO log: Logging initialized @3234ms
19/11/13 00:48:38 INFO Server: jetty-9.3.z-SNAPSHOT, build timestamp: 2018-06-05T13:11:56-04:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
19/11/13 00:48:38 INFO Server: Started @3478ms
19/11/13 00:48:38 INFO AbstractConnector: Started ServerConnector@50b8ae8d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/11/13 00:48:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6831d8fd{/jobs,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6754ef00{/jobs/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@619bd14c{/jobs/job,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4acf72b6{/jobs/job/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7561db12{/stages,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3301500b{/stages/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@24b52d3e{/stages/stage,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@57a4d5ee{/stages/stage/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5af5def9{/stages/pool,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3a45c42a{/stages/pool/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@36dce7ed{/storage,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@47a64f7d{/storage/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@33d05366{/storage/rdd,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@27a0a5a2{/storage/rdd/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7692cd34{/environment,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@33aa93c{/environment/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@32c0915e{/executors,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@106faf11{/executors/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@70f43b45{/executors/threadDump,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@26d10f2e{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@10ad20cb{/static,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@6b739528{/,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@622ef26a{/api,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3d526ad9{/jobs/job/kill,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@e041f0c{/stages/stage/kill,null,AVAILABLE,@Spark}
19/11/13 00:48:39 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://master.org.cn:4040
19/11/13 00:48:39 INFO SparkContext: Added JAR file:/opt/spark-bench_2.3.0_0.4.0-RELEASE/lib/spark-bench-2.3.0_0.4.0-RELEASE.jar at spark://master.org.cn:34335/jars/spark-bench-2.3.0_0.4.0-RELEASE.jar with timestamp 1573624119133
19/11/13 00:48:41 INFO RMProxy: Connecting to ResourceManager at master.org.cn/192.168.0.219:8050
19/11/13 00:48:41 INFO Client: Requesting a new application from cluster with 1 NodeManagers
19/11/13 00:48:42 INFO Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
19/11/13 00:48:42 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container)
19/11/13 00:48:42 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/11/13 00:48:42 INFO Client: Setting up container launch context for our AM
19/11/13 00:48:42 INFO Client: Setting up the launch environment for our AM container
19/11/13 00:48:42 INFO Client: Preparing resources for our AM container
19/11/13 00:48:44 INFO Client: Use hdfs cache file as spark.yarn.archive for HDP, hdfsCacheFile:hdfs://agent1.org.cn:8020/hdp/apps/3.1.0.0-78/spark2/spark2-hdp-yarn-archive.tar.gz
19/11/13 00:48:44 INFO Client: Source and destination file systems are the same. Not copying hdfs://agent1.org.cn:8020/hdp/apps/3.1.0.0-78/spark2/spark2-hdp-yarn-archive.tar.gz
19/11/13 00:48:44 INFO Client: Distribute hdfs cache file as spark.sql.hive.metastore.jars for HDP, hdfsCacheFile:hdfs://agent1.org.cn:8020/hdp/apps/3.1.0.0-78/spark2/spark2-hdp-hive-archive.tar.gz
19/11/13 00:48:44 INFO Client: Source and destination file systems are the same. Not copying hdfs://agent1.org.cn:8020/hdp/apps/3.1.0.0-78/spark2/spark2-hdp-hive-archive.tar.gz
19/11/13 00:48:45 INFO Client: Uploading resource file:/tmp/spark-c1bcf3d3-bcc5-4bdc-bb5b-dd6d28813046/__spark_conf__2183181909830061583.zip -> hdfs://agent1.org.cn:8020/user/hdfs/.sparkStaging/application_1573019412054_0003/__spark_conf__.zip
19/11/13 00:48:45 INFO SecurityManager: Changing view acls to: hdfs
19/11/13 00:48:45 INFO SecurityManager: Changing modify acls to: hdfs
19/11/13 00:48:45 INFO SecurityManager: Changing view acls groups to:
19/11/13 00:48:45 INFO SecurityManager: Changing modify acls groups to:
19/11/13 00:48:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs); groups with view permissions: Set(); users with modify permissions: Set(hdfs); groups with modify permissions: Set()
19/11/13 00:48:45 INFO Client: Submitting application application_1573019412054_0003 to ResourceManager
19/11/13 00:48:46 INFO YarnClientImpl: Submitted application application_1573019412054_0003
19/11/13 00:48:46 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1573019412054_0003 and attemptId None
19/11/13 00:48:47 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:48:47 INFO Client:
client token: N/A
diagnostics: [Wed Nov 13 00:48:46 -0500 2019] Application is Activated, waiting for resources to be assigned for AM. Last Node which was processed for the application : master.org.cn:45454 ( Partition : [], Total resource : <memory:3072, vCores:1>, Available resource : <memory:3072, vCores:1> ). Details : AM Partition = <DEFAULT_PARTITION> ; Partition Resource = <memory:3072, vCores:1> ; Queue's Absolute capacity = 100.0 % ; Queue's Absolute used capacity = 0.0 % ; Queue's Absolute max capacity = 100.0 % ; Queue's capacity (absolute resource) = <memory:3072, vCores:1> ; Queue's used capacity (absolute resource) = <memory:0, vCores:0> ; Queue's max capacity (absolute resource) = <memory:3072, vCores:1> ;
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1573624125718
final status: UNDEFINED
tracking URL: http://master.org.cn:8088/proxy/application_1573019412054_0003/
user: hdfs
19/11/13 00:48:48 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:48:53 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:48:54 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:05 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:10 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:11 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:12 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:13 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:14 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:15 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:17 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:18 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:19 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:20 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:21 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:22 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:23 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:25 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:26 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:26 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master.org.cn, PROXY_URI_BASES -> http://master.org.cn:8088/proxy/application_1573019412054_0003), /proxy/application_1573019412054_0003
19/11/13 00:49:26 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, /jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, /stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, /storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, /executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, /api, /jobs/job/kill, /stages/stage/kill.
19/11/13 00:49:27 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:28 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:30 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:31 INFO Client: Application report for application_1573019412054_0003 (state: ACCEPTED)
19/11/13 00:49:32 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
19/11/13 00:49:32 INFO Client: Application report for application_1573019412054_0003 (state: RUNNING)
19/11/13 00:49:32 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.0.219
ApplicationMaster RPC port: 0
queue: default
start time: 1573624125718
final status: UNDEFINED
tracking URL: http://master.org.cn:8088/proxy/application_1573019412054_0003/
user: hdfs
19/11/13 00:49:32 INFO YarnClientSchedulerBackend: Application application_1573019412054_0003 has started running.
19/11/13 00:49:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39873.
19/11/13 00:49:32 INFO NettyBlockTransferService: Server created on master.org.cn:39873
19/11/13 00:49:32 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/11/13 00:49:32 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, master.org.cn, 39873, None)
19/11/13 00:49:37 INFO BlockManagerMasterEndpoint: Registering block manager master.org.cn:39873 with 413.9 MB RAM, BlockManagerId(driver, master.org.cn, 39873, None)
19/11/13 00:49:37 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, master.org.cn, 39873, None)
19/11/13 00:49:37 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, master.org.cn, 39873, None)
19/11/13 00:49:38 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
19/11/13 00:49:38 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@27898e13{/metrics/json,null,AVAILABLE,@Spark}
19/11/13 00:49:42 INFO EventLoggingListener: Logging events to hdfs:/spark2-history/application_1573019412054_0003
19/11/13 00:49:43 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
19/11/13 00:49:46 INFO SparkContext: Starting job: reduce at SparkPi.scala:58
19/11/13 00:49:46 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:58) with 10 output partitions
19/11/13 00:49:46 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:58)
19/11/13 00:49:46 INFO DAGScheduler: Parents of final stage: List()
19/11/13 00:49:46 INFO DAGScheduler: Missing parents: List()
19/11/13 00:49:46 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:54), which has no missing parents
19/11/13 00:49:48 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1960.0 B, free 413.9 MB)
19/11/13 00:49:48 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1271.0 B, free 413.9 MB)
19/11/13 00:49:48 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on master.org.cn:39873 (size: 1271.0 B, free: 413.9 MB)
19/11/13 00:49:48 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039
19/11/13 00:49:48 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:54) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
19/11/13 00:49:48 INFO YarnScheduler: Adding task set 0.0 with 10 tasks
19/11/13 00:49:58 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.0.219:35044) with ID 1
19/11/13 00:49:58 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, master.org.cn, executor 1, partition 0, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:49:58 INFO BlockManagerMasterEndpoint: Registering block manager master.org.cn:38965 with 413.9 MB RAM, BlockManagerId(1, master.org.cn, 38965, None)
19/11/13 00:50:01 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on master.org.cn:38965 (size: 1271.0 B, free: 413.9 MB)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, master.org.cn, executor 1, partition 1, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, master.org.cn, executor 1, partition 2, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4644 ms on master.org.cn (executor 1) (1/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, master.org.cn, executor 1, partition 3, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 130 ms on master.org.cn (executor 1) (2/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, master.org.cn, executor 1, partition 4, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 253 ms on master.org.cn (executor 1) (3/10)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 207 ms on master.org.cn (executor 1) (4/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, master.org.cn, executor 1, partition 5, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 164 ms on master.org.cn (executor 1) (5/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, master.org.cn, executor 1, partition 6, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 128 ms on master.org.cn (executor 1) (6/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, master.org.cn, executor 1, partition 7, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 27 ms on master.org.cn (executor 1) (7/10)
19/11/13 00:50:03 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, master.org.cn, executor 1, partition 8, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:03 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 185 ms on master.org.cn (executor 1) (8/10)
19/11/13 00:50:04 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, master.org.cn, executor 1, partition 9, PROCESS_LOCAL, 7864 bytes)
19/11/13 00:50:04 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 121 ms on master.org.cn (executor 1) (9/10)
19/11/13 00:50:04 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 183 ms on master.org.cn (executor 1) (10/10)
19/11/13 00:50:04 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/11/13 00:50:04 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:58) finished in 17.191 s
19/11/13 00:50:04 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:58, took 17.753692 s
19/11/13 00:50:06 INFO SharedState: loading hive config file: file:/etc/spark2/3.1.0.0-78/0/hive-site.xml
19/11/13 00:50:06 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/apps/spark/warehouse').
19/11/13 00:50:06 INFO SharedState: Warehouse path is '/apps/spark/warehouse'.
19/11/13 00:50:06 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL.
19/11/13 00:50:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1488a861{/SQL,null,AVAILABLE,@Spark}
19/11/13 00:50:06 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/json.
19/11/13 00:50:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5432dca2{/SQL/json,null,AVAILABLE,@Spark}
19/11/13 00:50:06 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution.
19/11/13 00:50:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@32ba5c65{/SQL/execution,null,AVAILABLE,@Spark}
19/11/13 00:50:06 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.
19/11/13 00:50:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@47797401{/SQL/execution/json,null,AVAILABLE,@Spark}
19/11/13 00:50:06 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.
19/11/13 00:50:06 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@22ff1372{/static/sql,null,AVAILABLE,@Spark}
19/11/13 00:50:08 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/11/13 00:50:10 INFO SuiteKickoff$: One run of SparkPi and that's it!
19/11/13 00:50:13 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
19/11/13 00:50:15 INFO CodeGenerator: Code generated in 890.472243 ms
19/11/13 00:50:15 INFO CodeGenerator: Code generated in 115.667666 ms
19/11/13 00:50:15 INFO CodeGenerator: Code generated in 18.551267 ms
19/11/13 00:50:16 INFO SparkContext: Starting job: show at SparkFuncs.scala:108
19/11/13 00:50:16 INFO DAGScheduler: Got job 1 (show at SparkFuncs.scala:108) with 1 output partitions
19/11/13 00:50:16 INFO DAGScheduler: Final stage: ResultStage 1 (show at SparkFuncs.scala:108)
19/11/13 00:50:16 INFO DAGScheduler: Parents of final stage: List()
19/11/13 00:50:16 INFO DAGScheduler: Missing parents: List()
19/11/13 00:50:16 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[8] at show at SparkFuncs.scala:108), which has no missing parents
19/11/13 00:50:16 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.8 KB, free 413.9 MB)
19/11/13 00:50:16 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.7 KB, free 413.9 MB)
19/11/13 00:50:16 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on master.org.cn:39873 (size: 4.7 KB, free: 413.9 MB)
19/11/13 00:50:16 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039
19/11/13 00:50:16 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[8] at show at SparkFuncs.scala:108) (first 15 tasks are for partitions Vector(0))
19/11/13 00:50:16 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
19/11/13 00:50:16 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 10, master.org.cn, executor 1, partition 0, PROCESS_LOCAL, 8313 bytes)
19/11/13 00:50:16 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on master.org.cn:38965 (size: 4.7 KB, free: 413.9 MB)
19/11/13 00:50:18 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 10) in 1994 ms on master.org.cn (executor 1) (1/1)
19/11/13 00:50:18 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
19/11/13 00:50:18 INFO DAGScheduler: ResultStage 1 (show at SparkFuncs.scala:108) finished in 2.047 s
19/11/13 00:50:18 INFO DAGScheduler: Job 1 finished: show at SparkFuncs.scala:108, took 2.059734 s
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+--------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-------------------------+-----------------------------------+----------------------------------------------------------------------------------+------------------------+------------------+--------------------------------------------------------------------------------------+------------------------+--------------------+--------------------------------+--------------------+
| name| timestamp|total_runtime| pi_approximate|input|output|saveMode|slices|run|spark.sql.warehouse.dir|spark.history.kerberos.keytab|spark.io.compression.lz4.blockSize|spark.executor.extraJavaOptions|spark.driver.host|spark.history.fs.logDirectory|spark.sql.autoBroadcastJoinThreshold|spark.eventLog.enabled|spark.driver.port|spark.driver.extraLibraryPath|spark.yarn.queue| spark.jars|spark.sql.orc.filterPushdown|spark.shuffle.unsafe.file.output.buffer|spark.yarn.historyServer.address| spark.app.name|spark.sql.hive.metastore.jars|spark.history.kerberos.principal|spark.unsafe.sorter.spill.reader.buffer.size|spark.history.fs.cleaner.enabled|spark.shuffle.io.serverThreads|spark.executor.id|spark.sql.hive.convertMetastoreOrc|spark.submit.deployMode|spark.master|spark.history.fs.cleaner.interval|spark.history.fs.cleaner.maxAge| spark.ui.filters|spark.history.provider|spark.executor.extraLibraryPath|spark.shuffle.file.buffer| spark.eventLog.dir|spark.history.ui.port|spark.driver.appUIAddress|spark.sql.statistics.fallBackToHdfs|spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS|spark.shuffle.io.backLog|spark.sql.orc.impl|spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES|spark.history.store.path| spark.app.id|spark.sql.hive.metastore.version| description|
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+--------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-------------------------+-----------------------------------+----------------------------------------------------------------------------------+------------------------+------------------+--------------------------------------------------------------------------------------+------------------------+--------------------+--------------------------------+--------------------+
|sparkpi|1573624185473| 18822186073|3.1421791421791423| | | error| 10| 0| /apps/spark/wareh...| none| 128kb| -XX:+UseNUMA| master.org.cn| hdfs:///spark2-hi...| 26214400| true| 34335| /usr/hdp/current/...| default|file:/opt/spark-b...| true| 5m| master.org.cn:18081|com.ibm.sparktc.s...| /usr/hdp/current/...| none| 1m| true| 128| driver| true| client| yarn| 7d| 90d|org.apache.hadoop...| org.apache.spark....| /usr/hdp/current/...| 1m|hdfs:///spark2-hi...| 18081| http://master.org...| true| master.org.cn| 8192| native| http://master.org...| /var/lib/spark2/s...|application_15730...| 3.0|One run of SparkP...|
+-------+-------------+-------------+------------------+-----+------+--------+------+---+-----------------------+-----------------------------+----------------------------------+-------------------------------+-----------------+-----------------------------+------------------------------------+----------------------+-----------------+-----------------------------+----------------+--------------------+----------------------------+---------------------------------------+--------------------------------+--------------------+-----------------------------+--------------------------------+--------------------------------------------+--------------------------------+------------------------------+-----------------+----------------------------------+-----------------------+------------+---------------------------------+-------------------------------+--------------------+----------------------+-------------------------------+-------------------------+--------------------+---------------------+-------------------------+-----------------------------------+----------------------------------------------------------------------------------+------------------------+------------------+--------------------------------------------------------------------------------------+------------------------+--------------------+--------------------------------+--------------------+
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 40
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 35
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 42
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 46
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 44
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 31
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 51
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 45
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 41
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 49
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 29
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 39
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 32
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 34
19/11/13 00:50:18 INFO ContextCleaner: Cleaned accumulator 37
19/11/13 00:50:18 INFO SparkContext: Invoking stop() from shutdown hook
19/11/13 00:50:19 INFO AbstractConnector: Stopped Spark@50b8ae8d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/11/13 00:50:19 INFO BlockManagerInfo: Removed broadcast_1_piece0 on master.org.cn:39873 in memory (size: 4.7 KB, free: 413.9 MB)
19/11/13 00:50:19 INFO BlockManagerInfo: Removed broadcast_1_piece0 on master.org.cn:38965 in memory (size: 4.7 KB, free: 413.9 MB)
19/11/13 00:50:19 INFO SparkUI: Stopped Spark web UI at http://master.org.cn:4040
19/11/13 00:50:19 INFO YarnClientSchedulerBackend: Interrupting monitor thread
19/11/13 00:50:19 INFO YarnClientSchedulerBackend: Shutting down all executors
19/11/13 00:50:19 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/11/13 00:50:19 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
19/11/13 00:50:19 INFO YarnClientSchedulerBackend: Stopped
19/11/13 00:50:19 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/11/13 00:50:19 INFO MemoryStore: MemoryStore cleared
19/11/13 00:50:19 INFO BlockManager: BlockManager stopped
19/11/13 00:50:19 INFO BlockManagerMaster: BlockManagerMaster stopped
19/11/13 00:50:19 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/11/13 00:50:19 INFO SparkContext: Successfully stopped SparkContext
19/11/13 00:50:19 INFO ShutdownHookManager: Shutdown hook called
19/11/13 00:50:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-c1bcf3d3-bcc5-4bdc-bb5b-dd6d28813046
19/11/13 00:50:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea429231-7335-4666-819e-ed9cee4cefa1
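After the run, the application record and aggregated logs remain available through the standard YARN CLI, using the application ID from the log above:

# Confirm the application finished and pull its aggregated logs.
yarn application -list -appStates FINISHED | grep application_1573019412054_0003
yarn logs -applicationId application_1573019412054_0003 | less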
