Spark on YARN, client mode: the command-line (driver-side) log and the YARN AM log produced by running spark-submit
[root@linux-node1 bin]# ./spark-submit \
> --class com.kou.List2Hive \
> --master yarn \
> --deploy-mode client \
> sparkTestNew-1.0.jar
18/11/27 21:21:14 INFO spark.SparkContext: Running Spark version 2.2.1
18/11/27 21:21:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/11/27 21:21:15 INFO spark.SparkContext: Submitted application: com.kou.List2Hive
18/11/27 21:21:15 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:21:15 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:21:15 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:21:15 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:21:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:21:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 45859.
18/11/27 21:21:16 INFO spark.SparkEnv: Registering MapOutputTracker
18/11/27 21:21:16 INFO spark.SparkEnv: Registering BlockManagerMaster
18/11/27 21:21:16 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/11/27 21:21:16 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/11/27 21:21:16 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-b93ca021-5b27-4a5c-8d3f-28ba53861c2e
18/11/27 21:21:16 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/11/27 21:21:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/11/27 21:21:17 INFO util.log: Logging initialized @4227ms
18/11/27 21:21:17 INFO server.Server: jetty-9.3.z-SNAPSHOT
18/11/27 21:21:17 INFO server.Server: Started @4446ms
18/11/27 21:21:17 INFO server.AbstractConnector: Started ServerConnector@4d4d48a6{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/11/27 21:21:17 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@71652c98{/jobs,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4837595f{/jobs/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b718392{/jobs/job,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49bf29c6{/jobs/job/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3fcdcf{/stages,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@46292372{/stages/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6c44052e{/stages/stage,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5215cd9a{/stages/stage/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@31198ceb{/stages/pool,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@75201592{/stages/pool/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@aa5455e{/storage,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5dda14d0{/storage/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d9fc57a{/storage/rdd,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b4ef7{/storage/rdd/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5987e932{/environment,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5bbbdd4b{/environment/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@25230246{/executors,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a8b5227{/executors/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6979efad{/executors/threadDump,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a67318f{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@17f9344b{/static,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@12365c88{/,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2237bada{/api,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@30272916{/jobs/job/kill,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5bf61e67{/stages/stage/kill,null,AVAILABLE,@Spark}
18/11/27 21:21:17 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.56.11:4040
18/11/27 21:21:17 INFO spark.SparkContext: Added JAR file:/home/koushengrui/app/spark/bin/sparkTestNew-1.0.jar at spark://192.168.56.11:45859/jars/sparkTestNew-1.0.jar with timestamp 1543324877614
18/11/27 21:21:19 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.11:8032
18/11/27 21:21:19 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
18/11/27 21:21:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/11/27 21:21:19 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/11/27 21:21:19 INFO yarn.Client: Setting up container launch context for our AM
18/11/27 21:21:19 INFO yarn.Client: Setting up the launch environment for our AM container
18/11/27 21:21:19 INFO yarn.Client: Preparing resources for our AM container
18/11/27 21:21:21 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/11/27 21:21:25 INFO yarn.Client: Uploading resource file:/tmp/spark-5655c941-df4e-40ec-ba7c-d22c16081087/__spark_libs__8205289718634466678.zip -> hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0005/__spark_libs__8205289718634466678.zip
18/11/27 21:21:27 INFO yarn.Client: Uploading resource file:/tmp/spark-5655c941-df4e-40ec-ba7c-d22c16081087/__spark_conf__1758655140796997826.zip -> hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0005/__spark_conf__.zip
18/11/27 21:21:27 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:21:27 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:21:27 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:21:27 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:21:27 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:21:27 INFO yarn.Client: Submitting application application_1543322675361_0005 to ResourceManager
18/11/27 21:21:27 INFO impl.YarnClientImpl: Submitted application application_1543322675361_0005
18/11/27 21:21:27 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1543322675361_0005 and attemptId None
18/11/27 21:21:29 INFO yarn.Client: Application report for application_1543322675361_0005 (state: ACCEPTED)
18/11/27 21:21:29 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1543324887956
     final status: UNDEFINED
     tracking URL: http://linux-node1:8088/proxy/application_1543322675361_0005/
     user: root
18/11/27 21:21:36 INFO yarn.Client: Application report for application_1543322675361_0005 (state: ACCEPTED)
18/11/27 21:21:37 INFO yarn.Client: Application report for application_1543322675361_0005 (state: ACCEPTED)
18/11/27 21:21:37 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
18/11/27 21:21:37 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> linux-node1, PROXY_URI_BASES -> http://linux-node1:8088/proxy/application_1543322675361_0005), /proxy/application_1543322675361_0005
18/11/27 21:21:37 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/11/27 21:21:38 INFO yarn.Client: Application report for application_1543322675361_0005 (state: RUNNING)
18/11/27 21:21:38 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.56.11
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1543324887956
     final status: UNDEFINED
     tracking URL: http://linux-node1:8088/proxy/application_1543322675361_0005/
     user: root
18/11/27 21:21:38 INFO cluster.YarnClientSchedulerBackend: Application application_1543322675361_0005 has started running.
18/11/27 21:21:38 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 41335.
18/11/27 21:21:38 INFO netty.NettyBlockTransferService: Server created on 192.168.56.11:41335
18/11/27 21:21:38 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/11/27 21:21:38 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.56.11, 41335, None)
18/11/27 21:21:38 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.56.11:41335 with 366.3 MB RAM, BlockManagerId(driver, 192.168.56.11, 41335, None)
18/11/27 21:21:38 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.56.11, 41335, None)
18/11/27 21:21:38 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.56.11, 41335, None)
18/11/27 21:21:38 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@111c229c{/metrics/json,null,AVAILABLE,@Spark}
18/11/27 21:21:45 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.56.11:60260) with ID 1
18/11/27 21:21:46 INFO storage.BlockManagerMasterEndpoint: Registering block manager linux-node1:39016 with 366.3 MB RAM, BlockManagerId(1, linux-node1, 39016, None)
18/11/27 21:21:46 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.56.11:60264) with ID 2
18/11/27 21:21:46 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/11/27 21:21:46 INFO kou.List2Hive: conf= [(spark.jars,file:/home/koushengrui/app/spark/bin/sparkTestNew-1.0.jar), (spark.app.name,com.kou.List2Hive), (spark.master,yarn), (spark.driver.host,192.168.56.11), (spark.sql.catalogImplementation,hive), (spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES,http://linux-node1:8088/proxy/application_1543322675361_0005), (spark.driver.appUIAddress,http://192.168.56.11:4040), (spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS,linux-node1), (spark.executor.id,driver), (spark.submit.deployMode,client), (spark.app.id,application_1543322675361_0005), (spark.ui.filters,org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter), (spark.driver.port,45859)]
18/11/27 21:21:46 INFO internal.SharedState: loading hive config file: jar:file:/home/koushengrui/app/spark/bin/sparkTestNew-1.0.jar!/hive-site.xml
18/11/27 21:21:46 INFO storage.BlockManagerMasterEndpoint: Registering block manager linux-node1:41448 with 366.3 MB RAM, BlockManagerId(2, linux-node1, 41448, None)
18/11/27 21:21:46 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/koushengrui/app/spark/bin/spark-warehouse').
18/11/27 21:21:46 INFO internal.SharedState: Warehouse path is 'file:/home/koushengrui/app/spark/bin/spark-warehouse'.
18/11/27 21:21:46 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@38732364{/SQL,null,AVAILABLE,@Spark}
18/11/27 21:21:46 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@48cd319d{/SQL/json,null,AVAILABLE,@Spark}
18/11/27 21:21:46 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3e5beab5{/SQL/execution,null,AVAILABLE,@Spark}
18/11/27 21:21:46 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33ec2c0c{/SQL/execution/json,null,AVAILABLE,@Spark}
18/11/27 21:21:46 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@492521c4{/static/sql,null,AVAILABLE,@Spark}
18/11/27 21:21:48 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/11/27 21:21:49 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:21:49 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
18/11/27 21:21:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/11/27 21:21:50 INFO metastore.ObjectStore: ObjectStore, initialize called
18/11/27 21:21:50 INFO DataNucleus.Persistence: Property datanucleus.schema.autoCreateTables unknown - will be ignored
18/11/27 21:21:50 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/11/27 21:21:50 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/11/27 21:21:52 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:21:52 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
18/11/27 21:21:52 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/11/27 21:21:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:21:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:21:55 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:21:55 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:21:55 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/11/27 21:21:55 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is OTHER
18/11/27 21:21:55 INFO metastore.ObjectStore: Initialized ObjectStore
18/11/27 21:21:56 INFO metastore.HiveMetaStore: Added admin role in metastore
18/11/27 21:21:56 INFO metastore.HiveMetaStore: Added public role in metastore
18/11/27 21:21:56 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
18/11/27 21:21:56 INFO metastore.HiveMetaStore: 0: get_all_databases
18/11/27 21:21:56 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
18/11/27 21:21:56 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
18/11/27 21:21:56 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
18/11/27 21:21:56 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/11/27 21:21:56 INFO session.SessionState: Created local directory: /home/hive/iotmp/1969120a-b146-48e1-9ffb-8a6286c87d99_resources
18/11/27 21:21:56 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1969120a-b146-48e1-9ffb-8a6286c87d99
18/11/27 21:21:56 INFO session.SessionState: Created local directory: /home/hive/iotmp/root/1969120a-b146-48e1-9ffb-8a6286c87d99
18/11/27 21:21:56 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/1969120a-b146-48e1-9ffb-8a6286c87d99/_tmp_space.db
18/11/27 21:21:56 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/koushengrui/app/spark/bin/spark-warehouse
18/11/27 21:21:56 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:21:56 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:21:56 INFO metastore.HiveMetaStore: 0: get_database: global_temp
18/11/27 21:21:56 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: global_temp
18/11/27 21:21:56 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
18/11/27 21:21:57 WARN conf.HiveConf: HiveConf of name hive.server2.webui.host does not exist
18/11/27 21:21:57 WARN conf.HiveConf: HiveConf of name hive.strict.checks.bucketing does not exist
18/11/27 21:21:57 INFO session.SessionState: Created local directory: /home/hive/iotmp/ab0e4563-65aa-4273-880f-8311ee60fe5b_resources
18/11/27 21:21:57 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/ab0e4563-65aa-4273-880f-8311ee60fe5b
18/11/27 21:21:57 INFO session.SessionState: Created local directory: /home/hive/iotmp/root/ab0e4563-65aa-4273-880f-8311ee60fe5b
18/11/27 21:21:57 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/ab0e4563-65aa-4273-880f-8311ee60fe5b/_tmp_space.db
18/11/27 21:21:57 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/koushengrui/app/spark/bin/spark-warehouse
18/11/27 21:21:57 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/11/27 21:21:57 INFO kou.List2Hive: runtimeConfig= Map(spark.driver.host -> 192.168.56.11, spark.driver.port -> 45859, spark.jars -> file:/home/koushengrui/app/spark/bin/sparkTestNew-1.0.jar, spark.app.name -> com.kou.List2Hive, spark.executor.id -> driver, spark.submit.deployMode -> client, spark.master -> yarn, spark.ui.filters -> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, spark.sql.catalogImplementation -> hive, spark.driver.appUIAddress -> http://192.168.56.11:4040, spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS -> linux-node1, spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES -> http://linux-node1:8088/proxy/application_1543322675361_0005, spark.app.id -> application_1543322675361_0005)
18/11/27 21:21:58 INFO execution.SparkSqlParser: Parsing command: ss
18/11/27 21:21:58 INFO execution.SparkSqlParser: Parsing command: use default
18/11/27 21:21:58 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:21:58 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:21:58 INFO execution.SparkSqlParser: Parsing command: insert into table people select * from ss
18/11/27 21:21:59 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:21:59 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:21:59 INFO parser.CatalystSqlParser: Parsing command: array<string>
18/11/27 21:21:59 INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://192.168.56.11:9000/user/hive/warehouse/people/.hive-staging_hive_2018-11-27_21-21-59_896_6634404133536252182-1
18/11/27 21:22:00 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
18/11/27 21:22:00 INFO datasources.SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
18/11/27 21:22:01 INFO codegen.CodeGenerator: Code generated in 476.205539 ms
18/11/27 21:22:01 INFO spark.SparkContext: Starting job: sql at List2Hive.java:31
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Got job 0 (sql at List2Hive.java:31) with 1 output partitions
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (sql at List2Hive.java:31)
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Parents of final stage: List()
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Missing parents: List()
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at sql at List2Hive.java:31), which has no missing parents
18/11/27 21:22:02 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 154.4 KB, free 366.1 MB)
18/11/27 21:22:02 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 55.6 KB, free 366.1 MB)
18/11/27 21:22:02 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.56.11:41335 (size: 55.6 KB, free: 366.2 MB)
18/11/27 21:22:02 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
18/11/27 21:22:02 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at sql at List2Hive.java:31) (first 15 tasks are for partitions Vector(0))
18/11/27 21:22:02 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks
18/11/27 21:22:02 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, linux-node1, executor 1, partition 0, PROCESS_LOCAL, 5118 bytes)
18/11/27 21:22:03 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on linux-node1:39016 (size: 55.6 KB, free: 366.2 MB)
18/11/27 21:22:07 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4387 ms on linux-node1 (executor 1) (1/1)
18/11/27 21:22:07 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
18/11/27 21:22:07 INFO scheduler.DAGScheduler: ResultStage 0 (sql at List2Hive.java:31) finished in 4.433 s
18/11/27 21:22:07 INFO scheduler.DAGScheduler: Job 0 finished: sql at List2Hive.java:31, took 5.433724 s
18/11/27 21:22:07 INFO datasources.FileFormatWriter: Job null committed.
18/11/27 21:22:07 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:22:07 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:22:09 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:22:09 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:22:09 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on linux-node1:39016 in memory (size: 55.6 KB, free: 366.3 MB)
18/11/27 21:22:09 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on 192.168.56.11:41335 in memory (size: 55.6 KB, free: 366.3 MB)
18/11/27 21:22:09 INFO spark.ContextCleaner: Cleaned accumulator 0
18/11/27 21:22:09 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
18/11/27 21:22:09 INFO metadata.Hive: Renaming src: hdfs://192.168.56.11:9000/user/hive/warehouse/people/.hive-staging_hive_2018-11-27_21-21-59_896_6634404133536252182-1/-ext-10000/part-00000-efd34876-2c2f-4b42-99e9-f9d3c909ba81-c000, dest: hdfs://192.168.56.11:9000/user/hive/warehouse/people/part-00000-efd34876-2c2f-4b42-99e9-f9d3c909ba81-c000, Status:true
18/11/27 21:22:10 INFO metastore.HiveMetaStore: 0: alter_table: db=default tbl=people newtbl=people
18/11/27 21:22:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=alter_table: db=default tbl=people newtbl=people
18/11/27 21:22:10 INFO hive.log: Updating table stats fast for people
18/11/27 21:22:10 INFO hive.log: Updated size of table people to 0
18/11/27 21:22:10 INFO execution.SparkSqlParser: Parsing command: `default`.`people`
18/11/27 21:22:10 INFO metastore.HiveMetaStore: 0: get_database: default
18/11/27 21:22:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
18/11/27 21:22:10 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:22:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:22:10 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=people
18/11/27 21:22:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=people
18/11/27 21:22:10 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:22:10 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:22:10 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:22:10 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:22:10 INFO parser.CatalystSqlParser: Parsing command: string
18/11/27 21:22:10 INFO server.AbstractConnector: Stopped Spark@4d4d48a6{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/11/27 21:22:10 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.56.11:4040
18/11/27 21:22:10 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
18/11/27 21:22:10 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
18/11/27 21:22:10 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/11/27 21:22:11 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/11/27 21:22:11 INFO cluster.YarnClientSchedulerBackend: Stopped
18/11/27 21:22:11 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/11/27 21:22:11 INFO memory.MemoryStore: MemoryStore cleared
18/11/27 21:22:11 INFO storage.BlockManager: BlockManager stopped
18/11/27 21:22:11 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/11/27 21:22:11 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/11/27 21:22:11 INFO spark.SparkContext: Successfully stopped SparkContext
18/11/27 21:22:11 INFO util.ShutdownHookManager: Shutdown hook called
18/11/27 21:22:11 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-5655c941-df4e-40ec-ba7c-d22c16081087
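
The application source is not part of this post, but the driver log above pins down its shape: a temp view named ss, "use default", "insert into table people select * from ss" at List2Hive.java:31, the conf=/runtimeConfig= entries from the kou.List2Hive logger, and Catalyst parsing five string columns plus one array<string>. Below is a minimal Java sketch consistent with those entries; the bean fields and sample data are assumptions, not the real code.

package com.kou;

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class List2Hive {

    private static final Logger LOG = LoggerFactory.getLogger(List2Hive.class);

    // Hypothetical bean. The Catalyst parses in the log (five "string" plus one
    // "array<string>") suggest more columns; the real schema is not shown.
    public static class Person implements Serializable {
        private String name;
        private String[] tags;

        public Person() { }

        public Person(String name, String[] tags) {
            this.name = name;
            this.tags = tags;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String[] getTags() { return tags; }
        public void setTags(String[] tags) { this.tags = tags; }
    }

    public static void main(String[] args) {
        // master and deploy mode come from spark-submit, so they are not set here.
        SparkSession spark = SparkSession.builder()
                .appName(List2Hive.class.getName())
                .enableHiveSupport() // matches spark.sql.catalogImplementation=hive
                .getOrCreate();

        // These two lines would produce the "conf= [...]" and "runtimeConfig= Map(...)"
        // entries seen in the driver log.
        LOG.info("conf= {}", Arrays.toString(spark.sparkContext().getConf().getAll()));
        LOG.info("runtimeConfig= {}", spark.conf().getAll());

        // Turn an in-memory List into a DataFrame and register it as the temp view "ss"
        // ("Parsing command: ss" in the log).
        List<Person> rows = Arrays.asList(new Person("kou", new String[] { "spark", "yarn" }));
        Dataset<Row> df = spark.createDataFrame(rows, Person.class);
        df.createOrReplaceTempView("ss");

        spark.sql("use default");
        spark.sql("insert into table people select * from ss"); // "sql at List2Hive.java:31"

        spark.stop();
    }
}
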
YARN AM log:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/koushengrui/app/hadoop/data/nm-local-dir/usercache/root/filecache/21/__spark_libs__8205289718634466678.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/koushengrui/app/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/11/27 21:21:32 INFO util.SignalUtils: Registered signal handler for TERM
18/11/27 21:21:32 INFO util.SignalUtils: Registered signal handler for HUP
18/11/27 21:21:32 INFO util.SignalUtils: Registered signal handler for INT
18/11/27 21:21:34 INFO yarn.ApplicationMaster: Preparing Local resources
18/11/27 21:21:35 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1543322675361_0005_000001
18/11/27 21:21:35 INFO spark.SecurityManager: Changing view acls to: root
18/11/27 21:21:35 INFO spark.SecurityManager: Changing modify acls to: root
18/11/27 21:21:35 INFO spark.SecurityManager: Changing view acls groups to:
18/11/27 21:21:35 INFO spark.SecurityManager: Changing modify acls groups to:
18/11/27 21:21:35 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
18/11/27 21:21:36 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
18/11/27 21:21:36 INFO yarn.ApplicationMaster: Driver now available: 192.168.56.11:45859
18/11/27 21:21:36 INFO client.TransportClientFactory: Successfully created connection to /192.168.56.11:45859 after 269 ms (0 ms spent in bootstraps)
18/11/27 21:21:37 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> linux-node1, PROXY_URI_BASES -> http://linux-node1:8088/proxy/application_1543322675361_0005),/proxy/application_1543322675361_0005)
18/11/27 21:21:37 INFO yarn.ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_YARN_STAGING_DIR -> *********(redacted)
    SPARK_USER -> *********(redacted)
    SPARK_YARN_MODE -> true

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx1024m \
      -Djava.io.tmpdir={{PWD}}/tmp \
      '-Dspark.driver.port=45859' \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.CoarseGrainedExecutorBackend \
      --driver-url \
      spark://CoarseGrainedScheduler@192.168.56.11:45859 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      1 \
      --app-id \
      application_1543322675361_0005 \
      --user-class-path \
      file:$PWD/__app__.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    __spark_libs__ -> resource { scheme: "hdfs" host: "192.168.56.11" port: 9000 file: "/user/root/.sparkStaging/application_1543322675361_0005/__spark_libs__8205289718634466678.zip" } size: 209021605 timestamp: 1543324887194 type: ARCHIVE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "192.168.56.11" port: 9000 file: "/user/root/.sparkStaging/application_1543322675361_0005/__spark_conf__.zip" } size: 83443 timestamp: 1543324887866 type: ARCHIVE visibility: PRIVATE
===============================================================================
18/11/27 21:21:37 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.11:8030
18/11/27 21:21:37 INFO yarn.YarnRMClient: Registering the ApplicationMaster
18/11/27 21:21:37 INFO yarn.YarnAllocator: Will request 2 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
18/11/27 21:21:37 INFO yarn.YarnAllocator: Submitted 2 unlocalized container requests.
18/11/27 21:21:37 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
18/11/27 21:21:38 INFO impl.AMRMClientImpl: Received new token for : linux-node1:46122
18/11/27 21:21:38 INFO yarn.YarnAllocator: Launching container container_1543322675361_0005_01_000002 on host linux-node1 for executor with ID 1
18/11/27 21:21:38 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/11/27 21:21:38 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/11/27 21:21:38 INFO impl.ContainerManagementProtocolProxy: Opening proxy : linux-node1:46122
18/11/27 21:21:39 INFO yarn.YarnAllocator: Launching container container_1543322675361_0005_01_000003 on host linux-node1 for executor with ID 2
18/11/27 21:21:39 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
18/11/27 21:21:39 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
18/11/27 21:21:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : linux-node1:46122
18/11/27 21:22:10 INFO yarn.YarnAllocator: Driver requested a total number of 0 executor(s).
18/11/27 21:22:11 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. 192.168.56.11:45859
18/11/27 21:22:11 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
18/11/27 21:22:11 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. 192.168.56.11:45859
18/11/27 21:22:11 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
18/11/27 21:22:11 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
18/11/27 21:22:11 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://192.168.56.11:9000/user/root/.sparkStaging/application_1543322675361_0005
18/11/27 21:22:11 INFO util.ShutdownHookManager: Shutdown hook called
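
For reference, the container sizes reported in both logs follow Spark 2.2's YARN sizing defaults (spark.yarn.am.memory = 512 MB in client mode, spark.executor.memory = 1 GB, overhead = max(384 MB, 0.10 × container memory)):

    AM container:       512 MB  + max(384 MB, 0.10 × 512 MB)  = 512 + 384  = 896 MB
    Executor container: 1024 MB + max(384 MB, 0.10 × 1024 MB) = 1024 + 384 = 1408 MB

These figures match "Will allocate AM container, with 896 MB memory including 384 MB overhead" in the client log and "each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)" in the AM log; the -Xmx1024m in the executor launch command above is the 1 GB heap portion of that 1408 MB container.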