Spark Tutorial (11): Configuring Spark Programs for Local Execution, Cluster Execution, and Oozie/Hue Execution
Running the Spark SQL program locally:
package com.fc

//import common.util.{phoenixConnectMode, timeUtil}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.col
import org.apache.spark.{SparkConf, SparkContext}

/* Executed once per day */
object costDay {
  def main(args: Array[String]): Unit = {
    // Local mode: the master is hardcoded to "local"
    val conf = new SparkConf()
      .setAppName("fdsf")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

//    val df = sqlContext.load(
//      "org.apache.phoenix.spark"
//      , Map("table" -> "ASSET_NORMAL"
//      , "zkUrl" -> "node3,node4,node5:2181")
//    )

    val tableName = "ASSET_NORMAL"
    val columns = Array(
      "ID", "ASSET_ID", "ASSET_NAME", "ASSET_FIRST_DEGREE_ID", "ASSET_FIRST_DEGREE_NAME",
      "ASSET_SECOND_DEGREE_ID", "ASSET_SECOND_DEGREE_NAME", "GB_DEGREE_ID", "GB_DEGREE_NAME",
      "ASSET_USE_FIRST_DEGREE_ID", "ASSET_USE_FIRST_DEGREE_NAME", "ASSET_USE_SECOND_DEGREE_ID",
      "ASSET_USE_SECOND_DEGREE_NAME", "MANAGEMENT_TYPE_ID", "MANAGEMENT_TYPE_NAME", "ASSET_MODEL",
      "FACTORY_NUMBER", "ASSET_COUNTRY_ID", "ASSET_COUNTRY_NAME", "MANUFACTURER", "SUPPLIER",
      "SUPPLIER_TEL", "ORIGINAL_VALUE", "USE_DEPARTMENT_ID", "USE_DEPARTMENT_NAME", "USER_ID",
      "USER_NAME", "ASSET_LOCATION_OF_PARK_ID", "ASSET_LOCATION_OF_PARK_NAME",
      "ASSET_LOCATION_OF_BUILDING_ID", "ASSET_LOCATION_OF_BUILDING_NAME",
      "ASSET_LOCATION_OF_ROOM_ID", "ASSET_LOCATION_OF_ROOM_NUMBER", "PRODUCTION_DATE",
      "ACCEPTANCE_DATE", "REQUISITION_DATE", "PERFORMANCE_INDEX", "ASSET_STATE_ID",
      "ASSET_STATE_NAME", "INSPECTION_TYPE_ID", "INSPECTION_TYPE_NAME", "SEAL_DATE", "SEAL_CAUSE",
      "COST_ITEM_ID", "COST_ITEM_NAME", "ITEM_COMMENTS", "UNSEAL_DATE", "SCRAP_DATE",
      "PURCHASE_NUMBER", "WARRANTY_PERIOD", "DEPRECIABLE_LIVES_ID", "DEPRECIABLE_LIVES_NAME",
      "MEASUREMENT_UNITS_ID", "MEASUREMENT_UNITS_NAME", "ANNEX", "REMARK", "ACCOUNTING_TYPE_ID",
      "ACCOUNTING_TYPE_NAME", "SYSTEM_TYPE_ID", "SYSTEM_TYPE_NAME", "ASSET_ID_PARENT",
      "CLASSIFIED_LEVEL_ID", "CLASSIFIED_LEVEL_NAME", "ASSET_PICTURE", "MILITARY_SPECIAL_CODE",
      "CHECK_CYCLE_ID", "CHECK_CYCLE_NAME", "CHECK_DATE", "CHECK_EFFECTIVE_DATE", "CHECK_MODE_ID",
      "CHECK_MODE_NAME", "CHECK_DEPARTMENT_ID", "CHECK_DEPARTMENT_NAME", "RENT_STATUS_ID",
      "RENT_STATUS_NAME", "STORAGE_TIME", "UPDATE_USER", "UPDATE_TIME", "IS_ON_PROCESS",
      "IS_DELETED", "FIRST_DEPARTMENT_ID", "FIRST_DEPARTMENT_NAME", "SECOND_DEPARTMENT_ID",
      "SECOND_DEPARTMENT_NAME", "CREATE_USER", "CREATE_TIME"
    )

    // Load the Phoenix table as a DataFrame, keeping only rows that have a using department
    val df = phoenixConnectMode.getMode1(sqlContext, tableName, columns)
      .filter(col("USE_DEPARTMENT_ID") isNotNull)
    df.registerTempTable("asset_normal")
//    df.show(false)

    // Daily cost while the asset is within its depreciable life:
    // 95% of the original value spread evenly over the life in days
    def costingWithin(originalValue: Double, years: Int): Double =
      (originalValue * 0.95) / (years * 365)
    sqlContext.udf.register("costingWithin", costingWithin _)

    // Daily cost once the depreciable life has passed:
    // the 5% residual value spread over one further year
    def costingBeyond(originalValue: Double): Double =
      originalValue * 0.05 / 365
    sqlContext.udf.register("costingBeyond", costingBeyond _)

    // True while the asset is still within `years` of its acceptance date
    // (i.e., not yet fully depreciated)
    def expire(acceptanceDate: String, years: Int): Boolean =
      timeUtil.dateStrAddYears2TimeStamp(acceptanceDate, timeUtil.SECOND_TIME_FORMAT, years) > System.currentTimeMillis()
    sqlContext.udf.register("expire", expire _)

    val costDay = sqlContext
      .sql(
        "select"
          + " ID"
          + ",USE_DEPARTMENT_ID as FIRST_DEPARTMENT_ID"
          + ",case when expire(ACCEPTANCE_DATE, DEPRECIABLE_LIVES_NAME)"
          + " then costingWithin(ORIGINAL_VALUE, DEPRECIABLE_LIVES_NAME)"
          + " else costingBeyond(ORIGINAL_VALUE) end as ACTUAL_COST"
          + ",ORIGINAL_VALUE"
          + ",current_timestamp() as GENERATION_TIME"
          + " from asset_normal"
      )

    costDay.printSchema()
    println(costDay.count())
    costDay.describe("ORIGINAL_VALUE").show()
    costDay.show(false)

//    costDay.write
//      .format("org.apache.phoenix.spark")
//      .mode("overwrite")
//      .option("table", "ASSET_FINANCIAL_DETAIL_DAY")
//      .option("zkUrl", "node3,node4,node5:2181")
//      .save()
  }
}
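The UDFs registered in the program encode a simple straight-line depreciation rule: while an asset is inside its depreciable life, 95% of its original value is spread evenly over that life in days; once the life expires, the remaining 5% residual is charged per day over one further year. A minimal, Spark-free sketch of just that arithmetic (the asset value and life below are made-up illustration numbers):

```scala
object DepreciationSketch {
  // Daily cost while still within the depreciable life:
  // 95% of the original value, spread evenly over `years` * 365 days.
  def costingWithin(originalValue: Double, years: Int): Double =
    (originalValue * 0.95) / (years * 365)

  // Daily cost once the depreciable life has passed:
  // the 5% residual value, spread over one more year.
  def costingBeyond(originalValue: Double): Double =
    originalValue * 0.05 / 365

  def main(args: Array[String]): Unit = {
    val originalValue = 7300.0 // hypothetical purchase price
    val years = 5              // hypothetical depreciable life
    println(costingWithin(originalValue, years)) // 7300 * 0.95 / (5 * 365) = 3.8 per day
    println(costingBeyond(originalValue))        // 7300 * 0.05 / 365 = 1.0 per day
  }
}
```

In the job itself, which of the two branches applies is decided row by row by the `expire` UDF, which adds `DEPRECIABLE_LIVES_NAME` years to `ACCEPTANCE_DATE` and compares the result against the current time.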
The execution results match those in Spark Tutorial (10), "Spark SQL Reading Phoenix Data and Computing Locally", and look like this:
// :: INFO spark.SparkContext: Running Spark version
// :: INFO spark.SecurityManager: Changing view acls to: cf_pc
// :: INFO spark.SecurityManager: Changing modify acls to: cf_pc
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cf_pc); users with modify permissions: Set(cf_pc)
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.200.74.155:54708]
// :: INFO storage.MemoryStore: MemoryStore started with capacity 478.2 MB
// :: INFO ui.SparkUI: Started SparkUI at http://10.200.74.155:4040
// :: INFO executor.Executor: Starting executor ID driver on host localhost
// :: INFO jdbc.PhoenixEmbeddedDriver$ConnectionInfo: Trying to connect to a secure cluster as with keytab /hbase
// :: INFO jdbc.PhoenixEmbeddedDriver$ConnectionInfo: Successful login to secure cluster
// :: INFO query.ConnectionQueryServicesImpl: An instance of ConnectionQueryServices was created.
// :: INFO zookeeper.ZooKeeper: Initiating client connection, connectString=node3: sessionTimeout= watcher=hconnection-0x4bc59b270x0, quorum=node3:, baseZNode=/hbase
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
// :: INFO query.ConnectionQueryServicesImpl: HConnection established.
... (JVM classpath, ZooKeeper client environment listing, and Phoenix connection stack trace omitted) ...
// :: INFO mapreduce.PhoenixInputFormat: UseSelectColumns=, selectColumnList=ID,ASSET_ID,ASSET_NAME,...,SECOND_DEPARTMENT_NAME,CREATE_USER,CREATE_TIME
root
 |-- ID: string (nullable = true)
 |-- FIRST_DEPARTMENT_ID: string (nullable = true)
 |-- ACTUAL_COST: double (nullable = true)
 |-- ORIGINAL_VALUE: double (nullable = true)
 |-- GENERATION_TIME: timestamp (nullable = false)
// :: INFO util.RegionSizeCalculator: Calculating region sizes for table "IDX_ASSET_NORMAL".
// :: INFO spark.SparkContext: Starting job: count at costDay.scala:
// :: INFO scheduler.DAGScheduler: ShuffleMapStage (count at costDay.scala:) finished in 10.745 s
// :: INFO scheduler.DAGScheduler: ResultStage (count at costDay.scala:) finished in 0.205 s
// :: INFO scheduler.DAGScheduler: Job finished: count at costDay.scala:, took 11.157661 s
// :: INFO spark.SparkContext: Starting job: describe at costDay.scala:
// :: INFO scheduler.DAGScheduler: ShuffleMapStage (describe at costDay.scala:) finished in 8.825 s
// :: INFO scheduler.DAGScheduler: Submitting missing tasks from ResultStage (MapPartitionsRDD[] at describe at
costDay.scala:) (first tasks are )) // :: INFO scheduler.TaskSchedulerImpl: Adding task set tasks // :: INFO scheduler.TaskSetManager: Starting task , localhost, executor driver, partition , NODE_LOCAL, bytes) // :: INFO executor.Executor: Running task ) // :: INFO storage.ShuffleBlockFetcherIterator: Getting non-empty blocks out of blocks // :: INFO storage.ShuffleBlockFetcherIterator: Started remote fetches ms // :: INFO codegen.GenerateMutableProjection: Code generated in 21.5454 ms // :: INFO codegen.GenerateMutableProjection: Code generated in 10.7314 ms // :: INFO codegen.GenerateUnsafeProjection: Code generated in 13.0969 ms // :: INFO codegen.GenerateSafeProjection: Code generated in 8.2397 ms // :: INFO executor.Executor: Finished task ). bytes result sent to driver // :: INFO scheduler.DAGScheduler: ResultStage (describe at costDay.scala:) finished in 0.097 s // :: INFO scheduler.DAGScheduler: Job finished: describe at costDay.scala:, took 8.948356 s // :: INFO scheduler.TaskSetManager: Finished task ) ms on localhost (executor driver) (/) // :: INFO scheduler.TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool +-------+--------------------+ |summary| ORIGINAL_VALUE| +-------+--------------------+ | count| | | mean|2.427485546653306E12| | stddev|5.474385018305400...| | min| -7970934.0| | max|1.234567890123456...| +-------+--------------------+ // :: INFO spark.SparkContext: Starting job: show at costDay.scala: // :: INFO scheduler.DAGScheduler: Got job (show at costDay.scala:) with output partitions // :: INFO scheduler.DAGScheduler: Final stage: ResultStage (show at costDay.scala:) // :: INFO scheduler.DAGScheduler: Parents of final stage: List() // :: INFO scheduler.DAGScheduler: Missing parents: List() // :: INFO scheduler.DAGScheduler: Submitting ResultStage (MapPartitionsRDD[] at show at costDay.scala:), which has no missing parents // :: INFO storage.MemoryStore: Block broadcast_5 stored as values in memory (estimated 
size 23.6 KB, free 477.8 MB) // :: INFO storage.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 9.0 KB, free 477.8 MB) // :: INFO storage.BlockManagerInfo: Added broadcast_5_piece0 (size: 9.0 KB, free: 478.1 MB) // :: INFO spark.SparkContext: Created broadcast from broadcast at DAGScheduler.scala: // :: INFO scheduler.DAGScheduler: Submitting missing tasks from ResultStage (MapPartitionsRDD[] at show at costDay.scala:) (first tasks are )) // :: INFO scheduler.TaskSchedulerImpl: Adding task set tasks // :: INFO scheduler.TaskSetManager: Starting task , localhost, executor driver, partition , ANY, bytes) // :: INFO executor.Executor: Running task ) // :: INFO rdd.NewHadoopRDD: Input split: org.apache.phoenix.mapreduce.PhoenixInputSplit@20b488 // :: INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum // :: INFO jdbc.PhoenixEmbeddedDriver$ConnectionInfo: Trying to connect to a secure cluster as with keytab /hbase // :: INFO jdbc.PhoenixEmbeddedDriver$ConnectionInfo: Successful login to secure cluster // :: INFO codegen.GenerateUnsafeProjection: Code generated in 33.6563 ms // :: INFO codegen.GenerateSafeProjection: Code generated in 7.0589 ms // :: INFO executor.Executor: Finished task ). 
bytes result sent to driver // :: INFO scheduler.TaskSetManager: Finished task ) ms on localhost (executor driver) (/) // :: INFO scheduler.DAGScheduler: ResultStage (show at costDay.scala:) finished in 0.466 s // :: INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool // :: INFO scheduler.DAGScheduler: Job finished: show at costDay.scala:, took 0.489113 s +--------------------------------+-------------------+---------------------+--------------+-----------------------+ |ID |FIRST_DEPARTMENT_ID|ACTUAL_COST |ORIGINAL_VALUE|GENERATION_TIME | +--------------------------------+-------------------+---------------------+--------------+-----------------------+ |d25bb550a290457382c175b0e57c0982||-- ::31.864| |492016e2f7ec4cd18615c164c92c6c6d||-- ::31.864| |1d138a7401bd493da12f8f8323e9dee0||-- ::31.864| |09718925d34b4a099e09a30a0621ded8||-- ::31.864| |d5cfd5e898464130b71530d74b43e9d1||-- ::31.864| |6b39ac96b8734103b2413520d3195ee6||-- ::31.864| |8d20d0abd04d49cea3e52d9ca67e39da||-- ::31.864| |66ae7e7c7a104cea99615358e12c03b0||-- ::31.864| |d49b0324bbf14b70adefe8b1d9163db2||-- ::31.864| |d4d701514a2a425e8192acf47bb57f9b||-- ::31.864| |d6a016c618c1455ca0e2c7d73ba947ac||-- ::31.864| |5dfa3be825464ddd98764b2790720fae||-- ::31.864| |6e5653ef4aaa4c03bcd00fbeb1e6811d||-- ::31.864| |32bd2654082645cba35527d50e0d52f9||-- ::31.864| |8ed4424408bc458dbe200acffe5733bf||-- ::31.864| |1b2faa31f139461488847e77eacd794a||-- ::31.864| |f398245c9ccc4760a5eb3251db3680bf||-- ::31.864| |2696de9733d247e5bf88573244f36ba2||-- ::31.864| |9c8cfad3d4334b37a7b9beb56b528c22||-- ::31.864| |3e2721b79e754a798d0be940ae011d72||-- ::31.864| +--------------------------------+-------------------+---------------------+--------------+-----------------------+ only showing top rows // :: INFO spark.SparkContext: Invoking stop() from shutdown hook // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null} // :: INFO handler.ContextHandler: 
stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null} // :: INFO handler.ContextHandler: stopped 
o.s.j.s.ServletContextHandler{/stages/stage/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null} // :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null} // :: INFO ui.SparkUI: Stopped Spark web UI at http://10.200.74.155:4040 // :: INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! // :: INFO storage.MemoryStore: MemoryStore cleared // :: INFO storage.BlockManager: BlockManager stopped // :: INFO storage.BlockManagerMaster: BlockManagerMaster stopped // :: INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! // :: INFO spark.SparkContext: Successfully stopped SparkContext // :: INFO util.ShutdownHookManager: Shutdown hook called // :: INFO util.ShutdownHookManager: Deleting directory C:\Users\cf_pc\AppData\Local\Temp\spark-4fcfa10d-c258-46b7-b4f5-ee977276fa00 Process finished with exit code
Run the Maven package step; the build output is:
C:\3rd\Java\jdk1..0_212\bin\java.exe -Dmaven.multiModuleProjectDirectory=C:\development\cf\scalapp -Dmaven.home=C:\development\apache-maven- -Dclassworlds.conf=C:\development\apache-maven-\bin\m2.conf -classpath C:\development\apache-maven-\boot\plexus-classworlds-.jar org.codehaus.classworlds.Launcher -Dmaven.repo.local=C:\development\MavenRepository package
[INFO] Scanning for projects...
[INFO] ---------------------------< com.fc:scalapp >---------------------------
[INFO] Building scalapp 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ scalapp ---
[WARNING] Using platform encoding (UTF- actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying resource
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ scalapp ---
[INFO] Nothing to compile - all classes are up to date
[INFO] --- maven-scala-plugin::compile (default) @ scalapp ---
[INFO] Checking for multiple versions of scala
[WARNING] Expected all dependencies to require Scala version:
[WARNING] com.twitter:chill_2.: requires scala version:
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.java,**/*.scala,]
[INFO] excludes = []
[INFO] C:\development\cf\scalapp\src\main\scala:-: info: compiling
[INFO] Compiling source files to C:\development\cf\scalapp\target\classes
[WARNING] warning: there were feature warning(s); re-run with -feature for details
[WARNING] one warning found
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ scalapp ---
[INFO] skip non existing resourceDirectory C:\development\cf\scalapp\src\test\resources
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ scalapp ---
[INFO] Nothing to compile - all classes are up to date
[INFO] --- maven-scala-plugin::testCompile (default) @ scalapp ---
[WARNING] No source files found.
[INFO] --- maven-surefire-plugin::test (default-test) @ scalapp ---
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ scalapp ---
[INFO] Building jar: C:\development\cf\scalapp\target\scalapp-1.0-SNAPSHOT.jar
[INFO] --- maven-shade-plugin::shade (default) @ scalapp ---
[INFO] Including org.apache.spark:spark-core_2.:jar:-cdh5.14.2 in the shaded jar.
[INFO] Including org.apache.avro:avro-mapred:jar:hadoop2:-cdh5.14.2 in the shaded jar.
……
[INFO] Including mysql:mysql-connector-java:jar: in the shaded jar.
[WARNING] commons-collections-.jar, commons-beanutils-.jar, commons-beanutils-core-.jar define overlapping classes:
[WARNING]   - org.apache.commons.collections.FastHashMap$EntrySet
[WARNING]   - org.apache.commons.collections.FastHashMap$KeySet
[WARNING]   - org.apache.commons.collections.ArrayStack
[WARNING]   - org.apache.commons.collections.FastHashMap
……
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing C:\development\cf\scalapp\target\scalapp-1.0-SNAPSHOT.jar with C:\development\cf\scalapp\target\scalapp-1.0-SNAPSHOT-shaded.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
Process finished with exit code
Upload the jar to the server and run:
spark-submit --class com.fc.costDay --executor-memory 500m --total-executor-cores /home/cf/scalapp-1.0-SNAPSHOT.jar
This fails with:
java.lang.ClassNotFoundException: Class org.apache.phoenix.spark.PhoenixRecordWritable not found
Following https://www.jianshu.com/p/f336f7e5f31b, change the command to add the Phoenix libraries to the driver and executor classpaths:
spark-submit --master yarn-cluster --driver-memory 4g --num-executors --executor-memory 2g --executor-cores --class com.fc.costDay --conf spark.driver.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX--cdh5./lib/phoenix/lib/* --conf spark.executor.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX-4.14.0-cdh5.14.2.p0.3/lib/phoenix/lib/* /home/cf/scalapp-1.0-SNAPSHOT.jar
It still fails:
File does not exist: hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0105/……
The full error output:
[root@node1 ~]# spark-submit --master yarn-cluster --driver-memory 4g --num-executors --executor-memory 2g --executor-cores --class com.fc.costDay --conf spark.driver.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX--cdh5./lib/phoenix/lib/* --conf spark.executor.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX-4.14.0-cdh5.14.2.p0.3/lib/phoenix/lib/* /home/cf/scalapp-1.0-SNAPSHOT.jar
19/09/24 00:01:36 INFO client.RMProxy: Connecting to ResourceManager at node1/10.200.101.131:8032
19/09/24 00:01:36 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
19/09/24 00:01:36 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (40874 MB per container)
19/09/24 00:01:36 INFO yarn.Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead
19/09/24 00:01:36 INFO yarn.Client: Setting up container launch context for our AM
19/09/24 00:01:36 INFO yarn.Client: Setting up the launch environment for our AM container
19/09/24 00:01:37 INFO yarn.Client: Preparing resources for our AM container
19/09/24 00:01:37 INFO yarn.Client: Uploading resource file:/home/cf/scalapp-1.0-SNAPSHOT.jar -> hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0105/scalapp-1.0-SNAPSHOT.jar
19/09/24 00:01:38 INFO yarn.Client: Uploading resource file:/tmp/spark-2c548285-8012-414b-ab4e-797a164e38bc/__spark_conf__8997139291240118668.zip -> hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0105/__spark_conf__8997139291240118668.zip
19/09/24 00:01:38 INFO spark.SecurityManager: Changing view acls to: root
19/09/24 00:01:38 INFO spark.SecurityManager: Changing modify acls to: root
19/09/24 00:01:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
19/09/24 00:01:38 INFO yarn.Client: Submitting application 105 to ResourceManager
19/09/24 00:01:38 INFO impl.YarnClientImpl: Submitted application application_1566100765602_0105
19/09/24 00:01:39 INFO yarn.Client: Application report for application_1566100765602_0105 (state: ACCEPTED)
19/09/24 00:01:39 INFO yarn.Client: client token: N/A  diagnostics: N/A  ApplicationMaster host: N/A  ApplicationMaster RPC port: -1  queue: root.users.root  start time: 1569254498669  final status: UNDEFINED  tracking URL: http://node1:8088/proxy/application_1566100765602_0105/  user: root
……
19/09/24 00:02:00 INFO yarn.Client: Application report for application_1566100765602_0105 (state: ACCEPTED)
19/09/24 00:02:01 INFO yarn.Client: Application report for application_1566100765602_0105 (state: FAILED)
19/09/24 00:02:01 INFO yarn.Client: client token: N/A
diagnostics: Application application_1566100765602_0105 failed 2 times due to AM Container for appattempt_1566100765602_0105_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://node1:8088/proxy/application_1566100765602_0105/ Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0105/__spark_conf__8997139291240118668.zip
java.io.FileNotFoundException: File does not exist: hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0105/__spark_conf__8997139291240118668.zip
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1269)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1261)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1261)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
ApplicationMaster host: N/A  ApplicationMaster RPC port: -1  queue: root.users.root  start time: 1569254498669  final status: FAILED  tracking URL: http://node1:8088/cluster/app/application_1566100765602_0105  user: root
Exception in thread "main" org.apache.spark.SparkException: Application application_1566100765602_0105 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1025)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1072)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:730)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/09/24 00:02:01 INFO util.ShutdownHookManager: Shutdown hook called
19/09/24 00:02:01 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-2c548285-8012-414b-ab4e-797a164e38bc
Following https://blog.csdn.net/adorechen/article/details/78746363, comment out the statement in the source that hardcodes local execution:
val conf = new SparkConf()
  .setAppName("fdsf")
//  .setMaster("local")  // local execution; must be commented out for cluster mode
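A master hardcoded with setMaster("local") takes precedence over the --master flag passed to spark-submit, which is the likely cause of the staging-file failure above. Rather than editing and re-packaging the jar for each environment, the master can be set in code only when spark-submit did not supply one. A minimal sketch of that decision logic, assuming spark-submit delivers its choice via the spark.master property (the object and helper names here are illustrative, not part of the original program):

```scala
object MasterConfig {
  // Prefer the master provided by spark-submit; fall back to local[*]
  // only when the program is launched directly, e.g. from the IDE.
  def effectiveMaster(submitted: Option[String]): String =
    submitted.getOrElse("local[*]")

  def main(args: Array[String]): Unit = {
    val master = effectiveMaster(sys.props.get("spark.master"))
    println(s"effective master: $master")
    // In the real job this value would feed the builder, e.g.:
    //   val conf = new SparkConf().setAppName("fdsf").setMaster(master)
  }
}
```

With a guard like this, the same shaded jar runs under --master yarn-cluster on the cluster and, with no flags at all, in local mode during development.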
Re-package and run the same command again; this time it succeeds:
[root@node1 ~]# spark-submit --master yarn-cluster --driver-memory 4g --num-executors --executor-memory 2g --executor-cores --class com.fc.costDay --conf spark.driver.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX--cdh5./lib/phoenix/lib/* --conf spark.executor.extraClassPath=/opt/cloudera/parcels/APACHE_PHOENIX-4.14.0-cdh5.14.2.p0.3/lib/phoenix/lib/* /home/cf/scalapp-1.0-SNAPSHOT.jar 19/09/24 00:13:58 INFO client.RMProxy: Connecting to ResourceManager at node1/10.200.101.131:8032 19/09/24 00:13:58 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers 19/09/24 00:13:58 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (40874 MB per container) 19/09/24 00:13:58 INFO yarn.Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead 19/09/24 00:13:58 INFO yarn.Client: Setting up container launch context for our AM 19/09/24 00:13:58 INFO yarn.Client: Setting up the launch environment for our AM container 19/09/24 00:13:58 INFO yarn.Client: Preparing resources for our AM container 19/09/24 00:13:59 INFO yarn.Client: Uploading resource file:/home/cf/scalapp-1.0-SNAPSHOT.jar -> hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0106/scalapp-1.0-SNAPSHOT.jar 19/09/24 00:14:00 INFO yarn.Client: Uploading resource file:/tmp/spark-875aaa2a-1c5d-4c6a-95a6-12f276a80054/__spark_conf__4530217919668141816.zip -> hdfs://node1:8020/user/root/.sparkStaging/application_1566100765602_0106/__spark_conf__4530217919668141816.zip 19/09/24 00:14:00 INFO spark.SecurityManager: Changing view acls to: root 19/09/24 00:14:00 INFO spark.SecurityManager: Changing modify acls to: root 19/09/24 00:14:00 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root) 19/09/24 00:14:00 INFO yarn.Client: Submitting application 106 to ResourceManager 19/09/24 
00:14:00 INFO impl.YarnClientImpl: Submitted application application_1566100765602_0106
19/09/24 00:14:01 INFO yarn.Client: Application report for application_1566100765602_0106 (state: ACCEPTED)
19/09/24 00:14:01 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.users.root
	 start time: 1569255240698
	 final status: UNDEFINED
	 tracking URL: http://node1:8088/proxy/application_1566100765602_0106/
	 user: root
19/09/24 00:14:02 INFO yarn.Client: Application report for application_1566100765602_0106 (state: ACCEPTED)
19/09/24 00:14:03 INFO yarn.Client: Application report for application_1566100765602_0106 (state: ACCEPTED)
19/09/24 00:14:04 INFO yarn.Client: Application report for application_1566100765602_0106 (state: ACCEPTED)
19/09/24 00:14:05 INFO yarn.Client: Application report for application_1566100765602_0106 (state: ACCEPTED)
19/09/24 00:14:06 INFO yarn.Client: Application report for application_1566100765602_0106 (state: RUNNING)
19/09/24 00:14:06 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 10.200.101.135
	 ApplicationMaster RPC port: 0
	 queue: root.users.root
	 start time: 1569255240698
	 final status: UNDEFINED
	 tracking URL: http://node1:8088/proxy/application_1566100765602_0106/
	 user: root
19/09/24 00:14:07 INFO yarn.Client: Application report for application_1566100765602_0106 (state: RUNNING)
...(one identical RUNNING report per second omitted)...
19/09/24 00:14:34 INFO yarn.Client: Application report for application_1566100765602_0106 (state: RUNNING)
19/09/24 00:14:35 INFO yarn.Client: Application report for application_1566100765602_0106 (state: FINISHED)
19/09/24 00:14:35 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 10.200.101.135
	 ApplicationMaster RPC port: 0
	 queue: root.users.root
	 start time: 1569255240698
	 final status: SUCCEEDED
	 tracking URL: http://node1:8088/proxy/application_1566100765602_0106/
	 user: root
19/09/24 00:14:35 INFO util.ShutdownHookManager: Shutdown hook called
19/09/24 00:14:35 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-875aaa2a-1c5d-4c6a-95a6-12f276a80054
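The repeated "Application report" lines above are yarn.Client polling the application's state about once per second until it reaches a terminal state (FINISHED, FAILED, or KILLED). A minimal sketch of that poll-until-terminal pattern in plain Scala, with a stubbed status source standing in for the YARN ResourceManager (the stub and its states are illustrative, not Spark's actual API):

```scala
// Sketch of the poll-until-terminal loop behind yarn.Client's output.
// `nextState` stands in for a ResourceManager status query; the real
// client sleeps ~1s between polls and logs one "Application report"
// line per poll.
object PollUntilTerminal {
  val terminal = Set("FINISHED", "FAILED", "KILLED")

  def waitForCompletion(nextState: () => String,
                        log: String => Unit): String = {
    var state = nextState()
    log(s"Application report (state: $state)")
    while (!terminal.contains(state)) {
      // Thread.sleep(1000)  // the real client paces its polls
      state = nextState()
      log(s"Application report (state: $state)")
    }
    state
  }
}
```

The client only exits (and the final-status line is printed) once a terminal state comes back, which is why an ACCEPTED application produces one line per second until YARN schedules it.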
Go to the task page in the YARN UI to view the output.
It reads as follows:
Log Type: stderr
Log Upload Time: Tue Sep :: +
Log Length:
Showing bytes of total. Click here for the full log.
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
// :: INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
// :: INFO ui.SparkUI: Stopped Spark web UI at http://10.200.101.135:32960
// :: INFO yarn.YarnAllocator: Driver requested a total number of executor(s).
// :: INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
// :: INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
// :: INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO storage.MemoryStore: MemoryStore cleared
// :: INFO storage.BlockManager: BlockManager stopped
// :: INFO storage.BlockManagerMaster: BlockManagerMaster stopped
// :: INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO spark.SparkContext: Successfully stopped SparkContext
// :: INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
// :: INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
// :: INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
// :: INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
// :: INFO Remoting: Remoting shut down
// :: INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
// :: INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1566100765602_0106
// :: INFO util.ShutdownHookManager: Shutdown hook called
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data2/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-771c1143-d7a4-434d-8e27-0ec5964b2fb2
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data1/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-757f2701-97e9-4a23--1b9536ec0698
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data0/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-7dc0f9ff----b41bb0706ce0
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data3/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-95e96c3f-915e--9ba3-ad50973224a0
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data5/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-6b26fd69-d3e9-4e7e-a4d4-f00978232b57
// :: INFO util.ShutdownHookManager: Deleting directory /cetc38/bigdata/dfs/data4/yarn/nm/usercache/root/appcache/application_1566100765602_0106/spark-a05e4cfc-b152-4e2e-bc8c-e0350e8d9f3b

Log Type: stdout
Log Upload Time: Tue Sep :: +
Log Length:
root
 |-- ID: string (nullable = true)
 |-- FIRST_DEPARTMENT_ID: string (nullable = true)
 |-- ACTUAL_COST: double (nullable = true)
 |-- ORIGINAL_VALUE: double (nullable = true)
 |-- GENERATION_TIME: timestamp (nullable = false)

+-------+--------------------+
|summary|      ORIGINAL_VALUE|
+-------+--------------------+
|  count|                    |
|   mean|2.427485546653306E12|
| stddev|5.474385018305400...|
|    min|          -7970934.0|
|    max|1.234567890123456...|
+-------+--------------------+

+--------------------------------+-------------------+---------------------+--------------+-----------------------+
|ID                              |FIRST_DEPARTMENT_ID|ACTUAL_COST          |ORIGINAL_VALUE|GENERATION_TIME        |
+--------------------------------+-------------------+---------------------+--------------+-----------------------+
|d25bb550a290457382c175b0e57c0982||-- ::14.283|
|492016e2f7ec4cd18615c164c92c6c6d||-- ::14.283|
|1d138a7401bd493da12f8f8323e9dee0||-- ::14.283|
|09718925d34b4a099e09a30a0621ded8||-- ::14.283|
|d5cfd5e898464130b71530d74b43e9d1||-- ::14.283|
|6b39ac96b8734103b2413520d3195ee6||-- ::14.283|
|8d20d0abd04d49cea3e52d9ca67e39da||-- ::14.283|
|66ae7e7c7a104cea99615358e12c03b0||-- ::14.283|
|d49b0324bbf14b70adefe8b1d9163db2||-- ::14.283|
|d4d701514a2a425e8192acf47bb57f9b||-- ::14.283|
|d6a016c618c1455ca0e2c7d73ba947ac||-- ::14.283|
|5dfa3be825464ddd98764b2790720fae||-- ::14.283|
|6e5653ef4aaa4c03bcd00fbeb1e6811d||-- ::14.283|
|32bd2654082645cba35527d50e0d52f9||-- ::14.283|
|8ed4424408bc458dbe200acffe5733bf||-- ::14.283|
|1b2faa31f139461488847e77eacd794a||-- ::14.283|
|f398245c9ccc4760a5eb3251db3680bf||-- ::14.283|
|2696de9733d247e5bf88573244f36ba2||-- ::14.283|
|9c8cfad3d4334b37a7b9beb56b528c22||-- ::14.283|
|3e2721b79e754a798d0be940ae011d72||-- ::14.283|
+--------------------------------+-------------------+---------------------+--------------+-----------------------+
only showing top rows
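The `root |-- ...` block is the output of `printSchema()`, and the `summary` table is the output of `describe("ORIGINAL_VALUE")`, which reports count, mean, sample standard deviation, min, and max of the column. As a plain-Scala illustration of the same five aggregates (this is not Spark's implementation, which computes them distributedly over the DataFrame):

```scala
// The five statistics DataFrame.describe reports, computed over a
// plain Seq[Double] for illustration.
object DescribeStats {
  def describe(xs: Seq[Double]): Map[String, Double] = {
    val n    = xs.size.toDouble
    val mean = xs.sum / n
    // describe reports the *sample* standard deviation (divide by n - 1)
    val stddev = math.sqrt(xs.map(x => math.pow(x - mean, 2)).sum / (n - 1))
    Map("count" -> n, "mean" -> mean, "stddev" -> stddev,
        "min" -> xs.min, "max" -> xs.max)
  }
}
```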
Compose the Spark task (in Hue's Oozie workflow editor):
Select the jar file:
Add the jar file:
Fill in the jar path, main class, and related fields:
Fill in the execution-mode settings:
Save the task:
Run it; the execution status is returned:
The detailed error message is:
-- ::, ERROR org.apache.oozie.command.wf.SignalXCommand: SERVER[node1] USER[admin] GROUP[-] TOKEN[] APP[Cf0924] JOB[--oozie-oozi-W] ACTION[--oozie-oozi-W@:start:] Workflow action failed : E0700: XML error, For input string: ""
org.apache.oozie.workflow.WorkflowException: E0700: XML error, For input string: ""
	at org.apache.oozie.service.LiteWorkflowStoreService.getUserRetryMax(LiteWorkflowStoreService.java:)
	at org.apache.oozie.service.LiteWorkflowStoreService.liteExecute(LiteWorkflowStoreService.java:)
	at org.apache.oozie.service.LiteWorkflowStoreService$LiteActionHandler.start(LiteWorkflowStoreService.java:)
	at org.apache.oozie.workflow.lite.ActionNodeHandler.enter(ActionNodeHandler.java:)
	at org.apache.oozie.workflow.lite.LiteWorkflowInstance.signal(LiteWorkflowInstance.java:)
	at org.apache.oozie.workflow.lite.LiteWorkflowInstance.signal(LiteWorkflowInstance.java:)
	at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:)
	at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:)
	at org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.wf.ActionStartXCommand.callActionEnd(ActionStartXCommand.java:)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:)
	at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:)
	at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:)
	at org.apache.oozie.command.XCommand.call(XCommand.java:)
	at org.apache.oozie.DagEngine.start(DagEngine.java:)
	at org.apache.oozie.servlet.V1JobServlet.startWorkflowJob(V1JobServlet.java:)
	at org.apache.oozie.servlet.V1JobServlet.startJob(V1JobServlet.java:)
	at org.apache.oozie.servlet.BaseJobServlet.doPut(BaseJobServlet.java:)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:)
	at org.apache.oozie.servlet.JsonRestServlet.service(JsonRestServlet.java:)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:)
	at org.apache.oozie.servlet.AuthFilter$.doFilter(AuthFilter.java:)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:)
	at org.apache.oozie.servlet.AuthFilter.doFilter(AuthFilter.java:)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:)
	at org.apache.oozie.servlet.HostnameFilter.doFilter(HostnameFilter.java:)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:)
	at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:)
	at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:)
	at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:)
	at java.lang.Thread.run(Thread.java:)
Caused by: java.lang.NumberFormatException: For input string: ""
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:)
	at java.lang.Integer.parseInt(Integer.java:)
	at java.lang.Integer.parseInt(Integer.java:)
	at org.apache.oozie.service.LiteWorkflowStoreService.getUserRetryMax(LiteWorkflowStoreService.java:)
	... more
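The `Caused by:` line is the real clue: `getUserRetryMax` parses the action's retry-max setting with `Integer.parseInt`, and the workflow submitted here apparently carried an empty string for it, so parsing fails with `NumberFormatException: For input string: ""` and the workflow aborts as E0700 before anything is launched. The snippet below is purely illustrative (not Oozie's code); it reproduces the failure and shows the kind of guard that would avoid it:

```scala
// Illustrative only: why Integer.parseInt("") inside
// LiteWorkflowStoreService.getUserRetryMax throws, and a defensive
// parse that falls back to a default when the value is absent/empty.
object RetryMax {
  def parse(raw: String, default: Int = 0): Int =
    if (raw == null || raw.trim.isEmpty) default  // guard the empty case
    else Integer.parseInt(raw.trim)
}
```

In practice the fix is on the submission side, as done below: re-create the task in Hue so the generated workflow no longer carries the empty value.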
The run is linked to its Oozie job information:
Open the link to view the job details:
Upload the jar to the workspace directory:
Re-create the task:
Change the run mode:
The task shows as running:
Execution finishes:
It succeeds; check the result output in the logs:
Click the "here" link to view the full log:
===============================================================================
2019-09-24 22:44:16,017 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at node1/10.200.101.131:8030
2019-09-24 22:44:16,041 [main] INFO org.apache.spark.deploy.yarn.YarnRMClient - Registering the ApplicationMaster
2019-09-24 22:44:16,203 [main] INFO org.apache.spark.deploy.yarn.YarnAllocator - Will request 2 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
2019-09-24 22:44:16,218 [main] INFO org.apache.spark.deploy.yarn.YarnAllocator - Submitted 2 unlocalized container requests.
2019-09-24 22:44:16,247 [main] INFO org.apache.spark.deploy.yarn.ApplicationMaster - Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
2019-09-24 22:44:17,686 [Reporter] INFO org.apache.spark.deploy.yarn.YarnAllocator - Launching container container_1566100765602_0108_01_000002 on host node5
2019-09-24 22:44:17,687 [Reporter] INFO org.apache.spark.deploy.yarn.YarnAllocator - Launching container container_1566100765602_0108_01_000003 on host node5
2019-09-24 22:44:17,688 [Reporter] INFO org.apache.spark.deploy.yarn.YarnAllocator - Received 2 containers from YARN, launching executors on 2 of them.
2019-09-24 22:44:17,704 [ContainerLauncher-0] INFO org.apache.spark.deploy.yarn.ExecutorRunnable - Preparing Local resources 2019-09-24 22:44:17,704 [ContainerLauncher-1] INFO org.apache.spark.deploy.yarn.ExecutorRunnable - Preparing Local resources 2019-09-24 22:44:17,721 [ContainerLauncher-0] INFO org.apache.spark.deploy.yarn.ExecutorRunnable - Prepared Local resources Map(hadoop-annotations.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-annotations.jar" } size: 21538 timestamp: 1554876846919 type: FILE visibility: PUBLIC, minlog-1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/minlog-1.2.jar" } size: 4965 timestamp: 1554876847080 type: FILE visibility: PUBLIC, spark-unsafe_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-unsafe_2.10-1.6.0-cdh5.14.2.jar" } size: 39527 timestamp: 1554876847174 type: FILE visibility: PUBLIC, spark-mllib_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-mllib_2.10-1.6.0-cdh5.14.2.jar" } size: 4999801 timestamp: 1554876847170 type: FILE visibility: PUBLIC, commons-lang3-3.3.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-lang3-3.3.2.jar" } size: 412739 timestamp: 1554876846886 type: FILE visibility: PUBLIC, flume-ng-core-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/flume-ng-core-1.6.0-cdh5.14.2.jar" } size: 390528 timestamp: 1554876846910 type: FILE visibility: PUBLIC, tachyon-client-0.8.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/tachyon-client-0.8.2.jar" } size: 2291167 timestamp: 1554876847192 
type: FILE visibility: PUBLIC, jetty-6.1.26.cloudera.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jetty-6.1.26.cloudera.4.jar" } size: 540685 timestamp: 1554876847045 type: FILE visibility: PUBLIC, hadoop-mapreduce-client-hs.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-mapreduce-client-hs.jar" } size: 178284 timestamp: 1554876846948 type: FILE visibility: PUBLIC, metrics-core-3.1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/metrics-core-3.1.2.jar" } size: 112558 timestamp: 1554876847074 type: FILE visibility: PUBLIC, pmml-agent-1.1.15.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pmml-agent-1.1.15.jar" } size: 5278 timestamp: 1554876847104 type: FILE visibility: PUBLIC, jersey-client-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jersey-client-1.9.jar" } size: 130458 timestamp: 1554876847034 type: FILE visibility: PUBLIC, flume-ng-configuration-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/flume-ng-configuration-1.6.0-cdh5.14.2.jar" } size: 57779 timestamp: 1554876846909 type: FILE visibility: PUBLIC, __spark__.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-yarn_2.10-1.6.0-cdh5.14.2.jar" } size: 595290 timestamp: 1554876847176 type: FILE visibility: PUBLIC, spark-avro_2.10-1.1.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-avro_2.10-1.1.0-cdh5.14.2.jar" } size: 103563 timestamp: 1554876847128 type: FILE visibility: PUBLIC, json4s-jackson_2.10-3.2.10.jar -> resource { scheme: "hdfs" host: 
"node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/json4s-jackson_2.10-3.2.10.jar" } size: 39953 timestamp: 1554876847052 type: FILE visibility: PUBLIC, datanucleus-api-jdo-3.2.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/datanucleus-api-jdo-3.2.1.jar" } size: 337012 timestamp: 1554876846901 type: FILE visibility: PUBLIC, metrics-graphite-3.1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/metrics-graphite-3.1.2.jar" } size: 20852 timestamp: 1554876847079 type: FILE visibility: PUBLIC, __app__.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/scalapp-1.0-SNAPSHOT.jar" } size: 246113940 timestamp: 1569336246598 type: FILE visibility: PRIVATE, avro.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro.jar" } size: 477010 timestamp: 1554876846852 type: FILE visibility: PUBLIC, jansi-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jansi-1.9.jar" } size: 113796 timestamp: 1554876847018 type: FILE visibility: PUBLIC, chimera-0.9.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/chimera-0.9.2.jar" } size: 62501 timestamp: 1554876846867 type: FILE visibility: PUBLIC, jasper-runtime-5.5.23.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jasper-runtime-5.5.23.jar" } size: 76844 timestamp: 1554876847018 type: FILE visibility: PUBLIC, curator-client-2.7.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/curator-client-2.7.1.jar" } size: 69500 timestamp: 1554876846898 type: FILE visibility: PUBLIC, libfb303-0.9.3.jar -> resource { 
scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/libfb303-0.9.3.jar" } size: 313702 timestamp: 1554876847064 type: FILE visibility: PUBLIC, commons-lang-2.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-lang-2.4.jar" } size: 261809 timestamp: 1554876846884 type: FILE visibility: PUBLIC, spark-core_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-core_2.10-1.6.0-cdh5.14.2.jar" } size: 11668600 timestamp: 1554876847166 type: FILE visibility: PUBLIC, asm-3.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/asm-3.2.jar" } size: 43398 timestamp: 1554876846843 type: FILE visibility: PUBLIC, jackson-jaxrs-1.8.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-jaxrs-1.8.8.jar" } size: 17884 timestamp: 1554876847008 type: FILE visibility: PUBLIC, derby-10.10.1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/derby-10.10.1.1.jar" } size: 2831358 timestamp: 1554876846914 type: FILE visibility: PUBLIC, RoaringBitmap-0.5.11.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/RoaringBitmap-0.5.11.jar" } size: 201928 timestamp: 1554876846825 type: FILE visibility: PUBLIC, stax-api-1.0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/stax-api-1.0.1.jar" } size: 26514 timestamp: 1554876847177 type: FILE visibility: PUBLIC, pyspark.zip -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pyspark.zip" } size: 357519 timestamp: 1554876847113 type: FILE visibility: PUBLIC, hive-exec.jar -> resource { scheme: 
"hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hive-exec.jar" } size: 19619438 timestamp: 1554876847026 type: FILE visibility: PUBLIC, chill_2.10-0.5.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/chill_2.10-0.5.0.jar" } size: 221032 timestamp: 1554876846866 type: FILE visibility: PUBLIC, commons-cli-1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-cli-1.2.jar" } size: 41123 timestamp: 1554876846870 type: FILE visibility: PUBLIC, hbase-protocol.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hbase-protocol.jar" } size: 4631462 timestamp: 1554876846990 type: FILE visibility: PUBLIC, commons-daemon-1.0.13.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-daemon-1.0.13.jar" } size: 24239 timestamp: 1554876846878 type: FILE visibility: PUBLIC, xercesImpl-2.11.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xercesImpl-2.11.0.jar" } size: 1367760 timestamp: 1554876847197 type: FILE visibility: PUBLIC, spark-streaming-flume_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-streaming-flume_2.10-1.6.0-cdh5.14.2.jar" } size: 106028 timestamp: 1554876847169 type: FILE visibility: PUBLIC, chill-java-0.5.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/chill-java-0.5.0.jar" } size: 46715 timestamp: 1554876846862 type: FILE visibility: PUBLIC, commons-dbcp-1.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-dbcp-1.4.jar" } size: 160519 timestamp: 1554876846878 type: FILE visibility: 
PUBLIC, janino-2.7.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/janino-2.7.8.jar" } size: 613299 timestamp: 1554876847016 type: FILE visibility: PUBLIC, py4j-0.9-src.zip -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/py4j-0.9-src.zip" } size: 44846 timestamp: 1554876847107 type: FILE visibility: PUBLIC, jettison-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jettison-1.1.jar" } size: 67758 timestamp: 1554876847044 type: FILE visibility: PUBLIC, log4j-1.2.17.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/log4j-1.2.17.jar" } size: 489884 timestamp: 1554876847072 type: FILE visibility: PUBLIC, spark-streaming_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-streaming_2.10-1.6.0-cdh5.14.2.jar" } size: 2061129 timestamp: 1554876847177 type: FILE visibility: PUBLIC, javax.servlet-3.0.0.v201112011016.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/javax.servlet-3.0.0.v201112011016.jar" } size: 200387 timestamp: 1554876847023 type: FILE visibility: PUBLIC, jtransforms-2.4.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jtransforms-2.4.0.jar" } size: 764569 timestamp: 1554876847056 type: FILE visibility: PUBLIC, arpack_combined_all-0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/arpack_combined_all-0.1.jar" } size: 1194003 timestamp: 1554876846848 type: FILE visibility: PUBLIC, breeze_2.10-0.11.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/breeze_2.10-0.11.2.jar" } 
size: 13689583 timestamp: 1554876846901 type: FILE visibility: PUBLIC, hadoop-mapreduce-client-core.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-mapreduce-client-core.jar" } size: 1553188 timestamp: 1554876846946 type: FILE visibility: PUBLIC, hbase-common.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hbase-common.jar" } size: 587216 timestamp: 1554876846977 type: FILE visibility: PUBLIC, netty-3.10.5.Final.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/netty-3.10.5.Final.jar" } size: 1330394 timestamp: 1554876847088 type: FILE visibility: PUBLIC, avro-ipc-tests.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro-ipc-tests.jar" } size: 395186 timestamp: 1554876846848 type: FILE visibility: PUBLIC, pmml-model-1.1.15.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pmml-model-1.1.15.jar" } size: 656969 timestamp: 1554876847107 type: FILE visibility: PUBLIC, jackson-module-scala_2.10-2.2.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-module-scala_2.10-2.2.3.jar" } size: 469406 timestamp: 1554876847014 type: FILE visibility: PUBLIC, stax-api-1.0-2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/stax-api-1.0-2.jar" } size: 23346 timestamp: 1554876847177 type: FILE visibility: PUBLIC, py4j-0.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/py4j-0.9.jar" } size: 84451 timestamp: 1554876847108 type: FILE visibility: PUBLIC, asm-4.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: 
"/user/oozie/share/lib/lib_20190410141404/spark/asm-4.0.jar" } size: 46022 timestamp: 1554876846843 type: FILE visibility: PUBLIC, spark-network-common_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-network-common_2.10-1.6.0-cdh5.14.2.jar" } size: 2357547 timestamp: 1554876847163 type: FILE visibility: PUBLIC, commons-io-2.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-io-2.4.jar" } size: 185140 timestamp: 1554876846883 type: FILE visibility: PUBLIC, jsp-api-2.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jsp-api-2.0.jar" } size: 48457 timestamp: 1554876847052 type: FILE visibility: PUBLIC, zookeeper.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/zookeeper.jar" } size: 1411840 timestamp: 1554876847191 type: FILE visibility: PUBLIC, commons-net-3.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-net-3.1.jar" } size: 273370 timestamp: 1554876846890 type: FILE visibility: PUBLIC, protobuf-java-2.4.1-shaded.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/protobuf-java-2.4.1-shaded.jar" } size: 455260 timestamp: 1554876847107 type: FILE visibility: PUBLIC, lz4-1.3.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/lz4-1.3.0.jar" } size: 236880 timestamp: 1554876847073 type: FILE visibility: PUBLIC, commons-el-1.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-el-1.0.jar" } size: 112341 timestamp: 1554876846879 type: FILE visibility: PUBLIC, calcite-core-1.2.0-incubating.jar -> resource { scheme: "hdfs" host: 
"node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/calcite-core-1.2.0-incubating.jar" } size: 3519262 timestamp: 1554876846870 type: FILE visibility: PUBLIC, paranamer-2.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/paranamer-2.3.jar" } size: 29555 timestamp: 1554876847097 type: FILE visibility: PUBLIC, oozie-sharelib-spark-4.1.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/oozie-sharelib-spark-4.1.0-cdh5.14.2.jar" } size: 35401 timestamp: 1554876847087 type: FILE visibility: PUBLIC, xz-1.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xz-1.0.jar" } size: 94672 timestamp: 1554876847191 type: FILE visibility: PUBLIC, jersey-core-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jersey-core-1.9.jar" } size: 458739 timestamp: 1554876847033 type: FILE visibility: PUBLIC, calcite-linq4j-1.2.0-incubating.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/calcite-linq4j-1.2.0-incubating.jar" } size: 442406 timestamp: 1554876846862 type: FILE visibility: PUBLIC, leveldbjni-all-1.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/leveldbjni-all-1.8.jar" } size: 1045744 timestamp: 1554876847066 type: FILE visibility: PUBLIC, jta-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jta-1.1.jar" } size: 15071 timestamp: 1554876847053 type: FILE visibility: PUBLIC, ivy-2.0.0-rc2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/ivy-2.0.0-rc2.jar" } size: 893199 timestamp: 1554876847003 type: FILE visibility: PUBLIC, 
hadoop-yarn-server-web-proxy.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-yarn-server-web-proxy.jar" } size: 39674 timestamp: 1554876846962 type: FILE visibility: PUBLIC,
spark-log4j.properties -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/spark-log4j.properties" } size: 2964 timestamp: 1569336248622 type: FILE visibility: PRIVATE,
hive-site.xml -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/hue/oozie/workspaces/hue-oozie-1569335914.01/lib/hive-site.xml" } size: 5597 timestamp: 1569336227743 type: FILE visibility: PRIVATE,
__spark_conf__ -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/__spark_conf__8248970040515488741.zip" } size: 1199 timestamp: 1569336248693 type: ARCHIVE visibility: PRIVATE,
oozie-sharelib-spark.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/oozie-sharelib-spark.jar" } size: 35401 timestamp: 1554876847087 type: FILE visibility: PUBLIC,
... [remaining PUBLIC entries omitted: Spark 1.6.0-cdh5.14.2 / Scala 2.10 dependency jars localized from /user/oozie/share/lib/lib_20190410141404/spark and /user/oozie/share/lib/lib_20190410141404/oozie] ...
avro-mapred-hadoop2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro-mapred-hadoop2.jar" } size: 181325 timestamp: 1554876846851 type: FILE visibility: PUBLIC)
2019-09-24 22:44:17,721 [ContainerLauncher-1] INFO org.apache.spark.deploy.yarn.ExecutorRunnable - Prepared Local resources Map(hadoop-annotations.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-annotations.jar" } size: 21538 timestamp: 1554876846919 type: FILE visibility: PUBLIC,
__spark__.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-yarn_2.10-1.6.0-cdh5.14.2.jar" } size: 595290 timestamp: 1554876847176 type: FILE visibility: PUBLIC,
__app__.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/scalapp-1.0-SNAPSHOT.jar" } size: 246113940 timestamp: 1569336246598 type: FILE visibility: PRIVATE,
... [the same sharelib dependency jars are localized again for the executor container; entries omitted] ...
asm-3.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/asm-3.2.jar" } size: 43398 timestamp: 
1554876846843 type: FILE visibility: PUBLIC, jackson-jaxrs-1.8.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-jaxrs-1.8.8.jar" } size: 17884 timestamp: 1554876847008 type: FILE visibility: PUBLIC, derby-10.10.1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/derby-10.10.1.1.jar" } size: 2831358 timestamp: 1554876846914 type: FILE visibility: PUBLIC, RoaringBitmap-0.5.11.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/RoaringBitmap-0.5.11.jar" } size: 201928 timestamp: 1554876846825 type: FILE visibility: PUBLIC, stax-api-1.0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/stax-api-1.0.1.jar" } size: 26514 timestamp: 1554876847177 type: FILE visibility: PUBLIC, pyspark.zip -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pyspark.zip" } size: 357519 timestamp: 1554876847113 type: FILE visibility: PUBLIC, hive-exec.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hive-exec.jar" } size: 19619438 timestamp: 1554876847026 type: FILE visibility: PUBLIC, chill_2.10-0.5.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/chill_2.10-0.5.0.jar" } size: 221032 timestamp: 1554876846866 type: FILE visibility: PUBLIC, commons-cli-1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-cli-1.2.jar" } size: 41123 timestamp: 1554876846870 type: FILE visibility: PUBLIC, hbase-protocol.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hbase-protocol.jar" } size: 4631462 timestamp: 1554876846990 type: FILE 
visibility: PUBLIC, commons-daemon-1.0.13.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-daemon-1.0.13.jar" } size: 24239 timestamp: 1554876846878 type: FILE visibility: PUBLIC, xercesImpl-2.11.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xercesImpl-2.11.0.jar" } size: 1367760 timestamp: 1554876847197 type: FILE visibility: PUBLIC, spark-streaming-flume_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-streaming-flume_2.10-1.6.0-cdh5.14.2.jar" } size: 106028 timestamp: 1554876847169 type: FILE visibility: PUBLIC, chill-java-0.5.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/chill-java-0.5.0.jar" } size: 46715 timestamp: 1554876846862 type: FILE visibility: PUBLIC, commons-dbcp-1.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-dbcp-1.4.jar" } size: 160519 timestamp: 1554876846878 type: FILE visibility: PUBLIC, janino-2.7.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/janino-2.7.8.jar" } size: 613299 timestamp: 1554876847016 type: FILE visibility: PUBLIC, py4j-0.9-src.zip -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/py4j-0.9-src.zip" } size: 44846 timestamp: 1554876847107 type: FILE visibility: PUBLIC, jettison-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jettison-1.1.jar" } size: 67758 timestamp: 1554876847044 type: FILE visibility: PUBLIC, log4j-1.2.17.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/log4j-1.2.17.jar" } size: 489884 
timestamp: 1554876847072 type: FILE visibility: PUBLIC, spark-streaming_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-streaming_2.10-1.6.0-cdh5.14.2.jar" } size: 2061129 timestamp: 1554876847177 type: FILE visibility: PUBLIC, javax.servlet-3.0.0.v201112011016.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/javax.servlet-3.0.0.v201112011016.jar" } size: 200387 timestamp: 1554876847023 type: FILE visibility: PUBLIC, jtransforms-2.4.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jtransforms-2.4.0.jar" } size: 764569 timestamp: 1554876847056 type: FILE visibility: PUBLIC, arpack_combined_all-0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/arpack_combined_all-0.1.jar" } size: 1194003 timestamp: 1554876846848 type: FILE visibility: PUBLIC, breeze_2.10-0.11.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/breeze_2.10-0.11.2.jar" } size: 13689583 timestamp: 1554876846901 type: FILE visibility: PUBLIC, hadoop-mapreduce-client-core.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-mapreduce-client-core.jar" } size: 1553188 timestamp: 1554876846946 type: FILE visibility: PUBLIC, hbase-common.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hbase-common.jar" } size: 587216 timestamp: 1554876846977 type: FILE visibility: PUBLIC, netty-3.10.5.Final.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/netty-3.10.5.Final.jar" } size: 1330394 timestamp: 1554876847088 type: FILE visibility: PUBLIC, avro-ipc-tests.jar -> resource { scheme: "hdfs" 
host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro-ipc-tests.jar" } size: 395186 timestamp: 1554876846848 type: FILE visibility: PUBLIC, pmml-model-1.1.15.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pmml-model-1.1.15.jar" } size: 656969 timestamp: 1554876847107 type: FILE visibility: PUBLIC, jackson-module-scala_2.10-2.2.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-module-scala_2.10-2.2.3.jar" } size: 469406 timestamp: 1554876847014 type: FILE visibility: PUBLIC, stax-api-1.0-2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/stax-api-1.0-2.jar" } size: 23346 timestamp: 1554876847177 type: FILE visibility: PUBLIC, py4j-0.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/py4j-0.9.jar" } size: 84451 timestamp: 1554876847108 type: FILE visibility: PUBLIC, asm-4.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/asm-4.0.jar" } size: 46022 timestamp: 1554876846843 type: FILE visibility: PUBLIC, spark-network-common_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-network-common_2.10-1.6.0-cdh5.14.2.jar" } size: 2357547 timestamp: 1554876847163 type: FILE visibility: PUBLIC, commons-io-2.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-io-2.4.jar" } size: 185140 timestamp: 1554876846883 type: FILE visibility: PUBLIC, jsp-api-2.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jsp-api-2.0.jar" } size: 48457 timestamp: 1554876847052 type: FILE visibility: PUBLIC, zookeeper.jar -> resource 
{ scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/zookeeper.jar" } size: 1411840 timestamp: 1554876847191 type: FILE visibility: PUBLIC, commons-net-3.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-net-3.1.jar" } size: 273370 timestamp: 1554876846890 type: FILE visibility: PUBLIC, protobuf-java-2.4.1-shaded.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/protobuf-java-2.4.1-shaded.jar" } size: 455260 timestamp: 1554876847107 type: FILE visibility: PUBLIC, lz4-1.3.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/lz4-1.3.0.jar" } size: 236880 timestamp: 1554876847073 type: FILE visibility: PUBLIC, commons-el-1.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-el-1.0.jar" } size: 112341 timestamp: 1554876846879 type: FILE visibility: PUBLIC, calcite-core-1.2.0-incubating.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/calcite-core-1.2.0-incubating.jar" } size: 3519262 timestamp: 1554876846870 type: FILE visibility: PUBLIC, paranamer-2.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/paranamer-2.3.jar" } size: 29555 timestamp: 1554876847097 type: FILE visibility: PUBLIC, oozie-sharelib-spark-4.1.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/oozie-sharelib-spark-4.1.0-cdh5.14.2.jar" } size: 35401 timestamp: 1554876847087 type: FILE visibility: PUBLIC, xz-1.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xz-1.0.jar" } size: 94672 timestamp: 1554876847191 type: FILE visibility: PUBLIC, 
jersey-core-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jersey-core-1.9.jar" } size: 458739 timestamp: 1554876847033 type: FILE visibility: PUBLIC, calcite-linq4j-1.2.0-incubating.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/calcite-linq4j-1.2.0-incubating.jar" } size: 442406 timestamp: 1554876846862 type: FILE visibility: PUBLIC, leveldbjni-all-1.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/leveldbjni-all-1.8.jar" } size: 1045744 timestamp: 1554876847066 type: FILE visibility: PUBLIC, jta-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jta-1.1.jar" } size: 15071 timestamp: 1554876847053 type: FILE visibility: PUBLIC, ivy-2.0.0-rc2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/ivy-2.0.0-rc2.jar" } size: 893199 timestamp: 1554876847003 type: FILE visibility: PUBLIC, hadoop-yarn-server-web-proxy.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-yarn-server-web-proxy.jar" } size: 39674 timestamp: 1554876846962 type: FILE visibility: PUBLIC, jsr305-1.3.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jsr305-1.3.9.jar" } size: 33015 timestamp: 1554876847052 type: FILE visibility: PUBLIC, parquet-common.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/parquet-common.jar" } size: 41082 timestamp: 1554876847098 type: FILE visibility: PUBLIC, oro-2.0.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/oro-2.0.8.jar" } size: 65261 timestamp: 1554876847089 type: FILE visibility: PUBLIC, 
akka-slf4j_2.10-2.2.3-shaded-protobuf.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/akka-slf4j_2.10-2.2.3-shaded-protobuf.jar" } size: 14186 timestamp: 1554876846832 type: FILE visibility: PUBLIC, spire_2.10-0.7.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spire_2.10-0.7.4.jar" } size: 7253596 timestamp: 1554876847194 type: FILE visibility: PUBLIC, spark-log4j.properties -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/spark-log4j.properties" } size: 2964 timestamp: 1569336248622 type: FILE visibility: PRIVATE, parquet-encoding.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/parquet-encoding.jar" } size: 278924 timestamp: 1554876847099 type: FILE visibility: PUBLIC, htrace-core-3.2.0-incubating.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/htrace-core-3.2.0-incubating.jar" } size: 1483913 timestamp: 1554876846992 type: FILE visibility: PUBLIC, eigenbase-properties-1.1.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/eigenbase-properties-1.1.5.jar" } size: 18482 timestamp: 1554876846908 type: FILE visibility: PUBLIC, findbugs-annotations-1.3.9-1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/findbugs-annotations-1.3.9-1.jar" } size: 15322 timestamp: 1554876846908 type: FILE visibility: PUBLIC, commons-codec-1.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-codec-1.4.jar" } size: 58160 timestamp: 1554876846870 type: FILE visibility: PUBLIC, compress-lzf-1.0.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: 
"/user/oozie/share/lib/lib_20190410141404/spark/compress-lzf-1.0.3.jar" } size: 79845 timestamp: 1554876846892 type: FILE visibility: PUBLIC, datanucleus-core-3.2.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/datanucleus-core-3.2.2.jar" } size: 1801810 timestamp: 1554876846908 type: FILE visibility: PUBLIC, config-1.0.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/config-1.0.2.jar" } size: 187497 timestamp: 1554876846896 type: FILE visibility: PUBLIC, jackson-core-asl-1.8.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-core-asl-1.8.8.jar" } size: 227500 timestamp: 1554876847004 type: FILE visibility: PUBLIC, opencsv-2.3.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/opencsv-2.3.jar" } size: 19827 timestamp: 1554876847088 type: FILE visibility: PUBLIC, spark-launcher_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-launcher_2.10-1.6.0-cdh5.14.2.jar" } size: 68077 timestamp: 1554876847143 type: FILE visibility: PUBLIC, netty-all-4.0.29.Final.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/netty-all-4.0.29.Final.jar" } size: 2054931 timestamp: 1554876847087 type: FILE visibility: PUBLIC, spark-bagel_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-bagel_2.10-1.6.0-cdh5.14.2.jar" } size: 46125 timestamp: 1554876847131 type: FILE visibility: PUBLIC, reflectasm-1.07-shaded.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/reflectasm-1.07-shaded.jar" } size: 65612 timestamp: 1554876847118 type: FILE 
visibility: PUBLIC, zkclient-0.7.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/zkclient-0.7.jar" } size: 73756 timestamp: 1554876847192 type: FILE visibility: PUBLIC, oozie-hadoop-utils-2.6.0-cdh5.14.2.oozie-4.1.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/oozie/oozie-hadoop-utils-2.6.0-cdh5.14.2.oozie-4.1.0-cdh5.14.2.jar" } size: 11790 timestamp: 1554876846711 type: FILE visibility: PUBLIC, jersey-server-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jersey-server-1.9.jar" } size: 713089 timestamp: 1554876847038 type: FILE visibility: PUBLIC, hadoop-yarn-server-nodemanager.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-yarn-server-nodemanager.jar" } size: 733354 timestamp: 1554876846964 type: FILE visibility: PUBLIC, guava-14.0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/guava-14.0.1.jar" } size: 2189117 timestamp: 1554876846929 type: FILE visibility: PUBLIC, quasiquotes_2.10-2.0.0-M8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/quasiquotes_2.10-2.0.0-M8.jar" } size: 721002 timestamp: 1554876847116 type: FILE visibility: PUBLIC, spark-streaming-flume-sink_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-streaming-flume-sink_2.10-1.6.0-cdh5.14.2.jar" } size: 86097 timestamp: 1554876847165 type: FILE visibility: PUBLIC, avro-ipc.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro-ipc.jar" } size: 130018 timestamp: 1554876846849 type: FILE visibility: PUBLIC, calcite-avatica-1.2.0-incubating.jar -> 
resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/calcite-avatica-1.2.0-incubating.jar" } size: 258370 timestamp: 1554876846859 type: FILE visibility: PUBLIC, jersey-json-1.9.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jersey-json-1.9.jar" } size: 147952 timestamp: 1554876847034 type: FILE visibility: PUBLIC, jul-to-slf4j-1.7.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jul-to-slf4j-1.7.5.jar" } size: 4960 timestamp: 1554876847061 type: FILE visibility: PUBLIC, hadoop-mapreduce-client-shuffle.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-mapreduce-client-shuffle.jar" } size: 56388 timestamp: 1554876846958 type: FILE visibility: PUBLIC, htrace-core4-4.0.1-incubating.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/htrace-core4-4.0.1-incubating.jar" } size: 1485102 timestamp: 1554876846995 type: FILE visibility: PUBLIC, scala-compiler-2.10.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/scala-compiler-2.10.5.jar" } size: 14472629 timestamp: 1554876847154 type: FILE visibility: PUBLIC, metrics-core-2.2.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/metrics-core-2.2.0.jar" } size: 82123 timestamp: 1554876847073 type: FILE visibility: PUBLIC, scala-library-2.10.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/scala-library-2.10.5.jar" } size: 7130772 timestamp: 1554876847135 type: FILE visibility: PUBLIC, slf4j-log4j12-1.7.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: 
"/user/oozie/share/lib/lib_20190410141404/spark/slf4j-log4j12-1.7.5.jar" } size: 8869 timestamp: 1554876847125 type: FILE visibility: PUBLIC, slf4j-api-1.7.5.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/slf4j-api-1.7.5.jar" } size: 26084 timestamp: 1554876847122 type: FILE visibility: PUBLIC, jdo-api-3.0.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jdo-api-3.0.1.jar" } size: 201124 timestamp: 1554876847033 type: FILE visibility: PUBLIC, jetty-util-6.1.26.cloudera.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jetty-util-6.1.26.cloudera.4.jar" } size: 177702 timestamp: 1554876847044 type: FILE visibility: PUBLIC, akka-actor_2.10-2.2.3-shaded-protobuf.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/akka-actor_2.10-2.2.3-shaded-protobuf.jar" } size: 2669483 timestamp: 1554876846841 type: FILE visibility: PUBLIC, jodd-core-3.5.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jodd-core-3.5.2.jar" } size: 427780 timestamp: 1554876847047 type: FILE visibility: PUBLIC, xbean-asm5-shaded-4.4.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xbean-asm5-shaded-4.4.jar" } size: 144660 timestamp: 1554876847204 type: FILE visibility: PUBLIC, antlr-2.7.7.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/antlr-2.7.7.jar" } size: 445288 timestamp: 1554876846835 type: FILE visibility: PUBLIC, hive-metastore.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hive-metastore.jar" } size: 5967686 timestamp: 1554876846997 type: FILE visibility: PUBLIC, ST4-4.0.4.jar -> resource { 
scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/ST4-4.0.4.jar" } size: 236660 timestamp: 1554876846828 type: FILE visibility: PUBLIC, curator-framework-2.7.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/curator-framework-2.7.1.jar" } size: 186273 timestamp: 1554876846899 type: FILE visibility: PUBLIC, guice-servlet-3.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/guice-servlet-3.0.jar" } size: 65012 timestamp: 1554876846918 type: FILE visibility: PUBLIC, hive-site.xml -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/hue/oozie/workspaces/hue-oozie-1569335914.01/lib/hive-site.xml" } size: 5597 timestamp: 1569336227743 type: FILE visibility: PRIVATE, jets3t-0.6.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jets3t-0.6.1.jar" } size: 321806 timestamp: 1554876847036 type: FILE visibility: PUBLIC, hadoop-yarn-client.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-yarn-client.jar" } size: 160218 timestamp: 1554876846963 type: FILE visibility: PUBLIC, objenesis-1.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/objenesis-1.2.jar" } size: 36046 timestamp: 1554876847088 type: FILE visibility: PUBLIC, joda-time-2.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/joda-time-2.1.jar" } size: 570478 timestamp: 1554876847046 type: FILE visibility: PUBLIC, json-simple-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/oozie/json-simple-1.1.jar" } size: 16046 timestamp: 1554876846711 type: FILE visibility: PUBLIC, mesos-0.21.1-shaded-protobuf.jar -> resource { scheme: 
"hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/mesos-0.21.1-shaded-protobuf.jar" } size: 1277883 timestamp: 1554876847075 type: FILE visibility: PUBLIC, parquet-format.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/parquet-format.jar" } size: 384616 timestamp: 1554876847100 type: FILE visibility: PUBLIC, tachyon-underfs-s3-0.8.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/tachyon-underfs-s3-0.8.2.jar" } size: 505388 timestamp: 1554876847210 type: FILE visibility: PUBLIC, hadoop-mapreduce-client-app.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-mapreduce-client-app.jar" } size: 532433 timestamp: 1554876846930 type: FILE visibility: PUBLIC, __spark_conf__ -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/admin/.sparkStaging/application_1566100765602_0108/__spark_conf__8248970040515488741.zip" } size: 1199 timestamp: 1569336248693 type: ARCHIVE visibility: PRIVATE, jackson-mapper-asl-1.8.8.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/jackson-mapper-asl-1.8.8.jar" } size: 668564 timestamp: 1554876847015 type: FILE visibility: PUBLIC, commons-logging-1.1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-logging-1.1.jar" } size: 52915 timestamp: 1554876846886 type: FILE visibility: PUBLIC, spark-network-shuffle_2.10-1.6.0-cdh5.14.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/spark-network-shuffle_2.10-1.6.0-cdh5.14.2.jar" } size: 51920 timestamp: 1554876847160 type: FILE visibility: PUBLIC, tachyon-underfs-local-0.8.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: 
"/user/oozie/share/lib/lib_20190410141404/spark/tachyon-underfs-local-0.8.2.jar" } size: 7212 timestamp: 1554876847185 type: FILE visibility: PUBLIC, hadoop-yarn-api.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/hadoop-yarn-api.jar" } size: 1931810 timestamp: 1554876846963 type: FILE visibility: PUBLIC, json4s-ast_2.10-3.2.10.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/json4s-ast_2.10-3.2.10.jar" } size: 83798 timestamp: 1554876847053 type: FILE visibility: PUBLIC, kafka_2.10-0.9.0-kafka-2.0.2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/kafka_2.10-0.9.0-kafka-2.0.2.jar" } size: 4945588 timestamp: 1554876847072 type: FILE visibility: PUBLIC, stream-2.7.0.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/stream-2.7.0.jar" } size: 174351 timestamp: 1554876847182 type: FILE visibility: PUBLIC, pmml-schema-1.1.15.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/pmml-schema-1.1.15.jar" } size: 4560 timestamp: 1554876847105 type: FILE visibility: PUBLIC, javax.inject-1.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/javax.inject-1.jar" } size: 2497 timestamp: 1554876847021 type: FILE visibility: PUBLIC, parquet-hadoop.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/parquet-hadoop.jar" } size: 212643 timestamp: 1554876847099 type: FILE visibility: PUBLIC, xml-apis-1.4.01.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/xml-apis-1.4.01.jar" } size: 220536 timestamp: 1554876847193 type: FILE visibility: PUBLIC, commons-compress-1.4.1.jar -> resource { scheme: "hdfs" 
host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/commons-compress-1.4.1.jar" } size: 241367 timestamp: 1554876846875 type: FILE visibility: PUBLIC,
parquet-jackson.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/parquet-jackson.jar" } size: 927866 timestamp: 1554876847101 type: FILE visibility: PUBLIC,
...(the remaining several dozen Oozie sharelib jar entries, all of the same form, are omitted here)...
avro-mapred-hadoop2.jar -> resource { scheme: "hdfs" host: "node1" port: 8020 file: "/user/oozie/share/lib/lib_20190410141404/spark/avro-mapred-hadoop2.jar" } size: 181325 timestamp: 1554876846851 type: FILE visibility: PUBLIC)
2019-09-24 22:44:20,313 [dispatcher-event-loop-21] INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend - Registered executor NettyRpcEndpointRef(null) (node5:38628) with ID 2
2019-09-24 22:44:20,354 [dispatcher-event-loop-17] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager node5:43168 with 530.0 MB RAM, BlockManagerId(2, node5, 43168)
2019-09-24 22:44:20,455 [dispatcher-event-loop-26] INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend - Registered executor NettyRpcEndpointRef(null) (node5:38630) with ID 1
2019-09-24 22:44:20,492 [Driver] INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend - SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
2019-09-24 22:44:20,492 [Driver] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - YarnClusterScheduler.postStartHook done
2019-09-24 22:44:20,523 [dispatcher-event-loop-27] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager node5:36256 with 530.0 MB RAM, BlockManagerId(1, node5, 36256)
2019-09-24 22:44:21,276 [Driver] INFO org.apache.spark.storage.MemoryStore - Block broadcast_0 stored as values in memory (estimated size 418.5 KB, free 491.3 MB)
2019-09-24 22:44:21,306 [Driver] INFO org.apache.spark.storage.MemoryStore - Block broadcast_0_piece0 stored as bytes in memory (estimated size 29.6 KB, free 491.2 MB)
2019-09-24 22:44:21,308 [dispatcher-event-loop-3] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_0_piece0 in memory on 10.200.101.135:39000 (size: 29.6 KB, free: 491.6 MB)
2019-09-24 22:44:21,310 [Driver] INFO org.apache.spark.SparkContext - Created broadcast 0 from newAPIHadoopRDD at PhoenixRDD.scala:49
2019-09-24 22:44:21,433 [Driver] INFO org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Trying to connect to a secure cluster as 2181 with keytab /hbase
2019-09-24 22:44:21,433 [Driver] INFO org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Successful login to secure cluster
2019-09-24 22:44:21,555 [Driver] INFO org.apache.phoenix.log.QueryLoggerDisruptor - Starting QueryLoggerDisruptor for with ringbufferSize=8192, waitStrategy=BlockingWaitStrategy, exceptionHandler=org.apache.phoenix.log.QueryLoggerDefaultExceptionHandler@17224f01...
2019-09-24 22:44:21,586 [Driver] INFO org.apache.phoenix.query.ConnectionQueryServicesImpl - An instance of ConnectionQueryServices was created.
2019-09-24 22:44:21,669 [Driver] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x698a515a connecting to ZooKeeper ensemble=node3:2181
2019-09-24 22:44:21,673 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-cdh5.14.2--1, built on 03/27/2018 20:39 GMT
2019-09-24 22:44:21,673 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=node5
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_131
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/java/jdk1.8.0_131/jre
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-network-common_2.10-1.6.0-cdh5.14.2.jar:...(the rest of the container classpath, which re-lists the same localized jars, is omitted)...:/cetc38/bigdata/dfs/data3/yarn/nm/usercache
/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hive-cli.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jackson-mapper-asl-1.8.8.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/calcite-avatica-1.2.0-incubating.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/flume-ng-configuration-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/avro-ipc-tests.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/tachyon-underfs-local-0.8.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/slf4j-api-1.7.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/tachyon-underfs-s3-0.8.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/zkclient-0.7.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-streaming-flume-sink_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-repl_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/xercesImpl-2.11.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/uncommons-maths-1
.2.2a.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jtransforms-2.4.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-yarn-client.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/chimera-0.9.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-catalyst_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jta-1.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jetty-6.1.26.cloudera.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/xml-apis-1.4.01.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/__app__.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-pool-1.5.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/metrics-core-2.2.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/aopalliance-1.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/json4s-ast_2.10-3.2.10.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/netty-3.10.5.Final.jar:/cetc38/bigdata/dfs/da
ta3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jersey-json-1.9.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/curator-recipes-2.7.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/akka-slf4j_2.10-2.2.3-shaded-protobuf.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/tachyon-client-0.8.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/kryo-2.21.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jettison-1.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/leveldbjni-all-1.8.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/scala-reflect-2.10.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/antlr-2.7.7.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/metrics-graphite-3.1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/oozie-sharelib-spark.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/protobuf-java-2.5.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/metrics-json-3.1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/user
cache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/htrace-core-3.2.0-incubating.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jline-2.10.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/paranamer-2.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-streaming_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jasper-runtime-5.5.23.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jetty-util-6.1.26.cloudera.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-bagel_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-launcher_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/httpcore-4.2.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hbase-annotations.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-streaming-kafka_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/pmml-agent-1.1.15.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jackso
n-core-2.2.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jcl-over-slf4j-1.7.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-mapreduce-client-hs.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-daemon-1.0.13.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jdo-api-3.0.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-logging-1.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/reflectasm-1.07-shaded.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/datanucleus-rdbms-3.2.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/scala-compiler-2.10.5.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/calcite-core-1.2.0-incubating.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/objenesis-1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/chill-java-0.5.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-hive_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01
_000001/janino-2.7.8.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/protobuf-java-2.4.1-shaded.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/py4j-0.9.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-yarn-server-common.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/config-1.0.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-httpclient-3.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/asm-3.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/apache-log4j-extras-1.2.17.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-mapreduce-client-core.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/oozie-sharelib-spark-4.1.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/ST4-4.0.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/datanucleus-core-3.2.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jsp-api-2.0.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/breeze
-macros_2.10-0.11.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/minlog-1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jersey-client-1.9.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-hdfs.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-unsafe_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-sql_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/xmlenc-0.52.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/calcite-linq4j-1.2.0-incubating.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/metrics-jvm-3.1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/datanucleus-api-jdo-3.2.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/spark-lineage_2.10-1.6.0-cdh5.14.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jackson-jaxrs-1.8.8.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-yarn-server-web-proxy.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_15
66100765602_0108_01_000001/derby-10.10.1.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/opencsv-2.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-mapreduce-client-jobclient.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/hadoop-yarn-server-nodemanager.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/pmml-model-1.1.15.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-cli-1.2.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-net-3.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/json-simple-1.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/stringtemplate-3.2.1.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/jersey-core-1.9.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/libthrift-0.9.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/commons-lang-2.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/compress-lzf-1.0.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000
001/jackson-databind-2.2.3.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/antlr-runtime-3.4.jar:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/__spark_conf__:/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/__spark__.jar:/etc/hadoop/conf.cloudera.yarn:/run/cloudera-scm-agent/process/983-yarn-NODEMANAGER:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-common-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-azure-datalake-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-aws-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/hadoop-annotations-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format.ja
r:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-tools.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-generator.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-column.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/parquet-avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commo
ns-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop
/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt
/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/azure-data-lake-store-sdk-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-nfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/hadoop-hdfs-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cl
oudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/
hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-web-proxy-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-tests-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-ya
rn/hadoop-yarn-server-nodemanager-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-registry-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-client-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/hadoop-yarn-api-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/spark-1.6.0-cdh5.14.2-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH
-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-compress-1.4.1
.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/avro.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-digester-1.8.jar:/opt/cloudera/parcels/
CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-ant-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-ant.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-archive-logs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-archive-logs.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-archives-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-archives.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-auth-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-azure-2.6.0-cdh5.14.2.jar:/opt/cloudera/par
cels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-azure.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-datajoin-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-datajoin.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-distcp-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-distcp.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-extras-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-extras.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-gridmix-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-gridmix.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-app-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-app.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-common-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-core-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs-plugins.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-hs.jar:/opt/cloudera/par
cels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-nativetask.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-client-shuffle.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-openstack-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-openstack.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-rumen-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-rumen.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-sls-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-sls.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.14.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/hadoop-streaming.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/ha
mcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/protobuf-java-2.5.0.jar:/o
pt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/okio-1.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/okhttp-2.4.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/zookeeper.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/p
rotobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop-mapreduce/lib/avro.jar: 2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client 
environment:java.library.path=:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001/tmp
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-514.el7.x86_64
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=yarn
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/var/lib/hadoop-yarn
2019-09-24 22:44:21,674 [Driver] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/container_1566100765602_0108_01_000001
2019-09-24 22:44:21,675 [Driver] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=node3:2181 sessionTimeout=90000 watcher=hconnection-0x698a515a0x0, quorum=node3:2181, baseZNode=/hbase
2019-09-24 22:44:21,687 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server node3/10.200.101.133:2181. Will not attempt to authenticate using SASL (unknown error)
2019-09-24 22:44:21,688 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /10.200.101.135:39732, server: node3/10.200.101.133:2181
2019-09-24 22:44:21,692 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server node3/10.200.101.133:2181, sessionid = 0x36ca2ccfed69f76, negotiated timeout = 60000
2019-09-24 22:44:21,752 [Driver] INFO org.apache.phoenix.query.ConnectionQueryServicesImpl - HConnection established. Stacktrace for informational purposes: hconnection-0x698a515a
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.phoenix.util.LogUtil.getCallerStackTrace(LogUtil.java:55)
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:427)
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:267)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2515)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2491)
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2491)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
java.sql.DriverManager.getConnection(DriverManager.java:664)
java.sql.DriverManager.getConnection(DriverManager.java:208)
org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:354)
org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:118)
org.apache.phoenix.spark.SparkSqlContextFunctions.phoenixTableAsDataFrame(SparkSqlContextFunctions.scala:39)
com.fc.phoenixConnectMode$.getMode1(phoenixConnectMode.scala:16)
com.fc.costDay$.main(costDay.scala:113)
com.fc.costDay.main(costDay.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:552)
2019-09-24 22:44:22,305 [Driver] INFO org.apache.hadoop.conf.Configuration.deprecation - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2019-09-24 22:44:22,998 [Driver] INFO org.apache.phoenix.mapreduce.PhoenixInputFormat - UseSelectColumns=true, selectColumnList.size()=86,
selectColumnList=ID,ASSET_ID,ASSET_NAME,ASSET_FIRST_DEGREE_ID,ASSET_FIRST_DEGREE_NAME,ASSET_SECOND_DEGREE_ID,ASSET_SECOND_DEGREE_NAME,GB_DEGREE_ID,GB_DEGREE_NAME,ASSET_USE_FIRST_DEGREE_ID,ASSET_USE_FIRST_DEGREE_NAME,ASSET_USE_SECOND_DEGREE_ID,ASSET_USE_SECOND_DEGREE_NAME,MANAGEMENT_TYPE_ID,MANAGEMENT_TYPE_NAME,ASSET_MODEL,FACTORY_NUMBER,ASSET_COUNTRY_ID,ASSET_COUNTRY_NAME,MANUFACTURER,SUPPLIER,SUPPLIER_TEL,ORIGINAL_VALUE,USE_DEPARTMENT_ID,USE_DEPARTMENT_NAME,USER_ID,USER_NAME,ASSET_LOCATION_OF_PARK_ID,ASSET_LOCATION_OF_PARK_NAME,ASSET_LOCATION_OF_BUILDING_ID,ASSET_LOCATION_OF_BUILDING_NAME,ASSET_LOCATION_OF_ROOM_ID,ASSET_LOCATION_OF_ROOM_NUMBER,PRODUCTION_DATE,ACCEPTANCE_DATE,REQUISITION_DATE,PERFORMANCE_INDEX,ASSET_STATE_ID,ASSET_STATE_NAME,INSPECTION_TYPE_ID,INSPECTION_TYPE_NAME,SEAL_DATE,SEAL_CAUSE,COST_ITEM_ID,COST_ITEM_NAME,ITEM_COMMENTS,UNSEAL_DATE,SCRAP_DATE,PURCHASE_NUMBER,WARRANTY_PERIOD,DEPRECIABLE_LIVES_ID,DEPRECIABLE_LIVES_NAME,MEASUREMENT_UNITS_ID,MEASUREMENT_UNITS_NAME,ANNEX,REMARK,ACCOUNTING_TYPE_ID,ACCOUNTING_TYPE_NAME,SYSTEM_TYPE_ID,SYSTEM_TYPE_NAME,ASSET_ID_PARENT,CLASSIFIED_LEVEL_ID,CLASSIFIED_LEVEL_NAME,ASSET_PICTURE,MILITARY_SPECIAL_CODE,CHECK_CYCLE_ID,CHECK_CYCLE_NAME,CHECK_DATE,CHECK_EFFECTIVE_DATE,CHECK_MODE_ID,CHECK_MODE_NAME,CHECK_DEPARTMENT_ID,CHECK_DEPARTMENT_NAME,RENT_STATUS_ID,RENT_STATUS_NAME,STORAGE_TIME,UPDATE_USER,UPDATE_TIME,IS_ON_PROCESS,IS_DELETED,FIRST_DEPARTMENT_ID,FIRST_DEPARTMENT_NAME,SECOND_DEPARTMENT_ID,SECOND_DEPARTMENT_NAME,CREATE_USER,CREATE_TIME
root
 |-- ID: string (nullable = true)
 |-- FIRST_DEPARTMENT_ID: string (nullable = true)
 |-- ACTUAL_COST: double (nullable = true)
 |-- ORIGINAL_VALUE: double (nullable = true)
 |-- GENERATION_TIME: timestamp (nullable = false)
2019-09-24 22:44:24,895 [Driver] INFO org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Trying to connect to a secure cluster as 2181 with keytab /hbase
2019-09-24 22:44:24,896 [Driver] INFO
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Successful login to secure cluster 2019-09-24 22:44:24,921 [Driver] INFO org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Trying to connect to a secure cluster as 2181 with keytab /hbase 2019-09-24 22:44:24,921 [Driver] INFO org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo - Successful login to secure cluster 2019-09-24 22:44:24,924 [Driver] INFO org.apache.phoenix.mapreduce.PhoenixInputFormat - UseSelectColumns=true, selectColumnList.size()=86, selectColumnList=ID,ASSET_ID,ASSET_NAME,ASSET_FIRST_DEGREE_ID,ASSET_FIRST_DEGREE_NAME,ASSET_SECOND_DEGREE_ID,ASSET_SECOND_DEGREE_NAME,GB_DEGREE_ID,GB_DEGREE_NAME,ASSET_USE_FIRST_DEGREE_ID,ASSET_USE_FIRST_DEGREE_NAME,ASSET_USE_SECOND_DEGREE_ID,ASSET_USE_SECOND_DEGREE_NAME,MANAGEMENT_TYPE_ID,MANAGEMENT_TYPE_NAME,ASSET_MODEL,FACTORY_NUMBER,ASSET_COUNTRY_ID,ASSET_COUNTRY_NAME,MANUFACTURER,SUPPLIER,SUPPLIER_TEL,ORIGINAL_VALUE,USE_DEPARTMENT_ID,USE_DEPARTMENT_NAME,USER_ID,USER_NAME,ASSET_LOCATION_OF_PARK_ID,ASSET_LOCATION_OF_PARK_NAME,ASSET_LOCATION_OF_BUILDING_ID,ASSET_LOCATION_OF_BUILDING_NAME,ASSET_LOCATION_OF_ROOM_ID,ASSET_LOCATION_OF_ROOM_NUMBER,PRODUCTION_DATE,ACCEPTANCE_DATE,REQUISITION_DATE,PERFORMANCE_INDEX,ASSET_STATE_ID,ASSET_STATE_NAME,INSPECTION_TYPE_ID,INSPECTION_TYPE_NAME,SEAL_DATE,SEAL_CAUSE,COST_ITEM_ID,COST_ITEM_NAME,ITEM_COMMENTS,UNSEAL_DATE,SCRAP_DATE,PURCHASE_NUMBER,WARRANTY_PERIOD,DEPRECIABLE_LIVES_ID,DEPRECIABLE_LIVES_NAME,MEASUREMENT_UNITS_ID,MEASUREMENT_UNITS_NAME,ANNEX,REMARK,ACCOUNTING_TYPE_ID,ACCOUNTING_TYPE_NAME,SYSTEM_TYPE_ID,SYSTEM_TYPE_NAME,ASSET_ID_PARENT,CLASSIFIED_LEVEL_ID,CLASSIFIED_LEVEL_NAME,ASSET_PICTURE,MILITARY_SPECIAL_CODE,CHECK_CYCLE_ID,CHECK_CYCLE_NAME,CHECK_DATE,CHECK_EFFECTIVE_DATE,CHECK_MODE_ID,CHECK_MODE_NAME,CHECK_DEPARTMENT_ID,CHECK_DEPARTMENT_NAME,RENT_STATUS_ID,RENT_STATUS_NAME,STORAGE_TIME,UPDATE_USER,UPDATE_TIME,IS_ON_PROCESS,IS_DELETED,FIRST_DEPARTMENT_ID,FIRST_DEPARTMENT_NAME,SECOND_DEPAR
TMENT_ID,SECOND_DEPARTMENT_NAME,CREATE_USER,CREATE_TIME 2019-09-24 22:44:24,926 [Driver] INFO org.apache.phoenix.mapreduce.PhoenixInputFormat - Select Statement: SELECT "ID","0"."ASSET_ID","0"."ASSET_NAME","0"."ASSET_FIRST_DEGREE_ID","0"."ASSET_FIRST_DEGREE_NAME","0"."ASSET_SECOND_DEGREE_ID","0"."ASSET_SECOND_DEGREE_NAME","0"."GB_DEGREE_ID","0"."GB_DEGREE_NAME","0"."ASSET_USE_FIRST_DEGREE_ID","0"."ASSET_USE_FIRST_DEGREE_NAME","0"."ASSET_USE_SECOND_DEGREE_ID","0"."ASSET_USE_SECOND_DEGREE_NAME","0"."MANAGEMENT_TYPE_ID","0"."MANAGEMENT_TYPE_NAME","0"."ASSET_MODEL","0"."FACTORY_NUMBER","0"."ASSET_COUNTRY_ID","0"."ASSET_COUNTRY_NAME","0"."MANUFACTURER","0"."SUPPLIER","0"."SUPPLIER_TEL","0"."ORIGINAL_VALUE","0"."USE_DEPARTMENT_ID","0"."USE_DEPARTMENT_NAME","0"."USER_ID","0"."USER_NAME","0"."ASSET_LOCATION_OF_PARK_ID","0"."ASSET_LOCATION_OF_PARK_NAME","0"."ASSET_LOCATION_OF_BUILDING_ID","0"."ASSET_LOCATION_OF_BUILDING_NAME","0"."ASSET_LOCATION_OF_ROOM_ID","0"."ASSET_LOCATION_OF_ROOM_NUMBER","0"."PRODUCTION_DATE","0"."ACCEPTANCE_DATE","0"."REQUISITION_DATE","0"."PERFORMANCE_INDEX","0"."ASSET_STATE_ID","0"."ASSET_STATE_NAME","0"."INSPECTION_TYPE_ID","0"."INSPECTION_TYPE_NAME","0"."SEAL_DATE","0"."SEAL_CAUSE","0"."COST_ITEM_ID","0"."COST_ITEM_NAME","0"."ITEM_COMMENTS","0"."UNSEAL_DATE","0"."SCRAP_DATE","0"."PURCHASE_NUMBER","0"."WARRANTY_PERIOD","0"."DEPRECIABLE_LIVES_ID","0"."DEPRECIABLE_LIVES_NAME","0"."MEASUREMENT_UNITS_ID","0"."MEASUREMENT_UNITS_NAME","0"."ANNEX","0"."REMARK","0"."ACCOUNTING_TYPE_ID","0"."ACCOUNTING_TYPE_NAME","0"."SYSTEM_TYPE_ID","0"."SYSTEM_TYPE_NAME","0"."ASSET_ID_PARENT","0"."CLASSIFIED_LEVEL_ID","0"."CLASSIFIED_LEVEL_NAME","0"."ASSET_PICTURE","0"."MILITARY_SPECIAL_CODE","0"."CHECK_CYCLE_ID","0"."CHECK_CYCLE_NAME","0"."CHECK_DATE","0"."CHECK_EFFECTIVE_DATE","0"."CHECK_MODE_ID","0"."CHECK_MODE_NAME","0"."CHECK_DEPARTMENT_ID","0"."CHECK_DEPARTMENT_NAME","0"."RENT_STATUS_ID","0"."RENT_STATUS_NAME","0"."STORAGE_TIME","0"."UPDATE_USER","0"."UPDATE_TIME","0
"."IS_ON_PROCESS","0"."IS_DELETED","0"."FIRST_DEPARTMENT_ID","0"."FIRST_DEPARTMENT_NAME","0"."SECOND_DEPARTMENT_ID","0"."SECOND_DEPARTMENT_NAME","0"."CREATE_USER","0"."CREATE_TIME" FROM ASSET_NORMAL 2019-09-24 22:44:25,098 [Driver] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x5baa2ed7 connecting to ZooKeeper ensemble=node3:2181 2019-09-24 22:44:25,098 [Driver] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=node3:2181 sessionTimeout=90000 watcher=hconnection-0x5baa2ed70x0, quorum=node3:2181, baseZNode=/hbase 2019-09-24 22:44:25,099 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server node3/10.200.101.133:2181. Will not attempt to authenticate using SASL (unknown error) 2019-09-24 22:44:25,100 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /10.200.101.135:39740, server: node3/10.200.101.133:2181 2019-09-24 22:44:25,101 [Driver-SendThread(node3:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server node3/10.200.101.133:2181, sessionid = 0x36ca2ccfed69f77, negotiated timeout = 60000 2019-09-24 22:44:25,103 [Driver] INFO org.apache.hadoop.hbase.util.RegionSizeCalculator - Calculating region sizes for table "IDX_ASSET_NORMAL". 
2019-09-24 22:44:25,153 [Driver] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService
2019-09-24 22:44:25,153 [Driver] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x36ca2ccfed69f77
2019-09-24 22:44:25,153 [Driver] INFO org.apache.zookeeper.ZooKeeper - Session: 0x36ca2ccfed69f77 closed
2019-09-24 22:44:25,153 [Driver-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down
2019-09-24 22:44:25,205 [Driver] INFO org.apache.spark.SparkContext - Starting job: count at costDay.scala:139
2019-09-24 22:44:25,222 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Registering RDD 7 (count at costDay.scala:139)
2019-09-24 22:44:25,225 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Got job 0 (count at costDay.scala:139) with 1 output partitions
2019-09-24 22:44:25,225 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: ResultStage 1 (count at costDay.scala:139)
2019-09-24 22:44:25,226 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List(ShuffleMapStage 0)
2019-09-24 22:44:25,227 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List(ShuffleMapStage 0)
2019-09-24 22:44:25,234 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting ShuffleMapStage 0 (MapPartitionsRDD[7] at count at costDay.scala:139), which has no missing parents
2019-09-24 22:44:25,250 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_1 stored as values in memory (estimated size 24.8 KB, free 491.2 MB)
2019-09-24 22:44:25,251 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_1_piece0 stored as bytes in memory (estimated size 9.5 KB, free 491.2 MB)
2019-09-24 22:44:25,252 [dispatcher-event-loop-4] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_1_piece0 in memory on 10.200.101.135:39000 (size: 9.5 KB, free: 491.6 MB)
2019-09-24 22:44:25,253 [dag-scheduler-event-loop] INFO org.apache.spark.SparkContext - Created broadcast 1 from broadcast at DAGScheduler.scala:1004
2019-09-24 22:44:25,257 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[7] at count at costDay.scala:139) (first 15 tasks are for partitions Vector(0))
2019-09-24 22:44:25,258 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Adding task set 0.0 with 1 tasks
2019-09-24 22:44:25,313 [dispatcher-event-loop-1] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0 in stage 0.0 (TID 0, node5, executor 1, partition 0, RACK_LOCAL, 2538 bytes)
2019-09-24 22:44:26,497 [dispatcher-event-loop-6] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_1_piece0 in memory on node5:36256 (size: 9.5 KB, free: 530.0 MB)
2019-09-24 22:44:27,312 [dispatcher-event-loop-10] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_0_piece0 in memory on node5:36256 (size: 29.6 KB, free: 530.0 MB)
2019-09-24 22:44:34,470 [task-result-getter-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 0.0 (TID 0) in 9177 ms on node5 (executor 1) (1/1)
2019-09-24 22:44:34,471 [task-result-getter-0] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Removed TaskSet 0.0, whose tasks have all completed, from pool
2019-09-24 22:44:34,474 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ShuffleMapStage 0 (count at costDay.scala:139) finished in 9.184 s
2019-09-24 22:44:34,474 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - looking for newly runnable stages
2019-09-24 22:44:34,475 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - running: Set()
2019-09-24 22:44:34,475 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - waiting: Set(ResultStage 1)
2019-09-24 22:44:34,475 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - failed: Set()
2019-09-24 22:44:34,477 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting ResultStage 1 (MapPartitionsRDD[10] at count at costDay.scala:139), which has no missing parents
2019-09-24 22:44:34,481 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_2 stored as values in memory (estimated size 25.3 KB, free 491.2 MB)
2019-09-24 22:44:34,482 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_2_piece0 stored as bytes in memory (estimated size 9.8 KB, free 491.2 MB)
2019-09-24 22:44:34,483 [dispatcher-event-loop-24] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_2_piece0 in memory on 10.200.101.135:39000 (size: 9.8 KB, free: 491.6 MB)
2019-09-24 22:44:34,483 [dag-scheduler-event-loop] INFO org.apache.spark.SparkContext - Created broadcast 2 from broadcast at DAGScheduler.scala:1004
2019-09-24 22:44:34,484 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[10] at count at costDay.scala:139) (first 15 tasks are for partitions Vector(0))
2019-09-24 22:44:34,484 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Adding task set 1.0 with 1 tasks
2019-09-24 22:44:34,486 [dispatcher-event-loop-20] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0 in stage 1.0 (TID 1, node5, executor 2, partition 0, NODE_LOCAL, 1999 bytes)
2019-09-24 22:44:34,678 [dispatcher-event-loop-31] INFO org.apache.spark.storage.BlockManagerInfo - Removed broadcast_1_piece0 on 10.200.101.135:39000 in memory (size: 9.5 KB, free: 491.6 MB)
2019-09-24 22:44:34,688 [dispatcher-event-loop-29] INFO org.apache.spark.storage.BlockManagerInfo - Removed broadcast_1_piece0 on node5:36256 in memory (size: 9.5 KB, free: 530.0 MB)
2019-09-24 22:44:34,692 [dispatcher-event-loop-0] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_2_piece0 in memory on node5:43168 (size: 9.8 KB, free: 530.0 MB)
2019-09-24 22:44:35,524 [dispatcher-event-loop-28] INFO org.apache.spark.MapOutputTrackerMasterEndpoint - Asked to send map output locations for shuffle 0 to node5:38628
2019-09-24 22:44:35,528 [map-output-dispatcher-0] INFO org.apache.spark.MapOutputTrackerMaster - Size of output statuses for shuffle 0 is 135 bytes
2019-09-24 22:44:35,929 [task-result-getter-1] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 1.0 (TID 1) in 1444 ms on node5 (executor 2) (1/1)
2019-09-24 22:44:35,930 [task-result-getter-1] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Removed TaskSet 1.0, whose tasks have all completed, from pool
2019-09-24 22:44:35,930 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 1 (count at costDay.scala:139) finished in 1.446 s
2019-09-24 22:44:35,934 [Driver] INFO org.apache.spark.scheduler.DAGScheduler - Job 0 finished: count at costDay.scala:139, took 10.729155 s
50902
2019-09-24 22:44:36,039 [Driver] INFO org.apache.spark.SparkContext - Starting job: describe at costDay.scala:141
2019-09-24 22:44:36,039 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Registering RDD 14 (describe at costDay.scala:141)
2019-09-24 22:44:36,040 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Got job 1 (describe at costDay.scala:141) with 1 output partitions
2019-09-24 22:44:36,040 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: ResultStage 3 (describe at costDay.scala:141)
2019-09-24 22:44:36,040 [dag-scheduler-event-loop] INFO
org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List(ShuffleMapStage 2) 2019-09-24 22:44:36,040 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List(ShuffleMapStage 2) 2019-09-24 22:44:36,041 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting ShuffleMapStage 2 (MapPartitionsRDD[14] at describe at costDay.scala:141), which has no missing parents 2019-09-24 22:44:36,043 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_3 stored as values in memory (estimated size 27.2 KB, free 491.2 MB) 2019-09-24 22:44:36,044 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_3_piece0 stored as bytes in memory (estimated size 10.4 KB, free 491.2 MB) 2019-09-24 22:44:36,045 [dispatcher-event-loop-2] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_3_piece0 in memory on 10.200.101.135:39000 (size: 10.4 KB, free: 491.6 MB) 2019-09-24 22:44:36,046 [dag-scheduler-event-loop] INFO org.apache.spark.SparkContext - Created broadcast 3 from broadcast at DAGScheduler.scala:1004 2019-09-24 22:44:36,046 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 2 (MapPartitionsRDD[14] at describe at costDay.scala:141) (first 15 tasks are for partitions Vector(0)) 2019-09-24 22:44:36,046 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Adding task set 2.0 with 1 tasks 2019-09-24 22:44:36,047 [dispatcher-event-loop-12] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0 in stage 2.0 (TID 2, node5, executor 1, partition 0, RACK_LOCAL, 2538 bytes) 2019-09-24 22:44:36,062 [dispatcher-event-loop-14] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_3_piece0 in memory on node5:36256 (size: 10.4 KB, free: 530.0 MB) 2019-09-24 22:44:41,011 [task-result-getter-2] INFO 
org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 2.0 (TID 2) in 4964 ms on node5 (executor 1) (1/1) 2019-09-24 22:44:41,011 [task-result-getter-2] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Removed TaskSet 2.0, whose tasks have all completed, from pool 2019-09-24 22:44:41,011 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ShuffleMapStage 2 (describe at costDay.scala:141) finished in 4.964 s 2019-09-24 22:44:41,012 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - looking for newly runnable stages 2019-09-24 22:44:41,012 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - running: Set() 2019-09-24 22:44:41,012 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - waiting: Set(ResultStage 3) 2019-09-24 22:44:41,012 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - failed: Set() 2019-09-24 22:44:41,012 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting ResultStage 3 (MapPartitionsRDD[18] at describe at costDay.scala:141), which has no missing parents 2019-09-24 22:44:41,014 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_4 stored as values in memory (estimated size 28.7 KB, free 491.1 MB) 2019-09-24 22:44:41,016 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_4_piece0 stored as bytes in memory (estimated size 11.0 KB, free 491.1 MB) 2019-09-24 22:44:41,016 [dispatcher-event-loop-19] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_4_piece0 in memory on 10.200.101.135:39000 (size: 11.0 KB, free: 491.6 MB) 2019-09-24 22:44:41,017 [dag-scheduler-event-loop] INFO org.apache.spark.SparkContext - Created broadcast 4 from broadcast at DAGScheduler.scala:1004 2019-09-24 22:44:41,017 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 1 missing tasks from 
ResultStage 3 (MapPartitionsRDD[18] at describe at costDay.scala:141) (first 15 tasks are for partitions Vector(0))
2019-09-24 22:44:41,017 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Adding task set 3.0 with 1 tasks
2019-09-24 22:44:41,018 [dispatcher-event-loop-21] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0 in stage 3.0 (TID 3, node5, executor 1, partition 0, NODE_LOCAL, 1999 bytes)
2019-09-24 22:44:41,038 [dispatcher-event-loop-22] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_4_piece0 in memory on node5:36256 (size: 11.0 KB, free: 530.0 MB)
2019-09-24 22:44:41,073 [dispatcher-event-loop-18] INFO org.apache.spark.MapOutputTrackerMasterEndpoint - Asked to send map output locations for shuffle 1 to node5:38630
2019-09-24 22:44:41,073 [map-output-dispatcher-1] INFO org.apache.spark.MapOutputTrackerMaster - Size of output statuses for shuffle 1 is 135 bytes
2019-09-24 22:44:41,193 [task-result-getter-3] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 3.0 (TID 3) in 175 ms on node5 (executor 1) (1/1)
2019-09-24 22:44:41,193 [task-result-getter-3] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Removed TaskSet 3.0, whose tasks have all completed, from pool
2019-09-24 22:44:41,194 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 3 (describe at costDay.scala:141) finished in 0.177 s
2019-09-24 22:44:41,194 [Driver] INFO org.apache.spark.scheduler.DAGScheduler - Job 1 finished: describe at costDay.scala:141, took 5.155182 s
+-------+--------------------+
|summary|      ORIGINAL_VALUE|
+-------+--------------------+
|  count|               50902|
|   mean|2.425387213401862E12|
| stddev|5.472018460839137E14|
|    min|          -7970934.0|
|    max|1.234567890123456...|
+-------+--------------------+
2019-09-24 22:44:41,255 [Driver] INFO org.apache.spark.SparkContext - Starting job: show at costDay.scala:142
2019-09-24 22:44:41,255
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Got job 2 (show at costDay.scala:142) with 1 output partitions
2019-09-24 22:44:41,256 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: ResultStage 4 (show at costDay.scala:142)
2019-09-24 22:44:41,256 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
2019-09-24 22:44:41,256 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
2019-09-24 22:44:41,256 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting ResultStage 4 (MapPartitionsRDD[22] at show at costDay.scala:142), which has no missing parents
2019-09-24 22:44:41,258 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_5 stored as values in memory (estimated size 23.6 KB, free 491.1 MB)
2019-09-24 22:44:41,259 [dag-scheduler-event-loop] INFO org.apache.spark.storage.MemoryStore - Block broadcast_5_piece0 stored as bytes in memory (estimated size 9.0 KB, free 491.1 MB)
2019-09-24 22:44:41,260 [dispatcher-event-loop-20] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_5_piece0 in memory on 10.200.101.135:39000 (size: 9.0 KB, free: 491.6 MB)
2019-09-24 22:44:41,260 [dag-scheduler-event-loop] INFO org.apache.spark.SparkContext - Created broadcast 5 from broadcast at DAGScheduler.scala:1004
2019-09-24 22:44:41,260 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 1 missing tasks from ResultStage 4 (MapPartitionsRDD[22] at show at costDay.scala:142) (first 15 tasks are for partitions Vector(0))
2019-09-24 22:44:41,260 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Adding task set 4.0 with 1 tasks
2019-09-24 22:44:41,262 [dispatcher-event-loop-26] INFO org.apache.spark.scheduler.TaskSetManager - Starting task 0.0 in stage 4.0 (TID 4, node5, executor 1,
partition 0, RACK_LOCAL, 2549 bytes)
2019-09-24 22:44:41,273 [dispatcher-event-loop-30] INFO org.apache.spark.storage.BlockManagerInfo - Added broadcast_5_piece0 in memory on node5:36256 (size: 9.0 KB, free: 530.0 MB)
2019-09-24 22:44:41,502 [task-result-getter-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 0.0 in stage 4.0 (TID 4) in 241 ms on node5 (executor 1) (1/1)
2019-09-24 22:44:41,502 [task-result-getter-0] INFO org.apache.spark.scheduler.cluster.YarnClusterScheduler - Removed TaskSet 4.0, whose tasks have all completed, from pool
2019-09-24 22:44:41,502 [dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 4 (show at costDay.scala:142) finished in 0.241 s
2019-09-24 22:44:41,503 [Driver] INFO org.apache.spark.scheduler.DAGScheduler - Job 2 finished: show at costDay.scala:142, took 0.247657 s
+--------------------------------+-------------------+---------------------+--------------+-----------------------+
|ID                              |FIRST_DEPARTMENT_ID|ACTUAL_COST          |ORIGINAL_VALUE|GENERATION_TIME        |
+--------------------------------+-------------------+---------------------+--------------+-----------------------+
|d25bb550a290457382c175b0e57c0982|1149564326588321792|0.0010410958904109589|2.0           |2019-09-24 22:44:24.643|
|492016e2f7ec4cd18615c164c92c6c6d|1149564326584127489|5.205479452054794E-4 |1.0           |2019-09-24 22:44:24.643|
|1d138a7401bd493da12f8f8323e9dee0|1149564326588321710|0.00353972602739726  |34.0          |2019-09-24 22:44:24.643|
|09718925d34b4a099e09a30a0621ded8|1149564326588321710|0.002891643835616438 |11.11         |2019-09-24 22:44:24.643|
|d5cfd5e898464130b71530d74b43e9d1|1149564326584127489|3135.8751712328767   |9638690.0     |2019-09-24 22:44:24.643|
|6b39ac96b8734103b2413520d3195ee6|1149564326584127489|1744.2413835616437   |6701559.0     |2019-09-24 22:44:24.643|
|8d20d0abd04d49cea3e52d9ca67e39da|1149569393202696195|0.22175342465753425  |852.0         |2019-09-24 22:44:24.643|
|66ae7e7c7a104cea99615358e12c03b0|1149569393202696195|1.0410958904109588   |2000.0        |2019-09-24 22:44:24.643|
|d49b0324bbf14b70adefe8b1d9163db2|1149569393202696195|1.0410958904109588   |2000.0        |2019-09-24 22:44:24.643|
|d4d701514a2a425e8192acf47bb57f9b|1149569393202696195|0.032534246575342464 |100.0         |2019-09-24 22:44:24.643|
|d6a016c618c1455ca0e2c7d73ba947ac|1149569393202696195|0.6506849315068494   |2000.0        |2019-09-24 22:44:24.643|
|5dfa3be825464ddd98764b2790720fae|1149569393202696195|147.91532534246576   |454645.0      |2019-09-24 22:44:24.643|
|6e5653ef4aaa4c03bcd00fbeb1e6811d|1149569393202696195|0.1952054794520548   |600.0         |2019-09-24 22:44:24.643|
|32bd2654082645cba35527d50e0d52f9|1149569393202696195|0.6506849315068494   |2000.0        |2019-09-24 22:44:24.643|
|8ed4424408bc458dbe200acffe5733bf|1149564326584127488|5.205479452054794E-4 |1.0           |2019-09-24 22:44:24.643|
|1b2faa31f139461488847e77eacd794a|1149564326584127488|33499.042109589034   |6.4353423E7   |2019-09-24 22:44:24.643|
|f398245c9ccc4760a5eb3251db3680bf|1149564326584127488|33499.042109589034   |6.4353423E7   |2019-09-24 22:44:24.643|
|2696de9733d247e5bf88573244f36ba2|1149564326584127488|0.011452054794520548 |22.0          |2019-09-24 22:44:24.643|
|9c8cfad3d4334b37a7b9beb56b528c22|1149569976173203457|0.06506849315068493  |200.0         |2019-09-24 22:44:24.643|
|3e2721b79e754a798d0be940ae011d72|1149569976173203457|0.004001712328767123 |12.3          |2019-09-24 22:44:24.643|
+--------------------------------+-------------------+---------------------+--------------+-----------------------+
only showing top 20 rows
2019-09-24 22:44:41,507 [Driver] INFO org.apache.spark.deploy.yarn.ApplicationMaster - Final app status: SUCCEEDED, exitCode: 0
2019-09-24 22:44:41,510 [Thread-4] INFO org.apache.spark.SparkContext - Invoking stop() from shutdown hook
2019-09-24 22:44:41,593 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/static/sql,null}
2019-09-24 22:44:41,593 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped
o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
2019-09-24 22:44:41,593 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/SQL,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/api,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/static,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/executors/json,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/executors,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/environment/json,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/environment,null}
2019-09-24 22:44:41,594 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/storage/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/storage,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/stages,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
2019-09-24 22:44:41,595 [Thread-4] INFO org.spark-project.jetty.server.handler.ContextHandler - stopped o.s.j.s.ServletContextHandler{/jobs,null}
2019-09-24 22:44:41,647 [Thread-4] INFO org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://10.200.101.135:35379
2019-09-24 22:44:41,654 [dispatcher-event-loop-0] INFO org.apache.spark.deploy.yarn.YarnAllocator - Driver requested a total number of 0 executor(s).
2019-09-24 22:44:41,654 [Thread-4] INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend - Shutting down all executors
2019-09-24 22:44:41,655 [dispatcher-event-loop-28] INFO org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend - Asking each executor to shut down
2019-09-24 22:44:41,659 [dispatcher-event-loop-12] INFO org.apache.spark.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
2019-09-24 22:44:41,665 [Thread-4] INFO org.apache.spark.storage.MemoryStore - MemoryStore cleared
2019-09-24 22:44:41,665 [Thread-4] INFO org.apache.spark.storage.BlockManager - BlockManager stopped
2019-09-24 22:44:41,667 [Thread-4] INFO org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
2019-09-24 22:44:41,670 [dispatcher-event-loop-10] INFO org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint - OutputCommitCoordinator stopped!
2019-09-24 22:44:41,672 [Thread-4] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
2019-09-24 22:44:41,674 [Thread-4] INFO org.apache.spark.deploy.yarn.ApplicationMaster - Unregistering ApplicationMaster with SUCCEEDED
2019-09-24 22:44:41,676 [sparkDriverActorSystem-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-09-24 22:44:41,679 [sparkDriverActorSystem-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-09-24 22:44:41,681 [Thread-4] INFO org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
2019-09-24 22:44:41,707 [sparkDriverActorSystem-akka.actor.default-dispatcher-2] INFO Remoting - Remoting shut down
2019-09-24 22:44:41,707 [sparkDriverActorSystem-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-09-24 22:44:41,795 [Thread-4] INFO org.apache.spark.deploy.yarn.ApplicationMaster - Deleting staging directory .sparkStaging/application_1566100765602_0108
2019-09-24 22:44:41,928 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Shutdown hook called
2019-09-24 22:44:41,929 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data3/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-d8f0e665-4669-4be0-8400-5bf17354b336
2019-09-24 22:44:41,929 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data0/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-71145df1-e470-4900-9929-5910a3debd0f
2019-09-24 22:44:41,929 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data5/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-86035f79-d688-4aae-85ff-dd18cfe50e94
2019-09-24 22:44:41,929 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data1/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-c8c7cf5d-b28a-4108-9d80-74d3566daf2b
2019-09-24 22:44:41,930 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data4/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-bad98790-1201-49ac-9a6d-104caec5a190
2019-09-24 22:44:41,930 [Thread-4] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /cetc38/bigdata/dfs/data2/yarn/nm/usercache/admin/appcache/application_1566100765602_0108/spark-e5dc46c1-8452-4051-9f5b-d63699be75bc
Appendix: dependency files
phoenixConnectMode.scala :
package com.fc

import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.phoenix.spark._

object phoenixConnectMode {

  private val zookeeper = "node3:2181"

  def getMode1(sqlContext: SQLContext, tableName: String, columns: Array[String]): DataFrame = {
    val configuration = new Configuration()
    configuration.set("phoenix.schema.isNamespaceMappingEnabled", "true")
    configuration.set("phoenix.schema.mapSystemTablesToNamespace", "true")
    configuration.set("hbase.zookeeper.quorum", zookeeper)
    val df = sqlContext.phoenixTableAsDataFrame(tableName, columns, conf = configuration)
    df
  }
}
Note that configuration.set("phoenix.schema.isNamespaceMappingEnabled", "true") must be set to true or false to match the actual environment.
timeUtil.scala :
package com.fc

import java.text.SimpleDateFormat
import java.util.Calendar
import org.joda.time.DateTime

object timeUtil {

  final val ONE_DAY_Millis = 3600 * 24 * 1000

  final val SECOND_TIME_FORMAT = "yyyy-MM-dd HH:mm:ss"
  final val MILLISECOND_TIME_FORMAT = "yyyy-MM-dd HH:mm:ss.SSS"
  final val DAY_DATE_FORMAT_ONE = "yyyy-MM-dd"
  final val DAY_DATE_FORMAT_TWO = "yyyyMMdd"
  final val MONTH_DATE_FORMAT = "yyyy-MM"

  /**
    * Date string to timestamp.
    * @param dateStr date string
    * @param pattern date pattern
    * @return
    */
  def convertDateStr2TimeStamp(dateStr: String, pattern: String): Long = {
    new SimpleDateFormat(pattern).parse(dateStr).getTime
  }

  /**
    * Timestamp to date string.
    * @param timestamp timestamp
    * @param pattern date pattern
    * @return
    */
  def convertTimeStamp2DateStr(timestamp: Long, pattern: String): String = {
    new DateTime(timestamp).toString(pattern)
  }

  /**
    * Date string plus a number of days, as a timestamp.
    * @param dateStr date string
    * @param pattern date pattern
    * @param days number of days to add
    * @return
    */
  def dateStrAddDays2TimeStamp(dateStr: String, pattern: String, days: Int): Long = {
    convertDateStr2Date(dateStr, pattern).plusDays(days).toDate.getTime
  }

  /**
    * Date string plus a number of years, as a timestamp.
    * @param dateStr date string
    * @param pattern date pattern
    * @param years number of years to add
    * @return
    */
  def dateStrAddYears2TimeStamp(dateStr: String, pattern: String, years: Int): Long = {
    convertDateStr2Date(dateStr, pattern).plusYears(years).toDate.getTime
  }

  def dateStrAddYears2Str(dateStr: String, pattern: String, years: Int): String = {
    val t = convertDateStr2Date(dateStr, pattern).plusYears(years).toDate.getTime
    convertTimeStamp2DateStr(t, pattern)
  }

  /**
    * Date string to DateTime.
    * @param dateStr date string
    * @param pattern date pattern
    * @return
    */
  def convertDateStr2Date(dateStr: String, pattern: String): DateTime = {
    new DateTime(new SimpleDateFormat(pattern).parse(dateStr))
  }

  /**
    * Number of months between two dates.
    * @param stDate start date
    * @param endDate end date
    * @return
    */
  def getMonthSpace(stDate: String, endDate: String): Int = {
    val c1 = Calendar.getInstance()
    val c2 = Calendar.getInstance()
    c1.setTime(new SimpleDateFormat(MONTH_DATE_FORMAT).parse(stDate))
    c2.setTime(new SimpleDateFormat(MONTH_DATE_FORMAT).parse(endDate))
    val month1 = c2.get(Calendar.MONTH) - c1.get(Calendar.MONTH)
    val month2 = (c2.get(Calendar.YEAR) - c1.get(Calendar.YEAR)) * 12
    Math.abs(month1 + month2)
  }

  /**
    * Number of days between two dates.
    * @param stDate start date
    * @param endDate end date
    * @return
    */
  def getDaySpace(stDate: String, endDate: String): Long = {
    val c1 = Calendar.getInstance()
    val c2 = Calendar.getInstance()
    c1.setTime(new SimpleDateFormat(SECOND_TIME_FORMAT).parse(stDate))
    c2.setTime(new SimpleDateFormat(SECOND_TIME_FORMAT).parse(endDate))
    val difference = c2.getTimeInMillis - c1.getTimeInMillis
    val days = difference / ONE_DAY_Millis
    Math.abs(days)
  }

  def getLastYearDateStr(dateStr: String): String = {
    val e = timeUtil.dateStrAddYears2TimeStamp(dateStr, "yyyy", -1)
    val f = timeUtil.convertTimeStamp2DateStr(e, "yyyy")
    f + "-12-31 23:59:59.999"
  }

  def getCurDateStr: String = {
    convertTimeStamp2DateStr(System.currentTimeMillis(), timeUtil.SECOND_TIME_FORMAT)
  }

  def lastYearEnd: String = {
    timeUtil.getLastYearDateStr(getCurDateStr)
  }

  def getCurMonthDays: Int = {
    val a = Calendar.getInstance()
    a.set(Calendar.DATE, 1)
    a.roll(Calendar.DATE, -1)
    val days = a.get(Calendar.DATE)
    days
  }
}
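As a quick sanity check of the day-difference logic in getDaySpace above, here is a minimal, dependency-free sketch (plain java.text only, no joda-time; the object name DaySpanDemo is illustrative and not part of the project):

```scala
import java.text.SimpleDateFormat

object DaySpanDemo {
  // Same constant as timeUtil.ONE_DAY_Millis, widened to Long
  final val OneDayMillis: Long = 3600L * 24 * 1000

  // Whole-day difference between two "yyyy-MM-dd HH:mm:ss" strings,
  // mirroring timeUtil.getDaySpace (millisecond delta, truncated to days)
  def daySpan(stDate: String, endDate: String): Long = {
    val fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
    val diff = fmt.parse(endDate).getTime - fmt.parse(stDate).getTime
    math.abs(diff / OneDayMillis)
  }

  def main(args: Array[String]): Unit = {
    println(daySpan("2019-09-01 00:00:00", "2019-09-24 00:00:00")) // 23 days
  }
}
```

Note that, like the original, the division truncates: partial days are dropped, so a 23.5-day gap still reports 23.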
pom.xml :
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.fc</groupId>
    <artifactId>scalapp</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.6.0-cdh5.14.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.10</artifactId>
            <version>1.6.0-cdh5.14.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-spark</artifactId>
            <version>1.2.0-cdh5.14.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-spark</artifactId>
            <version>4.14.0-cdh5.14.2</version>
        </dependency>
        <dependency>
            <groupId>com.lmax</groupId>
            <artifactId>disruptor</artifactId>
            <version>3.3.8</version>
        </dependency>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>4.14.0-cdh5.14.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.6.0-cdh5.14.2</version>
        </dependency>
        <!-- MySQL driver -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.43</version>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>mvnrepository</id>
            <name>mvnrepository</name>
            <url>http://mvnrepository.com/</url>
            <layout>default</layout>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
        <repository>
            <id>hortonworks</id>
            <name>hortonworks</name>
            <url>http://repo.hortonworks.com/content/repositories/releases/</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>hortonworks</id>
            <name>hortonworks</name>
            <url>http://repo.hortonworks.com/content/repositories/releases/</url>
        </pluginRepository>
    </pluginRepositories>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <createDependencyReducedPom>false</createDependencyReducedPom>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
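With the shaded jar produced by mvn package from the pom above, the two execution modes in this post's title look roughly like the following sketch (the jar name follows the pom coordinates; the memory and executor numbers are placeholders to adjust for your cluster, not values taken from this run):

```shell
# local mode: the master is hard-coded via setMaster("local") in costDay,
# so a plain submit with just the main class is enough
spark-submit --class com.fc.costDay ./scalapp-1.0-SNAPSHOT.jar

# yarn cluster mode: remove or override setMaster in the code first, then:
spark-submit \
  --class com.fc.costDay \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --num-executors 2 \
  ./scalapp-1.0-SNAPSHOT.jar
```

When the job is scheduled through Oozie/Hue instead, these same values (master, deploy mode, main class, jar) map onto the corresponding fields of the Oozie Spark action.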
References:
https://www.jianshu.com/p/f336f7e5f31b
https://blog.csdn.net/adorechen/article/details/78746363