
Startup

First, rename these three configuration files.
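The three files are shown in a screenshot in the original; presumably they are the templates under conf/ that the rest of this post configures (slaves, spark-env.sh, and the log file mentioned near the end). A sketch, assuming the stock Spark 2.2.0 template names:

    cd /opt/modules/spark-2.2.0-bin/conf
    # drop the .template suffix so Spark picks these files up
    mv slaves.template slaves
    mv spark-env.sh.template spark-env.sh
    mv log4j.properties.template log4j.properties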

Configure slaves:
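The slaves file lists one worker hostname per line. Presumably all three nodes run workers here (an assumption based on the hostnames used throughout this post):

    bigdata-pro01.kfk.com
    bigdata-pro02.kfk.com
    bigdata-pro03.kfk.com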

Configure spark-env.sh:

    export JAVA_HOME=/opt/modules/jdk1.8.0_60
    export SCALA_HOME=/opt/modules/scala-2.11.

    SPARK_MASTER_HOST=bigdata-pro02.kfk.com
    SPARK_MASTER_PORT=7077
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_WORKER_CORES=
    SPARK_WORKER_MEMORY=1g
    SPARK_WORKER_PORT=
    SPARK_WORKER_WEBUI_PORT=

    SPARK_CONF_DIR=/opt/modules/spark-2.2.0-bin/conf

Distribute the Spark configuration to the other nodes, then adjust any node-specific settings on each:

    scp -r spark-2.2.0-bin bigdata-pro01.kfk.com:/opt/modules/
    scp -r spark-2.2.0-bin bigdata-pro03.kfk.com:/opt/modules/

http://bigdata-pro02.kfk.com:8080/

Open this address in a browser to see the master web UI.

Client test

    bin/spark-shell --master spark://bigdata-pro02.kfk.com:7077

Run a job:
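The job itself is only shown as a screenshot in the original; as a stand-in, here is a minimal word count you can paste at the spark-shell prompt (the input path is hypothetical):

    scala> val lines = sc.textFile("file:///opt/datas/stu.txt")
    scala> lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _).collect()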

Click into the job on the web UI to have a look.

    bin/spark-submit --master spark://bigdata-pro02.kfk.com:7077 --deploy-mode cluster /opt/jars/sparkStu.jar file:///opt/datas/stu.txt

You can see it fails with an error!

We should be using YARN mode instead.

Start YARN:
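Assuming the usual Hadoop layout under /opt/modules (the Hadoop home path is an assumption), something like:

    # on the ResourceManager node
    cd /opt/modules/hadoop    # adjust to your actual Hadoop home
    sbin/start-yarn.sh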

http://bigdata-pro01.kfk.com:8088/cluster

So we add HADOOP_CONF_DIR to the Spark configuration.
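That is, one extra line in spark-env.sh so Spark can find the YARN and HDFS client settings (the Hadoop path here is an assumption, following the /opt/modules layout above):

    # point Spark at the Hadoop client configuration
    export HADOOP_CONF_DIR=/opt/modules/hadoop/etc/hadoop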

Do the same on the other two nodes.

Run it again; it still fails:

    [kfk@bigdata-pro02 spark-2.2.0-bin]$ bin/spark-shell --master yarn --deploy-mode client
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    // :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    // :: WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
    // :: ERROR SparkContext: Error initializing SparkContext.
    org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
    at scala.Option.getOrElse(Option.scala:)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:)
    at $line3.$read$$iw$$iw.<init>(<console>:)
    at $line3.$read$$iw.<init>(<console>:)
    at $line3.$read.<init>(<console>:)
    at $line3.$read$.<init>(<console>:)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.$print$lzycompute(<console>:)
    at $line3.$eval$.$print(<console>:)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
    at java.lang.reflect.Method.invoke(Method.java:)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$.apply(IMain.scala:)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$.apply(IMain.scala:)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$.apply$mcV$sp(SparkILoop.scala:)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$.apply(SparkILoop.scala:)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$.apply(SparkILoop.scala:)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$.apply$mcZ$sp(ILoop.scala:)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$.apply(ILoop.scala:)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$.apply(ILoop.scala:)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:)
    at org.apache.spark.repl.Main$.doMain(Main.scala:)
    at org.apache.spark.repl.Main$.main(Main.scala:)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
    at java.lang.reflect.Method.invoke(Method.java:)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    // :: WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
    // :: WARN MetricsSystem: Stopping a MetricsSystem that is not running
    org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$.apply(SparkSession.scala:)
    at scala.Option.getOrElse(Option.scala:)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:)
    ... elided
    <console>:: error: not found: value spark
    import spark.implicits._
           ^
    <console>:: error: not found: value spark
    import spark.sql
           ^
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/ '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
          /_/

    Using Scala version 2.11. (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
    Type in expressions to have them evaluated.
    Type :help for more information.

Let's modify this configuration file, yarn-site.xml.

Add these two properties:

    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

Make the same change to yarn-site.xml on the other two nodes; I won't repeat it here. Alternatively, you can simply distribute node 2's copy of the file to the other two nodes.

But stop YARN before distributing it.
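Roughly (the Hadoop home path is an assumption, as above):

    sbin/stop-yarn.sh
    # push node 2's yarn-site.xml to the other two nodes
    scp etc/hadoop/yarn-site.xml bigdata-pro01.kfk.com:/opt/modules/hadoop/etc/hadoop/
    scp etc/hadoop/yarn-site.xml bigdata-pro03.kfk.com:/opt/modules/hadoop/etc/hadoop/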

One more detail deserves attention: this error actually has many possible causes, and insufficient memory is only one of them. Another detail that is easy to miss is that the JDK version must match the one configured in spark-env.sh.

Pay particular attention to these two files inside the Hadoop configuration.

I'm illustrating on one node here; modify the Hadoop configuration files on the other two nodes the same way. Our Hadoop was originally set up with JDK 1.7, but Spark now uses 1.8, so every Hadoop configuration file that references the JDK must be changed to 1.8.
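The two files are presumably hadoop-env.sh and yarn-env.sh (an assumption; the post shows them only as screenshots). In each, point JAVA_HOME at the 1.8 JDK:

    # etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh
    export JAVA_HOME=/opt/modules/jdk1.8.0_60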

Now start YARN again.

Then start Spark. (Since Spark is fairly memory-hungry, I switched the Spark master over to node 1, to which I had allocated 4 GB of memory.)

Remember to update the spark-env.sh file accordingly (on all three nodes).
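Concretely, the master host line now points at node 1:

    # spark-env.sh, on all three nodes
    SPARK_MASTER_HOST=bigdata-pro01.kfk.com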

Do a group-by sum.
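The original shows this step as a screenshot. A minimal sketch at the spark-shell prompt, assuming each line of stu.txt is a space-separated name/score pair (the file format is an assumption):

    scala> val stu = sc.textFile("file:///opt/datas/stu.txt")
    scala> stu.map(_.split(" ")).map(a => (a(0), a(1).toInt)).reduceByKey(_ + _).collect()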

Exit the shell.

Now run it in spark-submit mode:

You can see it fails:

    [kfk@bigdata-pro01 spark-2.2.0-bin]$ bin/spark-submit --class com.spark.test.Test --master yarn --deploy-mode cluster /opt/jars/sparkStu.jar file:///opt/datas/stu.txt
    // :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    // :: INFO Client: Requesting a new application from cluster with NodeManagers
    // :: INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
    // :: INFO Client: Will allocate AM container, with MB memory including MB overhead
    // :: INFO Client: Setting up container launch context for our AM
    // :: INFO Client: Setting up the launch environment for our AM container
    // :: INFO Client: Preparing resources for our AM container
    // :: WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
    // :: INFO Client: Uploading resource file:/tmp/spark-edc616a1-10bf--9d7c-91a2430844f8/__spark_libs__6050155581866596916.zip -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0003/__spark_libs__6050155581866596916.zip
    // :: INFO Client: Uploading resource file:/opt/jars/sparkStu.jar -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0003/sparkStu.jar
    // :: INFO Client: Uploading resource file:/tmp/spark-edc616a1-10bf--9d7c-91a2430844f8/__spark_conf__6419799297331143395.zip -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0003/__spark_conf__.zip
    // :: INFO SecurityManager: Changing view acls to: kfk
    // :: INFO SecurityManager: Changing modify acls to: kfk
    // :: INFO SecurityManager: Changing view acls groups to:
    // :: INFO SecurityManager: Changing modify acls groups to:
    // :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(kfk); groups with view permissions: Set(); users with modify permissions: Set(kfk); groups with modify permissions: Set()
    // :: INFO Client: Submitting application application_1521167375207_0003 to ResourceManager
    // :: INFO YarnClientImpl: Submitted application application_1521167375207_0003
    // :: INFO Client: Application report for application_1521167375207_0003 (state: ACCEPTED)
    // :: INFO Client:
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -
    queue: default
    start time:
    final status: UNDEFINED
    tracking URL: http://bigdata-pro01.kfk.com:8088/proxy/application_1521167375207_0003/
    user: kfk
    ... (the ACCEPTED report line repeats while the application waits to be scheduled)
    // :: INFO Client: Application report for application_1521167375207_0003 (state: FAILED)
    // :: INFO Client:
    client token: N/A
    diagnostics: Application application_1521167375207_0003 failed times due to AM Container for appattempt_1521167375207_0003_000002 exited with exitCode: -
    For more detailed output, check application tracking page: http://bigdata-pro01.kfk.com:8088/proxy/application_1521167375207_0003/ Then, click on links to logs of each attempt.
    Diagnostics: File does not exist: hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0003/__spark_libs__6050155581866596916.zip
    java.io.FileNotFoundException: File does not exist: hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0003/__spark_libs__6050155581866596916.zip
    at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
    at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:)
    at org.apache.hadoop.yarn.util.FSDownload.access$(FSDownload.java:)
    at org.apache.hadoop.yarn.util.FSDownload$.run(FSDownload.java:)
    at org.apache.hadoop.yarn.util.FSDownload$.run(FSDownload.java:)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:)
    at java.util.concurrent.FutureTask.run(FutureTask.java:)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
    at java.util.concurrent.FutureTask.run(FutureTask.java:)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
    at java.lang.Thread.run(Thread.java:)

    Failing this attempt. Failing the application.
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -
    queue: default
    start time:
    final status: FAILED
    tracking URL: http://bigdata-pro01.kfk.com:8088/cluster/app/application_1521167375207_0003
    user: kfk
    Exception in thread "main" org.apache.spark.SparkException: Application application_1521167375207_0003 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
    at java.lang.reflect.Method.invoke(Method.java:)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    // :: INFO ShutdownHookManager: Shutdown hook called
    // :: INFO ShutdownHookManager: Deleting directory /tmp/spark-edc616a1-10bf--9d7c-91a2430844f8
    [kfk@bigdata-pro01 spark-2.2.0-bin]$

Open the sparkStu source in IDEA

and change this part.
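The exact change is only shown as a screenshot in the original. A common fix for this symptom is to stop hardcoding the master in the code and let spark-submit supply it; purely as an assumption, the session construction might end up looking like this:

    import org.apache.spark.sql.SparkSession

    // no .master("local") here: in yarn-cluster mode the master
    // must come from spark-submit, not be hardcoded in the source
    val spark = SparkSession.builder()
      .appName("sparkStu")
      .getOrCreate()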

After rebuilding the jar, upload it again. (To be safe, I uploaded it to all three nodes, which is probably overkill.)

First get rid of the old jar.

Now upload the new one.
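Taken together, roughly (the jar path follows the spark-submit command above):

    # remove the stale jar, then copy the rebuilt one to every node
    rm /opt/jars/sparkStu.jar
    scp sparkStu.jar bigdata-pro01.kfk.com:/opt/jars/
    scp sparkStu.jar bigdata-pro02.kfk.com:/opt/jars/
    scp sparkStu.jar bigdata-pro03.kfk.com:/opt/jars/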

Run it again.

This time it succeeds:

    [kfk@bigdata-pro01 spark-2.2.0-bin]$ bin/spark-submit --class com.spark.test.Test --master yarn --deploy-mode cluster /opt/jars/sparkStu.jar file:///opt/datas/stu.txt
    // :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    // :: INFO Client: Requesting a new application from cluster with NodeManagers
    // :: INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
    // :: INFO Client: Will allocate AM container, with MB memory including MB overhead
    // :: INFO Client: Setting up container launch context for our AM
    // :: INFO Client: Setting up the launch environment for our AM container
    // :: INFO Client: Preparing resources for our AM container
    // :: WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
    // :: INFO Client: Uploading resource file:/tmp/spark-43f281a9-034a-424b--d6d00addfff6/__spark_libs__8012713420631475441.zip -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0004/__spark_libs__8012713420631475441.zip
    // :: INFO Client: Uploading resource file:/opt/jars/sparkStu.jar -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0004/sparkStu.jar
    // :: INFO Client: Uploading resource file:/tmp/spark-43f281a9-034a-424b--d6d00addfff6/__spark_conf__8776342149712582279.zip -> hdfs://ns/user/kfk/.sparkStaging/application_1521167375207_0004/__spark_conf__.zip
    // :: INFO SecurityManager: Changing view acls to: kfk
    // :: INFO SecurityManager: Changing modify acls to: kfk
    // :: INFO SecurityManager: Changing view acls groups to:
    // :: INFO SecurityManager: Changing modify acls groups to:
    // :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(kfk); groups with view permissions: Set(); users with modify permissions: Set(kfk); groups with modify permissions: Set()
    // :: INFO Client: Submitting application application_1521167375207_0004 to ResourceManager
    // :: INFO YarnClientImpl: Submitted application application_1521167375207_0004
    // :: INFO Client: Application report for application_1521167375207_0004 (state: ACCEPTED)
    // :: INFO Client:
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -
    queue: default
    start time:
    final status: UNDEFINED
    tracking URL: http://bigdata-pro01.kfk.com:8088/proxy/application_1521167375207_0004/
    user: kfk
    ... (the ACCEPTED report line repeats until the AM starts)
    // :: INFO Client: Application report for application_1521167375207_0004 (state: RUNNING)
    // :: INFO Client:
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: 192.168.86.152
    ApplicationMaster RPC port:
    queue: default
    start time:
    final status: UNDEFINED
    tracking URL: http://bigdata-pro01.kfk.com:8088/proxy/application_1521167375207_0004/
    user: kfk
    ... (the RUNNING report line repeats while the job executes)
    // :: INFO Client: Application report for application_1521167375207_0004 (state: FINISHED)
    // :: INFO Client:
    client token: N/A
    diagnostics: N/A
    ApplicationMaster host: 192.168.86.152
    ApplicationMaster RPC port:
    queue: default
    start time:
    final status: SUCCEEDED
    tracking URL: http://bigdata-pro01.kfk.com:8088/proxy/application_1521167375207_0004/
    user: kfk
    // :: INFO ShutdownHookManager: Shutdown hook called
    // :: INFO ShutdownHookManager: Deleting directory /tmp/spark-43f281a9-034a-424b--d6d00addfff6
    [kfk@bigdata-pro01 spark-2.2.0-bin]$

One more note: the reason the terminal prints this many log lines is that we modified this file.
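That file is presumably conf/log4j.properties (one of the templates renamed at the start; an assumption, since the original shows it only as a screenshot). Setting the root level to INFO is what makes the client echo every application report:

    # conf/log4j.properties (assumed)
    log4j.rootCategory=INFO, console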
