Spark Notes: Learning Configuration from the Official Docs (Part 2)
### Spark SQL
Running the `SET -v` command will show the entire list of the SQL configuration.
**Scala**

```scala
// spark is an existing SparkSession
spark.sql("SET -v").show(numRows = 200, truncate = false)
```

**Java**

```java
// spark is an existing SparkSession
spark.sql("SET -v").show(200, false);
```

**Python**

```python
# spark is an existing SparkSession
spark.sql("SET -v").show(n=200, truncate=False)
```

**R**

```r
sparkR.session()
properties <- sql("SET -v")
showDF(properties, numRows = 200, truncate = FALSE)
```
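Beyond listing everything with `SET -v`, individual SQL configuration values can also be read and changed at runtime through the session's `conf` interface. The sketch below assumes an existing SparkSession named `spark`; `spark.sql.shuffle.partitions` and the value 400 are chosen purely for illustration.

```scala
// spark is an existing SparkSession
// Read the current value of one SQL configuration entry.
val current = spark.conf.get("spark.sql.shuffle.partitions")
println(s"spark.sql.shuffle.partitions = $current")

// Override it for this session only; subsequent queries pick up the new value.
spark.conf.set("spark.sql.shuffle.partitions", "400")
```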
### Spark Streaming
Property Name | Default | Meaning |
---|---|---|
`spark.streaming.backpressure.enabled` | false | Enables or disables Spark Streaming's internal backpressure mechanism (since 1.5). This enables Spark Streaming to control the receiving rate based on the current batch scheduling delays and processing times so that the system receives only as fast as it can process. Internally, this dynamically sets the maximum receiving rate of receivers. This rate is upper bounded by the values of `spark.streaming.receiver.maxRate` and `spark.streaming.kafka.maxRatePerPartition` if they are set (see below). |
`spark.streaming.backpressure.initialRate` | not set | The initial maximum receiving rate at which each receiver will receive data for the first batch when the backpressure mechanism is enabled. |
`spark.streaming.blockInterval` | 200ms | Interval at which data received by Spark Streaming receivers is chunked into blocks of data before storing them in Spark. Minimum recommended: 50 ms. See the performance tuning section in the Spark Streaming programming guide for more details. |
`spark.streaming.receiver.maxRate` | not set | Maximum rate (number of records per second) at which each receiver will receive data. Effectively, each stream will consume at most this number of records per second. Setting this configuration to 0 or a negative number puts no limit on the rate. See the deployment guide in the Spark Streaming programming guide for more details. |
`spark.streaming.receiver.writeAheadLog.enable` | false | Enable write-ahead logs for receivers. All the input data received through receivers will be saved to write-ahead logs so that it can be recovered after driver failures. See the deployment guide in the Spark Streaming programming guide for more details. |
`spark.streaming.unpersist` | true | Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from Spark's memory. The raw input data received by Spark Streaming is also automatically cleared. Setting this to false allows the raw data and persisted RDDs to remain accessible outside the streaming application, as they will not be cleared automatically, but it comes at the cost of higher memory usage in Spark. |
`spark.streaming.stopGracefullyOnShutdown` | false | If true, Spark shuts down the StreamingContext gracefully on JVM shutdown rather than immediately. |
`spark.streaming.kafka.maxRatePerPartition` | not set | Maximum rate (number of records per second) at which data will be read from each Kafka partition when using the new Kafka direct stream API. See the Kafka Integration guide for more details. |
`spark.streaming.kafka.maxRetries` | 1 | Maximum number of consecutive retries the driver will make in order to find the latest offsets on the leader of each partition (a default value of 1 means that the driver will make a maximum of 2 attempts). Only applies to the new Kafka direct stream API. |
`spark.streaming.ui.retainedBatches` | 1000 | How many batches the Spark Streaming UI and status APIs remember before garbage collecting. |
`spark.streaming.driver.writeAheadLog.closeFileAfterWrite` | false | Whether to close the file after writing a write-ahead log record on the driver. Set this to true when you want to use S3 (or any file system that does not support flushing) for the metadata WAL on the driver. |
`spark.streaming.receiver.writeAheadLog.closeFileAfterWrite` | false | Whether to close the file after writing a write-ahead log record on the receivers. Set this to true when you want to use S3 (or any file system that does not support flushing) for the data WAL on the receivers. |
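As a concrete illustration of how the properties above are applied, the following sketch sets a few of them on the SparkConf before the StreamingContext is created. It is not taken from the official guide: the application name, batch interval, and rate values are placeholders, and any property in the table can be set the same way (or passed with `--conf` to spark-submit).

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical application settings; the values below are illustrative placeholders.
val conf = new SparkConf()
  .setAppName("streaming-config-example")
  .set("spark.streaming.backpressure.enabled", "true")      // let backpressure adapt the ingest rate
  .set("spark.streaming.backpressure.initialRate", "1000")  // cap the very first batch
  .set("spark.streaming.receiver.maxRate", "5000")          // hard upper bound per receiver
  .set("spark.streaming.stopGracefullyOnShutdown", "true")  // finish in-flight batches on JVM shutdown

// 5-second batch interval, chosen only for illustration.
val ssc = new StreamingContext(conf, Seconds(5))
```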
### SparkR
Property Name | Default | Meaning |
---|---|---|
`spark.r.numRBackendThreads` | 2 | Number of threads used by RBackend to handle RPC calls from the SparkR package. |
`spark.r.command` | Rscript | Executable for executing R scripts in cluster modes for both driver and workers. |
`spark.r.driver.command` | spark.r.command | Executable for executing R scripts in client modes for the driver. Ignored in cluster modes. |
`spark.r.shell.command` | R | Executable for executing the sparkR shell in client modes for the driver. Ignored in cluster modes. It is the same as the environment variable SPARKR_DRIVER_R, but takes precedence over it. spark.r.shell.command is used for the sparkR shell while spark.r.driver.command is used for running R scripts. |
`spark.r.backendConnectionTimeout` | 6000 | Connection timeout set by the R process on its connection to RBackend, in seconds. |
`spark.r.heartBeatInterval` | 100 | Interval for heartbeats sent from the SparkR backend to the R process to prevent connection timeout. |
### GraphX
Property Name | Default | Meaning |
---|---|---|
`spark.graphx.pregel.checkpointInterval` | -1 | Checkpoint interval for the graph and messages in Pregel. It is used to avoid a StackOverflowError caused by long lineage chains after many iterations. Checkpointing is disabled by default. |
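For example, a Pregel-based job that runs many iterations might enable periodic checkpointing as in the sketch below. This is not from the official docs; the interval, master, and checkpoint directory are illustrative assumptions. Checkpointing also requires a checkpoint directory to be set on the SparkContext.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative settings only: the local master and /tmp checkpoint directory are assumptions.
val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("pregel-checkpoint-example")
  .set("spark.graphx.pregel.checkpointInterval", "10")  // checkpoint graph and messages every 10 iterations

val sc = new SparkContext(conf)
// The periodic checkpointer needs a checkpoint directory to write to.
sc.setCheckpointDir("/tmp/spark-checkpoints")
```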
### Deploy
Property Name | Default | Meaning |
---|---|---|
`spark.deploy.recoveryMode` | NONE | The recovery mode setting used to recover submitted Spark jobs (cluster mode) when the cluster master fails and is relaunched. This is only applicable for cluster mode when running with Standalone or Mesos. |
`spark.deploy.zookeeper.url` | None | When `spark.deploy.recoveryMode` is set to ZOOKEEPER, this configuration is used to set the ZooKeeper URL to connect to. |
`spark.deploy.zookeeper.dir` | None | When `spark.deploy.recoveryMode` is set to ZOOKEEPER, this configuration is used to set the ZooKeeper directory to store recovery state. |
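These recovery settings apply to the standalone Master daemon itself, so they are usually passed to the daemon JVM (for example through `SPARK_DAEMON_JAVA_OPTS` in `conf/spark-env.sh`) rather than set inside an application. A minimal sketch, with placeholder ZooKeeper hosts and directory:

```bash
# conf/spark-env.sh on each standalone Master (hosts and path are placeholders)
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
-Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181 \
-Dspark.deploy.zookeeper.dir=/spark-recovery"
```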
### Cluster Managers
Each cluster manager in Spark has additional configuration options. Configurations can be found on the pages for each mode:
#### [YARN](running-on-yarn.html#configuration)
#### [Mesos](running-on-mesos.html#configuration)
#### [Standalone Mode](spark-standalone.html#cluster-launch-scripts)
### Environment Variables
Certain Spark settings can be configured through environment variables, which are read from the `conf/spark-env.sh` script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). In Standalone and Mesos modes, this file can give machine-specific information such as hostnames. It is also sourced when running local Spark applications or submission scripts.
Note that `conf/spark-env.sh` does not exist by default when Spark is installed. However, you can copy `conf/spark-env.sh.template` to create it. Make sure you make the copy executable.
The following variables can be set in `spark-env.sh`:
Environment Variable | Meaning |
---|---|
`JAVA_HOME` | Location where Java is installed (if it's not on your default PATH). |
`PYSPARK_PYTHON` | Python binary executable to use for PySpark in both driver and workers (default is python2.7 if available, otherwise python). The property spark.pyspark.python takes precedence if it is set. |
`PYSPARK_DRIVER_PYTHON` | Python binary executable to use for PySpark in the driver only (default is PYSPARK_PYTHON). The property spark.pyspark.driver.python takes precedence if it is set. |
`SPARKR_DRIVER_R` | R binary executable to use for the SparkR shell (default is R). The property spark.r.shell.command takes precedence if it is set. |
`SPARK_LOCAL_IP` | IP address of the machine to bind to. |
`SPARK_PUBLIC_DNS` | Hostname your Spark program will advertise to other machines. |
In addition to the above, there are also options for setting up the Spark [standalone cluster scripts](spark-standalone.html#cluster-launch-scripts), such as the number of cores to use on each machine and the maximum memory. Since `spark-env.sh` is a shell script, some of these can be set programmatically; for example, you might compute `SPARK_LOCAL_IP` by looking up the IP of a specific network interface.
Note: when running Spark on YARN in `cluster` mode, environment variables need to be set using the `spark.yarn.appMasterEnv.[EnvironmentVariableName]` property in your `conf/spark-defaults.conf` file. Environment variables set in `spark-env.sh` will not be reflected in the YARN Application Master process in `cluster` mode. See the [YARN-related Spark properties](running-on-yarn.html#spark-properties) for more information.
### Configuring Logging
Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can configure it by adding a `log4j.properties` file in the `conf` directory. One way to get started is to copy the existing `log4j.properties.template`.
### Overriding configuration directory
To specify a configuration directory other than the default `SPARK_HOME/conf`, you can set `SPARK_CONF_DIR`. Spark will use the configuration files (`spark-defaults.conf`, `spark-env.sh`, `log4j.properties`, etc.) from this directory.
### Inheriting Hadoop Cluster Configuration
If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that should be included on Spark's classpath:
* `hdfs-site.xml`, which provides default behaviors for the HDFS client.
* `core-site.xml`, which sets the default filesystem name.
The location of these configuration files varies across Hadoop versions, but a common location is inside `/etc/hadoop/conf`. Some tools create configurations on the fly, but offer a mechanism to download copies of them. To make these files visible to Spark, set `HADOOP_CONF_DIR` in `$SPARK_HOME/conf/spark-env.sh` to a location containing the configuration files.
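To tie the pieces above together, a `spark-env.sh` might look like the following minimal sketch. All values are placeholders chosen for illustration, not recommendations.

```bash
# conf/spark-env.sh -- illustrative placeholder values only
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # assumed Java installation path
export SPARK_LOCAL_IP=192.168.1.10             # bind to a specific network interface
export HADOOP_CONF_DIR=/etc/hadoop/conf        # directory holding hdfs-site.xml and core-site.xml
export PYSPARK_PYTHON=/usr/bin/python3         # Python executable used by PySpark
```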