CDH Spark Command-Line Test
Part 1

Reference: https://www.cnblogs.com/bovenson/p/5801536.html

First hand the test files over to the hdfs user and move them into /var/lib/hadoop-hdfs (the hdfs user's home directory on CDH), so they are readable when working as hdfs:
[root@node-1 test]# chown hdfs:hdfs /root/test/*
[root@node-1 test]# chown hdfs:hdfs /root/test
[root@node-1 test]# cd /var/lib/hadoop-hdfs/
[root@node-1 hadoop-hdfs]# ls
[root@node-1 hadoop-hdfs]# mkdir /var/lib/hadoop-hdfs/test
[root@node-1 hadoop-hdfs]# mv /root/test/* /var/lib/hadoop-hdfs/test
[root@node-1 hadoop-hdfs]# su hdfs
Attempting to create directory /var/lib/hadoop-hdfs/perl5
[hdfs@node-1 ~]$ hadoop fs -put test/*.txt /user/tt/me.txt
$ hadoop fs -ls -R /user/tt
-rw-r--r-- 3 hdfs supergroup 56302 2018-01-30 20:59 /user/tt/me.txt
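Before launching the shell, the upload can be sanity-checked with the same hadoop fs tool; an optional extra check (paths as above, not part of the original run):

$ hadoop fs -du -h /user/tt/me.txt          # size should match the local file
$ hadoop fs -cat /user/tt/me.txt | head -3  # peek at the first lines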
# spark-shell --master spark://node-1:7077 --executor-memory 100M --driver-memory 1000M
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_162)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc (master = spark://node-1:7077, app id = app-20180130211525-0002).
SQL context available as sqlContext.

scala> val lines = sc.textFile("hdfs://node-1:8020/user/tt/me.txt")
lines: org.apache.spark.rdd.RDD[String] = hdfs://node-1:8020/user/tt/me.txt MapPartitionsRDD[1] at textFile at <console>:27

scala> lines.count()
res0: Long = 10000

scala> lines.first()
res1: String = 4

scala>
Spark line count from the command line
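The same session extends naturally to slightly richer checks. A minimal sketch, assuming the same file path and a Spark 1.x shell where sc is already defined (the blank-line filter and take are illustrative additions, not part of the original run):

scala> val lines = sc.textFile("hdfs://node-1:8020/user/tt/me.txt")
scala> lines.count()                             // total line count (10000 above)
scala> lines.filter(_.trim.nonEmpty).count()     // count only non-blank lines
scala> lines.take(3).foreach(println)            // peek at the first three lines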
Part 2

Note:
# spark-shell --help
Usage: ./bin/spark-shell [options]

Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of local jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--exclude-packages Comma-separated list of groupId:artifactId, to exclude while
resolving the dependencies provided in --packages to avoid
dependency conflicts.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working
directory of each executor.
--conf PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
--help, -h Show this help message and exit
--verbose, -v Print additional debug output
--version, Print the version of current Spark

Spark standalone with cluster deploy mode only:
--driver-cores NUM Cores for driver (Default: 1).

Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.

Spark standalone and Mesos only:
--total-executor-cores NUM Total cores for all executors.

Spark standalone and YARN only:
--executor-cores NUM Number of cores per executor. (Default: 1 in YARN mode,
or all available cores on the worker in standalone mode)

YARN-only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
--num-executors NUM Number of executors to launch (Default: 2).
--archives ARCHIVES Comma separated list of archives to be extracted into the
working directory of each executor.
--principal PRINCIPAL Principal to be used to login to KDC, while running on
secure HDFS.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above. This keytab will be copied to
the node running the Application Master via the Secure
Distributed Cache, for renewing the login tickets and the
delegation tokens periodically.
spark-shell --help
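Of these options, Part 1 used --master, --executor-memory, and --driver-memory. Whether such settings actually took effect can be confirmed from inside the shell itself; a minimal sketch, assuming a Spark 1.x shell (spark.executor.memory and spark.driver.memory are the standard configuration keys these flags map to):

scala> sc.master                               // e.g. spark://node-1:7077
scala> sc.getConf.get("spark.executor.memory") // e.g. 100m, set by --executor-memory
scala> sc.getConf.get("spark.driver.memory")   // set by --driver-memory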
上面图片时将字体文件放入到fonts文件夹内, 里面有一个fonts.css文件,将字体文件声明好, 然后像下面图片一样,在另外一个css内@import引入,(当然,也可以直接将声明和引用放在一个c ...