[Keras] Install and environment setting
Documentation: https://keras.io/
1. Using Anaconda to manage Python packages is a wise choice.
conda update conda
conda update anaconda
conda update --all
conda install mingw libpython
pip install --upgrade --no-deps theano
pip install keras
2. Test Theano
Run in Python:
import theano
theano.test()
If import theano fails, the Theano installation did not succeed. When I ran theano.test() I got the following error:
ERROR: Failure: ImportError (No module named nose_parameterized)
Installing nose_parameterized fixes it; run in cmd:
pip install nose_parameterized
The full test suite takes a very long time to finish; a quicker check is shown below.
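Instead of the full suite, a quicker sanity check is to compile and run a tiny Theano function (a minimal sketch using only the standard Theano API; if this compiles, the toolchain installed via mingw/libpython above is working):
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix('x')                           # symbolic 2-D inputs
y = T.dmatrix('y')
f = theano.function([x, y], x + y)           # forces Theano to compile the graph
print(f(np.ones((2, 2)), np.ones((2, 2))))   # expect a 2x2 matrix of 2.0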
The default backend is TensorFlow:
un@un-UX303UB$ cat ~/.keras/keras.json
{
"image_dim_ordering": "tf",
"backend": "tensorflow",
"epsilon": 1e-,
"floatx": "float32"
}
un@un-UX303UB$ vim ~/.keras/keras.json
un@un-UX303UB$ cat ~/.keras/keras.json
{
"image_dim_ordering": "th",
"backend": "theano",
"epsilon": 1e-,
"floatx": "float32"
}
The configuration can be verified in code; see: [Keras] Develop Neural Network With Keras Step-By-Step.
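A minimal in-code check of the active backend (a sketch for Keras 1.x, which is the version this keras.json format corresponds to):
from keras import backend as K      # the import itself also prints the backend in use

print(K.backend())                  # 'theano' or 'tensorflow'
print(K.image_dim_ordering())       # 'th' or 'tf'
print(K.epsilon(), K.floatx())      # should match ~/.keras/keras.json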
Docker + Spark + Keras
Ref: http://blog.csdn.net/cyh_24/article/details/49683221
Phase 1:
1. Install Docker from the Software Center.
2. SequenceIQ provides a Docker image with Spark pre-installed.
- Pull the image: docker pull sequenceiq/spark:1.5.1
- Start a container: sudo docker run -it sequenceiq/spark:1.5.1 bash
bash-4.1# cd /usr/local/spark
bash-4.1# cp conf/spark-env.sh.template conf/spark-env.sh
bash-4.1# vi conf/spark-env.sh    # append the following at the end:
export SPARK_LOCAL_IP=<your IP address>
export SPARK_MASTER_IP=<your IP address>
- Start the master: bash-4.1# ./sbin/start-master.sh
- Start a worker: bash-4.1# ./sbin/start-slave.sh spark://localhost:7077
Submit an example application to test the setup:
bash-4.1# ./bin/spark-submit examples/src/main/python/pi.py
16/12/30 19:29:02 INFO spark.SparkContext: Running Spark version 1.5.1
16/12/30 19:29:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/30 19:29:03 INFO spark.SecurityManager: Changing view acls to: root
16/12/30 19:29:03 INFO spark.SecurityManager: Changing modify acls to: root
16/12/30 19:29:03 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/12/30 19:29:03 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/12/30 19:29:03 INFO Remoting: Starting remoting
16/12/30 19:29:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@127.0.0.1:34787]
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'sparkDriver' on port 34787.
16/12/30 19:29:04 INFO spark.SparkEnv: Registering MapOutputTracker
16/12/30 19:29:04 INFO spark.SparkEnv: Registering BlockManagerMaster
16/12/30 19:29:04 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-7e33d1ab-0b51-4f73-82c0-49d97c3f3c0d
16/12/30 19:29:04 INFO storage.MemoryStore: MemoryStore started with capacity 530.3 MB
16/12/30 19:29:04 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/httpd-b150abe5-c149-4aa9-81fc-6a365f389cf4
16/12/30 19:29:04 INFO spark.HttpServer: Starting HTTP Server
16/12/30 19:29:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/12/30 19:29:04 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41503
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'HTTP file server' on port 41503.
16/12/30 19:29:04 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/12/30 19:29:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/12/30 19:29:04 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/12/30 19:29:04 INFO ui.SparkUI: Started SparkUI at http://127.0.0.1:4040
16/12/30 19:29:04 INFO util.Utils: Copying /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py to /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/userFiles-3cf70968-52fd-49e9-b35d-5eb5f029ec7a/pi.py
16/12/30 19:29:04 INFO spark.SparkContext: Added file file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py at file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py with timestamp 1483144144747
16/12/30 19:29:04 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/12/30 19:29:04 INFO executor.Executor: Starting executor ID driver on host localhost
16/12/30 19:29:05 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44598.
16/12/30 19:29:05 INFO netty.NettyBlockTransferService: Server created on 44598
16/12/30 19:29:05 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/12/30 19:29:05 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:44598 with 530.3 MB RAM, BlockManagerId(driver, localhost, 44598)
16/12/30 19:29:05 INFO storage.BlockManagerMaster: Registered BlockManager
16/12/30 19:29:05 INFO spark.SparkContext: Starting job: reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Got job 0 (reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39) with 2 output partitions
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Final stage: ResultStage 0(reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39)
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Missing parents: List()
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39), which has no missing parents
16/12/30 19:29:05 INFO storage.MemoryStore: ensureFreeSpace(4136) called with curMem=0, maxMem=556038881
16/12/30 19:29:05 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.0 KB, free 530.3 MB)
16/12/30 19:29:05 INFO storage.MemoryStore: ensureFreeSpace(2760) called with curMem=4136, maxMem=556038881
16/12/30 19:29:05 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.7 KB, free 530.3 MB)
16/12/30 19:29:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:44598 (size: 2.7 KB, free: 530.3 MB)
16/12/30 19:29:05 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39)
16/12/30 19:29:05 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/12/30 19:29:05 WARN scheduler.TaskSetManager: Stage 0 contains a task of very large size (365 KB). The maximum recommended task size is 100 KB.
16/12/30 19:29:05 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 374548 bytes)
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 502351 bytes)
16/12/30 19:29:06 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
16/12/30 19:29:06 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
16/12/30 19:29:06 INFO executor.Executor: Fetching file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py with timestamp 1483144144747
16/12/30 19:29:06 INFO util.Utils: /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py has been previously copied to /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/userFiles-3cf70968-52fd-49e9-b35d-5eb5f029ec7a/pi.py
16/12/30 19:29:06 INFO python.PythonRunner: Times: total = 306, boot = 164, init = 7, finish = 135
16/12/30 19:29:06 INFO python.PythonRunner: Times: total = 309, boot = 162, init = 11, finish = 136
16/12/30 19:29:06 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 998 bytes result sent to driver
16/12/30 19:29:06 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 998 bytes result sent to driver
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 419 ms on localhost (1/2)
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 450 ms on localhost (2/2)
16/12/30 19:29:06 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/12/30 19:29:06 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39) finished in 0.464 s
16/12/30 19:29:06 INFO scheduler.DAGScheduler: Job 0 finished: reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39, took 0.668884 s
Pi is roughly 3.146120
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/12/30 19:29:06 INFO ui.SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
16/12/30 19:29:06 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/12/30 19:29:06 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/12/30 19:29:06 INFO storage.MemoryStore: MemoryStore cleared
16/12/30 19:29:06 INFO storage.BlockManager: BlockManager stopped
16/12/30 19:29:06 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/12/30 19:29:06 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/12/30 19:29:06 INFO spark.SparkContext: Successfully stopped SparkContext
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/12/30 19:29:07 INFO util.ShutdownHookManager: Shutdown hook called
16/12/30 19:29:07 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae
Congratulations, you have just run a Spark application!
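Beyond the bundled examples, your own PySpark script can be submitted to this standalone master in the same way. A minimal sketch (run inside the container; the master URL matches the spark://localhost:7077 used when starting the worker above):
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName('sanity-check').setMaster('spark://localhost:7077')
sc = SparkContext(conf=conf)

# distribute a trivial computation across the workers
print(sc.parallelize(list(range(1000))).map(lambda i: i * i).sum())
sc.stop()
Save it as, say, check.py and submit it with ./bin/spark-submit check.py, just like pi.py above.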
3. Install elephas:
unsw@unsw-UX303UB$ pip install elephas
Collecting elephas
Downloading elephas-0.3.tar.gz
Requirement already satisfied: keras in /usr/local/anaconda3/lib/python3.5/site-packages (from elephas)
Collecting hyperas (from elephas)
Downloading hyperas-0.3.tar.gz
Requirement already satisfied: pyyaml in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Requirement already satisfied: theano in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Requirement already satisfied: six in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Collecting hyperopt (from hyperas->elephas)
Downloading hyperopt-0.1.tar.gz (98kB)
100% |████████████████████████████████| 102kB 1.7MB/s
Collecting entrypoints (from hyperas->elephas)
Downloading entrypoints-0.2.2-py2.py3-none-any.whl
Requirement already satisfied: jupyter in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: nbformat in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: nbconvert in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: numpy>=1.7.1 in /usr/local/anaconda3/lib/python3.5/site-packages (from theano->keras->elephas)
Requirement already satisfied: scipy>=0.11 in /usr/local/anaconda3/lib/python3.5/site-packages (from theano->keras->elephas)
Requirement already satisfied: nose in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperopt->hyperas->elephas)
Collecting pymongo (from hyperopt->hyperas->elephas)
Downloading pymongo-3.4.0-cp35-cp35m-manylinux1_x86_64.whl (359kB)
100% |████████████████████████████████| 368kB 1.5MB/s
Requirement already satisfied: networkx in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperopt->hyperas->elephas)
Collecting future (from hyperopt->hyperas->elephas)
Downloading future-0.16.0.tar.gz (824kB)
100% |████████████████████████████████| 829kB 1.5MB/s
Requirement already satisfied: decorator>=3.4.0 in /usr/local/anaconda3/lib/python3.5/site-packages (from networkx->hyperopt->hyperas->elephas)
Building wheels for collected packages: elephas, hyperas, hyperopt, future
Running setup.py bdist_wheel for elephas ... done
Stored in directory: /home/unsw/.cache/pip/wheels/b6/fe/74/8e079673e5048a583b547a0dc5d83a7fea883933472da1cefb
Running setup.py bdist_wheel for hyperas ... done
Stored in directory: /home/unsw/.cache/pip/wheels/85/7d/da/b417ee5e31b62d51c75afa6eb2ada9ccf8b7aff2de71d82c1b
Running setup.py bdist_wheel for hyperopt ... done
Stored in directory: /home/unsw/.cache/pip/wheels/4b/0f/9d/1166e48523d3bf7478800f250b0fceae31ac6a08b8a7cca820
Running setup.py bdist_wheel for future ... done
Stored in directory: /home/unsw/.cache/pip/wheels/c2/50/7c/0d83b4baac4f63ff7a765bd16390d2ab43c93587fac9d6017a
Successfully built elephas hyperas hyperopt future
Installing collected packages: pymongo, future, hyperopt, entrypoints, hyperas, elephas
Successfully installed elephas-0.3 entrypoints-0.2.2 future-0.16.0 hyperas-0.3 hyperopt-0.1 pymongo-3.4.0
Phase 2:
√ If your machine has multiple CPUs (say 24):
You can start a single Docker container and simply use Spark together with elephas to train a CNN in parallel across the 24 CPUs (see the sketch below).
√ If your machine has multiple GPUs (say 4):
You can start 4 Docker containers and edit ~/.theanorc inside each one to pin it to a specific GPU, so that the 4 GPUs compute in parallel. (CUDA must be installed separately; a sample ~/.theanorc is shown below.)
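For the CPU case, data-parallel training with elephas roughly follows the pattern below. This is a sketch based on the elephas README of that era: the toy model and data are placeholders, the master URL assumes the standalone cluster started above, and the exact SparkModel constructor and train/fit signatures vary between elephas versions, so check the README for the version installed here (0.3).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from pyspark import SparkConf, SparkContext
from elephas.spark_model import SparkModel
from elephas.utils.rdd_utils import to_simple_rdd

# toy data and model, only to exercise the distributed training pipeline
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, (1000, 1))

model = Sequential()
model.add(Dense(32, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')

conf = SparkConf().setAppName('elephas-demo').setMaster('spark://localhost:7077')
sc = SparkContext(conf=conf)

rdd = to_simple_rdd(sc, x_train, y_train)     # distribute the training data as an RDD
spark_model = SparkModel(sc, model, frequency='epoch',
                         mode='asynchronous', num_workers=4)
spark_model.train(rdd, nb_epoch=5, batch_size=32, verbose=0, validation_split=0.1)
sc.stop()
For the GPU case, each container's ~/.theanorc would pin Theano to one device, roughly like this (old-style Theano device names, assuming CUDA is set up):
[global]
# use gpu1, gpu2, gpu3 in the other containers
device = gpu0
floatX = float32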