Documentation: https://keras.io/


1. Using Anaconda to manage your Python packages is a wise choice.

conda update conda
conda update anaconda
conda update --all
conda install mingw libpython
pip install --upgrade --no-deps theano
pip install keras
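
After installing, a quick import check confirms that both libraries are visible to the Anaconda Python (a minimal sketch; version numbers will differ per install):

# Both imports should succeed without errors.
import theano
import keras

print('theano:', theano.__version__)
print('keras:', keras.__version__)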

2. Test Theano

Run in Python:

import theano
theano.test()

If import theano fails, Theano was not installed successfully. When I ran theano.test() I hit the following error:

ERROR: Failure: ImportError (No module named nose_parameterized)

Installing nose_parameterized fixes it; run in cmd:

pip install nose_parameterized

The test run takes a long time and seems to go on forever.

By default the backend is TensorFlow:

un@un-UX303UB$ cat ~/.keras/keras.json
{
  "image_dim_ordering": "tf",
  "backend": "tensorflow",
  "epsilon": 1e-,
  "floatx": "float32"
}
un@un-UX303UB$ vim ~/.keras/keras.json
un@un-UX303UB$ cat ~/.keras/keras.json
{
  "image_dim_ordering": "th",
  "backend": "theano",
  "epsilon": 1e-,
  "floatx": "float32"
}

You can verify the configuration with code; see: [Keras] Develop Neural Network With Keras Step-By-Step
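
A quick way to check which backend Keras actually picked up (a minimal sketch; image_dim_ordering() is the Keras 1.x API that matches the config key above):

from keras import backend as K

print(K.backend())              # expect 'theano' after editing keras.json
print(K.image_dim_ordering())   # expect 'th'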


Docker + Spark + Keras

Ref: http://blog.csdn.net/cyh_24/article/details/49683221

Stage 1:

1. Install Docker from the Software Center.

2. Sequenceiq provides a Docker image with Spark pre-installed:

  • Download: docker pull sequenceiq/spark:1.5.1
  • Install: sudo docker run -it sequenceiq/spark:1.5.1 bash
bash-4.1# cd /usr/local/spark
bash-4.1# cp conf/spark-env.sh.template conf/spark-env.sh
bash-4.1# vi conf/spark-env.sh    # append at the end:
export SPARK_LOCAL_IP=<your IP address>
export SPARK_MASTER_IP=<your IP address>
  • Start the master: bash-4.1# ./sbin/start-master.sh
  • Start a slave: bash-4.1# ./sbin/start-slave.sh spark://localhost:7077

Submit an application to test it:

bash-4.1# ./bin/spark-submit examples/src/main/python/pi.py
16/12/30 19:29:02 INFO spark.SparkContext: Running Spark version 1.5.1
16/12/30 19:29:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/30 19:29:03 INFO spark.SecurityManager: Changing view acls to: root
16/12/30 19:29:03 INFO spark.SecurityManager: Changing modify acls to: root
16/12/30 19:29:03 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/12/30 19:29:03 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/12/30 19:29:03 INFO Remoting: Starting remoting
16/12/30 19:29:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@127.0.0.1:34787]
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'sparkDriver' on port 34787.
16/12/30 19:29:04 INFO spark.SparkEnv: Registering MapOutputTracker
16/12/30 19:29:04 INFO spark.SparkEnv: Registering BlockManagerMaster
16/12/30 19:29:04 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-7e33d1ab-0b51-4f73-82c0-49d97c3f3c0d
16/12/30 19:29:04 INFO storage.MemoryStore: MemoryStore started with capacity 530.3 MB
16/12/30 19:29:04 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/httpd-b150abe5-c149-4aa9-81fc-6a365f389cf4
16/12/30 19:29:04 INFO spark.HttpServer: Starting HTTP Server
16/12/30 19:29:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/12/30 19:29:04 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41503
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'HTTP file server' on port 41503.
16/12/30 19:29:04 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/12/30 19:29:04 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/12/30 19:29:04 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/12/30 19:29:04 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/12/30 19:29:04 INFO ui.SparkUI: Started SparkUI at http://127.0.0.1:4040
16/12/30 19:29:04 INFO util.Utils: Copying /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py to /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/userFiles-3cf70968-52fd-49e9-b35d-5eb5f029ec7a/pi.py
16/12/30 19:29:04 INFO spark.SparkContext: Added file file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py at file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py with timestamp 1483144144747
16/12/30 19:29:04 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/12/30 19:29:04 INFO executor.Executor: Starting executor ID driver on host localhost
16/12/30 19:29:05 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44598.
16/12/30 19:29:05 INFO netty.NettyBlockTransferService: Server created on 44598
16/12/30 19:29:05 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/12/30 19:29:05 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:44598 with 530.3 MB RAM, BlockManagerId(driver, localhost, 44598)
16/12/30 19:29:05 INFO storage.BlockManagerMaster: Registered BlockManager
16/12/30 19:29:05 INFO spark.SparkContext: Starting job: reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Got job 0 (reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39) with 2 output partitions
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Final stage: ResultStage 0(reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39)
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Missing parents: List()
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39), which has no missing parents
16/12/30 19:29:05 INFO storage.MemoryStore: ensureFreeSpace(4136) called with curMem=0, maxMem=556038881
16/12/30 19:29:05 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.0 KB, free 530.3 MB)
16/12/30 19:29:05 INFO storage.MemoryStore: ensureFreeSpace(2760) called with curMem=4136, maxMem=556038881
16/12/30 19:29:05 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.7 KB, free 530.3 MB)
16/12/30 19:29:05 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:44598 (size: 2.7 KB, free: 530.3 MB)
16/12/30 19:29:05 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/12/30 19:29:05 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39)
16/12/30 19:29:05 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/12/30 19:29:05 WARN scheduler.TaskSetManager: Stage 0 contains a task of very large size (365 KB). The maximum recommended task size is 100 KB.
16/12/30 19:29:05 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 374548 bytes)
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 502351 bytes)
16/12/30 19:29:06 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
16/12/30 19:29:06 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
16/12/30 19:29:06 INFO executor.Executor: Fetching file:/usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py with timestamp 1483144144747
16/12/30 19:29:06 INFO util.Utils: /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py has been previously copied to /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae/userFiles-3cf70968-52fd-49e9-b35d-5eb5f029ec7a/pi.py
16/12/30 19:29:06 INFO python.PythonRunner: Times: total = 306, boot = 164, init = 7, finish = 135
16/12/30 19:29:06 INFO python.PythonRunner: Times: total = 309, boot = 162, init = 11, finish = 136
16/12/30 19:29:06 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 998 bytes result sent to driver
16/12/30 19:29:06 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 998 bytes result sent to driver
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 419 ms on localhost (1/2)
16/12/30 19:29:06 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 450 ms on localhost (2/2)
16/12/30 19:29:06 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/12/30 19:29:06 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39) finished in 0.464 s
16/12/30 19:29:06 INFO scheduler.DAGScheduler: Job 0 finished: reduce at /usr/local/spark-1.5.1-bin-hadoop2.6/examples/src/main/python/pi.py:39, took 0.668884 s
Pi is roughly 3.146120
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/12/30 19:29:06 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/12/30 19:29:06 INFO ui.SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
16/12/30 19:29:06 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/12/30 19:29:06 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/12/30 19:29:06 INFO storage.MemoryStore: MemoryStore cleared
16/12/30 19:29:06 INFO storage.BlockManager: BlockManager stopped
16/12/30 19:29:06 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/12/30 19:29:06 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/12/30 19:29:06 INFO spark.SparkContext: Successfully stopped SparkContext
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/12/30 19:29:06 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/12/30 19:29:07 INFO util.ShutdownHookManager: Shutdown hook called
16/12/30 19:29:07 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-fa0f4397-bef6-4261-b167-005113a0b5ae


Congratulations, you have just run a Spark application!
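
With the master and slave running, you can also attach a PySpark driver to the standalone cluster interactively; this is the same kind of SparkContext that elephas will need later. A minimal sketch (assumes pyspark is importable, e.g. via ./bin/pyspark or with $SPARK_HOME/python on PYTHONPATH):

from pyspark import SparkConf, SparkContext

# Connect to the standalone master started above.
conf = SparkConf().setAppName('sanity-check').setMaster('spark://localhost:7077')
sc = SparkContext(conf=conf)

# Trivial distributed job: sum 0..999 across the workers.
print(sc.parallelize(range(1000)).sum())   # expect 499500

sc.stop()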

3. Install elephas:

unsw@unsw-UX303UB$ pip install elephas
Collecting elephas
Downloading elephas-0.3.tar.gz
Requirement already satisfied: keras in /usr/local/anaconda3/lib/python3.5/site-packages (from elephas)
Collecting hyperas (from elephas)
Downloading hyperas-0.3.tar.gz
Requirement already satisfied: pyyaml in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Requirement already satisfied: theano in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Requirement already satisfied: six in /usr/local/anaconda3/lib/python3.5/site-packages (from keras->elephas)
Collecting hyperopt (from hyperas->elephas)
Downloading hyperopt-0.1.tar.gz (98kB)
100% |████████████████████████████████| 102kB 1.7MB/s
Collecting entrypoints (from hyperas->elephas)
Downloading entrypoints-0.2.2-py2.py3-none-any.whl
Requirement already satisfied: jupyter in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: nbformat in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: nbconvert in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperas->elephas)
Requirement already satisfied: numpy>=1.7.1 in /usr/local/anaconda3/lib/python3.5/site-packages (from theano->keras->elephas)
Requirement already satisfied: scipy>=0.11 in /usr/local/anaconda3/lib/python3.5/site-packages (from theano->keras->elephas)
Requirement already satisfied: nose in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperopt->hyperas->elephas)
Collecting pymongo (from hyperopt->hyperas->elephas)
Downloading pymongo-3.4.0-cp35-cp35m-manylinux1_x86_64.whl (359kB)
100% |████████████████████████████████| 368kB 1.5MB/s
Requirement already satisfied: networkx in /usr/local/anaconda3/lib/python3.5/site-packages (from hyperopt->hyperas->elephas)
Collecting future (from hyperopt->hyperas->elephas)
Downloading future-0.16.0.tar.gz (824kB)
100% |████████████████████████████████| 829kB 1.5MB/s
Requirement already satisfied: decorator>=3.4.0 in /usr/local/anaconda3/lib/python3.5/site-packages (from networkx->hyperopt->hyperas->elephas)
Building wheels for collected packages: elephas, hyperas, hyperopt, future
Running setup.py bdist_wheel for elephas ... done
Stored in directory: /home/unsw/.cache/pip/wheels/b6/fe/74/8e079673e5048a583b547a0dc5d83a7fea883933472da1cefb
Running setup.py bdist_wheel for hyperas ... done
Stored in directory: /home/unsw/.cache/pip/wheels/85/7d/da/b417ee5e31b62d51c75afa6eb2ada9ccf8b7aff2de71d82c1b
Running setup.py bdist_wheel for hyperopt ... done
Stored in directory: /home/unsw/.cache/pip/wheels/4b/0f/9d/1166e48523d3bf7478800f250b0fceae31ac6a08b8a7cca820
Running setup.py bdist_wheel for future ... done
Stored in directory: /home/unsw/.cache/pip/wheels/c2/50/7c/0d83b4baac4f63ff7a765bd16390d2ab43c93587fac9d6017a
Successfully built elephas hyperas hyperopt future
Installing collected packages: pymongo, future, hyperopt, entrypoints, hyperas, elephas
Successfully installed elephas-0.3 entrypoints-0.2.2 future-0.16.0 hyperas-0.3 hyperopt-0.1 pymongo-3.4.0


Stage 2:

• If your machine has many CPU cores (say 24):

You can start just one Docker container and simply use Spark together with elephas to train a CNN in parallel across the 24 cores; see the first sketch below.

• If your machine has multiple GPUs (say 4):

You can start 4 Docker containers and edit ~/.theanorc inside each one so that it uses a specific GPU, giving parallel training across the 4 GPUs. (CUDA has to be installed separately.) See the second sketch below.
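
To make the CPU case concrete, here is a minimal elephas sketch in the spirit of the elephas 0.3 README. The exact SparkModel/train arguments vary between elephas versions, so treat them as illustrative, and the toy Dense model only stands in for a real CNN and dataset:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from pyspark import SparkConf, SparkContext
from elephas.spark_model import SparkModel
from elephas.utils.rdd_utils import to_simple_rdd

# Toy data and a tiny Keras model (stand-ins for a real dataset and CNN).
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, (1000, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

# One container, 24 local cores.
conf = SparkConf().setAppName('elephas-demo').setMaster('local[24]')
sc = SparkContext(conf=conf)

# Ship the training data to the workers as an RDD of (features, label) pairs.
rdd = to_simple_rdd(sc, x_train, y_train)

# Workers train on their partitions and send weight updates back to the driver.
spark_model = SparkModel(sc, model, frequency='epoch', mode='asynchronous', num_workers=24)
spark_model.train(rdd, nb_epoch=10, batch_size=32, verbose=0, validation_split=0.1)

For the GPU case, each container pins Theano to a different device before Keras is imported. A minimal sketch using THEANO_FLAGS (device names such as gpu0 follow the old Theano backend convention; a device = gpu0 line under [global] in ~/.theanorc does the same thing):

import os

# Must be set before theano/keras is imported; change gpu0 per container.
os.environ['THEANO_FLAGS'] = 'device=gpu0,floatX=float32'

import theano
print(theano.config.device)   # expect 'gpu0' once CUDA is set up correctly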

