1. First, add host entries to /etc/hosts

    vim /etc/hosts

    192.168.0.1 MSJTVL-DSJC-H01
    192.168.0.2 MSJTVL-DSJC-H03
    192.168.0.3 MSJTVL-DSJC-H05
    192.168.0.4 MSJTVL-DSJC-H02
    192.168.0.5 MSJTVL-DSJC-H04
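
An optional sanity check (a sketch) to confirm on each node that the names resolve as expected:

    getent hosts MSJTVL-DSJC-H03    # should print 192.168.0.2
    ping -c 1 MSJTVL-DSJC-H02       # should reach 192.168.0.4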

2. Set up passwordless SSH trust among the machines

    Setup passphraseless ssh

    Now check that you can ssh to the localhost without a passphrase:

    $ ssh localhost

    If you cannot ssh to localhost without a passphrase, execute the following commands:

    $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Copy the public key files from the other machines into the authorized_keys file on MSJTVL-DSJC-H01:

    [hadoop@MSJTVL-DSJC-H01 .ssh]$ scp hadoop@MSJTVL-DSJC-H02:/hadoop/.ssh/id_dsa.pub ./id_dsa.pub2
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ scp hadoop@MSJTVL-DSJC-H03:/hadoop/.ssh/id_dsa.pub ./id_dsa.pub3
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ scp hadoop@MSJTVL-DSJC-H04:/hadoop/.ssh/id_dsa.pub ./id_dsa.pub4
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ scp hadoop@MSJTVL-DSJC-H05:/hadoop/.ssh/id_dsa.pub ./id_dsa.pub5

    [hadoop@MSJTVL-DSJC-H01 .ssh]$ cat ~/.ssh/id_dsa.pub2 >> ~/.ssh/authorized_keys
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ cat ~/.ssh/id_dsa.pub3 >> ~/.ssh/authorized_keys
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ cat ~/.ssh/id_dsa.pub4 >> ~/.ssh/authorized_keys
    [hadoop@MSJTVL-DSJC-H01 .ssh]$ cat ~/.ssh/id_dsa.pub5 >> ~/.ssh/authorized_keys

The steps above let MSJTVL-DSJC-H02 through H05 log in to MSJTVL-DSJC-H01 without a password.

To make all five hosts (MSJTVL-DSJC-H01 through H05) mutually trusted, copy the authorized_keys file from MSJTVL-DSJC-H01 back out to the other machines:

    [hadoop@MSJTVL-DSJC-H02 ~]$ scp hadoop@MSJTVL-DSJC-H01:/hadoop/.ssh/authorized_keys /hadoop/.ssh/authorized_keys
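
If sshd enforces strict modes, the key files also need the right permissions; with that in place, a loop like the following (a sketch) should reach every host without prompting:

    chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
    for h in MSJTVL-DSJC-H01 MSJTVL-DSJC-H02 MSJTVL-DSJC-H03 MSJTVL-DSJC-H04 MSJTVL-DSJC-H05; do
        ssh hadoop@$h hostname
    done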

 

Download the Hadoop tarball

    wget http://apache.fayea.com/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz

Extract the tarball and create a symlink to it

    [hadoop@MSJTVL-DSJC-H01 ~]$ tar -zxvf hadoop-2.6.4.tar.gz
    [hadoop@MSJTVL-DSJC-H01 ~]$ ln -sf hadoop-2.6.4 hadoop
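
Optionally (the original does not do this, so treat it as an assumption), export HADOOP_HOME via the symlink so the bin and sbin commands are on the PATH:

    # appended to ~/.bash_profile of the hadoop user
    export HADOOP_HOME=/hadoop/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin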

Go into the Hadoop configuration directory and edit hadoop-env.sh

    [hadoop@MSJTVL-DSJC-H01 ~]$ cd hadoop/etc/hadoop/
    [hadoop@MSJTVL-DSJC-H01 hadoop]$ vim hadoop-env.sh

Set the JAVA_HOME variable in hadoop-env.sh.
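
For example (the JDK path below is an assumption; substitute your own install):

    # in hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.7.0_79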

Next, edit hdfs-site.xml as described in the official guide: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

First, configure a logical name for the service via dfs.nameservices

    [hadoop@MSJTVL-DSJC-H01 hadoop]$ vim hdfs-site.xml
    <configuration>
      <!-- Logical name of the nameservice; change it as needed -->
      <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
      </property>

      <!-- NameNode IDs; "mycluster" must match the nameservice above, while nn1 and nn2 are arbitrary labels -->
      <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
      </property>

      <!-- RPC address and port of each NameNode; adjust the nameservice in the name and the hostname in the value
           (MSJTVL-DSJC-H01 and MSJTVL-DSJC-H02 are the two NameNode hosts) -->
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>MSJTVL-DSJC-H01:8020</value>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>MSJTVL-DSJC-H02:8020</value>
      </property>

      <!-- HTTP host and port of each NameNode -->
      <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>MSJTVL-DSJC-H01:50070</value>
      </property>
      <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>MSJTVL-DSJC-H02:50070</value>
      </property>

      <!-- URI of the JournalNode group -->
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://MSJTVL-DSJC-H03:8485;MSJTVL-DSJC-H04:8485;MSJTVL-DSJC-H05:8485/mycluster</value>
      </property>

      <!-- Fixed proxy class that HDFS clients use to find the active NameNode (adjust the nameservice suffix in the name) -->
      <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>

      <!-- sshfence: SSH to the active NameNode and kill the process; the key file is the one generated
           under the hadoop user's .ssh directory -->
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/hadoop/.ssh/id_dsa</value>
      </property>

      <!-- Working directory of the JournalNodes -->
      <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop/jn/data</value>
      </property>

      <!-- Enable automatic NameNode failover -->
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>
    </configuration>

Next, edit core-site.xml

    <!-- Entry point for the NameNodes; the value must use the nameservice name configured above -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>

    <!-- ZooKeeper quorum -->
    <property>
      <name>ha.zookeeper.quorum</name>
      <value>MSJTVL-DSJC-H03:2181,MSJTVL-DSJC-H04:2181,MSJTVL-DSJC-H05:2181</value>
    </property>

    <!-- Hadoop temporary directory -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/hadoop/tmp</value>
    </property>

Configure the slaves file with the DataNode hosts

    MSJTVL-DSJC-H03
    MSJTVL-DSJC-H04
    MSJTVL-DSJC-H05
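
The same configuration files must be present on every node. One way to push them out from MSJTVL-DSJC-H01 (a sketch, assuming the same /hadoop/hadoop layout on each host):

    for h in MSJTVL-DSJC-H02 MSJTVL-DSJC-H03 MSJTVL-DSJC-H04 MSJTVL-DSJC-H05; do
        scp /hadoop/hadoop/etc/hadoop/* hadoop@$h:/hadoop/hadoop/etc/hadoop/
    done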

Install ZooKeeper

Simply extract the tarball.

Then edit its configuration file

    [zookeeper@MSJTVL-DSJC-H03 conf]$ vim zoo.cfg
    # set dataDir=/opt/zookeeper/data; do not leave it under /tmp
    dataDir=/opt/zookeeper/data

    #autopurge.purgeInterval=1
    server.1=MSJTVL-DSJC-H03:2888:3888
    server.2=MSJTVL-DSJC-H04:2888:3888
    server.3=MSJTVL-DSJC-H05:2888:3888

Under /opt/zookeeper/data, create a myid file containing the same number as that host's server.N entry, as shown in the sketch below.
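
A sketch of the myid step, using the IDs from the server.N lines above (H03=1, H04=2, H05=3):

    # run the matching command on each ZooKeeper host
    [zookeeper@MSJTVL-DSJC-H03 ~]$ mkdir -p /opt/zookeeper/data && echo 1 > /opt/zookeeper/data/myid
    [zookeeper@MSJTVL-DSJC-H04 ~]$ mkdir -p /opt/zookeeper/data && echo 2 > /opt/zookeeper/data/myid
    [zookeeper@MSJTVL-DSJC-H05 ~]$ mkdir -p /opt/zookeeper/data && echo 3 > /opt/zookeeper/data/myid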

Start ZooKeeper on each of the three hosts (zkServer.sh start) and check the result with jps.
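
For example (a sketch; zkServer.sh status and the QuorumPeerMain process name are standard ZooKeeper behavior):

    [zookeeper@MSJTVL-DSJC-H03 bin]$ ./zkServer.sh start
    [zookeeper@MSJTVL-DSJC-H03 bin]$ ./zkServer.sh status   # reports Mode: leader or follower
    [zookeeper@MSJTVL-DSJC-H03 bin]$ jps                    # should list QuorumPeerMain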

Start the HA cluster

1. First start the JournalNodes (on MSJTVL-DSJC-H03, H04, and H05). From the sbin directory, run ./hadoop-daemon.sh start journalnode:

    [hadoop@MSJTVL-DSJC-H03 sbin]$ ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-journalnode-MSJTVL-DSJC-H03.out
    [hadoop@MSJTVL-DSJC-H03 sbin]$ jps
    3204 JournalNode
    3252 Jps
    [hadoop@MSJTVL-DSJC-H03 sbin]$

2. Format HDFS on one of the NameNodes

    [hadoop@MSJTVL-DSJC-H01 bin]$ ./hdfs namenode -format

Formatting produces the metadata files under /hadoop/tmp/dfs/name/current:

    [hadoop@MSJTVL-DSJC-H01 ~]$ cd tmp/
    [hadoop@MSJTVL-DSJC-H01 tmp]$ ll
    total 4
    drwxr-xr-x. 3 hadoop hadoop 4096 Sep  6 16:54 dfs
    [hadoop@MSJTVL-DSJC-H01 tmp]$ cd dfs/
    [hadoop@MSJTVL-DSJC-H01 dfs]$ ll
    total 4
    drwxr-xr-x. 3 hadoop hadoop 4096 Sep  6 16:54 name
    [hadoop@MSJTVL-DSJC-H01 dfs]$ cd name/
    [hadoop@MSJTVL-DSJC-H01 name]$ ll
    total 4
    drwxr-xr-x. 2 hadoop hadoop 4096 Sep  6 16:54 current
    [hadoop@MSJTVL-DSJC-H01 name]$ cd current/
    [hadoop@MSJTVL-DSJC-H01 current]$ ll
    total 16
    -rw-r--r--. 1 hadoop hadoop 352 Sep  6 16:54 fsimage_0000000000000000000
    -rw-r--r--. 1 hadoop hadoop  62 Sep  6 16:54 fsimage_0000000000000000000.md5
    -rw-r--r--. 1 hadoop hadoop   2 Sep  6 16:54 seen_txid
    -rw-r--r--. 1 hadoop hadoop 201 Sep  6 16:54 VERSION
    [hadoop@MSJTVL-DSJC-H01 current]$ pwd
    /hadoop/tmp/dfs/name/current
    [hadoop@MSJTVL-DSJC-H01 current]$

3. Copy the initialized metadata files to the other NameNode. Before copying, start the NameNode that was just formatted:

    [hadoop@MSJTVL-DSJC-H01 sbin]$ ./hadoop-daemon.sh start namenode
    starting namenode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-namenode-MSJTVL-DSJC-H01.out
    [hadoop@MSJTVL-DSJC-H01 sbin]$ jps
    3324 NameNode
    3396 Jps
    [hadoop@MSJTVL-DSJC-H01 sbin]$

Then, on the NameNode that was not formatted, run hdfs namenode -bootstrapStandby; if the metadata files on both nodes are identical afterwards, the step succeeded.

    [hadoop@MSJTVL-DSJC-H02 bin]$ hdfs namenode -bootstrapStandby
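
One way to confirm the two copies match (a sketch) is to compare the fsimage checksum files on both NameNodes:

    [hadoop@MSJTVL-DSJC-H01 ~]$ cat /hadoop/tmp/dfs/name/current/fsimage_0000000000000000000.md5
    [hadoop@MSJTVL-DSJC-H02 ~]$ cat /hadoop/tmp/dfs/name/current/fsimage_0000000000000000000.md5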

  

4. Initialize the ZKFC: run hdfs zkfc -formatZK on any one machine.
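
For example (H01 is an arbitrary choice; any host with the client configuration works):

    [hadoop@MSJTVL-DSJC-H01 bin]$ ./hdfs zkfc -formatZK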

5. Restart the entire HDFS cluster

    [hadoop@MSJTVL-DSJC-H01 sbin]$ ./start-dfs.sh
    16/09/06 17:10:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [MSJTVL-DSJC-H01 MSJTVL-DSJC-H02]
    MSJTVL-DSJC-H02: starting namenode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-namenode-MSJTVL-DSJC-H02.out
    MSJTVL-DSJC-H01: starting namenode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-namenode-MSJTVL-DSJC-H01.out
    MSJTVL-DSJC-H03: starting datanode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-datanode-MSJTVL-DSJC-H03.out
    MSJTVL-DSJC-H04: starting datanode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-datanode-MSJTVL-DSJC-H04.out
    MSJTVL-DSJC-H05: starting datanode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-datanode-MSJTVL-DSJC-H05.out
    Starting journal nodes [MSJTVL-DSJC-H03 MSJTVL-DSJC-H04 MSJTVL-DSJC-H05]
    MSJTVL-DSJC-H03: starting journalnode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-journalnode-MSJTVL-DSJC-H03.out
    MSJTVL-DSJC-H04: starting journalnode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-journalnode-MSJTVL-DSJC-H04.out
    MSJTVL-DSJC-H05: starting journalnode, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-journalnode-MSJTVL-DSJC-H05.out
    16/09/06 17:10:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting ZK Failover Controllers on NN hosts [MSJTVL-DSJC-H01 MSJTVL-DSJC-H02]
    MSJTVL-DSJC-H02: starting zkfc, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-zkfc-MSJTVL-DSJC-H02.out
    MSJTVL-DSJC-H01: starting zkfc, logging to /hadoop/hadoop-2.6.4/logs/hadoop-hadoop-zkfc-MSJTVL-DSJC-H01.out
    [hadoop@MSJTVL-DSJC-H01 sbin]$ jps
    4345 Jps
    4279 DFSZKFailoverController
    3993 NameNode
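
To see which NameNode the ZKFC elected as active (which of nn1/nn2 wins is not deterministic), hdfs haadmin can be queried, for example:

    [hadoop@MSJTVL-DSJC-H01 bin]$ ./hdfs haadmin -getServiceState nn1
    active
    [hadoop@MSJTVL-DSJC-H01 bin]$ ./hdfs haadmin -getServiceState nn2
    standby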

6. Create a directory and upload a file

    ./hdfs dfs -mkdir -p /usr/file
    ./hdfs dfs -put /hadoop/tian.txt /usr/file

After uploading a file, you can browse it in the NameNode web UI.
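
The upload can also be checked from the command line; the web UI lives at the HTTP addresses configured above (whichever NameNode is currently active):

    ./hdfs dfs -ls /usr/file
    # web UI: http://MSJTVL-DSJC-H01:50070 or http://MSJTVL-DSJC-H02:50070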

MapReduce high availability

Configure yarn-site.xml

    <configuration>
      <!-- Enable ResourceManager HA -->
      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>

      <!-- RM cluster identifier -->
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>rm-cluster</value>
      </property>

      <!-- Logical IDs for the two RMs -->
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>

      <!-- Automatic RM failover -->
      <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>

      <!-- Recover RM state after a failover -->
      <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
      </property>

      <!-- RM host 1 -->
      <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>MSJTVL-DSJC-H01</value>
      </property>

      <!-- RM host 2 -->
      <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>MSJTVL-DSJC-H02</value>
      </property>

      <!-- How RM state is stored: in memory (MemStore) or in ZooKeeper (ZKStore) -->
      <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
      </property>

      <!-- ZooKeeper quorum that stores the state -->
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>MSJTVL-DSJC-H03:2181,MSJTVL-DSJC-H04:2181,MSJTVL-DSJC-H05:2181</value>
      </property>

      <!-- Scheduler addresses for resource requests -->
      <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>MSJTVL-DSJC-H01:8030</value>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>MSJTVL-DSJC-H02:8030</value>
      </property>

      <!-- Addresses NodeManagers use to exchange information with the RMs -->
      <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>MSJTVL-DSJC-H01:8031</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>MSJTVL-DSJC-H02:8031</value>
      </property>

      <!-- Addresses clients use to submit applications -->
      <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>MSJTVL-DSJC-H01:8032</value>
      </property>
      <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>MSJTVL-DSJC-H02:8032</value>
      </property>

      <!-- Addresses administrators use to send management commands -->
      <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>MSJTVL-DSJC-H01:8033</value>
      </property>
      <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>MSJTVL-DSJC-H02:8033</value>
      </property>

      <!-- RM HTTP (web UI) addresses for viewing cluster information -->
      <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>MSJTVL-DSJC-H01:8088</value>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>MSJTVL-DSJC-H02:8088</value>
      </property>
    </configuration>

  

Configure mapred-site.xml

    <!-- Run MapReduce on the YARN framework -->
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
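
Note that start-yarn.sh only brings up the ResourceManager on the host where it is run (plus the NodeManagers on the slaves), which is why the standby RM below needs a manual start. A sketch, assuming YARN is launched from MSJTVL-DSJC-H01:

    [hadoop@MSJTVL-DSJC-H01 sbin]$ ./start-yarn.sh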

The standby ResourceManager must then be started manually:

    [hadoop@MSJTVL-DSJC-H02 sbin]$ yarn-daemon.sh start resourcemanager
    starting resourcemanager, logging to /hadoop/hadoop-2.6.4/logs/yarn-hadoop-resourcemanager-MSJTVL-DSJC-H02.out
    [hadoop@MSJTVL-DSJC-H02 sbin]$ jps
    3000 ResourceManager
    2812 NameNode
    3055 Jps
    2922 DFSZKFailoverController
    [hadoop@MSJTVL-DSJC-H02 sbin]$
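
With both RMs up, their states can be checked and a sample job run as a smoke test (a sketch; the examples jar ships with the 2.6.4 distribution):

    [hadoop@MSJTVL-DSJC-H01 bin]$ ./yarn rmadmin -getServiceState rm1
    active
    [hadoop@MSJTVL-DSJC-H01 bin]$ ./yarn rmadmin -getServiceState rm2
    standby
    [hadoop@MSJTVL-DSJC-H01 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar pi 2 10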

  

  

 
