In many applications there is a need for a single, unique instance (an "only instance") of some type of actor in the system. In a cluster this instance may live on any node, but it is guaranteed to be unique cluster-wide. Akka's Cluster-Singleton provides support for this singleton-actor pattern: when the node hosting the instance fails and has to leave the cluster, an identical actor is automatically constructed on another node and control is handed over to it. Of course, since a brand-new actor is constructed, any internal state is lost in the process. Typical uses of a singleton actor include a program interface that can only support one external connection at a time, or an aggregator actor that accumulates internal state from the results produced by several other actors. When a singleton actor does carry internal state, a PersistentActor can be used to restore that state automatically, which makes Cluster-Singleton a very practical pattern applicable in many scenarios.

Precisely because of its uniqueness, the Cluster-Singleton pattern also comes with some caveats that deserve attention: the single instance is easily overloaded, it cannot be guaranteed to stay online at all times, and message delivery to it cannot be guaranteed. Users need to add special handling for these in their code.
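For the delivery caveat, one common mitigation is for the sender to keep each outgoing message until a matching acknowledgement arrives, and to re-send it otherwise. The actor-free sketch below shows only that bookkeeping; the names (`Pending`, `resendBatch`) are illustrative and not part of this example's code:

```scala
// Sketch of sender-side at-least-once bookkeeping (illustrative, not from the example).
final case class Pending[M](msg: M, retries: Int)

// On receiving an acknowledgement, drop the matching pending entry.
def ackReceived[M](pending: Map[Long, Pending[M]], id: Long): Map[Long, Pending[M]] =
  pending - id

// On a resend tick: give up on messages past maxRetries, return the rest
// for re-sending with their retry counters bumped.
def resendBatch[M](pending: Map[Long, Pending[M]],
                   maxRetries: Int): (List[M], Map[Long, Pending[M]]) = {
  val alive = pending.filter { case (_, p) => p.retries < maxRetries }
  val msgs  = alive.values.map(_.msg).toList
  (msgs, alive.map { case (id, p) => id -> p.copy(retries = p.retries + 1) })
}
```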

Let's design an example to explore Cluster-Singleton. First, the singleton actor itself:

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_, ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole) { evt =>
        updateState(evt)
      }
      sender() ! AckDig   //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Plant =>
      persist(AddTree) { evt =>
        updateState(evt)
      }
      sender() ! AckPlant //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Disconnect =>    //this node exits cluster. expect switch to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up ...
      self ! PoisonPill
  }
}

This SingletonActor is a special kind of actor: it extends PersistentActor, so it must implement PersistentActor's abstract members. SingletonActor maintains several pieces of internal state, the running totals freeHoles, freeTrees and ttlMatches. It simulates a tree-planting scenario: on receiving a Dig command it persists an AddHole event and updates the state inside that event handler; on receiving Plant it persists an AddTree event and updates the state likewise. Because the Cluster-Singleton pattern cannot guarantee message delivery, we add an acknowledgement mechanism, AckDig and AckPlant, so that the sender can re-send a message when needed. We use cluster.selfAddress to confirm which cluster node the singleton is currently running on.
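The hole/tree matching rule inside updateState can also be read as a pure function over the snapshot State type, which makes the bookkeeping easy to check in isolation. This re-formulation is for illustration only and is not part of the example's source:

```scala
// Pure re-formulation of the updateState matching rule over the State snapshot type.
case class State(nHoles: Int, nTrees: Int, nMatches: Int)

sealed trait Event
case object AddHole extends Event
case object AddTree extends Event

def applyEvent(s: State, evt: Event): State = evt match {
  case AddHole =>  // a new hole: match it with a waiting tree if there is one
    if (s.nTrees > 0) s.copy(nTrees = s.nTrees - 1, nMatches = s.nMatches + 1)
    else s.copy(nHoles = s.nHoles + 1)
  case AddTree =>  // a new tree: match it with a waiting hole if there is one
    if (s.nHoles > 0) s.copy(nHoles = s.nHoles - 1, nMatches = s.nMatches + 1)
    else s.copy(nTrees = s.nTrees + 1)
}
```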

We need to construct and deploy a ClusterSingletonManager on every cluster node that may host the SingletonActor, as follows:

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
      .withFallback(ConfigFactory.load())
    val singletonSystem = ActorSystem("SingletonClusterSystem", config)

    startupSharedJournal(singletonSystem, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

    val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
      singletonProps = Props[SingletonActor],
      terminationMessage = CleanUp,
      settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
    ), name = "singletonManager")
  }

As you can see, ClusterSingletonManager is itself an actor; the SingletonActor it manages is configured through ClusterSingletonManager.props. Our main goal is to verify that when the current node fails and has to leave the cluster, the SingletonActor automatically migrates to another node that is still online. ClusterSingletonManager works like this: it is first constructed and deployed on all selected cluster nodes; the SingletonActor is then started on the oldest of those nodes, and when that node becomes unreachable, the SingletonActor is automatically re-constructed and deployed on the next-oldest node.
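The timing of this hand-over can be tuned through the akka.cluster.singleton section of the configuration. The settings below and their values are the defaults from the akka-cluster-tools reference configuration (Akka 2.4), shown here for orientation; this example simply relies on them:

```
akka.cluster.singleton {
  # actor name of the child singleton actor
  singleton-name = "singleton"
  # interval at which the manager retries the hand-over to the next oldest node
  hand-over-retry-interval = 1s
}
```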

ClusterSingletonProxy, also an actor, invokes the SingletonActor by exchanging messages with the ClusterSingletonManager. It dynamically tracks the currently active SingletonActor and supplies its ActorRef to the user. We can invoke the SingletonActor with the following code:
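The proxy's tracking behavior is likewise configurable via akka.cluster.singleton-proxy; the defaults below come from the akka-cluster-tools reference configuration (Akka 2.4). Note in particular buffer-size: while no singleton is reachable (for example during a hand-over), the proxy buffers up to this many messages and delivers them once the new singleton is identified:

```
akka.cluster.singleton-proxy {
  # must match the manager's singleton-name
  singleton-name = "singleton"
  # interval at which the proxy re-identifies the active singleton
  singleton-identification-interval = 1s
  # messages buffered while the singleton is unavailable (0 disables buffering)
  buffer-size = 1000
}
```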

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem", config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name = "singletonUser")

    import suSystem.dispatcher
    //send Dig messages every 2 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(1.seconds, 2.seconds, singletonProxy, SingletonActor.Dig)
    //send Plant messages every 3 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(2.seconds, 3.seconds, singletonProxy, SingletonActor.Plant)
    //send kill message to hosting node every 30 seconds
    suSystem.scheduler.schedule(10.seconds, 30.seconds, singletonProxy, SingletonActor.Disconnect)
  }
}

We send Dig and Plant messages to the SingletonActor through the ClusterSingletonProxy at different intervals, and every 30 seconds we send a Disconnect message telling the singleton's host node to leave the cluster. Then we try running it with the following code:

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {
  SingletonActor.create(2551)   //seed-node
  SingletonActor.create(0)      //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create          //ClusterSingletonProxy node
}

The output of a run looks like this:

[INFO] [// ::28.210] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.334] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:2551]
[INFO] [// ::28.489] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.493] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55839]
[INFO] [// ::28.514] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.528] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55840]
[INFO] [// ::28.566] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.571] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55841]
[INFO] [// ::28.595] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.600] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55842]
[INFO] [// ::28.620] [main] [akka.remote.Remoting] Starting remoting
[INFO] [// ::28.624] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55843]
[INFO] [// ::28.794] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton]
[INFO] [// ::28.817] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=0,ttlMatches=0
[INFO] [// ::29.679] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=1,freeTrees=0,ttlMatches=0
...
[INFO] [// ::38.676] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:2551 is leaving cluster ...
[INFO] [// ::39.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=4
[INFO] [// ::40.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=2,ttlMatches=4
[INFO] [// ::41.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=5
[INFO] [// ::42.518] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton]
[INFO] [// ::43.653] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=5
[INFO] [// ::43.672] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=1,ttlMatches=6
[INFO] [// ::45.665] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=6
[INFO] [// ::46.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=6
...
[INFO] [// ::53.673] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:55839 is leaving cluster ...
[INFO] [// ::55.654] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=4,ttlMatches=9
[INFO] [// ::55.664] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=10
[INFO] [// ::56.646] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton]
[INFO] [// ::57.662] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=4,ttlMatches=10
[INFO] [// ::58.652] [SingletonClusterSystem-akka.actor.default-dispatcher-] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=5,ttlMatches=10

From the output we can observe that as nodes leave the cluster, the SingletonActor automatically migrates to another cluster node and keeps running.

It is worth emphasizing once more: that such a complex clustered, distributed program can be built with such simple code shows that Akka is a practical programming tool with broad prospects!

Below is the complete source code for this demonstration:

build.sbt

name := "cluster-singleton"

version := "1.0"

scalaVersion := "2.11.9"

resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"

val akkaversion = "2.4.8"

libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % akkaversion,
"com.typesafe.akka" %% "akka-remote" % akkaversion,
"com.typesafe.akka" %% "akka-cluster" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-tools" % akkaversion,
"com.typesafe.akka" %% "akka-cluster-sharding" % akkaversion,
"com.typesafe.akka" %% "akka-persistence" % "2.4.8",
"com.typesafe.akka" %% "akka-contrib" % akkaversion,
"org.iq80.leveldb" % "leveldb" % "0.7",
"org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8")

application.conf

akka.actor.warn-about-java-serializer-usage = off
akka.log-dead-letters-during-shutdown = off
akka.log-dead-letters = off

akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://SingletonClusterSystem@127.0.0.1:2551"]
    log-info = off
  }
  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}

SingletonActor.scala

package clustersingleton.sa

import akka.actor._
import akka.cluster._
import akka.persistence._
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._
import akka.persistence.journal.leveldb._
import akka.util.Timeout
import akka.pattern._

object SingletonActor {
  sealed trait Command
  case object Dig extends Command
  case object Plant extends Command
  case object AckDig extends Command      //acknowledge
  case object AckPlant extends Command    //acknowledge
  case object Disconnect extends Command  //force node to leave cluster
  case object CleanUp extends Command     //clean up when actor ends

  sealed trait Event
  case object AddHole extends Event
  case object AddTree extends Event

  case class State(nHoles: Int, nTrees: Int, nMatches: Int)

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
      .withFallback(ConfigFactory.load())
    val singletonSystem = ActorSystem("SingletonClusterSystem", config)

    startupSharedJournal(singletonSystem, startStore = (port == 2551), path =
      ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

    val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
      singletonProps = Props[SingletonActor],
      terminationMessage = CleanUp,
      settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
    ), name = "singletonManager")
  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }
}

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_, ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole) { evt =>
        updateState(evt)
      }
      sender() ! AckDig   //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Plant =>
      persist(AddTree) { evt =>
        updateState(evt)
      }
      sender() ! AckPlant //notify sender message received
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")
    case Disconnect =>    //this node exits cluster. expect switch to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up ...
      self ! PoisonPill
  }
}

SingletonUser.scala

package clustersingleton.frontend

import akka.actor._
import clustersingleton.sa.SingletonActor
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem", config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name = "singletonUser")

    import suSystem.dispatcher
    //send Dig messages every 2 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(1.seconds, 2.seconds, singletonProxy, SingletonActor.Dig)
    //send Plant messages every 3 seconds to SingletonActor through proxy
    suSystem.scheduler.schedule(2.seconds, 3.seconds, singletonProxy, SingletonActor.Plant)
    //send kill message to hosting node every 30 seconds
    suSystem.scheduler.schedule(10.seconds, 30.seconds, singletonProxy, SingletonActor.Disconnect)
  }
}

ClusterSingletonDemo.scala

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {
  SingletonActor.create(2551)   //seed-node
  SingletonActor.create(0)      //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create          //ClusterSingletonProxy node
}
