Distributed computation is a scale-out model: its core idea is to make full use of the computing resources of every server node in a cluster, including CPU, memory, disk, and I/O bus. A job is first split into smaller tasks, which are then dispatched to the nodes for execution. The sub-tasks may depend on each other or run completely independently. With akka-cluster, tasks can be distributed according to each node's current resource load, so that resources are used evenly and throughput is maximized. If a job can be split into independent tasks, all we need to care about is distributing those tasks sensibly to balance the load across the cluster; this is essentially fire-and-forget dispatching of tasks that carry no internal state to maintain. Because the placement of the target actor that runs a task is decided by the routing algorithm, we normally neither address a specific actor nor read its internal state. If we do need that, we can still achieve it by embedding the relevant information in messages.

Cluster load balancing is a centralized task-dispatching pattern; it is really the router/routees model in a cluster environment, except that the router can now send tasks to actors on other servers. Task dispatch is driven by an algorithm, and all the ordinary routing algorithms such as round-robin and random are available. Akka additionally provides an algorithm based on each node's resource load, enabled in the configuration file:

akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
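Besides feeding the adaptive router, the extension also gossips the collected metrics as events that any actor can observe. A minimal sketch, assuming the extension above is enabled (the listener actor itself is my own illustration; ClusterMetricsExtension and ClusterMetricsChanged are the standard akka-cluster-metrics API):

import akka.actor.Actor
import akka.cluster.metrics.{ClusterMetricsChanged, ClusterMetricsExtension}

class MetricsListener extends Actor {
  val extension = ClusterMetricsExtension(context.system)

  // subscribe/unsubscribe to the periodic metrics gossip
  override def preStart(): Unit = extension.subscribe(self)
  override def postStop(): Unit = extension.unsubscribe(self)

  def receive = {
    case ClusterMetricsChanged(nodeMetrics) =>
      println(s"cluster metrics updated for ${nodeMetrics.size} node(s)")
  }
}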

The following deployment section illustrates how this metrics-based router is configured:

akka.actor.deployment {
  /frontend/dispatcher = {
    # Router type provided by metrics extension.
    router = cluster-metrics-adaptive-group
    # Router parameter specific for metrics extension.
    # metrics-selector = heap
    # metrics-selector = load
    # metrics-selector = cpu
    metrics-selector = mix
    #
    routees.paths = ["/user/backend"]
    cluster {
      enabled = on
      use-role = backend
      allow-local-routees = off
    }
  }
}

Here dispatcher plays the role of the router, and the actors under the /user/backend path are the routees.
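For reference, here is a minimal sketch of how a frontend actor would obtain this router (FrontendSketch is hypothetical; the same FromConfig call appears in the real Frontend code later):

import akka.actor.{Actor, Props}
import akka.routing.FromConfig

class FrontendSketch extends Actor {
  // "dispatcher" must match the /frontend/dispatcher deployment path above;
  // FromConfig picks up the router type and routee paths from application.conf.
  val dispatcher = context.actorOf(FromConfig.props(Props.empty), "dispatcher")

  def receive: Receive = {
    case msg => dispatcher forward msg // routed to the currently least-loaded backend
  }
}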

Suppose we split a large data-processing job into a series of independent database operations. To make sure each operation can proceed safely under any circumstances, including exceptions, we can place the actor that performs it under a BackoffSupervisor, as follows:

val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(                    // or Backoff.onStop
    childProps = workerProps(client),
    childName = "worker",
    minBackoff = 1.second,              // backoff/retry values are illustrative
    maxBackoff = 30.seconds,
    randomFactor = 0.20
  ).withAutoReset(resetBackoff = 10.seconds)
    .withSupervisorStrategy(
      OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 10.seconds)(
        decider.orElse(SupervisorStrategy.defaultDecider)
      )
    )
)

It is worth highlighting the respective use cases and effects of Backoff.onFailure and Backoff.onStop here, since they differ somewhat from the official documentation. First of all, neither of them restarts the child actor in the restart sense; instead they create and start a brand-new instance. See the output of the following test program:

package my.akka

import akka.actor.{Actor, ActorRef, ActorSystem, PoisonPill, Props}
import akka.pattern.{Backoff, BackoffSupervisor, ask}

import scala.concurrent.Await
import scala.concurrent.duration._

class Child extends Actor {
  println(s"[Child]: created. (path = ${this.self.path}, instance = ${this})")

  override def preStart(): Unit = {
    println(s"[Child]: preStart called. (path = ${this.self.path}, instance = ${this})")
    super.preStart()
  }

  override def postStop(): Unit = {
    println(s"[Child]: postStop called. (path = ${this.self.path}, instance = ${this})")
    super.postStop()
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    println(s"[Child]: preRestart called with ($reason, $message). (path = ${this.self.path}, instance = ${this})")
    super.preRestart(reason, message)
  }

  override def postRestart(reason: Throwable): Unit = {
    println(s"[Child]: postRestart called with ($reason). (path = ${this.self.path}, instance = ${this})")
    super.postRestart(reason)
  }

  def receive = {
    case "boom" =>
      throw new Exception("kaboom")
    case "get ref" =>
      sender() ! self
    case a: Any =>
      println(s"[Child]: received ${a}")
  }
}

object Child {
  def props: Props =
    Props(new Child)

  def backOffOnFailureProps: Props =
    BackoffSupervisor.props(
      Backoff.onFailure(
        Child.props,
        childName = "myEcho",
        minBackoff = 1.seconds,  // backoff values are illustrative
        maxBackoff = 30.seconds,
        randomFactor = 0.2       // adds 20% "noise" to vary the intervals slightly
      ))

  def backOffOnStopProps: Props =
    BackoffSupervisor.props(
      Backoff.onStop(
        Child.props,
        childName = "myEcho",
        minBackoff = 1.seconds,  // backoff values are illustrative
        maxBackoff = 30.seconds,
        randomFactor = 0.2       // adds 20% "noise" to vary the intervals slightly
      ))
}

object BackoffSuperVisorApp {
  def defaultSuperVisorCase(): Unit = {
    println(
      """
        |default ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
        * Let's see if the "hello" message is received by the child
        */
      val child = system.actorOf(Child.props, "child")
      Thread.sleep(100) // short pauses let the async messaging settle; durations assumed
      child ! "hello"
      //[Child]: received hello

      /**
        * Now restart the child with an exception within its receive method
        * and see if the `child` ActorRef is still valid (i.e. the ActorRef incarnation remains the same)
        */
      child ! "boom"
      Thread.sleep(100)
      child ! "hello after normal exception"
      //[Child]: received hello after normal exception

      /**
        * PoisonPill causes the child actor to `Stop`, which is different from a restart.
        * The ActorRef incarnation gets updated.
        */
      child ! PoisonPill
      Thread.sleep(100)

      /**
        * This causes delivery to deadLetters, since the "incarnation" of the ActorRef `child`
        * became obsolete after the child was "Stopped".
        *
        * An incarnation is tied to an ActorRef (NOT to its internal actor instance)
        * and the same incarnation means "you can keep using the same ActorRef"
        */
      child ! "hello after PoisonPill"
      // [akka://app/user/parent/child-1] Message [java.lang.String] without sender to Actor[akka://app/user/child#-767539042]
      // was not delivered. [1] dead letters encountered.
      Thread.sleep(100)
    }
    finally {
      system.terminate()
      Thread.sleep(100)
    }
  }

  def backOffOnStopCase(): Unit = {
    println(
      """
        |backoff onStop ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
        * Let's see if the "hello" message is forwarded to the child
        * by the backoff supervisor created with onStop
        */
      implicit val futureTimeout: akka.util.Timeout = 1.second
      val backoffSupervisorActor = system.actorOf(Child.backOffOnStopProps, "child")
      Thread.sleep(100)

      backoffSupervisorActor ! "hello to backoff supervisor" //forwarded to child
      //[Child]: received hello to backoff supervisor

      /**
        * Now "Restart" the child with an exception from its receive method.
        * As with the default supervisory strategy, the `child` ActorRef remains valid. (i.e. incarnation kept same)
        */
      val child = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      child ! "boom"
      Thread.sleep(100)
      child ! "hello to child after normal exception"
      //[Child]: received hello to child after normal exception

      /**
        * The backoff supervisor can still forward the message
        */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after normal exception"
      //[Child]: received hello to backoffSupervisorActor after normal exception
      Thread.sleep(100)

      /**
        * PoisonPill causes the child actor to `Stop`, which is different from a restart.
        * The `child` ActorRef incarnation gets updated.
        */
      child ! PoisonPill
      Thread.sleep(100)
      child ! "hello to child ref after PoisonPill"
      //delivered to deadLetters

      /**
        * The backoff supervisor can forward the message to its child with the new incarnation
        */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after PoisonPill"
      //[Child]: received hello to backoffSupervisorActor after PoisonPill
      Thread.sleep(100)
    }
    finally {
      system.terminate()
      Thread.sleep(100)
    }
  }

  def backOffOnFailureCase(): Unit = {
    println(
      """
        |backoff onFailure ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
        * Let's see if the "hello" message is forwarded to the child
        * by the backoff supervisor created with onFailure
        */
      implicit val futureTimeout: akka.util.Timeout = 1.second
      val backoffSupervisorActor = system.actorOf(Child.backOffOnFailureProps, "child")
      Thread.sleep(100)

      backoffSupervisorActor ! "hello to backoff supervisor" //forwarded to child
      //[Child]: received hello to backoff supervisor

      /**
        * Now "Stop" the child with an exception from its receive method.
        * You'll see the difference between "Restart" and "Stop" from here:
        */
      val child = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      child ! "boom"
      Thread.sleep(100)

      /**
        * Note that this is after a normal exception, not after PoisonPill,
        * but the child is completely "Stopped" and its ActorRef "incarnation" became obsolete.
        *
        * So, the message to the `child` ActorRef is delivered to deadLetters
        */
      child ! "hello to child after normal exception"
      //causes delivery to deadLetters

      /**
        * The backoff supervisor can still forward the message to the new child ActorRef incarnation
        */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after normal exception"
      //[Child]: received hello to backoffSupervisorActor after normal exception

      /**
        * You can get a new ActorRef which represents the new incarnation
        */
      val newChildRef = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      newChildRef ! "hello to new child ref after normal exception"
      //[Child]: received hello to new child ref after normal exception
      Thread.sleep(100)

      /**
        * No matter whether the supervisory strategy is default or backoff,
        * PoisonPill causes the actor to "Stop", not "Restart"
        */
      newChildRef ! PoisonPill
      Thread.sleep(100)
      newChildRef ! "hello to new child ref after PoisonPill"
      //delivered to deadLetters
      Thread.sleep(100)
    }
    finally {
      system.terminate()
      Thread.sleep(100)
    }
  }

  def main(args: Array[String]): Unit = {
    defaultSuperVisorCase()
    backOffOnStopCase()
    backOffOnFailureCase()
  }
}

onStop: does not react to exceptions in the child actor; those are handled by the supervisor strategy. It reacts to normal stop events, such as PoisonPill or context.stop, by creating and starting a new child instance.

onFailure: does not react to a normal stop of the child actor, letting it terminate. When an exception occurs, it creates and starts a new instance.

Clearly, what we usually want is to restart the computation when an exception occurs, so onFailure is the right choice.
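As a compact reminder of the difference, here is a minimal sketch (workerProps stands for the child's Props as in the examples above; durations are illustrative):

import akka.actor.Props
import akka.pattern.{Backoff, BackoffSupervisor}
import scala.concurrent.duration._

object BackoffChoice {
  def workerProps: Props = Props.empty // placeholder for the real child Props

  // onStop: a new child is created after a normal stop (PoisonPill, context.stop);
  // exceptions are handled by the supervisor strategy instead.
  val onStopProps = BackoffSupervisor.props(
    Backoff.onStop(workerProps, "worker", 3.seconds, 30.seconds, 0.2))

  // onFailure: a new child is created after the child is stopped by a failure;
  // a normal stop just lets it terminate.
  val onFailureProps = BackoffSupervisor.props(
    Backoff.onFailure(workerProps, "worker", 3.seconds, 30.seconds, 0.2))
}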

Below is the sample code from my earlier introduction of BackoffSupervisor:

package backoffSupervisorDemo
import akka.actor._
import akka.pattern._
import backoffSupervisorDemo.InnerChild.TestMessage

import scala.concurrent.duration._

object InnerChild {
  case class TestMessage(msg: String)
  class ChildException extends Exception
  def props = Props[InnerChild]
}
class InnerChild extends Actor with ActorLogging {
import InnerChild._
override def receive: Receive = {
case TestMessage(msg) => //simulate the child's function
log.info(s"Child received message: ${msg}")
}
}
object Supervisor {
  def props: Props = { //the supervisor strategy and child actor construction are defined here
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: InnerChild.ChildException => SupervisorStrategy.Restart
    }
    val options = Backoff.onFailure(InnerChild.props, "innerChild", 1.second, 10.seconds, 0.0) // backoff values are illustrative
      .withManualReset
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 10.seconds)( // retry values are illustrative
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )
    BackoffSupervisor.props(options)
  }
}
//Note: the following is the parent of Supervisor, not of InnerChild
object ParentalActor {
case class SendToSupervisor(msg: InnerChild.TestMessage)
case class SendToInnerChild(msg: InnerChild.TestMessage)
case class SendToChildSelection(msg: InnerChild.TestMessage)
def props = Props[ParentalActor]
}
class ParentalActor extends Actor with ActorLogging {
  import ParentalActor._
  //construct the child actor supervisor here
  val supervisor = context.actorOf(Supervisor.props,"supervisor")
  supervisor ! BackoffSupervisor.GetCurrentChild //ask supervisor to return its current child actor
  var innerChild: Option[ActorRef] = None //ActorRef of the returned current child
  val selectedChild = context.actorSelection("/user/parent/supervisor/innerChild")
  override def receive: Receive = {
    case BackoffSupervisor.CurrentChild(ref) => //child actor info received
      innerChild = ref
    case SendToSupervisor(msg) => supervisor ! msg
    case SendToChildSelection(msg) => selectedChild ! msg
    case SendToInnerChild(msg) => innerChild foreach(child => child ! msg)
  }
}
object BackoffSupervisorDemo extends App {
  import ParentalActor._
  val testSystem = ActorSystem("testSystem")
  val parent = testSystem.actorOf(ParentalActor.props,"parent")

  Thread.sleep(1000) //wait for BackoffSupervisor.CurrentChild(ref) to be received (pause length assumed)

  parent ! SendToSupervisor(TestMessage("Hello message 1 to supervisor"))
  parent ! SendToInnerChild(TestMessage("Hello message 2 to innerChild"))
  parent ! SendToChildSelection(TestMessage("Hello message 3 to selectedChild"))

  scala.io.StdIn.readLine()
  testSystem.terminate()
}

Now let's implement an example of performing database operations in a cluster and see how akka-cluster dispatches a stream of operations to the nodes. First, the Worker:

import akka.actor._
import scala.concurrent.duration._

object Backend {
  case class SaveFormula(op1: Int, op2: Int)
  def workerProps = Props(new Worker)
}

class Worker extends Actor with ActorLogging {
  import Backend._
  context.setReceiveTimeout(500.milliseconds) // idle timeout; value assumed
  override def receive: Receive = {
case SaveFormula(op1,op2) => {
val res = op1 * op2
// saveToDB(op1,op2,res)
log.info(s"******* $op1 X $op2 = $res save to DB by $self *******")
}
case ReceiveTimeout =>
log.info(s"******* $self receive timout! *******")
throw new RuntimeException("Worker idle timeout!")
}
}

This is as plain an actor as they come. We put it under a BackoffSupervisor:

  def superProps: Props = {
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: DBException => SupervisorStrategy.Restart
    }
    val options = Backoff.onFailure(
      childProps = workerProps,
      childName = "worker",
      minBackoff = 1.second,            // backoff/retry values are illustrative
      maxBackoff = 30.seconds,
      randomFactor = 0.20
    ).withAutoReset(resetBackoff = 10.seconds)
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 10.seconds)(
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )
    BackoffSupervisor.props(options)
  }

  def create(port: Int): Unit = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[backend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem", config)
    val backend = system.actorOf(superProps, "backend")
  }

Below is the router that dispatches the tasks, in other words the frontend:

import akka.actor._
import akka.routing._
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
import scala.util._
import akka.cluster._

object Frontend {
  private var _frontend: ActorRef = _

  case class Multiply(op1: Int, op2: Int)

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[frontend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem",config)
    Cluster(system).registerOnMemberUp{
      _frontend = system.actorOf(Props[Frontend],"frontend")
    }
  }
  def getFrontend = _frontend
}

class Frontend extends Actor with ActorLogging {
  import Frontend._
  import Backend._
  import context.dispatcher
  //just look up the routees; the routing strategy takes care of deployment
  val backend = context.actorOf(FromConfig.props(/* Props.empty */),"dispatcher")

  context.system.scheduler.schedule(3.seconds, 3.seconds, self, // interval assumed
    Multiply(Random.nextInt(), Random.nextInt()))

  override def receive: Receive = {
    case Multiply(op1,op2) =>
      backend ! SaveFormula(op1,op2)
    case msg @ _ =>
      log.info(s"******* unrecognized message: $msg! ******")
  }
}

We need to reach the Backend from the Frontend. But the Backend actors, i.e. the routees, were already deployed when each Backend node was created, so here FromConfig.props(Props.empty) is enough to look the routees up; there is no need to deploy them again.
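To see why the lookup works: on every backend node, system.actorOf(superProps, "backend") places the BackoffSupervisor at /user/backend, which is exactly the path listed under routees.paths, and the supervisor forwards routed messages to its "worker" child. A small sketch of the path correspondence (RouteePathsSketch is hypothetical):

import akka.actor.ActorSystem

object RouteePathsSketch extends App {
  val system = ActorSystem("ClusterSystem")
  val supervisorPath = system / "backend"            // matches routees.paths = ["/user/backend"]
  val workerPath     = system / "backend" / "worker" // the worker created by the BackoffSupervisor
  println(s"routee: $supervisorPath, worker: $workerPath")
  system.terminate()
}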

Here is the concrete database-persistence part:

  def superProps: Props = {
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: DBException => SupervisorStrategy.Restart
    }
    val clientSettings: MongoClientSettings = MongoClientSettings.builder()
      .applyToClusterSettings {b =>
        b.hosts(List(new ServerAddress("localhost:27017")).asJava)
      }.build()
    val client: MongoClient = MongoClient(clientSettings)
    val options = Backoff.onFailure(
      childProps = workerProps(client),
      childName = "worker",
      minBackoff = 1.second,            // backoff/retry values are illustrative
      maxBackoff = 30.seconds,
      randomFactor = 0.20
    ).withAutoReset(resetBackoff = 10.seconds)
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 10.seconds)(
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )
    BackoffSupervisor.props(options)
  }

Note that we build the database connection inside superProps. This way, when the Backend is instantiated or restarted for whatever reason, especially on a different JVM, the MongoClient is constructed correctly. The database operation itself follows the standard MongoEngine pattern:

  import monix.execution.Scheduler.Implicits.global
  implicit val mongoClient = client
  val ctx = MGOContext("testdb","mulrecs")

  def saveToDB(op1: Int, op2: Int, by: String) = {
    val doc = Document("by" -> by, "op1" -> op1, "op2" -> op2, "res" -> op1 * op2)
    val cmd = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
    val task = mgoUpdate[Completed](cmd).toTask
    task.runOnComplete {
      case Success(s) => log.info("operations completed successfully.")
      case Failure(exception) => log.error(s"error: ${exception.getMessage}")
    }
  }
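One hedged note on resource lifecycle: superProps creates one MongoClient per Backend node, and nothing in the sample closes it. A minimal sketch of tying the client to the node's shutdown, assuming a variant of create(port) that keeps a reference to it (registerOnTermination is standard ActorSystem API; close() is the mongo-scala-driver shutdown call):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import org.mongodb.scala.MongoClient

object BackendShutdownSketch {
  // Hypothetical variant of Backend.create: release the MongoClient when
  // this node's actor system terminates.
  def createWithCleanup(port: Int, client: MongoClient): Unit = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem", config)
    system.registerOnTermination(client.close())
  }
}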

The database operations run on a separate ExecutionContext.
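The commented-out dbwork-dispatcher line in the Worker hints at the alternative: run the blocking database work on a dedicated Akka dispatcher instead of monix's global scheduler. A sketch, assuming a dispatcher named "dbwork-dispatcher" is defined in application.conf (DbWorkerSketch is hypothetical):

import akka.actor.{Actor, ActorLogging}
import scala.concurrent.{ExecutionContext, Future}

class DbWorkerSketch extends Actor with ActorLogging {
  // Look up the dedicated dispatcher by its config path; work scheduled on it
  // no longer competes with the actor system's default dispatcher threads.
  implicit val dbEc: ExecutionContext =
    context.system.dispatchers.lookup("dbwork-dispatcher")

  def receive = {
    case op: Runnable => Future { op.run() } // blocking work runs on dbwork-dispatcher
  }
}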

Below is the complete source code for this demonstration:

project/scalapb.sbt

addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.18")

libraryDependencies ++= Seq(
"com.thesamet.scalapb" %% "compilerplugin" % "0.7.4"
)

build.sbt

import scalapb.compiler.Version.scalapbVersion
import scalapb.compiler.Version.grpcJavaVersion

name := "cluster-load-balance"

version := "0.1"

scalaVersion := "2.12.8"

scalacOptions += "-Ypartial-unification"

libraryDependencies ++= {
val akkaVersion = "2.5.19"
Seq(
"com.typesafe.akka" %% "akka-actor" % akkaVersion,
"com.typesafe.akka" %% "akka-cluster" % akkaVersion,
"com.typesafe.akka" %% "akka-cluster-metrics" % akkaVersion,
"com.thesamet.scalapb" %% "scalapb-runtime" % scalapbVersion % "protobuf",
"com.thesamet.scalapb" %% "scalapb-runtime-grpc" % scalapbVersion,
//for mongodb 4.0
"org.mongodb.scala" %% "mongo-scala-driver" % "2.4.0",
"com.lightbend.akka" %% "akka-stream-alpakka-mongodb" % "0.20",
//other dependencies
"co.fs2" %% "fs2-core" % "0.9.7",
"ch.qos.logback" % "logback-classic" % "1.2.3",
"org.typelevel" %% "cats-core" % "0.9.0",
"io.monix" %% "monix-execution" % "3.0.0-RC1",
"io.monix" %% "monix-eval" % "3.0.0-RC1"
)
}

PB.targets in Compile := Seq(
scalapb.gen() -> (sourceManaged in Compile).value
)

resources/application.conf

akka {
  actor {
    provider = "cluster"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0  # overridden programmatically in create(port)
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2551",
      "akka.tcp://ClusterSystem@127.0.0.1:2552"]
    # auto-down-unreachable-after = 10s
  }
}

# the member counts below are illustrative
akka.cluster.min-nr-of-members = 3

akka.cluster.role {
  frontend.min-nr-of-members = 1
  backend.min-nr-of-members = 2
}

akka.actor.deployment {
  /frontend/dispatcher = {
    # Router type provided by metrics extension.
    router = cluster-metrics-adaptive-group
    # Router parameter specific for metrics extension.
    # metrics-selector = heap
    # metrics-selector = load
    # metrics-selector = cpu
    metrics-selector = mix
    #
    routees.paths = ["/user/backend"]
    cluster {
      enabled = on
      use-role = backend
      allow-local-routees = off
    }
  }
}

protobuf/sdp.proto

syntax = "proto3";

import "google/protobuf/wrappers.proto";
import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";

option (scalapb.options) = {
  // use a custom Scala package name
  // package_name: "io.ontherocks.introgrpc.demo"

  // don't append file name to package
  flat_package: true

  // generate one Scala file for all messages (services still get their own file)
  single_file: true

  // add imports to generated file
  // useful when extending traits or using custom types
  // import: "io.ontherocks.hellogrpc.RockingMessage"

  // code to put at the top of generated file
  // works only with `single_file: true`
  //preamble: "sealed trait SomeSealedTrait"
};

package sdp.grpc.services;

// field numbers below are assumed sequential; the originals were stripped
message ProtoDate {
  int32 yyyy = 1;
  int32 mm   = 2;
  int32 dd   = 3;
}

message ProtoTime {
  int32 hh  = 1;
  int32 mm  = 2;
  int32 ss  = 3;
  int32 nnn = 4;
}

message ProtoDateTime {
  ProtoDate date = 1;
  ProtoTime time = 2;
}

message ProtoAny {
  bytes value = 1;
}

protobuf/mgo.proto

import "google/protobuf/any.proto";
import "scalapb/scalapb.proto"; option (scalapb.options) = {
// use a custom Scala package name
// package_name: "io.ontherocks.introgrpc.demo" // don't append file name to package
flat_package: true // generate one Scala file for all messages (services still get their own file)
single_file: true // add imports to generated file
// useful when extending traits or using custom types
// import: "io.ontherocks.hellogrpc.RockingMessage" // code to put at the top of generated file
// works only with `single_file: true`
//preamble: "sealed trait SomeSealedTrait"
}; /*
* Demoes various customization options provided by ScalaPBs.
*/ package sdp.grpc.services; import "misc/sdp.proto"; message ProtoMGOBson {
bytes bson = ;
} message ProtoMGODocument {
bytes document = ;
} message ProtoMGOResultOption { //FindObservable
int32 optType = ;
ProtoMGOBson bsonParam = ;
int32 valueParam = ;
} message ProtoMGOAdmin{
string tarName = ;
repeated ProtoMGOBson bsonParam = ;
ProtoAny options = ;
string objName = ;
} message ProtoMGOContext { //MGOContext
string dbName = ;
string collName = ;
int32 commandType = ;
repeated ProtoMGOBson bsonParam = ;
repeated ProtoMGOResultOption resultOptions = ;
repeated string targets = ;
ProtoAny options = ;
repeated ProtoMGODocument documents = ;
google.protobuf.BoolValue only = ;
ProtoMGOAdmin adminOptions = ;
} message ProtoMultiply {
int32 op1 = ;
int32 op2 = ;
}

Backend.scala

import akka.actor._
import com.typesafe.config.ConfigFactory
import akka.pattern._
import scala.concurrent.duration._
import sdp.grpc.services._
import org.mongodb.scala._
import sdp.mongo.engine.MGOClasses._
import sdp.mongo.engine.MGOEngine._
import sdp.result.DBOResult._
import scala.collection.JavaConverters._
import scala.util._

object Backend {
  case class SaveFormula(op1: Int, op2: Int)
  case class SavedToDB(res: Int)
  class DBException(errmsg: String) extends Exception(errmsg)

  def workerProps(client: MongoClient) = Props(new Worker(client))

  def superProps: Props = {
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: DBException => SupervisorStrategy.Restart
    }
    val clientSettings: MongoClientSettings = MongoClientSettings.builder()
      .applyToClusterSettings {b =>
        b.hosts(List(new ServerAddress("localhost:27017")).asJava)
      }.build()
    val client: MongoClient = MongoClient(clientSettings)
    val options = Backoff.onFailure(
      childProps = workerProps(client),
      childName = "worker",
      minBackoff = 1.second,            // backoff/retry values are illustrative
      maxBackoff = 30.seconds,
      randomFactor = 0.20
    ).withAutoReset(resetBackoff = 10.seconds)
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 10.seconds)(
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )
    BackoffSupervisor.props(options)
  }

  def create(port: Int): Unit = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[backend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem", config)
    val backend = system.actorOf(superProps,"backend")
  }
}

class Worker(client: MongoClient) extends Actor with ActorLogging {
  import Backend._
  //use allocated threads for io
  // implicit val executionContext = context.system.dispatchers.lookup("dbwork-dispatcher")
  import monix.execution.Scheduler.Implicits.global
  implicit val mongoClient = client
  val ctx = MGOContext("testdb","mulrecs")

  def saveToDB(op1: Int, op2: Int, by: String) = {
    val doc = Document("by" -> by, "op1" -> op1, "op2" -> op2, "res" -> op1 * op2)
    val cmd = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
    val task = mgoUpdate[Completed](cmd).toTask
    task.runOnComplete {
      case Success(s) => log.info("operations completed successfully.")
      case Failure(exception) => log.error(s"error: ${exception.getMessage}")
    }
  }

  context.setReceiveTimeout(20.seconds) // idle timeout; value assumed

  override def receive: Receive = {
    case ProtoMultiply(op1,op2) => {
      val res = op1 * op2
      saveToDB(op1, op2, s"$self")
      log.info(s"******* $op1 X $op2 = $res save to DB by $self *******")
    }
    case SavedToDB(res) =>
      log.info(s"******* result of ${res} saved to database. *******")
    case ReceiveTimeout =>
      log.info(s"******* $self receive timeout! *******")
      throw new DBException("worker idle timeout!")
  }
}

Frontend.scala

import akka.actor._
import akka.routing._
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
import scala.util._
import akka.cluster._
import sdp.grpc.services._

object Frontend {
  private var _frontend: ActorRef = _

  case class Multiply(op1: Int, op2: Int)

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[frontend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem",config)
    Cluster(system).registerOnMemberUp{
      _frontend = system.actorOf(Props[Frontend],"frontend")
    }
  }
  def getFrontend = _frontend
}

class Frontend extends Actor with ActorLogging {
  import Frontend._
  import Backend._
  import context.dispatcher
  //just look up the routees; the routing strategy takes care of deployment
  val backend = context.actorOf(FromConfig.props(/* Props.empty */),"dispatcher")

  context.system.scheduler.schedule(3.seconds, 3.seconds, self, // interval assumed
    Multiply(Random.nextInt(), Random.nextInt()))

  override def receive: Receive = {
    case Multiply(op1,op2) =>
      backend ! ProtoMultiply(op1,op2)
    case msg @ _ =>
      log.info(s"******* unrecognized message: $msg! ******")
  }
}

LoadBalanceDemo.scala

object LoadBalancingApp extends App {
  //initiate three backend nodes (ports are illustrative; 2551/2552 must match the seed nodes)
  Backend.create(2551)
  Backend.create(2552)
  Backend.create(2553)
  //initiate frontend node
  Frontend.create(2558)
}

converters/BytesConverter.scala

package protobuf.bytes

import java.io.{ByteArrayInputStream,ByteArrayOutputStream,ObjectInputStream,ObjectOutputStream}
import com.google.protobuf.ByteString

object Converter {

  def marshal(value: Any): ByteString = {
    val stream: ByteArrayOutputStream = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(stream)
    oos.writeObject(value)
    oos.close()
    ByteString.copyFrom(stream.toByteArray())
  }

  def unmarshal[A](bytes: ByteString): A = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    val value = ois.readObject()
    ois.close()
    value.asInstanceOf[A]
  }

}

converters/DBOResultType.scala

package sdp.result

import cats._
import cats.data.EitherT
import cats.data.OptionT
import monix.eval.Task
import cats.implicits._

import scala.concurrent._
import scala.collection.TraversableOnce

object DBOResult {

  type DBOError[A] = EitherT[Task,Throwable,A]
  type DBOResult[A] = OptionT[DBOError,A]

  implicit def valueToDBOResult[A](a: A): DBOResult[A] =
    Applicative[DBOResult].pure(a)
  implicit def optionToDBOResult[A](o: Option[A]): DBOResult[A] =
    OptionT((o: Option[A]).pure[DBOError])
  implicit def eitherToDBOResult[A](e: Either[Throwable,A]): DBOResult[A] = {
    // val error: DBOError[A] = EitherT[Task,Throwable, A](Task.eval(e))
    OptionT.liftF(EitherT.fromEither[Task](e))
  }
  implicit def futureToDBOResult[A](fut: Future[A]): DBOResult[A] = {
    val task = Task.fromFuture[A](fut)
    val et = EitherT.liftF[Task,Throwable,A](task)
    OptionT.liftF(et)
  }

  implicit class DBOResultToTask[A](r: DBOResult[A]) {
    def toTask = r.value.value
  }

  implicit class DBOResultToOption[A](r:Either[Throwable,Option[A]]) {
    def someValue: Option[A] = r match {
      case Left(err) => (None: Option[A])
      case Right(oa) => oa
    }
  }

  def wrapCollectionInOption[A, C[_] <: TraversableOnce[_]](coll: C[A]): DBOResult[C[A]] =
    if (coll.isEmpty)
      optionToDBOResult(None: Option[C[A]])
    else
      optionToDBOResult(Some(coll): Option[C[A]])
}

filestream/FileStreaming.scala

package sdp.file

import java.io.{ByteArrayInputStream, InputStream}
import java.nio.ByteBuffer
import java.nio.file.Paths

import akka.stream.Materializer
import akka.stream.scaladsl.{FileIO, StreamConverters}
import akka.util._

import scala.concurrent.Await
import scala.concurrent.duration._

object Streaming {

  def FileToByteBuffer(fileName: String, timeOut: FiniteDuration = 60.seconds)( // default timeout assumed
    implicit mat: Materializer):ByteBuffer = {
    val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
      hd ++ bs
    }
    (Await.result(fut, timeOut)).toByteBuffer
  }

  def FileToByteArray(fileName: String, timeOut: FiniteDuration = 60.seconds)(
    implicit mat: Materializer): Array[Byte] = {
    val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
      hd ++ bs
    }
    (Await.result(fut, timeOut)).toArray
  }

  def FileToInputStream(fileName: String, timeOut: FiniteDuration = 60.seconds)(
    implicit mat: Materializer): InputStream = {
    val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
      hd ++ bs
    }
    val buf = (Await.result(fut, timeOut)).toArray
    new ByteArrayInputStream(buf)
  }

  def ByteBufferToFile(byteBuf: ByteBuffer, fileName: String)(
    implicit mat: Materializer) = {
    val ba = new Array[Byte](byteBuf.remaining())
    byteBuf.get(ba, 0, ba.length)
    val baInput = new ByteArrayInputStream(ba)
    val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
    source.runWith(FileIO.toPath(Paths.get(fileName)))
  }

  def ByteArrayToFile(bytes: Array[Byte], fileName: String)(
    implicit mat: Materializer) = {
    val bb = ByteBuffer.wrap(bytes)
    val baInput = new ByteArrayInputStream(bytes)
    val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
    source.runWith(FileIO.toPath(Paths.get(fileName)))
  }

  def InputStreamToFile(is: InputStream, fileName: String)(
    implicit mat: Materializer) = {
    val source = StreamConverters.fromInputStream(() => is)
    source.runWith(FileIO.toPath(Paths.get(fileName)))
  }

}

logging/Log.scala

package sdp.logging

import org.slf4j.Logger

/**
* Logger which just wraps org.slf4j.Logger internally.
*
* @param logger logger
*/
class Log(logger: Logger) {

  // use var consciously to enable squeezing later
  var isDebugEnabled: Boolean = logger.isDebugEnabled
  var isInfoEnabled: Boolean = logger.isInfoEnabled
  var isWarnEnabled: Boolean = logger.isWarnEnabled
  var isErrorEnabled: Boolean = logger.isErrorEnabled

  def withLevel(level: Symbol)(msg: => String, e: Throwable = null): Unit = {
    level match {
      case 'debug | 'DEBUG => debug(msg)
      case 'info | 'INFO => info(msg)
      case 'warn | 'WARN => warn(msg)
      case 'error | 'ERROR => error(msg)
      case _ => // nothing to do
    }
  }

  def debug(msg: => String): Unit = {
    if (isDebugEnabled && logger.isDebugEnabled) {
      logger.debug(msg)
    }
  }

  def debug(msg: => String, e: Throwable): Unit = {
    if (isDebugEnabled && logger.isDebugEnabled) {
      logger.debug(msg, e)
    }
  }

  def info(msg: => String): Unit = {
    if (isInfoEnabled && logger.isInfoEnabled) {
      logger.info(msg)
    }
  }

  def info(msg: => String, e: Throwable): Unit = {
    if (isInfoEnabled && logger.isInfoEnabled) {
      logger.info(msg, e)
    }
  }

  def warn(msg: => String): Unit = {
    if (isWarnEnabled && logger.isWarnEnabled) {
      logger.warn(msg)
    }
  }

  def warn(msg: => String, e: Throwable): Unit = {
    if (isWarnEnabled && logger.isWarnEnabled) {
      logger.warn(msg, e)
    }
  }

  def error(msg: => String): Unit = {
    if (isErrorEnabled && logger.isErrorEnabled) {
      logger.error(msg)
    }
  }

  def error(msg: => String, e: Throwable): Unit = {
    if (isErrorEnabled && logger.isErrorEnabled) {
      logger.error(msg, e)
    }
  }

}

logging/LogSupport.scala

package sdp.logging

import org.slf4j.LoggerFactory

trait LogSupport {

  /**
   * Logger
   */
  protected val log = new Log(LoggerFactory.getLogger(this.getClass))

}

mgo.engine/MGOProtoConversions.scala

package sdp.mongo.engine
import org.mongodb.scala.bson.collection.immutable.Document
import org.bson.conversions.Bson
import sdp.grpc.services._
import protobuf.bytes.Converter._
import MGOClasses._
import MGOAdmins._
import MGOCommands._
import org.bson.BsonDocument
import org.bson.codecs.configuration.CodecRegistry
import org.mongodb.scala.bson.codecs.DEFAULT_CODEC_REGISTRY
import org.mongodb.scala.FindObservable

object MGOProtoConversion {

  type MGO_COMMAND_TYPE = Int

  // command codes are assumed sequential; the original values were stripped
  val MGO_COMMAND_FIND           = 0
  val MGO_COMMAND_COUNT          = 1
  val MGO_COMMAND_DISTICT        = 2
  val MGO_COMMAND_DOCUMENTSTREAM = 3
  val MGO_COMMAND_AGGREGATE      = 4
  val MGO_COMMAND_INSERT         = 5
  val MGO_COMMAND_DELETE         = 6
  val MGO_COMMAND_REPLACE        = 7
  val MGO_COMMAND_UPDATE         = 8

  val MGO_ADMIN_DROPCOLLECTION   = 90
  val MGO_ADMIN_CREATECOLLECTION = 91
  val MGO_ADMIN_LISTCOLLECTION   = 92
  val MGO_ADMIN_CREATEVIEW       = 93
  val MGO_ADMIN_CREATEINDEX      = 94
  val MGO_ADMIN_DROPINDEXBYNAME  = 95
  val MGO_ADMIN_DROPINDEXBYKEY   = 96
  val MGO_ADMIN_DROPALLINDEXES   = 97

  case class AdminContext(
tarName: String = "",
bsonParam: Seq[Bson] = Nil,
options: Option[Any] = None,
objName: String = ""
){
def toProto = sdp.grpc.services.ProtoMGOAdmin(
tarName = this.tarName,
bsonParam = this.bsonParam.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
objName = this.objName,
      options = this.options.map(b => ProtoAny(marshal(b)))
    )
  }

  object AdminContext {
def fromProto(msg: sdp.grpc.services.ProtoMGOAdmin) = new AdminContext(
tarName = msg.tarName,
bsonParam = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
objName = msg.objName,
options = msg.options.map(b => unmarshal[Any](b.value))
)
  }

  case class Context(
dbName: String = "",
collName: String = "",
commandType: MGO_COMMAND_TYPE,
bsonParam: Seq[Bson] = Nil,
resultOptions: Seq[ResultOptions] = Nil,
options: Option[Any] = None,
documents: Seq[Document] = Nil,
targets: Seq[String] = Nil,
only: Boolean = false,
adminOptions: Option[AdminContext] = None
  ){
    def toProto = new sdp.grpc.services.ProtoMGOContext(
dbName = this.dbName,
collName = this.collName,
commandType = this.commandType,
bsonParam = this.bsonParam.map(bsonToProto),
resultOptions = this.resultOptions.map(_.toProto),
options = { if(this.options == None)
None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else
Some(ProtoAny(marshal(this.options.get))) },
documents = this.documents.map(d => sdp.grpc.services.ProtoMGODocument(marshal(d))),
targets = this.targets,
only = Some(this.only),
adminOptions = this.adminOptions.map(_.toProto)
    )
  }

  object MGODocument {
def fromProto(msg: sdp.grpc.services.ProtoMGODocument): Document =
unmarshal[Document](msg.document)
def toProto(doc: Document): sdp.grpc.services.ProtoMGODocument =
new ProtoMGODocument(marshal(doc))
  }

  object MGOProtoMsg {
def fromProto(msg: sdp.grpc.services.ProtoMGOContext) = new Context(
dbName = msg.dbName,
collName = msg.collName,
commandType = msg.commandType,
bsonParam = msg.bsonParam.map(protoToBson),
resultOptions = msg.resultOptions.map(r => ResultOptions.fromProto(r)),
options = msg.options.map(a => unmarshal[Any](a.value)),
documents = msg.documents.map(doc => unmarshal[Document](doc.document)),
targets = msg.targets,
adminOptions = msg.adminOptions.map(ado => AdminContext.fromProto(ado))
)
  }

  def bsonToProto(bson: Bson) =
    ProtoMGOBson(marshal(bson.toBsonDocument(
      classOf[org.mongodb.scala.bson.collection.immutable.Document],DEFAULT_CODEC_REGISTRY)))

  def protoToBson(proto: ProtoMGOBson): Bson = new Bson {
    val bsdoc = unmarshal[BsonDocument](proto.bson)
    override def toBsonDocument[TDocument](documentClass: Class[TDocument], codecRegistry: CodecRegistry): BsonDocument = bsdoc
  }

  def ctxFromProto(proto: ProtoMGOContext): MGOContext = proto.commandType match {
case MGO_COMMAND_FIND => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Find())
)
def toResultOption(rts: Seq[ProtoMGOResultOption]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => ResultOptions.fromProto(a).toFindObservable(b)) (proto.bsonParam, proto.resultOptions, proto.only) match {
case (Nil, Nil, None) => ctx
case (Nil, Nil, Some(b)) => ctx.setCommand(Find(firstOnly = b))
case (bp,Nil,None) => ctx.setCommand(
Find(filter = Some(protoToBson(bp.head))))
case (bp,Nil,Some(b)) => ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)), firstOnly = b))
case (bp,fo,None) => {
ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)),
andThen = fo.map(ResultOptions.fromProto)
))
}
case (bp,fo,Some(b)) => {
ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)),
andThen = fo.map(ResultOptions.fromProto),
firstOnly = b))
}
case _ => ctx
}
}
case MGO_COMMAND_COUNT => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Count())
)
(proto.bsonParam, proto.options) match {
case (Nil, None) => ctx
case (bp, None) => ctx.setCommand(
Count(filter = Some(protoToBson(bp.head)))
)
case (Nil,Some(o)) => ctx.setCommand(
Count(options = Some(unmarshal[Any](o.value)))
)
case _ => ctx
}
}
case MGO_COMMAND_DISTICT => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Distict(fieldName = proto.targets.head))
)
(proto.bsonParam) match {
case Nil => ctx
case bp: Seq[ProtoMGOBson] => ctx.setCommand(
Distict(fieldName = proto.targets.head,filter = Some(protoToBson(bp.head)))
)
case _ => ctx
}
}
case MGO_COMMAND_AGGREGATE => {
new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Aggregate(proto.bsonParam.map(p => protoToBson(p))))
)
}
case MGO_ADMIN_LISTCOLLECTION => {
new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(ListCollection(proto.dbName)))
}
case MGO_COMMAND_INSERT => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_UPDATE,
action = Some(Insert(
newdocs = proto.documents.map(doc => unmarshal[Document](doc.document))))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(Insert(
newdocs = proto.documents.map(doc => unmarshal[Document](doc.document)),
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_COMMAND_DELETE => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_UPDATE,
action = Some(Delete(
filter = protoToBson(proto.bsonParam.head)))
)
(proto.options, proto.only) match {
case (None,None) => ctx
case (None,Some(b)) => ctx.setCommand(Delete(
filter = protoToBson(proto.bsonParam.head),
onlyOne = b))
case (Some(o),None) => ctx.setCommand(Delete(
filter = protoToBson(proto.bsonParam.head),
options = Some(unmarshal[Any](o.value)))
)
case (Some(o),Some(b)) => ctx.setCommand(Delete(
filter = protoToBson(proto.bsonParam.head),
options = Some(unmarshal[Any](o.value)),
onlyOne = b)
)
}
}
case MGO_COMMAND_REPLACE => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_UPDATE,
action = Some(Replace(
filter = protoToBson(proto.bsonParam.head),
replacement = unmarshal[Document](proto.documents.head.document)))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(Replace(
filter = protoToBson(proto.bsonParam.head),
replacement = unmarshal[Document](proto.documents.head.document),
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_COMMAND_UPDATE => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_UPDATE,
action = Some(Update(
filter = protoToBson(proto.bsonParam.head),
update = protoToBson(proto.bsonParam.tail.head)))
)
(proto.options, proto.only) match {
case (None,None) => ctx
case (None,Some(b)) => ctx.setCommand(Update(
filter = protoToBson(proto.bsonParam.head),
update = protoToBson(proto.bsonParam.tail.head),
onlyOne = b))
case (Some(o),None) => ctx.setCommand(Update(
filter = protoToBson(proto.bsonParam.head),
update = protoToBson(proto.bsonParam.tail.head),
options = Some(unmarshal[Any](o.value)))
)
case (Some(o),Some(b)) => ctx.setCommand(Update(
filter = protoToBson(proto.bsonParam.head),
update = protoToBson(proto.bsonParam.tail.head),
options = Some(unmarshal[Any](o.value)),
onlyOne = b)
)
}
}
case MGO_ADMIN_DROPCOLLECTION =>
new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(DropCollection(proto.collName))
)
case MGO_ADMIN_CREATECOLLECTION => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(CreateCollection(proto.collName))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(CreateCollection(proto.collName,
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_ADMIN_CREATEVIEW => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(CreateView(viewName = proto.targets.head,
viewOn = proto.targets.tail.head,
pipeline = proto.bsonParam.map(p => protoToBson(p))))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(CreateView(viewName = proto.targets.head,
viewOn = proto.targets.tail.head,
pipeline = proto.bsonParam.map(p => protoToBson(p)),
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_ADMIN_CREATEINDEX=> {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(CreateIndex(key = protoToBson(proto.bsonParam.head)))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(CreateIndex(key = protoToBson(proto.bsonParam.head),
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_ADMIN_DROPINDEXBYNAME=> {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(DropIndexByName(indexName = proto.targets.head))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(DropIndexByName(indexName = proto.targets.head,
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_ADMIN_DROPINDEXBYKEY=> {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(DropIndexByKey(key = protoToBson(proto.bsonParam.head)))
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(DropIndexByKey(key = protoToBson(proto.bsonParam.head),
options = Some(unmarshal[Any](o.value)))
)
}
}
case MGO_ADMIN_DROPALLINDEXES=> {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_ADMIN,
action = Some(DropAllIndexes())
)
proto.options match {
case None => ctx
case Some(o) => ctx.setCommand(DropAllIndexes(
options = Some(unmarshal[Any](o.value)))
)
}
    }
  }

  def ctxToProto(ctx: MGOContext): Option[sdp.grpc.services.ProtoMGOContext] = ctx.action match {
case None => None
case Some(act) => act match {
case Count(filter, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_COUNT,
bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
else Seq(bsonToProto(filter.get))},
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
))
case Distict(fieldName, filter) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_DISTICT,
bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
else Seq(bsonToProto(filter.get))},
          targets = Seq(fieldName)
        ))

      case Find(filter, andThen, firstOnly) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_FIND,
bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
else Seq(bsonToProto(filter.get))},
resultOptions = andThen.map(_.toProto)
        ))

      case Aggregate(pipeLine) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_AGGREGATE,
bsonParam = pipeLine.map(bsonToProto)
        ))

      case Insert(newdocs, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_INSERT,
documents = newdocs.map(d => ProtoMGODocument(marshal(d))),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))

      case Delete(filter, options, onlyOne) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_DELETE,
bsonParam = Seq(bsonToProto(filter)),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) },
only = Some(onlyOne)
        ))

      case Replace(filter, replacement, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_REPLACE,
bsonParam = Seq(bsonToProto(filter)),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) },
documents = Seq(ProtoMGODocument(marshal(replacement)))
        ))

      case Update(filter, update, options, onlyOne) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_COMMAND_UPDATE,
bsonParam = Seq(bsonToProto(filter),bsonToProto(update)),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) },
only = Some(onlyOne)
        ))

      case DropCollection(coll) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = coll,
commandType = MGO_ADMIN_DROPCOLLECTION
        ))

      case CreateCollection(coll, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = coll,
commandType = MGO_ADMIN_CREATECOLLECTION,
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))

      case ListCollection(dbName) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
commandType = MGO_ADMIN_LISTCOLLECTION
        ))

      case CreateView(viewName, viewOn, pipeline, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_ADMIN_CREATEVIEW,
bsonParam = pipeline.map(bsonToProto),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) },
targets = Seq(viewName,viewOn)
        ))

      case CreateIndex(key, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_ADMIN_CREATEINDEX,
bsonParam = Seq(bsonToProto(key)),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))

      case DropIndexByName(indexName, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_ADMIN_DROPINDEXBYNAME,
targets = Seq(indexName),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))

      case DropIndexByKey(key, options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_ADMIN_DROPINDEXBYKEY,
bsonParam = Seq(bsonToProto(key)),
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))

      case DropAllIndexes(options) =>
Some(new sdp.grpc.services.ProtoMGOContext(
dbName = ctx.dbName,
collName = ctx.collName,
commandType = MGO_ADMIN_DROPALLINDEXES,
options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else Some(ProtoAny(marshal(options.get))) }
        ))
    }
  }
}

mgo.engine/MongoEngine.scala

package sdp.mongo.engine

import java.text.SimpleDateFormat
import java.util.Calendar

import akka.NotUsed
import akka.stream.Materializer
import akka.stream.alpakka.mongodb.scaladsl._
import akka.stream.scaladsl.{Flow, Source}
import org.bson.conversions.Bson
import org.mongodb.scala.bson.collection.immutable.Document
import org.mongodb.scala.bson.{BsonArray, BsonBinary}
import org.mongodb.scala.model._
import org.mongodb.scala.{MongoClient, _}
import protobuf.bytes.Converter._
import sdp.file.Streaming._
import sdp.logging.LogSupport

import scala.collection.JavaConverters._
import scala.concurrent._
import scala.concurrent.duration._

object MGOClasses {
type MGO_ACTION_TYPE = Int
  // action codes are assumed sequential; the original values were stripped
  val MGO_QUERY  = 0
  val MGO_UPDATE = 1
  val MGO_ADMIN  = 2

  /* org.mongodb.scala.FindObservable
     import com.mongodb.async.client.FindIterable
     val resultDocType = FindIterable[Document]
     val resultOption = FindObservable(resultDocType)
       .maxScan(...)
       .limit(...)
       .sort(...)
       .project(...) */

  type FOD_TYPE = Int
  // option codes are assumed sequential; the original values were stripped
  val FOD_FIRST        = 0  //def first(): SingleObservable[TResult], return the first item
  val FOD_FILTER       = 1  //def filter(filter: Bson): FindObservable[TResult]
  val FOD_LIMIT        = 2  //def limit(limit: Int): FindObservable[TResult]
  val FOD_SKIP         = 3  //def skip(skip: Int): FindObservable[TResult]
  val FOD_PROJECTION   = 4  //def projection(projection: Bson): FindObservable[TResult]
                            //Sets a document describing the fields to return for all matching documents
  val FOD_SORT         = 5  //def sort(sort: Bson): FindObservable[TResult]
  val FOD_PARTIAL      = 6  //def partial(partial: Boolean): FindObservable[TResult]
                            //Get partial results from a sharded cluster if one or more shards are unreachable (instead of throwing an error)
  val FOD_CURSORTYPE   = 7  //def cursorType(cursorType: CursorType): FindObservable[TResult]
                            //Sets the cursor type
  val FOD_HINT         = 8  //def hint(hint: Bson): FindObservable[TResult]
                            //Sets the hint for which index to use. A null value means no hint is set
  val FOD_MAX          = 9  //def max(max: Bson): FindObservable[TResult]
                            //Sets the exclusive upper bound for a specific index. A null value means no max is set
  val FOD_MIN          = 10 //def min(min: Bson): FindObservable[TResult]
                            //Sets the minimum inclusive lower bound for a specific index. A null value means no max is set
  val FOD_RETURNKEY    = 11 //def returnKey(returnKey: Boolean): FindObservable[TResult]
                            //Sets the returnKey. If true the find operation will return only the index keys in the resulting documents
  val FOD_SHOWRECORDID = 12 //def showRecordId(showRecordId: Boolean): FindObservable[TResult]
                            //Sets the showRecordId. Set to true to add a field `\$recordId` to the returned documents

  case class ResultOptions(
    optType: FOD_TYPE,
    bson: Option[Bson] = None,
    value: Int = 0){
def toProto = new sdp.grpc.services.ProtoMGOResultOption(
optType = this.optType,
bsonParam = this.bson.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
valueParam = this.value
)
def toFindObservable: FindObservable[Document] => FindObservable[Document] = find => {
optType match {
case FOD_FIRST => find
case FOD_FILTER => find.filter(bson.get)
case FOD_LIMIT => find.limit(value)
case FOD_SKIP => find.skip(value)
case FOD_PROJECTION => find.projection(bson.get)
case FOD_SORT => find.sort(bson.get)
        case FOD_PARTIAL => find.partial(value != 0)
case FOD_CURSORTYPE => find
case FOD_HINT => find.hint(bson.get)
case FOD_MAX => find.max(bson.get)
case FOD_MIN => find.min(bson.get)
        case FOD_RETURNKEY => find.returnKey(value != 0)
        case FOD_SHOWRECORDID => find.showRecordId(value != 0)
      }
}
}
object ResultOptions {
def fromProto(msg: sdp.grpc.services.ProtoMGOResultOption) = new ResultOptions(
optType = msg.optType,
bson = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
value = msg.valueParam
    )
  }

  trait MGOCommands

  object MGOCommands {
    case class Count(filter: Option[Bson] = None, options: Option[Any] = None) extends MGOCommands
    case class Distict(fieldName: String, filter: Option[Bson] = None) extends MGOCommands
    /* org.mongodb.scala.FindObservable
       import com.mongodb.async.client.FindIterable
       val resultDocType = FindIterable[Document]
       val resultOption = FindObservable(resultDocType)
         .maxScan(...)
         .limit(...)
         .sort(...)
         .project(...) */
    case class Find(filter: Option[Bson] = None,
                    andThen: Seq[ResultOptions] = Seq.empty[ResultOptions],
                    firstOnly: Boolean = false) extends MGOCommands
    case class Aggregate(pipeLine: Seq[Bson]) extends MGOCommands
    case class MapReduce(mapFunction: String, reduceFunction: String) extends MGOCommands
    case class Insert(newdocs: Seq[Document], options: Option[Any] = None) extends MGOCommands
    case class Delete(filter: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
    case class Replace(filter: Bson, replacement: Document, options: Option[Any] = None) extends MGOCommands
    case class Update(filter: Bson, update: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
    case class BulkWrite(commands: List[WriteModel[Document]], options: Option[Any] = None) extends MGOCommands
  }

  object MGOAdmins {
    case class DropCollection(collName: String) extends MGOCommands
    case class CreateCollection(collName: String, options: Option[Any] = None) extends MGOCommands
    case class ListCollection(dbName: String) extends MGOCommands
    case class CreateView(viewName: String, viewOn: String, pipeline: Seq[Bson], options: Option[Any] = None) extends MGOCommands
    case class CreateIndex(key: Bson, options: Option[Any] = None) extends MGOCommands
    case class DropIndexByName(indexName: String, options: Option[Any] = None) extends MGOCommands
    case class DropIndexByKey(key: Bson, options: Option[Any] = None) extends MGOCommands
    case class DropAllIndexes(options: Option[Any] = None) extends MGOCommands
  }

  case class MGOContext(
dbName: String,
collName: String,
actionType: MGO_ACTION_TYPE = MGO_QUERY,
action: Option[MGOCommands] = None,
actionOptions: Option[Any] = None,
actionTargets: Seq[String] = Nil
) {
ctx =>
    def setDbName(name: String): MGOContext = ctx.copy(dbName = name)
    def setCollName(name: String): MGOContext = ctx.copy(collName = name)
    def setActionType(at: MGO_ACTION_TYPE): MGOContext = ctx.copy(actionType = at)
    def setCommand(cmd: MGOCommands): MGOContext = ctx.copy(action = Some(cmd))
    def toSomeProto = MGOProtoConversion.ctxToProto(this)
  }

  object MGOContext {
def apply(db: String, coll: String) = new MGOContext(db, coll)
def fromProto(proto: sdp.grpc.services.ProtoMGOContext): MGOContext =
MGOProtoConversion.ctxFromProto(proto)
  }

  case class MGOBatContext(contexts: Seq[MGOContext], tx: Boolean = false) {
ctxs =>
def setTx(txopt: Boolean): MGOBatContext = ctxs.copy(tx = txopt)
def appendContext(ctx: MGOContext): MGOBatContext =
ctxs.copy(contexts = contexts :+ ctx)
  }

  object MGOBatContext {
def apply(ctxs: Seq[MGOContext], tx: Boolean = false) = new MGOBatContext(ctxs,tx)
  }

  type MGODate = java.util.Date
def mgoDate(yyyy: Int, mm: Int, dd: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd)
ca.getTime()
}
def mgoDateTime(yyyy: Int, mm: Int, dd: Int, hr: Int, min: Int, sec: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd,hr,min,sec)
ca.getTime()
}
def mgoDateTimeNow: MGODate = {
val ca = Calendar.getInstance()
ca.getTime
  }

  def mgoDateToString(dt: MGODate, formatString: String): String = {
val fmt= new SimpleDateFormat(formatString)
fmt.format(dt)
  }

  type MGOBlob = BsonBinary
  type MGOArray = BsonArray

  def fileToMGOBlob(fileName: String, timeOut: FiniteDuration = 60.seconds)( // default timeout assumed
    implicit mat: Materializer) = FileToByteArray(fileName,timeOut)

  def mgoBlobToFile(blob: MGOBlob, fileName: String)(
    implicit mat: Materializer) = ByteArrayToFile(blob.getData,fileName)

  def mgoGetStringOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getString(fieldName))
else None
}
def mgoGetIntOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getInteger(fieldName))
else None
}
def mgoGetLonggOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getLong(fieldName))
else None
}
def mgoGetDoubleOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDouble(fieldName))
else None
}
def mgoGetBoolOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getBoolean(fieldName))
else None
}
def mgoGetDateOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDate(fieldName))
else None
}
def mgoGetBlobOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOBlob]]
else None
}
def mgoGetArrayOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOArray]]
else None
  }

  def mgoArrayToDocumentList(arr: MGOArray): scala.collection.immutable.List[org.bson.BsonDocument] = {
    (arr.getValues.asScala.toList)
      .asInstanceOf[scala.collection.immutable.List[org.bson.BsonDocument]]
  }

  type MGOFilterResult = FindObservable[Document] => FindObservable[Document]
}

object MGOEngine extends LogSupport {

  import MGOClasses._
  import MGOAdmins._
  import MGOCommands._
  import sdp.result.DBOResult._

  object TxUpdateMode {
private def mgoTxUpdate(ctxs: MGOBatContext, observable: SingleObservable[ClientSession])(
implicit client: MongoClient, ec: ExecutionContext): SingleObservable[ClientSession] = {
log.info(s"mgoTxUpdate> calling ...")
observable.map(clientSession => { val transactionOptions =
TransactionOptions.builder()
.readConcern(ReadConcern.SNAPSHOT)
.writeConcern(WriteConcern.MAJORITY).build() clientSession.startTransaction(transactionOptions)
/*
val fut = Future.traverse(ctxs.contexts) { ctx =>
mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
}
Await.ready(fut, 3 seconds) */ ctxs.contexts.foreach { ctx =>
mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
}
clientSession
})
} private def commitAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
log.info(s"commitAndRetry> calling ...")
observable.recoverWith({
case e: MongoException if e.hasErrorLabel(MongoException.UNKNOWN_TRANSACTION_COMMIT_RESULT_LABEL) => {
log.warn("commitAndRetry> UnknownTransactionCommitResult, retrying commit operation ...")
commitAndRetry(observable)
}
case e: Exception => {
log.error(s"commitAndRetry> Exception during commit ...: $e")
throw e
}
})
} private def runTransactionAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
log.info(s"runTransactionAndRetry> calling ...")
observable.recoverWith({
case e: MongoException if e.hasErrorLabel(MongoException.TRANSIENT_TRANSACTION_ERROR_LABEL) => {
log.warn("runTransactionAndRetry> TransientTransactionError, aborting transaction and retrying ...")
runTransactionAndRetry(observable)
}
})
} def mgoTxBatch(ctxs: MGOBatContext)(
implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = { log.info(s"mgoTxBatch> MGOBatContext: ${ctxs}") val updateObservable: Observable[ClientSession] = mgoTxUpdate(ctxs, client.startSession())
val commitTransactionObservable: SingleObservable[Completed] =
updateObservable.flatMap(clientSession => clientSession.commitTransaction())
val commitAndRetryObservable: SingleObservable[Completed] = commitAndRetry(commitTransactionObservable)
runTransactionAndRetry(commitAndRetryObservable)
valueToDBOResult(Completed())
}
}

def mgoUpdateBatch(ctxs: MGOBatContext)(implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = {
log.info(s"mgoUpdateBatch> MGOBatContext: ${ctxs}")
if (ctxs.tx) {
TxUpdateMode.mgoTxBatch(ctxs)
} else {
/*
val fut = Future.traverse(ctxs.contexts) { ctx =>
mgoUpdate[Completed](ctx).map(identity) }
Await.ready(fut, 3 seconds)
Future.successful(new Completed) */
ctxs.contexts.foreach { ctx =>
mgoUpdate[Completed](ctx).map(identity) }
valueToDBOResult(Completed())
}
}
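A hedged usage sketch for mgoUpdateBatch: MGOBatContext and DBOResult come from MGOClasses and sdp.result.DBOResult (defined elsewhere in this library), so their construction is only indicated here, and the connection string is a placeholder:

implicit val client: MongoClient = MongoClient("mongodb://localhost:27017") // hypothetical
implicit val ec = scala.concurrent.ExecutionContext.global

val batch: MGOBatContext = ??? // several update-type MGOContexts; tx = true selects the transactional path
val result: DBOResult[Completed] = mgoUpdateBatch(batch)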
def mongoStream(ctx: MGOContext)(
implicit client: MongoClient, ec: ExecutionContextExecutor): Source[Document, NotUsed] = {
log.info(s"mongoStream> MGOContext: ${ctx}") def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => a.toFindObservable(b))

val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mongoStream> uery action cannot be null!")
throw new IllegalArgumentException("query action cannot be null!")
}
try {
ctx.action.get match {
case Find(None, Nil, false) => //FindObservable
MongoSource(coll.find())
case Find(None, Nil, true) => //FindObservable
MongoSource(coll.find().first())
case Find(Some(filter), Nil, false) => //FindObservable
MongoSource(coll.find(filter))
case Find(Some(filter), Nil, true) => //FindObservable
MongoSource(coll.find(filter).first())
case Find(None, sro, _) => //FindObservable
val next = toResultOption(sro)
MongoSource(next(coll.find[Document]()))
case Find(Some(filter), sro, _) => //FindObservable
val next = toResultOption(sro)
MongoSource(next(coll.find[Document](filter)))
case _ =>
log.error(s"mongoStream> unsupported streaming query [${ctx.action.get}]")
throw new RuntimeException(s"mongoStream> unsupported streaming query [${ctx.action.get}]") }
}
catch { case e: Exception =>
log.error(s"mongoStream> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mongoStream> Error: ${e.getMessage}")
}
}
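mongoStream returns an ordinary akka-stream Source, so it can be run with any Sink. A sketch, assuming a pre-2.6 Akka setup (matching the Materializer used elsewhere in this file) and a hypothetical local MongoDB instance; the MGOContext construction depends on its definition in MGOClasses:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink

implicit val system = ActorSystem("mongo-demo")
implicit val mat = ActorMaterializer()
implicit val ec = system.dispatcher
implicit val client: MongoClient = MongoClient() // hypothetical local instance

val ctx: MGOContext = ??? // dbName/collName plus action = Some(Find(...))
mongoStream(ctx).runWith(Sink.foreach(doc => println(doc.toJson)))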
// T => FindIterable e.g. List[Document]
def mgoQuery[T](ctx: MGOContext, Converter: Option[Document => Any] = None)(implicit client: MongoClient): DBOResult[T] = {
log.info(s"mgoQuery> MGOContext: ${ctx}") val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)

def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => a.toFindObservable(b))

if (ctx.action == None) {
log.error(s"mgoQuery> uery action cannot be null!")
Left(new IllegalArgumentException("query action cannot be null!"))
}
try {
ctx.action.get match {
/* count */
case Count(Some(filter), Some(opt)) => //SingleObservable
coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter), None) => //SingleObservable
coll.countDocuments(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None, None) => //SingleObservable
coll.countDocuments().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field, Some(filter)) => //DistinctObservable
coll.distinct(field, filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field, None) => //DistinctObservable
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None, Nil, false) => //FindObservable
if (Converter == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(None, Nil, true) => //FindObservable
if (Converter == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(Converter.get).head().asInstanceOf[Future[T]]
case Find(Some(filter), Nil, false) => //FindObservable
if (Converter == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), Nil, true) => //FindObservable
if (Converter == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(Converter.get).head().asInstanceOf[Future[T]]
case Find(None, sro, _) => //FindObservable
val next = toResultOption(sro)
if (Converter == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), sro, _) => //FindObservable
val next = toResultOption(sro)
if (Converter == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(Converter.get).toFuture().asInstanceOf[Future[T]]
/* aggregate AggregateObservable*/
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce MapReduceObservable*/
case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
}
}
catch { case e: Exception =>
log.error(s"mgoQuery> runtime error: ${e.getMessage}")
Left(new RuntimeException(s"mgoQuery> Error: ${e.getMessage}"))
}
}
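A hedged sketch of calling mgoQuery; the Find shape (filter, result options, firstOnly) follows the cases above, and MGOContext construction again depends on MGOClasses:

import org.mongodb.scala.model.Filters

val ctx: MGOContext = ??? // action = Some(Find(Some(Filters.equal("name", "john")), Nil, true))
val one: DBOResult[Document] = mgoQuery[Document](ctx)
// or map each document on the way out with a converter:
// mgoQuery[Seq[String]](ctx, Some((d: Document) => d.toJson))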
//T => Completed, result.UpdateResult, result.DeleteResult
def mgoUpdate[T](ctx: MGOContext)(implicit client: MongoClient): DBOResult[T] =
try {
mgoUpdateObservable[T](ctx).toFuture()
}
catch { case e: Exception =>
log.error(s"mgoUpdate> runtime error: ${e.getMessage}")
Left(new RuntimeException(s"mgoUpdate> Error: ${e.getMessage}"))
}
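And the single-action counterpart, a hedged sketch; the Update shape (filter, update, options, onlyOne) follows the cases handled in mgoUpdateObservable below:

import org.mongodb.scala.model.{Filters, Updates}

val ctx: MGOContext = ??? // action = Some(Update(Filters.equal("name", "john"), Updates.set("age", 25), None, true))
val done: DBOResult[Completed] = mgoUpdate[Completed](ctx)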
log.info(s"mgoUpdateObservable> MGOContext: ${ctx}") val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mgoUpdateObservable> uery action cannot be null!")
throw new IllegalArgumentException("mgoUpdateObservable> query action cannot be null!")
}
try {
ctx.action.get match {
/* insert */
case Insert(docs, Some(opt)) => //SingleObservable[Completed]
if (docs.size > 1)
coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).asInstanceOf[SingleObservable[T]]
else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).asInstanceOf[SingleObservable[T]]
case Insert(docs, None) => //SingleObservable
if (docs.size > 1) coll.insertMany(docs).asInstanceOf[SingleObservable[T]]
else coll.insertOne(docs.head).asInstanceOf[SingleObservable[T]]
/* delete */
case Delete(filter, None, onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter).asInstanceOf[SingleObservable[T]]
else coll.deleteMany(filter).asInstanceOf[SingleObservable[T]]
case Delete(filter, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
/* replace */
case Replace(filter, replacement, None) => //SingleObservable
coll.replaceOne(filter, replacement).asInstanceOf[SingleObservable[T]]
case Replace(filter, replacement, Some(opt)) => //SingleObservable
coll.replaceOne(filter, replacement, opt.asInstanceOf[ReplaceOptions]).asInstanceOf[SingleObservable[T]]
/* update */
case Update(filter, update, None, onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update).asInstanceOf[SingleObservable[T]]
else coll.updateMany(filter, update).asInstanceOf[SingleObservable[T]]
case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
/* bulkWrite */
case BulkWrite(commands, None) => //SingleObservable
coll.bulkWrite(commands).asInstanceOf[SingleObservable[T]]
case BulkWrite(commands, Some(opt)) => //SingleObservable
coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).asInstanceOf[SingleObservable[T]]
}
}
catch { case e: Exception =>
log.error(s"mgoUpdateObservable> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mgoUpdateObservable> Error: ${e.getMessage}")
}
}
log.info(s"mgoAdmin> MGOContext: ${ctx}") val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mgoAdmin> uery action cannot be null!")
Left(new IllegalArgumentException("mgoAdmin> query action cannot be null!"))
}
try {
ctx.action.get match {
/* drop collection */
case DropCollection(collName) => //SingleObservable
val coll = db.getCollection(collName)
coll.drop().toFuture()
/* create collection */
case CreateCollection(collName, None) => //SingleObservable
db.createCollection(collName).toFuture()
case CreateCollection(collName, Some(opt)) => //SingleObservable
db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture()
/* list collection
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
*/
/* create view */
case CreateView(viewName, viewOn, pline, None) => //SingleObservable
db.createView(viewName, viewOn, pline).toFuture()
case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture()
/* create index */
case CreateIndex(key, None) => //SingleObservable
coll.createIndex(key).toFuture().asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
case CreateIndex(key, Some(opt)) => //SingleObservable
coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
/* drop index */
case DropIndexByName(indexName, None) => //SingleObservable
coll.dropIndex(indexName).toFuture()
case DropIndexByName(indexName, Some(opt)) => //SingleObservable
coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture()
case DropIndexByKey(key, None) => //SingleObservable
coll.dropIndex(key).toFuture()
case DropIndexByKey(key, Some(opt)) => //SingleObservable
coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture()
case DropAllIndexes(None) => //SingleObservable
coll.dropIndexes().toFuture()
case DropAllIndexes(Some(opt)) => //SingleObservable
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture()
}
}
catch { case e: Exception =>
log.error(s"mgoAdmin> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mgoAdmin> Error: ${e.getMessage}")
}
}
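Administrative actions reuse the same MGOContext shape, so a hedged sketch looks just like the query/update calls:

import org.mongodb.scala.model.Indexes

val ctx: MGOContext = ??? // e.g. action = Some(CreateIndex(Indexes.ascending("name"), None))
val done: DBOResult[Completed] = mgoAdmin(ctx)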
/*
def mgoExecute[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
ctx.action match {
/* count */
case Count(Some(filter), Some(opt)) => //SingleObservable
coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter), None) => //SingleObservable
coll.countDocuments(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None, None) => //SingleObservable
coll.countDocuments().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field, Some(filter)) => //DistinctObservable
coll.distinct(field, filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field, None) => //DistinctObservable
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None, None, optConv, false) => //FindObservable
if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(None, None, optConv, true) => //FindObservable
if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, false) => //FindObservable
if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, true) => //FindObservable
if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(None, Some(next), optConv, _) => //FindObservable
if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), Some(next), optConv, _) => //FindObservable
if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
/* aggregate AggregateObservable*/
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce MapReduceObservable*/
case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
/* insert */
case Insert(docs, Some(opt)) => //SingleObservable[Completed]
if (docs.size > 1) coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).toFuture()
.asInstanceOf[Future[T]]
else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).toFuture()
.asInstanceOf[Future[T]]
case Insert(docs, None) => //SingleObservable
if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
/* delete */
case Delete(filter, None, onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
case Delete(filter, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
/* replace */
case Replace(filter, replacement, None) => //SingleObservable
coll.replaceOne(filter, replacement).toFuture().asInstanceOf[Future[T]]
case Replace(filter, replacement, Some(opt)) => //SingleObservable
coll.replaceOne(filter, replacement, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* update */
case Update(filter, update, None, onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update).toFuture().asInstanceOf[Future[T]]
case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* bulkWrite */
case BulkWrite(commands, None) => //SingleObservable
coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
case BulkWrite(commands, Some(opt)) => //SingleObservable
coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
/* drop collection */
case DropCollection(collName) => //SingleObservable
val coll = db.getCollection(collName)
coll.drop().toFuture().asInstanceOf[Future[T]]
/* create collection */
case CreateCollection(collName, None) => //SingleObservable
db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
case CreateCollection(collName, Some(opt)) => //SingleObservable
db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
/* create view */
case CreateView(viewName, viewOn, pline, None) => //SingleObservable
db.createView(viewName, viewOn, pline).toFuture().asInstanceOf[Future[T]]
case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
/* create index */
case CreateIndex(key, None) => //SingleObservable
coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
case CreateIndex(key, Some(opt)) => //SingleObservable
coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
/* drop index */
case DropIndexByName(indexName, None) => //SingleObservable
coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
case DropIndexByName(indexName, Some(opt)) => //SingleObservable
coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, None) => //SingleObservable
coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, Some(opt)) => //SingleObservable
coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(None) => //SingleObservable
coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(Some(opt)) => //SingleObservable
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
}
}
*/
}

object MongoActionStream {

import MGOClasses._

case class StreamingInsert[A](dbName: String,
collName: String,
converter: A => Document,
parallelism: Int = 1
) extends MGOCommands

case class StreamingDelete[A](dbName: String,
collName: String,
toFilter: A => Bson,
parallelism: Int = 1,
justOne: Boolean = false
) extends MGOCommands

case class StreamingUpdate[A](dbName: String,
collName: String,
toFilter: A => Bson,
toUpdate: A => Bson,
parallelism: Int = 1,
justOne: Boolean = false
) extends MGOCommands

case class InsertAction[A](ctx: StreamingInsert[A])(
implicit mongoClient: MongoClient) {

val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)

def performOnRow(implicit ec: ExecutionContext): Flow[A, Document, NotUsed] =
Flow[A].map(ctx.converter)
.mapAsync(ctx.parallelism)(doc => collection.insertOne(doc).toFuture().map(_ => doc))
}

case class UpdateAction[A](ctx: StreamingUpdate[A])(
implicit mongoClient: MongoClient) {

val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)

def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
if (ctx.justOne) {
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.updateOne(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
} else
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.updateMany(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
}

case class DeleteAction[A](ctx: StreamingDelete[A])(
implicit mongoClient: MongoClient) {

val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)

def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
if (ctx.justOne) {
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.deleteOne(ctx.toFilter(a)).toFuture().map(_ => a))
} else
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.deleteMany(ctx.toFilter(a)).toFuture().map(_ => a))
}
}
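These action flows plug directly into an akka-stream pipeline. A hedged sketch: Person and its Document conversion are hypothetical, Document is assumed to be the Scala driver's immutable Document (the default type returned by getCollection), and the implicit MongoClient/ExecutionContext/Materializer are set up as in the mongoStream example above:

import akka.stream.scaladsl.{Sink, Source}

case class Person(name: String, age: Int)

val insertCtx = StreamingInsert[Person]("testdb", "person",
  p => Document("name" -> p.name, "age" -> p.age))

Source(List(Person("john", 24), Person("peter", 30)))
  .via(InsertAction(insertCtx).performOnRow)
  .runWith(Sink.ignore)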
object MGOHelpers {

implicit class DocumentObservable[C](val observable: Observable[Document]) extends ImplicitObservable[Document] {
override val converter: (Document) => String = (doc) => doc.toJson
}

implicit class GenericObservable[C](val observable: Observable[C]) extends ImplicitObservable[C] {
override val converter: (C) => String = (doc) => doc.toString
}

trait ImplicitObservable[C] {
val observable: Observable[C]
val converter: (C) => String

def results(): Seq[C] = Await.result(observable.toFuture(), 10 seconds)
def headResult() = Await.result(observable.head(), 10 seconds)

def printResults(initial: String = ""): Unit = {
if (initial.length > 0) print(initial)
results().foreach(res => println(converter(res)))
}

def printHeadResult(initial: String = ""): Unit = println(s"${initial}${converter(headResult())}")
}

def getResult[T](fut: Future[T], timeOut: Duration = 1 second): T = {
Await.result(fut, timeOut)
}

def getResults[T](fut: Future[Iterable[T]], timeOut: Duration = 1 second): Iterable[T] = {
Await.result(fut, timeOut)
}

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

final class FutureToTask[A](x: => Future[A]) {
def asTask: Task[A] = Task.deferFuture[A](x)
}

final class TaskToFuture[A](x: => Task[A]) {
def asFuture: Future[A] = x.runAsync
}
}
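Finally, the Future/Task bridges at the bottom of MGOHelpers can be exercised directly; in practice one would also add implicit conversions so that asTask/asFuture read as extension methods. A minimal sketch:

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Future

val t: Task[Int]   = new FutureToTask(Future.successful(42)).asTask
val f: Future[Int] = new TaskToFuture(Task.now(42)).asFuture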
