Distributed computation is a scale-out model. Its core idea is to make full use of the computing resources of every server node in a cluster: CPU, memory, disk, IO bus and so on. A computing job is first split into smaller tasks, and the subdivided tasks are then dispatched to the nodes for execution. The subtasks may depend on each other or run completely independently. With akka-cluster, tasks can be distributed evenly according to each node's resource load, so that resources are used fully and sensibly and overall throughput is maximized. If a job can be split into multiple independent computing tasks, all we need to care about is distributing those subtasks sensibly to balance the load across the cluster nodes. This is essentially a way of dispatching computations that keep no internal state: fire and forget. Because the concrete deployment location of the actor performing a task is decided by the routing algorithm, we generally do not need to address a specific actor or read its internal state; if we do need that, we can still achieve it by embedding the necessary information in messages.
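As a small illustration of the splitting step, before any akka is involved, a job of N independent tasks can be divided round-robin among a fixed number of workers. The names below (`partition`) are illustrative, not from the project:

```scala
// Hypothetical sketch: split a job's independent sub-tasks into one batch
// per worker, assigning task i to worker (i % workers) round-robin style.
def partition[A](tasks: Vector[A], workers: Int): Vector[Vector[A]] =
  tasks.zipWithIndex
    .groupBy { case (_, i) => i % workers } // worker index for each task
    .toVector
    .sortBy(_._1)                           // stable ordering of batches
    .map(_._2.map(_._1))                    // drop the indices again
```

Each batch could then be sent fire-and-forget to one node; akka-cluster's routers automate exactly this kind of assignment, weighted by node load.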

Cluster load balancing is a centralized way of distributing tasks. It is really the router/routees pattern in a cluster environment, except that the router can now send tasks to actors on other servers. Task dispatch is driven by an algorithm, including all the ordinary routing algorithms such as round-robin and random. akka also provides an algorithm based on each node's resource load, enabled in the configuration file:

akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
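Before looking at the adaptive selectors, the simplest of the routing algorithms mentioned above, round-robin, can be sketched in a few lines of plain Scala (illustrative only, not akka's actual implementation):

```scala
// Minimal round-robin routee selector sketch (not akka's real router code).
class RoundRobin[A](routees: Vector[A]) {
  require(routees.nonEmpty, "need at least one routee")
  private var next = 0
  // each call picks the next routee in cyclic order
  def select(): A = {
    val r = routees(next)
    next = (next + 1) % routees.size
    r
  }
}
```

A `random` router would simply replace the cyclic index with `scala.util.Random.nextInt(routees.size)`; the metrics-based selectors instead weight the choice by each node's reported load.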

The following configuration illustrates the basic role of metrics:

akka.actor.deployment {
  /frontend/dispatcher = {
    # Router type provided by metrics extension.
    router = cluster-metrics-adaptive-group
    # Router parameter specific for metrics extension.
    # metrics-selector = heap
    # metrics-selector = load
    # metrics-selector = cpu
    metrics-selector = mix
    #
    routees.paths = ["/user/backend"]
    cluster {
      enabled = on
      use-role = backend
      allow-local-routees = off
    }
  }
}

Here dispatcher acts as the router, and the actors under the /user/backend path are the routees.

Suppose we split a large data-processing job into many independent database operations. To make sure each operation runs safely under all circumstances, including exceptions, we can place the actor performing the operation under a BackoffSupervisor, as follows:

// durations and retry counts below are assumed; the original values were lost in formatting
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(   // or Backoff.onStop
    childProps = workerProps(client),
    childName = "worker",
    minBackoff = 1.second,
    maxBackoff = 10.seconds,
    randomFactor = 0.20
  ).withAutoReset(resetBackoff = 10.seconds)
    .withSupervisorStrategy(
      OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = 10.seconds)(
        decider.orElse(SupervisorStrategy.defaultDecider)
      )
    )
)

A special note is in order on the use cases and effects of Backoff.onFailure and Backoff.onStop, since this part differs somewhat from the official documentation. First of all, neither of them triggers a restart of the child actor in the akka sense; instead, a brand-new instance is created and started. See the output of the test program below:

package my.akka

import akka.actor.{Actor, ActorRef, ActorSystem, PoisonPill, Props}
import akka.pattern.{Backoff, BackoffSupervisor, ask}

import scala.concurrent.Await
import scala.concurrent.duration._

// NOTE: the numeric literals (sleep intervals, backoff durations) were lost
// in formatting; the values used below are assumed, not the originals.
class Child extends Actor {
  println(s"[Child]: created. (path = ${this.self.path}, instance = ${this})")

  override def preStart(): Unit = {
    println(s"[Child]: preStart called. (path = ${this.self.path}, instance = ${this})")
    super.preStart()
  }

  override def postStop(): Unit = {
    println(s"[Child]: postStop called. (path = ${this.self.path}, instance = ${this})")
    super.postStop()
  }

  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    println(s"[Child]: preRestart called with ($reason, $message). (path = ${this.self.path}, instance = ${this})")
    super.preRestart(reason, message)
  }

  override def postRestart(reason: Throwable): Unit = {
    println(s"[Child]: postRestart called with ($reason). (path = ${this.self.path}, instance = ${this})")
    super.postRestart(reason)
  }

  def receive = {
    case "boom" =>
      throw new Exception("kaboom")
    case "get ref" =>
      sender() ! self
    case a: Any =>
      println(s"[Child]: received ${a}")
  }
}

object Child {
  def props: Props = Props(new Child)

  def backOffOnFailureProps: Props =
    BackoffSupervisor.props(
      Backoff.onFailure(
        Child.props,
        childName = "myEcho",
        minBackoff = 1.seconds,
        maxBackoff = 30.seconds,
        randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
      ))

  def backOffOnStopProps: Props =
    BackoffSupervisor.props(
      Backoff.onStop(
        Child.props,
        childName = "myEcho",
        minBackoff = 1.seconds,
        maxBackoff = 30.seconds,
        randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
      ))
}

object BackoffSuperVisorApp {
  def defaultSuperVisorCase(): Unit = {
    println(
      """
        |default ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
       * Let's see if "hello" message is received by the child
       */
      val child = system.actorOf(Child.props, "child")
      Thread.sleep(1000)
      child ! "hello"
      //[Child]: received hello

      /**
       * Now restart the child with an exception within its receive method
       * and see if the `child` ActorRef is still valid (i.e. ActorRef incarnation remains same)
       */
      child ! "boom"
      Thread.sleep(1000)

      child ! "hello after normal exception"
      //[Child]: received hello after normal exception

      /**
       * PoisonPill causes the child actor to `Stop`, different from restart.
       * The ActorRef incarnation gets updated.
       */
      child ! PoisonPill
      Thread.sleep(1000)

      /**
       * This causes delivery to deadLetter, since the "incarnation" of ActorRef `child` became obsolete
       * after child is "Stopped"
       *
       * An incarnation is tied to an ActorRef (NOT to its internal actor instance)
       * and the same incarnation means "you can keep using the same ActorRef"
       */
      child ! "hello after PoisonPill"
      // [akka://app/user/parent/child-1] Message [java.lang.String] without sender to Actor[akka://app/user/child#-767539042]
      // was not delivered. [1] dead letters encountered.

      Thread.sleep(1000)
    }
    finally {
      system.terminate()
      Thread.sleep(1000)
    }
  }

  def backOffOnStopCase(): Unit = {
    println(
      """
        |backoff onStop ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
       * Let's see if "hello" message is forwarded to the child
       * by the backoff supervisor onStop
       */
      implicit val futureTimeout: akka.util.Timeout = 1.second
      val backoffSupervisorActor = system.actorOf(Child.backOffOnStopProps, "child")
      Thread.sleep(1000)

      backoffSupervisorActor ! "hello to backoff supervisor" //forwarded to child
      //[Child]: received hello to backoff supervisor

      /**
       * Now "Restart" the child with an exception from its receive method.
       * As with the default supervisory strategy, the `child` ActorRef remains valid. (i.e. incarnation kept same)
       */
      val child = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      child ! "boom"
      Thread.sleep(3000)

      child ! "hello to child after normal exception"
      //[Child]: received hello to child after normal exception

      /**
       * Backoff Supervisor can still forward the message
       */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after normal exception"
      //[Child]: received hello to backoffSupervisorActor after normal exception

      Thread.sleep(1000)

      /**
       * PoisonPill causes the child actor to `Stop`, different from restart.
       * The `child` ActorRef incarnation gets updated.
       */
      child ! PoisonPill
      Thread.sleep(3000)

      child ! "hello to child ref after PoisonPill"
      //delivered to deadLetters

      /**
       * Backoff Supervisor can forward the message to its child with the new incarnation
       */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after PoisonPill"
      //[Child]: received hello to backoffSupervisorActor after PoisonPill

      Thread.sleep(1000)
    }
    finally {
      system.terminate()
      Thread.sleep(1000)
    }
  }

  def backOffOnFailureCase(): Unit = {
    println(
      """
        |backoff onFailure ---------------------------
      """.stripMargin)

    val system = ActorSystem("app")
    try {
      /**
       * Let's see if "hello" message is forwarded to the child
       * by the backoff supervisor onFailure
       */
      implicit val futureTimeout: akka.util.Timeout = 1.second
      val backoffSupervisorActor = system.actorOf(Child.backOffOnFailureProps, "child")
      Thread.sleep(1000)

      backoffSupervisorActor ! "hello to backoff supervisor" //forwarded to child
      //[Child]: received hello to backoff supervisor

      /**
       * Now "Stop" the child with an exception from its receive method.
       * You'll see the difference between "Restart" and "Stop" from here:
       */
      val child = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      child ! "boom"
      Thread.sleep(3000)

      /**
       * Note that this is after normal exception, not after PoisonPill,
       * but child is completely "Stopped" and its ActorRef "incarnation" became obsolete
       *
       * So, the message to the `child` ActorRef is delivered to deadLetters
       */
      child ! "hello to child after normal exception"
      //causes delivery to deadLetter

      /**
       * Backoff Supervisor can still forward the message to the new child ActorRef incarnation
       */
      backoffSupervisorActor ! "hello to backoffSupervisorActor after normal exception"
      //[Child]: received hello to backoffSupervisorActor after normal exception

      /**
       * You can get a new ActorRef which represents the new incarnation
       */
      val newChildRef = Await.result(backoffSupervisorActor ? "get ref", 1.second).asInstanceOf[ActorRef]
      newChildRef ! "hello to new child ref after normal exception"
      //[Child]: received hello to new child ref after normal exception

      Thread.sleep(1000)

      /**
       * No matter whether the supervisory strategy is default or backoff,
       * PoisonPill causes the actor to "Stop", not "Restart"
       */
      newChildRef ! PoisonPill
      Thread.sleep(3000)

      newChildRef ! "hello to new child ref after PoisonPill"
      //delivered to deadLetters

      Thread.sleep(1000)
    }
    finally {
      system.terminate()
      Thread.sleep(1000)
    }
  }

  def main(args: Array[String]): Unit = {
    defaultSuperVisorCase()
    backOffOnStopCase()
    backOffOnFailureCase()
  }
}

OnStop: does not react to exceptions raised in the child actor; those are handled by the configured SupervisorStrategy. On a normal stop, such as PoisonPill or context.stop, it constructs and starts a new instance.

OnFailure: does not react to a normal stop of the child actor; the child simply stays terminated. When an exception occurs, it constructs and starts a new instance.

Clearly, in most cases we want the computation restarted when an exception occurs, so OnFailure is the right choice.
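For intuition about how the supervisor paces those re-creations, the exponential backoff schedule implied by minBackoff, maxBackoff and randomFactor can be sketched as pure arithmetic (an approximation of BackoffSupervisor's behaviour, not its actual code):

```scala
import scala.concurrent.duration._

// Approximate backoff schedule: the delay doubles with every restart,
// is capped at maxBackoff, and gets up to randomFactor of jitter on top.
// (Sketch only; BackoffSupervisor's real implementation differs in detail.)
def backoffDelay(restartCount: Int,
                 minBackoff: FiniteDuration,
                 maxBackoff: FiniteDuration,
                 randomFactor: Double,
                 rnd: Double = 0.0): FiniteDuration = {
  val factor = math.pow(2, restartCount min 30) // cap the exponent to avoid overflow
  val cappedNanos = math.min(maxBackoff.toNanos.toDouble, minBackoff.toNanos * factor)
  Duration.fromNanos((cappedNanos * (1.0 + rnd * randomFactor)).toLong)
}
```

With minBackoff = 1.second and maxBackoff = 30.seconds the first few delays are 1s, 2s, 4s, 8s, ... until the 30s cap is reached; randomFactor spreads these out so that many failed workers do not all come back at the same instant.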

Below is the sample code from an example I used in an earlier introduction to BackoffSupervisor:

package backoffSupervisorDemo

import akka.actor._
import akka.pattern._
import backoffSupervisorDemo.InnerChild.TestMessage

import scala.concurrent.duration._

object InnerChild {
  case class TestMessage(msg: String)
  class ChildException extends Exception

  def props = Props[InnerChild]
}

class InnerChild extends Actor with ActorLogging {
  import InnerChild._
  override def receive: Receive = {
    case TestMessage(msg) => // simulates the child's work
      log.info(s"Child received message: ${msg}")
  }
}

object Supervisor {
  def props: Props = { // supervision strategy and child actor construction are defined here
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: InnerChild.ChildException => SupervisorStrategy.Restart
    }

    // backoff durations and retry counts are assumed; the originals were lost in formatting
    val options = Backoff.onFailure(InnerChild.props, "innerChild", 1.second, 5.seconds, 0.0)
      .withManualReset
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = 5.seconds)(
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )
    BackoffSupervisor.props(options)
  }
}

// Note: the actor below is the parent of Supervisor, not of InnerChild
object ParentalActor {
  case class SendToSupervisor(msg: InnerChild.TestMessage)
  case class SendToInnerChild(msg: InnerChild.TestMessage)
  case class SendToChildSelection(msg: InnerChild.TestMessage)
  def props = Props[ParentalActor]
}

class ParentalActor extends Actor with ActorLogging {
  import ParentalActor._
  // construct the child actor supervisor here
  val supervisor = context.actorOf(Supervisor.props, "supervisor")
  supervisor ! BackoffSupervisor.GetCurrentChild // ask supervisor for its current child
  var innerChild: Option[ActorRef] = None // ActorRef of the current child, once received
  val selectedChild = context.actorSelection("/user/parent/supervisor/innerChild")
  override def receive: Receive = {
    case BackoffSupervisor.CurrentChild(ref) => // child actor info received
      innerChild = ref
    case SendToSupervisor(msg) => supervisor ! msg
    case SendToChildSelection(msg) => selectedChild ! msg
    case SendToInnerChild(msg) => innerChild foreach (child => child ! msg)
  }
}

object BackoffSupervisorDemo extends App {
  import ParentalActor._
  val testSystem = ActorSystem("testSystem")
  val parent = testSystem.actorOf(ParentalActor.props, "parent")

  Thread.sleep(1000) // wait for BackoffSupervisor.CurrentChild(ref) to be received

  parent ! SendToSupervisor(TestMessage("Hello message 1 to supervisor"))
  parent ! SendToInnerChild(TestMessage("Hello message 2 to innerChild"))
  parent ! SendToChildSelection(TestMessage("Hello message 3 to selectedChild"))

  scala.io.StdIn.readLine()

  testSystem.terminate()
}

Now let's implement an example that performs database operations across a cluster and see how akka-cluster dispatches a series of operations to the nodes. First, the Worker:

import akka.actor._
import scala.concurrent.duration._

object Backend {
  case class SaveFormula(op1: Int, op2: Int)
  def workerProps = Props(new Worker)
}

class Worker extends Actor with ActorLogging {
  import Backend._

  context.setReceiveTimeout(500.milliseconds) // assumed timeout; the original value was lost

  override def receive: Receive = {
    case SaveFormula(op1, op2) => {
      val res = op1 * op2
      // saveToDB(op1, op2, res)
      log.info(s"******* $op1 X $op2 = $res save to DB by $self *******")
    }
    case ReceiveTimeout =>
      log.info(s"******* $self receive timeout! *******")
      throw new RuntimeException("Worker idle timeout!")
  }
}

This is about as ordinary as an actor gets. We place it under a BackoffSupervisor:

def superProps: Props = {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: DBException => SupervisorStrategy.Restart
  }

  // durations and retry counts are assumed; the original values were lost in formatting
  val options = Backoff.onFailure(
    childProps = workerProps,
    childName = "worker",
    minBackoff = 1.second,
    maxBackoff = 10.seconds,
    randomFactor = 0.20
  ).withAutoReset(resetBackoff = 10.seconds)
    .withSupervisorStrategy(
      OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = 10.seconds)(
        decider.orElse(SupervisorStrategy.defaultDecider)
      )
    )

  BackoffSupervisor.props(options)
}

def create(port: Int): Unit = {
  val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
    .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[backend]"))
    .withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)

  val backend = system.actorOf(superProps, "backend")
}

Below is the definition of the router responsible for dispatching tasks, i.e. the frontend:

import akka.actor._
import akka.routing._
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
import scala.util._
import akka.cluster._

object Frontend {
  private var _frontend: ActorRef = _

  case class Multiply(op1: Int, op2: Int)
  def create(port: Int) = {

    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[frontend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem", config)

    Cluster(system).registerOnMemberUp {
      _frontend = system.actorOf(Props[Frontend], "frontend")
    }

  }
  def getFrontend = _frontend
}

class Frontend extends Actor with ActorLogging {
  import Frontend._
  import Backend._
  import context.dispatcher

  // just look up routees; the routing strategy is responsible for deployment
  val backend = context.actorOf(FromConfig.props(/* Props.empty */), "dispatcher")

  // schedule interval and operand bound are assumed; the original values were lost
  context.system.scheduler.schedule(3.seconds, 3.seconds, self,
    Multiply(Random.nextInt(100), Random.nextInt(100)))

  override def receive: Receive = {
    case Multiply(op1, op2) =>
      backend ! SaveFormula(op1, op2)
    case msg @ _ =>
      log.info(s"******* unrecognized message: $msg! ******")
  }
}

We need to reach the Backend from inside the Frontend. However, the Backend actors, i.e. the routees, were already deployed when the Backend nodes were constructed, so here FromConfig.props(Props.empty) is all we need to look up the routees; there is no need to deploy them again.
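As an aside, if we did want the router itself to create and deploy routees on the backend nodes instead of looking up pre-started ones, the metrics extension also offers a pool variant of the same router. A configuration sketch might look like this (the instance counts are assumed values for illustration):

```
akka.actor.deployment {
  /frontend/dispatcher = {
    # pool variant: the router creates and deploys its own routees
    router = cluster-metrics-adaptive-pool
    metrics-selector = mix
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 3
      use-role = backend
      allow-local-routees = off
    }
  }
}
```

In this demo the group variant fits better, because each Backend node constructs its worker under its own BackoffSupervisor.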

Below is the concrete database-persistence version:

def superProps: Props = {
  def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
    case _: DBException => SupervisorStrategy.Restart
  }
  val clientSettings: MongoClientSettings = MongoClientSettings.builder()
    .applyToClusterSettings { b =>
      b.hosts(List(new ServerAddress("localhost:27017")).asJava)
    }.build()

  val client: MongoClient = MongoClient(clientSettings)

  // durations and retry counts are assumed; the original values were lost in formatting
  val options = Backoff.onFailure(
    childProps = workerProps(client),
    childName = "worker",
    minBackoff = 1.second,
    maxBackoff = 10.seconds,
    randomFactor = 0.20
  ).withAutoReset(resetBackoff = 10.seconds)
    .withSupervisorStrategy(
      OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = 10.seconds)(
        decider.orElse(SupervisorStrategy.defaultDecider)
      )
    )

  BackoffSupervisor.props(options)
}

Note that the database connection is established inside superProps. This way the MongoClient is constructed correctly whenever the Backend is instantiated or restarted for any reason, especially when it comes up on a different JVM. The database operations follow the standard MongoEngine pattern:

import monix.execution.Scheduler.Implicits.global
implicit val mongoClient = client
val ctx = MGOContext("testdb", "mulrecs")

def saveToDB(op1: Int, op2: Int, by: String) = {
  val doc = Document("by" -> by, "op1" -> op1, "op2" -> op2, "res" -> op1 * op2)
  val cmd = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
  val task = mgoUpdate[Completed](cmd).toTask
  task.runOnComplete {
    case Success(s) => log.info("operations completed successfully.")
    case Failure(exception) => log.error(s"error: ${exception.getMessage}")
  }
}

The database operations run on a separate ExecutionContext.
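The point of that separation can be shown without any database at all: blocking or long-running work goes on its own thread pool so it cannot starve the actor system's default dispatcher. The names below (`dbExecutionContext`, `saveRecord`) are illustrative, not from the project:

```scala
import java.util.concurrent.{Executors, ThreadFactory}
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Hypothetical sketch: a dedicated thread pool for DB-style work.
val daemonFactory: ThreadFactory = new ThreadFactory {
  def newThread(r: Runnable): Thread = {
    val t = new Thread(r)
    t.setDaemon(true) // let the JVM exit even while the pool is alive
    t
  }
}

val dbExecutionContext: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4, daemonFactory))

// the "DB write" runs on dbExecutionContext, off the caller's thread pool
def saveRecord(op1: Int, op2: Int): Future[Int] =
  Future(op1 * op2)(dbExecutionContext)
```

In the Worker above the same separation comes from monix's global Scheduler (or, as the commented-out line suggests, a `dbwork-dispatcher` looked up from the actor system's configuration).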

Here is the complete source code for this demonstration:

project/scalapb.sbt

addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.18")

libraryDependencies ++= Seq(
  "com.thesamet.scalapb" %% "compilerplugin" % "0.7.4"
)

build.sbt

import scalapb.compiler.Version.scalapbVersion
import scalapb.compiler.Version.grpcJavaVersion

name := "cluster-load-balance"

version := "0.1"

scalaVersion := "2.12.8"

scalacOptions += "-Ypartial-unification"

libraryDependencies ++= {
  val akkaVersion = "2.5.19"
  Seq(
    "com.typesafe.akka" %% "akka-actor" % akkaVersion,
    "com.typesafe.akka" %% "akka-cluster" % akkaVersion,
    "com.typesafe.akka" %% "akka-cluster-metrics" % akkaVersion,
    "com.thesamet.scalapb" %% "scalapb-runtime" % scalapbVersion % "protobuf",
    "com.thesamet.scalapb" %% "scalapb-runtime-grpc" % scalapbVersion,
    //for mongodb 4.0
    "org.mongodb.scala" %% "mongo-scala-driver" % "2.4.0",
    "com.lightbend.akka" %% "akka-stream-alpakka-mongodb" % "0.20",
    //other dependencies
    "co.fs2" %% "fs2-core" % "0.9.7",
    "ch.qos.logback" % "logback-classic" % "1.2.3",
    "org.typelevel" %% "cats-core" % "0.9.0",
    "io.monix" %% "monix-execution" % "3.0.0-RC1",
    "io.monix" %% "monix-eval" % "3.0.0-RC1"
  )
}

PB.targets in Compile := Seq(
  scalapb.gen() -> (sourceManaged in Compile).value
)

resources/application.conf

akka {
  actor {
    provider = "cluster"
  }
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      # 0 picks a random port; assumed value, the original was lost in formatting
      port = 0
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2551",
      "akka.tcp://ClusterSystem@127.0.0.1:2552"]

    # auto-down-unreachable-after = 10s
  }
}

# member counts below are assumed values; the originals were lost in formatting
akka.cluster.min-nr-of-members = 3

akka.cluster.role {
  frontend.min-nr-of-members = 1
  backend.min-nr-of-members = 2
}

akka.actor.deployment {
  /frontend/dispatcher = {
    # Router type provided by metrics extension.
    router = cluster-metrics-adaptive-group
    # Router parameter specific for metrics extension.
    # metrics-selector = heap
    # metrics-selector = load
    # metrics-selector = cpu
    metrics-selector = mix
    #
    routees.paths = ["/user/backend"]
    cluster {
      enabled = on
      use-role = backend
      allow-local-routees = off
    }
  }
}

protobuf/sdp.proto

syntax = "proto3";

import "google/protobuf/wrappers.proto";
import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";

option (scalapb.options) = {
  // use a custom Scala package name
  // package_name: "io.ontherocks.introgrpc.demo"

  // don't append file name to package
  flat_package: true

  // generate one Scala file for all messages (services still get their own file)
  single_file: true

  // add imports to generated file
  // useful when extending traits or using custom types
  // import: "io.ontherocks.hellogrpc.RockingMessage"

  // code to put at the top of generated file
  // works only with `single_file: true`
  //preamble: "sealed trait SomeSealedTrait"
};

package sdp.grpc.services;

// field numbers below are assumed sequential; the originals were lost in formatting
message ProtoDate {
  int32 yyyy = 1;
  int32 mm = 2;
  int32 dd = 3;
}

message ProtoTime {
  int32 hh = 1;
  int32 mm = 2;
  int32 ss = 3;
  int32 nnn = 4;
}

message ProtoDateTime {
  ProtoDate date = 1;
  ProtoTime time = 2;
}

message ProtoAny {
  bytes value = 1;
}

protobuf/mgo.proto

syntax = "proto3";

import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";

option (scalapb.options) = {
  // use a custom Scala package name
  // package_name: "io.ontherocks.introgrpc.demo"

  // don't append file name to package
  flat_package: true

  // generate one Scala file for all messages (services still get their own file)
  single_file: true

  // add imports to generated file
  // useful when extending traits or using custom types
  // import: "io.ontherocks.hellogrpc.RockingMessage"

  // code to put at the top of generated file
  // works only with `single_file: true`
  //preamble: "sealed trait SomeSealedTrait"
};

/*
 * Demoes various customization options provided by ScalaPBs.
 */

package sdp.grpc.services;

import "misc/sdp.proto";

// field numbers below are assumed sequential; the originals were lost in formatting
message ProtoMGOBson {
  bytes bson = 1;
}

message ProtoMGODocument {
  bytes document = 1;
}

message ProtoMGOResultOption { //FindObservable
  int32 optType = 1;
  ProtoMGOBson bsonParam = 2;
  int32 valueParam = 3;
}

message ProtoMGOAdmin {
  string tarName = 1;
  repeated ProtoMGOBson bsonParam = 2;
  ProtoAny options = 3;
  string objName = 4;
}

message ProtoMGOContext { //MGOContext
  string dbName = 1;
  string collName = 2;
  int32 commandType = 3;
  repeated ProtoMGOBson bsonParam = 4;
  repeated ProtoMGOResultOption resultOptions = 5;
  repeated string targets = 6;
  ProtoAny options = 7;
  repeated ProtoMGODocument documents = 8;
  google.protobuf.BoolValue only = 9;
  ProtoMGOAdmin adminOptions = 10;
}

message ProtoMultiply {
  int32 op1 = 1;
  int32 op2 = 2;
}

Backend.scala

import akka.actor._
import com.typesafe.config.ConfigFactory
import akka.pattern._
import scala.concurrent.duration._
import sdp.grpc.services._
import org.mongodb.scala._
import sdp.mongo.engine.MGOClasses._
import sdp.mongo.engine.MGOEngine._
import sdp.result.DBOResult._
import scala.collection.JavaConverters._
import scala.util._

object Backend {
  case class SaveFormula(op1: Int, op2: Int)
  case class SavedToDB(res: Int)
  class DBException(errmsg: String) extends Exception(errmsg)

  def workerProps(client: MongoClient) = Props(new Worker(client))

  def superProps: Props = {
    def decider: PartialFunction[Throwable, SupervisorStrategy.Directive] = {
      case _: DBException => SupervisorStrategy.Restart
    }
    val clientSettings: MongoClientSettings = MongoClientSettings.builder()
      .applyToClusterSettings { b =>
        b.hosts(List(new ServerAddress("localhost:27017")).asJava)
      }.build()

    val client: MongoClient = MongoClient(clientSettings)

    // durations and retry counts are assumed; the original values were lost in formatting
    val options = Backoff.onFailure(
      childProps = workerProps(client),
      childName = "worker",
      minBackoff = 1.second,
      maxBackoff = 10.seconds,
      randomFactor = 0.20
    ).withAutoReset(resetBackoff = 10.seconds)
      .withSupervisorStrategy(
        OneForOneStrategy(maxNrOfRetries = -1, withinTimeRange = 10.seconds)(
          decider.orElse(SupervisorStrategy.defaultDecider)
        )
      )

    BackoffSupervisor.props(options)
  }

  def create(port: Int): Unit = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[backend]"))
      .withFallback(ConfigFactory.load())

    val system = ActorSystem("ClusterSystem", config)

    val backend = system.actorOf(superProps, "backend")
  }

}

class Worker(client: MongoClient) extends Actor with ActorLogging {
  import Backend._
  //use allocated threads for io
  // implicit val executionContext = context.system.dispatchers.lookup("dbwork-dispatcher")
  import monix.execution.Scheduler.Implicits.global
  implicit val mongoClient = client
  val ctx = MGOContext("testdb", "mulrecs")

  def saveToDB(op1: Int, op2: Int, by: String) = {
    val doc = Document("by" -> by, "op1" -> op1, "op2" -> op2, "res" -> op1 * op2)
    val cmd = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
    val task = mgoUpdate[Completed](cmd).toTask
    task.runOnComplete {
      case Success(s) => log.info("operations completed successfully.")
      case Failure(exception) => log.error(s"error: ${exception.getMessage}")
    }
  }

  context.setReceiveTimeout(20.seconds) // assumed timeout; the original value was lost

  override def receive: Receive = {
    case ProtoMultiply(op1, op2) => {
      val res = op1 * op2
      saveToDB(op1, op2, s"$self")

      log.info(s"******* $op1 X $op2 = $res save to DB by $self *******")
    }
    case SavedToDB(res) =>
      log.info(s"******* result of ${res} saved to database. *******")
    case ReceiveTimeout =>
      log.info(s"******* $self receive timeout! *******")
      throw new DBException("worker idle timeout!")
  }
}

Frontend.scala

import akka.actor._
import akka.routing._
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
import scala.util._
import akka.cluster._
import sdp.grpc.services._

object Frontend {
  private var _frontend: ActorRef = _

  case class Multiply(op1: Int, op2: Int)
  def create(port: Int) = {

    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
      .withFallback(ConfigFactory.parseString(s"akka.cluster.roles=[frontend]"))
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("ClusterSystem", config)

    Cluster(system).registerOnMemberUp {
      _frontend = system.actorOf(Props[Frontend], "frontend")
    }

  }
  def getFrontend = _frontend
}

class Frontend extends Actor with ActorLogging {
  import Frontend._
  import Backend._
  import context.dispatcher

  //just lookup routees, routing strategy is responsible for deployment
  val backend = context.actorOf(FromConfig.props(/* Props.empty */), "dispatcher")

  // schedule interval and operand bound are assumed; the original values were lost
  context.system.scheduler.schedule(3.seconds, 3.seconds, self,
    Multiply(Random.nextInt(100), Random.nextInt(100)))

  override def receive: Receive = {
    case Multiply(op1, op2) =>
      backend ! ProtoMultiply(op1, op2)
    case msg @ _ =>
      log.info(s"******* unrecognized message: $msg! ******")
  }
}

LoadBalanceDemo.scala

object LoadBalancingApp extends App {
  // initiate three backend nodes; port numbers are assumed (the originals were lost),
  // 2551 and 2552 matching the seed nodes in application.conf
  Backend.create(2551)
  Backend.create(2552)
  Backend.create(2553)
  // initiate frontend node
  Frontend.create(2554)
}

converters/BytesConverter.scala

package protobuf.bytes

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
import com.google.protobuf.ByteString

object Converter {

  def marshal(value: Any): ByteString = {
    val stream: ByteArrayOutputStream = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(stream)
    oos.writeObject(value)
    oos.close()
    ByteString.copyFrom(stream.toByteArray())
  }

  def unmarshal[A](bytes: ByteString): A = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    val value = ois.readObject()
    ois.close()
    value.asInstanceOf[A]
  }

}

converters/DBOResultType.scala

package sdp.result

import cats._
import cats.data.EitherT
import cats.data.OptionT
import monix.eval.Task
import cats.implicits._

import scala.concurrent._

import scala.collection.TraversableOnce

object DBOResult {

  type DBOError[A] = EitherT[Task, Throwable, A]
  type DBOResult[A] = OptionT[DBOError, A]

  implicit def valueToDBOResult[A](a: A): DBOResult[A] =
    Applicative[DBOResult].pure(a)
  implicit def optionToDBOResult[A](o: Option[A]): DBOResult[A] =
    OptionT((o: Option[A]).pure[DBOError])
  implicit def eitherToDBOResult[A](e: Either[Throwable, A]): DBOResult[A] = {
    // val error: DBOError[A] = EitherT[Task,Throwable, A](Task.eval(e))
    OptionT.liftF(EitherT.fromEither[Task](e))
  }
  implicit def futureToDBOResult[A](fut: Future[A]): DBOResult[A] = {
    val task = Task.fromFuture[A](fut)
    val et = EitherT.liftF[Task, Throwable, A](task)
    OptionT.liftF(et)
  }

  implicit class DBOResultToTask[A](r: DBOResult[A]) {
    def toTask = r.value.value
  }

  implicit class DBOResultToOption[A](r: Either[Throwable, Option[A]]) {
    def someValue: Option[A] = r match {
      case Left(err) => (None: Option[A])
      case Right(oa) => oa
    }
  }

  def wrapCollectionInOption[A, C[_] <: TraversableOnce[_]](coll: C[A]): DBOResult[C[A]] =
    if (coll.isEmpty)
      optionToDBOResult(None: Option[C[A]])
    else
      optionToDBOResult(Some(coll): Option[C[A]])
}

filestream/FileStreaming.scala

  1. package sdp.file
  2.  
  3. import java.io.{ByteArrayInputStream, InputStream}
  4. import java.nio.ByteBuffer
  5. import java.nio.file.Paths
  6.  
  7. import akka.stream.Materializer
  8. import akka.stream.scaladsl.{FileIO, StreamConverters}
  9. import akka.util._
  10.  
  11. import scala.concurrent.Await
  12. import scala.concurrent.duration._
  13.  
  14. object Streaming {
  15. def FileToByteBuffer(fileName: String, timeOut: FiniteDuration = 3.seconds)( // default timeout assumed
  16. implicit mat: Materializer):ByteBuffer = {
  17. val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
  18. hd ++ bs
  19. }
  20. (Await.result(fut, timeOut)).toByteBuffer
  21. }
  22.  
  23. def FileToByteArray(fileName: String, timeOut: FiniteDuration = 3.seconds)(
  24. implicit mat: Materializer): Array[Byte] = {
  25. val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
  26. hd ++ bs
  27. }
  28. (Await.result(fut, timeOut)).toArray
  29. }
  30.  
  31. def FileToInputStream(fileName: String, timeOut: FiniteDuration = 3.seconds)(
  32. implicit mat: Materializer): InputStream = {
  33. val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
  34. hd ++ bs
  35. }
  36. val buf = (Await.result(fut, timeOut)).toArray
  37. new ByteArrayInputStream(buf)
  38. }
  39.  
  40. def ByteBufferToFile(byteBuf: ByteBuffer, fileName: String)(
  41. implicit mat: Materializer) = {
  42. val ba = new Array[Byte](byteBuf.remaining())
  43. byteBuf.get(ba, 0, ba.length)
  44. val baInput = new ByteArrayInputStream(ba)
  45. val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
  46. source.runWith(FileIO.toPath(Paths.get(fileName)))
  47. }
  48.  
  49. def ByteArrayToFile(bytes: Array[Byte], fileName: String)(
  50. implicit mat: Materializer) = {
  51. val bb = ByteBuffer.wrap(bytes)
  52. val baInput = new ByteArrayInputStream(bytes)
  53. val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
  54. source.runWith(FileIO.toPath(Paths.get(fileName)))
  55. }
  56.  
  57. def InputStreamToFile(is: InputStream, fileName: String)(
  58. implicit mat: Materializer) = {
  59. val source = StreamConverters.fromInputStream(() => is)
  60. source.runWith(FileIO.toPath(Paths.get(fileName)))
  61. }
  62.  
  63. }
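`ByteBufferToFile` first drains the buffer into a byte array before wrapping it in a stream source. That drain step can be shown with the JDK alone (`BufferDrainDemo` is an illustrative name):

```scala
import java.nio.ByteBuffer

object BufferDrainDemo {
  def main(args: Array[String]): Unit = {
    val bytes = "hello".getBytes("UTF-8")
    val bb = ByteBuffer.wrap(bytes)
    // Copy all remaining bytes into an array; get(...) advances the buffer position
    val ba = new Array[Byte](bb.remaining())
    bb.get(ba, 0, ba.length)
    assert(new String(ba, "UTF-8") == "hello")
    assert(bb.remaining() == 0) // buffer fully consumed
    println("ok")
  }
}
```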

logging/Log.scala

  1. package sdp.logging
  2.  
  3. import org.slf4j.Logger
  4.  
  5. /**
  6. * Logger which just wraps org.slf4j.Logger internally.
  7. *
  8. * @param logger logger
  9. */
  10. class Log(logger: Logger) {
  11.  
  12. // use var consciously to enable squeezing later
  13. var isDebugEnabled: Boolean = logger.isDebugEnabled
  14. var isInfoEnabled: Boolean = logger.isInfoEnabled
  15. var isWarnEnabled: Boolean = logger.isWarnEnabled
  16. var isErrorEnabled: Boolean = logger.isErrorEnabled
  17.  
  18. def withLevel(level: Symbol)(msg: => String, e: Throwable = null): Unit = {
  19. level match {
  20. case 'debug | 'DEBUG => if (e == null) debug(msg) else debug(msg, e)
  21. case 'info | 'INFO => if (e == null) info(msg) else info(msg, e)
  22. case 'warn | 'WARN => if (e == null) warn(msg) else warn(msg, e)
  23. case 'error | 'ERROR => if (e == null) error(msg) else error(msg, e)
  24. case _ => // nothing to do
  25. }
  26. }
  27.  
  28. def debug(msg: => String): Unit = {
  29. if (isDebugEnabled && logger.isDebugEnabled) {
  30. logger.debug(msg)
  31. }
  32. }
  33.  
  34. def debug(msg: => String, e: Throwable): Unit = {
  35. if (isDebugEnabled && logger.isDebugEnabled) {
  36. logger.debug(msg, e)
  37. }
  38. }
  39.  
  40. def info(msg: => String): Unit = {
  41. if (isInfoEnabled && logger.isInfoEnabled) {
  42. logger.info(msg)
  43. }
  44. }
  45.  
  46. def info(msg: => String, e: Throwable): Unit = {
  47. if (isInfoEnabled && logger.isInfoEnabled) {
  48. logger.info(msg, e)
  49. }
  50. }
  51.  
  52. def warn(msg: => String): Unit = {
  53. if (isWarnEnabled && logger.isWarnEnabled) {
  54. logger.warn(msg)
  55. }
  56. }
  57.  
  58. def warn(msg: => String, e: Throwable): Unit = {
  59. if (isWarnEnabled && logger.isWarnEnabled) {
  60. logger.warn(msg, e)
  61. }
  62. }
  63.  
  64. def error(msg: => String): Unit = {
  65. if (isErrorEnabled && logger.isErrorEnabled) {
  66. logger.error(msg)
  67. }
  68. }
  69.  
  70. def error(msg: => String, e: Throwable): Unit = {
  71. if (isErrorEnabled && logger.isErrorEnabled) {
  72. logger.error(msg, e)
  73. }
  74. }
  75.  
  76. }
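`Log` declares its message parameters as by-name (`msg: => String`), so an expensive message string is never built when the corresponding level is disabled. A minimal stdlib illustration of that design choice (`ByNameDemo` is hypothetical):

```scala
object ByNameDemo {
  var built = 0
  def expensiveMsg: String = { built += 1; "details..." }

  // Mirror of Log.debug: the by-name msg is only evaluated if the level is enabled
  def debug(enabled: Boolean)(msg: => String): Unit =
    if (enabled) println(msg)

  def main(args: Array[String]): Unit = {
    debug(enabled = false)(expensiveMsg)
    assert(built == 0) // message never constructed
    debug(enabled = true)(expensiveMsg)
    assert(built == 1)
  }
}
```

This is why callers can write `log.debug(s"state: $bigObject")` freely: the interpolation cost is only paid when debug logging is actually on.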

logging/LogSupport.scala

  1. package sdp.logging
  2.  
  3. import org.slf4j.LoggerFactory
  4.  
  5. trait LogSupport {
  6.  
  7. /**
  8. * Logger
  9. */
  10. protected val log = new Log(LoggerFactory.getLogger(this.getClass))
  11.  
  12. }

mgo.engine/MGOProtoConversions.scala

  1. package sdp.mongo.engine
  2. import org.mongodb.scala.bson.collection.immutable.Document
  3. import org.bson.conversions.Bson
  4. import sdp.grpc.services._
  5. import protobuf.bytes.Converter._
  6. import MGOClasses._
  7. import MGOAdmins._
  8. import MGOCommands._
  9. import org.bson.BsonDocument
  10. import org.bson.codecs.configuration.CodecRegistry
  11. import org.mongodb.scala.bson.codecs.DEFAULT_CODEC_REGISTRY
  12. import org.mongodb.scala.FindObservable
  13.  
  14. object MGOProtoConversion {
  15.  
  16. type MGO_COMMAND_TYPE = Int
  17. val MGO_COMMAND_FIND = 0 // tag values assumed; only distinctness matters
  18. val MGO_COMMAND_COUNT = 1
  19. val MGO_COMMAND_DISTICT = 2
  20. val MGO_COMMAND_DOCUMENTSTREAM = 3
  21. val MGO_COMMAND_AGGREGATE = 4
  22. val MGO_COMMAND_INSERT = 5
  23. val MGO_COMMAND_DELETE = 6
  24. val MGO_COMMAND_REPLACE = 7
  25. val MGO_COMMAND_UPDATE = 8
  26.  
  27. val MGO_ADMIN_DROPCOLLECTION = 10
  28. val MGO_ADMIN_CREATECOLLECTION = 11
  29. val MGO_ADMIN_LISTCOLLECTION = 12
  30. val MGO_ADMIN_CREATEVIEW = 13
  31. val MGO_ADMIN_CREATEINDEX = 14
  32. val MGO_ADMIN_DROPINDEXBYNAME = 15
  33. val MGO_ADMIN_DROPINDEXBYKEY = 16
  34. val MGO_ADMIN_DROPALLINDEXES = 17
  35.  
  36. case class AdminContext(
  37. tarName: String = "",
  38. bsonParam: Seq[Bson] = Nil,
  39. options: Option[Any] = None,
  40. objName: String = ""
  41. ){
  42. def toProto = sdp.grpc.services.ProtoMGOAdmin(
  43. tarName = this.tarName,
  44. bsonParam = this.bsonParam.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
  45. objName = this.objName,
  46. options = this.options.map(b => ProtoAny(marshal(b)))
  47.  
  48. )
  49. }
  50.  
  51. object AdminContext {
  52. def fromProto(msg: sdp.grpc.services.ProtoMGOAdmin) = new AdminContext(
  53. tarName = msg.tarName,
  54. bsonParam = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
  55. objName = msg.objName,
  56. options = msg.options.map(b => unmarshal[Any](b.value))
  57. )
  58. }
  59.  
  60. case class Context(
  61. dbName: String = "",
  62. collName: String = "",
  63. commandType: MGO_COMMAND_TYPE,
  64. bsonParam: Seq[Bson] = Nil,
  65. resultOptions: Seq[ResultOptions] = Nil,
  66. options: Option[Any] = None,
  67. documents: Seq[Document] = Nil,
  68. targets: Seq[String] = Nil,
  69. only: Boolean = false,
  70. adminOptions: Option[AdminContext] = None
  71. ){
  72.  
  73. def toProto = new sdp.grpc.services.ProtoMGOContext(
  74. dbName = this.dbName,
  75. collName = this.collName,
  76. commandType = this.commandType,
  77. bsonParam = this.bsonParam.map(bsonToProto),
  78. resultOptions = this.resultOptions.map(_.toProto),
  79. options = { if(this.options == None)
  80. None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  81. else
  82. Some(ProtoAny(marshal(this.options.get))) },
  83. documents = this.documents.map(d => sdp.grpc.services.ProtoMGODocument(marshal(d))),
  84. targets = this.targets,
  85. only = Some(this.only),
  86. adminOptions = this.adminOptions.map(_.toProto)
  87. )
  88.  
  89. }
  90.  
  91. object MGODocument {
  92. def fromProto(msg: sdp.grpc.services.ProtoMGODocument): Document =
  93. unmarshal[Document](msg.document)
  94. def toProto(doc: Document): sdp.grpc.services.ProtoMGODocument =
  95. new ProtoMGODocument(marshal(doc))
  96. }
  97.  
  98. object MGOProtoMsg {
  99. def fromProto(msg: sdp.grpc.services.ProtoMGOContext) = new Context(
  100. dbName = msg.dbName,
  101. collName = msg.collName,
  102. commandType = msg.commandType,
  103. bsonParam = msg.bsonParam.map(protoToBson),
  104. resultOptions = msg.resultOptions.map(r => ResultOptions.fromProto(r)),
  105. options = msg.options.map(a => unmarshal[Any](a.value)),
  106. documents = msg.documents.map(doc => unmarshal[Document](doc.document)),
  107. targets = msg.targets,
  108. adminOptions = msg.adminOptions.map(ado => AdminContext.fromProto(ado))
  109. )
  110. }
  111.  
  112. def bsonToProto(bson: Bson) =
  113. ProtoMGOBson(marshal(bson.toBsonDocument(
  114. classOf[org.mongodb.scala.bson.collection.immutable.Document],DEFAULT_CODEC_REGISTRY)))
  115.  
  116. def protoToBson(proto: ProtoMGOBson): Bson = new Bson {
  117. val bsdoc = unmarshal[BsonDocument](proto.bson)
  118. override def toBsonDocument[TDocument](documentClass: Class[TDocument], codecRegistry: CodecRegistry): BsonDocument = bsdoc
  119. }
  120.  
  121. def ctxFromProto(proto: ProtoMGOContext): MGOContext = proto.commandType match {
  122. case MGO_COMMAND_FIND => {
  123. var ctx = new MGOContext(
  124. dbName = proto.dbName,
  125. collName = proto.collName,
  126. actionType = MGO_QUERY,
  127. action = Some(Find())
  128. )
  129. def toResultOption(rts: Seq[ProtoMGOResultOption]): FindObservable[Document] => FindObservable[Document] = findObj =>
  130. rts.foldRight(findObj)((a,b) => ResultOptions.fromProto(a).toFindObservable(b))
  131.  
  132. (proto.bsonParam, proto.resultOptions, proto.only) match {
  133. case (Nil, Nil, None) => ctx
  134. case (Nil, Nil, Some(b)) => ctx.setCommand(Find(firstOnly = b))
  135. case (bp,Nil,None) => ctx.setCommand(
  136. Find(filter = Some(protoToBson(bp.head))))
  137. case (bp,Nil,Some(b)) => ctx.setCommand(
  138. Find(filter = Some(protoToBson(bp.head)), firstOnly = b))
  139. case (bp,fo,None) => {
  140. ctx.setCommand(
  141. Find(filter = Some(protoToBson(bp.head)),
  142. andThen = fo.map(ResultOptions.fromProto)
  143. ))
  144. }
  145. case (bp,fo,Some(b)) => {
  146. ctx.setCommand(
  147. Find(filter = Some(protoToBson(bp.head)),
  148. andThen = fo.map(ResultOptions.fromProto),
  149. firstOnly = b))
  150. }
  151. case _ => ctx
  152. }
  153. }
  154. case MGO_COMMAND_COUNT => {
  155. var ctx = new MGOContext(
  156. dbName = proto.dbName,
  157. collName = proto.collName,
  158. actionType = MGO_QUERY,
  159. action = Some(Count())
  160. )
  161. (proto.bsonParam, proto.options) match {
  162. case (Nil, None) => ctx
  163. case (bp, None) => ctx.setCommand(
  164. Count(filter = Some(protoToBson(bp.head)))
  165. )
  166. case (Nil,Some(o)) => ctx.setCommand(
  167. Count(options = Some(unmarshal[Any](o.value)))
  168. )
  169. case _ => ctx
  170. }
  171. }
  172. case MGO_COMMAND_DISTICT => {
  173. var ctx = new MGOContext(
  174. dbName = proto.dbName,
  175. collName = proto.collName,
  176. actionType = MGO_QUERY,
  177. action = Some(Distict(fieldName = proto.targets.head))
  178. )
  179. (proto.bsonParam) match {
  180. case Nil => ctx
  181. case bp: Seq[ProtoMGOBson] => ctx.setCommand(
  182. Distict(fieldName = proto.targets.head,filter = Some(protoToBson(bp.head)))
  183. )
  184. case _ => ctx
  185. }
  186. }
  187. case MGO_COMMAND_AGGREGATE => {
  188. new MGOContext(
  189. dbName = proto.dbName,
  190. collName = proto.collName,
  191. actionType = MGO_QUERY,
  192. action = Some(Aggregate(proto.bsonParam.map(p => protoToBson(p))))
  193. )
  194. }
  195. case MGO_ADMIN_LISTCOLLECTION => {
  196. new MGOContext(
  197. dbName = proto.dbName,
  198. collName = proto.collName,
  199. actionType = MGO_QUERY,
  200. action = Some(ListCollection(proto.dbName)))
  201. }
  202. case MGO_COMMAND_INSERT => {
  203. var ctx = new MGOContext(
  204. dbName = proto.dbName,
  205. collName = proto.collName,
  206. actionType = MGO_UPDATE,
  207. action = Some(Insert(
  208. newdocs = proto.documents.map(doc => unmarshal[Document](doc.document))))
  209. )
  210. proto.options match {
  211. case None => ctx
  212. case Some(o) => ctx.setCommand(Insert(
  213. newdocs = proto.documents.map(doc => unmarshal[Document](doc.document)),
  214. options = Some(unmarshal[Any](o.value)))
  215. )
  216. }
  217. }
  218. case MGO_COMMAND_DELETE => {
  219. var ctx = new MGOContext(
  220. dbName = proto.dbName,
  221. collName = proto.collName,
  222. actionType = MGO_UPDATE,
  223. action = Some(Delete(
  224. filter = protoToBson(proto.bsonParam.head)))
  225. )
  226. (proto.options, proto.only) match {
  227. case (None,None) => ctx
  228. case (None,Some(b)) => ctx.setCommand(Delete(
  229. filter = protoToBson(proto.bsonParam.head),
  230. onlyOne = b))
  231. case (Some(o),None) => ctx.setCommand(Delete(
  232. filter = protoToBson(proto.bsonParam.head),
  233. options = Some(unmarshal[Any](o.value)))
  234. )
  235. case (Some(o),Some(b)) => ctx.setCommand(Delete(
  236. filter = protoToBson(proto.bsonParam.head),
  237. options = Some(unmarshal[Any](o.value)),
  238. onlyOne = b)
  239. )
  240. }
  241. }
  242. case MGO_COMMAND_REPLACE => {
  243. var ctx = new MGOContext(
  244. dbName = proto.dbName,
  245. collName = proto.collName,
  246. actionType = MGO_UPDATE,
  247. action = Some(Replace(
  248. filter = protoToBson(proto.bsonParam.head),
  249. replacement = unmarshal[Document](proto.documents.head.document)))
  250. )
  251. proto.options match {
  252. case None => ctx
  253. case Some(o) => ctx.setCommand(Replace(
  254. filter = protoToBson(proto.bsonParam.head),
  255. replacement = unmarshal[Document](proto.documents.head.document),
  256. options = Some(unmarshal[Any](o.value)))
  257. )
  258. }
  259. }
  260. case MGO_COMMAND_UPDATE => {
  261. var ctx = new MGOContext(
  262. dbName = proto.dbName,
  263. collName = proto.collName,
  264. actionType = MGO_UPDATE,
  265. action = Some(Update(
  266. filter = protoToBson(proto.bsonParam.head),
  267. update = protoToBson(proto.bsonParam.tail.head)))
  268. )
  269. (proto.options, proto.only) match {
  270. case (None,None) => ctx
  271. case (None,Some(b)) => ctx.setCommand(Update(
  272. filter = protoToBson(proto.bsonParam.head),
  273. update = protoToBson(proto.bsonParam.tail.head),
  274. onlyOne = b))
  275. case (Some(o),None) => ctx.setCommand(Update(
  276. filter = protoToBson(proto.bsonParam.head),
  277. update = protoToBson(proto.bsonParam.tail.head),
  278. options = Some(unmarshal[Any](o.value)))
  279. )
  280. case (Some(o),Some(b)) => ctx.setCommand(Update(
  281. filter = protoToBson(proto.bsonParam.head),
  282. update = protoToBson(proto.bsonParam.tail.head),
  283. options = Some(unmarshal[Any](o.value)),
  284. onlyOne = b)
  285. )
  286. }
  287. }
  288. case MGO_ADMIN_DROPCOLLECTION =>
  289. new MGOContext(
  290. dbName = proto.dbName,
  291. collName = proto.collName,
  292. actionType = MGO_ADMIN,
  293. action = Some(DropCollection(proto.collName))
  294. )
  295. case MGO_ADMIN_CREATECOLLECTION => {
  296. var ctx = new MGOContext(
  297. dbName = proto.dbName,
  298. collName = proto.collName,
  299. actionType = MGO_ADMIN,
  300. action = Some(CreateCollection(proto.collName))
  301. )
  302. proto.options match {
  303. case None => ctx
  304. case Some(o) => ctx.setCommand(CreateCollection(proto.collName,
  305. options = Some(unmarshal[Any](o.value)))
  306. )
  307. }
  308. }
  309. case MGO_ADMIN_CREATEVIEW => {
  310. var ctx = new MGOContext(
  311. dbName = proto.dbName,
  312. collName = proto.collName,
  313. actionType = MGO_ADMIN,
  314. action = Some(CreateView(viewName = proto.targets.head,
  315. viewOn = proto.targets.tail.head,
  316. pipeline = proto.bsonParam.map(p => protoToBson(p))))
  317. )
  318. proto.options match {
  319. case None => ctx
  320. case Some(o) => ctx.setCommand(CreateView(viewName = proto.targets.head,
  321. viewOn = proto.targets.tail.head,
  322. pipeline = proto.bsonParam.map(p => protoToBson(p)),
  323. options = Some(unmarshal[Any](o.value)))
  324. )
  325. }
  326. }
  327. case MGO_ADMIN_CREATEINDEX=> {
  328. var ctx = new MGOContext(
  329. dbName = proto.dbName,
  330. collName = proto.collName,
  331. actionType = MGO_ADMIN,
  332. action = Some(CreateIndex(key = protoToBson(proto.bsonParam.head)))
  333. )
  334. proto.options match {
  335. case None => ctx
  336. case Some(o) => ctx.setCommand(CreateIndex(key = protoToBson(proto.bsonParam.head),
  337. options = Some(unmarshal[Any](o.value)))
  338. )
  339. }
  340. }
  341. case MGO_ADMIN_DROPINDEXBYNAME=> {
  342. var ctx = new MGOContext(
  343. dbName = proto.dbName,
  344. collName = proto.collName,
  345. actionType = MGO_ADMIN,
  346. action = Some(DropIndexByName(indexName = proto.targets.head))
  347. )
  348. proto.options match {
  349. case None => ctx
  350. case Some(o) => ctx.setCommand(DropIndexByName(indexName = proto.targets.head,
  351. options = Some(unmarshal[Any](o.value)))
  352. )
  353. }
  354. }
  355. case MGO_ADMIN_DROPINDEXBYKEY=> {
  356. var ctx = new MGOContext(
  357. dbName = proto.dbName,
  358. collName = proto.collName,
  359. actionType = MGO_ADMIN,
  360. action = Some(DropIndexByKey(key = protoToBson(proto.bsonParam.head)))
  361. )
  362. proto.options match {
  363. case None => ctx
  364. case Some(o) => ctx.setCommand(DropIndexByKey(key = protoToBson(proto.bsonParam.head),
  365. options = Some(unmarshal[Any](o.value)))
  366. )
  367. }
  368. }
  369. case MGO_ADMIN_DROPALLINDEXES=> {
  370. var ctx = new MGOContext(
  371. dbName = proto.dbName,
  372. collName = proto.collName,
  373. actionType = MGO_ADMIN,
  374. action = Some(DropAllIndexes())
  375. )
  376. proto.options match {
  377. case None => ctx
  378. case Some(o) => ctx.setCommand(DropAllIndexes(
  379. options = Some(unmarshal[Any](o.value)))
  380. )
  381. }
  382. }
  383.  
  384. }
  385.  
  386. def ctxToProto(ctx: MGOContext): Option[sdp.grpc.services.ProtoMGOContext] = ctx.action match {
  387. case None => None
  388. case Some(act) => act match {
  389. case Count(filter, options) =>
  390. Some(new sdp.grpc.services.ProtoMGOContext(
  391. dbName = ctx.dbName,
  392. collName = ctx.collName,
  393. commandType = MGO_COMMAND_COUNT,
  394. bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
  395. else Seq(bsonToProto(filter.get))},
  396. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  397. else Some(ProtoAny(marshal(options.get))) }
  398. ))
  399. case Distict(fieldName, filter) =>
  400. Some(new sdp.grpc.services.ProtoMGOContext(
  401. dbName = ctx.dbName,
  402. collName = ctx.collName,
  403. commandType = MGO_COMMAND_DISTICT,
  404. bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
  405. else Seq(bsonToProto(filter.get))},
  406. targets = Seq(fieldName)
  407.  
  408. ))
  409.  
  410. case Find(filter, andThen, firstOnly) =>
  411. Some(new sdp.grpc.services.ProtoMGOContext(
  412. dbName = ctx.dbName,
  413. collName = ctx.collName,
  414. commandType = MGO_COMMAND_FIND,
  415. bsonParam = { if (filter == None) Seq.empty[ProtoMGOBson]
  416. else Seq(bsonToProto(filter.get))},
  417. resultOptions = andThen.map(_.toProto)
  418. ))
  419.  
  420. case Aggregate(pipeLine) =>
  421. Some(new sdp.grpc.services.ProtoMGOContext(
  422. dbName = ctx.dbName,
  423. collName = ctx.collName,
  424. commandType = MGO_COMMAND_AGGREGATE,
  425. bsonParam = pipeLine.map(bsonToProto)
  426. ))
  427.  
  428. case Insert(newdocs, options) =>
  429. Some(new sdp.grpc.services.ProtoMGOContext(
  430. dbName = ctx.dbName,
  431. collName = ctx.collName,
  432. commandType = MGO_COMMAND_INSERT,
  433. documents = newdocs.map(d => ProtoMGODocument(marshal(d))),
  434. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  435. else Some(ProtoAny(marshal(options.get))) }
  436. ))
  437.  
  438. case Delete(filter, options, onlyOne) =>
  439. Some(new sdp.grpc.services.ProtoMGOContext(
  440. dbName = ctx.dbName,
  441. collName = ctx.collName,
  442. commandType = MGO_COMMAND_DELETE,
  443. bsonParam = Seq(bsonToProto(filter)),
  444. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  445. else Some(ProtoAny(marshal(options.get))) },
  446. only = Some(onlyOne)
  447. ))
  448.  
  449. case Replace(filter, replacement, options) =>
  450. Some(new sdp.grpc.services.ProtoMGOContext(
  451. dbName = ctx.dbName,
  452. collName = ctx.collName,
  453. commandType = MGO_COMMAND_REPLACE,
  454. bsonParam = Seq(bsonToProto(filter)),
  455. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  456. else Some(ProtoAny(marshal(options.get))) },
  457. documents = Seq(ProtoMGODocument(marshal(replacement)))
  458. ))
  459.  
  460. case Update(filter, update, options, onlyOne) =>
  461. Some(new sdp.grpc.services.ProtoMGOContext(
  462. dbName = ctx.dbName,
  463. collName = ctx.collName,
  464. commandType = MGO_COMMAND_UPDATE,
  465. bsonParam = Seq(bsonToProto(filter),bsonToProto(update)),
  466. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  467. else Some(ProtoAny(marshal(options.get))) },
  468. only = Some(onlyOne)
  469. ))
  470.  
  471. case DropCollection(coll) =>
  472. Some(new sdp.grpc.services.ProtoMGOContext(
  473. dbName = ctx.dbName,
  474. collName = coll,
  475. commandType = MGO_ADMIN_DROPCOLLECTION
  476. ))
  477.  
  478. case CreateCollection(coll, options) =>
  479. Some(new sdp.grpc.services.ProtoMGOContext(
  480. dbName = ctx.dbName,
  481. collName = coll,
  482. commandType = MGO_ADMIN_CREATECOLLECTION,
  483. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  484. else Some(ProtoAny(marshal(options.get))) }
  485. ))
  486.  
  487. case ListCollection(dbName) =>
  488. Some(new sdp.grpc.services.ProtoMGOContext(
  489. dbName = ctx.dbName,
  490. commandType = MGO_ADMIN_LISTCOLLECTION
  491. ))
  492.  
  493. case CreateView(viewName, viewOn, pipeline, options) =>
  494. Some(new sdp.grpc.services.ProtoMGOContext(
  495. dbName = ctx.dbName,
  496. collName = ctx.collName,
  497. commandType = MGO_ADMIN_CREATEVIEW,
  498. bsonParam = pipeline.map(bsonToProto),
  499. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  500. else Some(ProtoAny(marshal(options.get))) },
  501. targets = Seq(viewName,viewOn)
  502. ))
  503.  
  504. case CreateIndex(key, options) =>
  505. Some(new sdp.grpc.services.ProtoMGOContext(
  506. dbName = ctx.dbName,
  507. collName = ctx.collName,
  508. commandType = MGO_ADMIN_CREATEINDEX,
  509. bsonParam = Seq(bsonToProto(key)),
  510. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  511. else Some(ProtoAny(marshal(options.get))) }
  512. ))
  513.  
  514. case DropIndexByName(indexName, options) =>
  515. Some(new sdp.grpc.services.ProtoMGOContext(
  516. dbName = ctx.dbName,
  517. collName = ctx.collName,
  518. commandType = MGO_ADMIN_DROPINDEXBYNAME,
  519. targets = Seq(indexName),
  520. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  521. else Some(ProtoAny(marshal(options.get))) }
  522. ))
  523.  
  524. case DropIndexByKey(key, options) =>
  525. Some(new sdp.grpc.services.ProtoMGOContext(
  526. dbName = ctx.dbName,
  527. collName = ctx.collName,
  528. commandType = MGO_ADMIN_DROPINDEXBYKEY,
  529. bsonParam = Seq(bsonToProto(key)),
  530. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  531. else Some(ProtoAny(marshal(options.get))) }
  532. ))
  533.  
  534. case DropAllIndexes(options) =>
  535. Some(new sdp.grpc.services.ProtoMGOContext(
  536. dbName = ctx.dbName,
  537. collName = ctx.collName,
  538. commandType = MGO_ADMIN_DROPALLINDEXES,
  539. options = { if(options == None) None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
  540. else Some(ProtoAny(marshal(options.get))) }
  541. ))
  542.  
  543. }
  544. }
  545.  
  546. }
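In `ctxFromProto`, the `toResultOption` helper chains result-option transforms over a `FindObservable` with `foldRight`. The same composition pattern can be demonstrated on plain functions (`FoldComposeDemo` is an illustrative name):

```scala
object FoldComposeDemo {
  def main(args: Array[String]): Unit = {
    // Transforms to apply, analogous to a Seq[ProtoMGOResultOption]
    val transforms: Seq[Int => Int] = Seq(_ + 1, _ * 10)
    // foldRight threads the initial value through from the right:
    // the last transform is applied first, the first transform last
    val result = transforms.foldRight(5)((f, acc) => f(acc))
    assert(result == (5 * 10) + 1) // 51
    println("ok")
  }
}
```

Swapping `foldRight` for `foldLeft` would reverse the application order, which matters when options like `skip` and `limit` are combined.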

mgo.engine/MongoEngine.scala

  1. package sdp.mongo.engine
  2.  
  3. import java.text.SimpleDateFormat
  4. import java.util.Calendar
  5.  
  6. import akka.NotUsed
  7. import akka.stream.Materializer
  8. import akka.stream.alpakka.mongodb.scaladsl._
  9. import akka.stream.scaladsl.{Flow, Source}
  10. import org.bson.conversions.Bson
  11. import org.mongodb.scala.bson.collection.immutable.Document
  12. import org.mongodb.scala.bson.{BsonArray, BsonBinary}
  13. import org.mongodb.scala.model._
  14. import org.mongodb.scala.{MongoClient, _}
  15. import protobuf.bytes.Converter._
  16. import sdp.file.Streaming._
  17. import sdp.logging.LogSupport
  18.  
  19. import scala.collection.JavaConverters._
  20. import scala.concurrent._
  21. import scala.concurrent.duration._
  22.  
  23. object MGOClasses {
  24. type MGO_ACTION_TYPE = Int
  25. val MGO_QUERY = 0 // action-type tags; values assumed, only distinctness matters
  26. val MGO_UPDATE = 1
  27. val MGO_ADMIN = 2
  28.  
  29. /* org.mongodb.scala.FindObservable
  30. import com.mongodb.async.client.FindIterable
  31. val resultDocType = FindIterable[Document]
  32. val resultOption = FindObservable(resultDocType)
  33. .maxScan(...)
  34. .limit(...)
  35. .sort(...)
  36. .project(...) */
  37.  
  38. type FOD_TYPE = Int
  39. val FOD_FIRST = 0 //def first(): SingleObservable[TResult], return the first item (tag values assumed)
  40. val FOD_FILTER = 1 //def filter(filter: Bson): FindObservable[TResult]
  41. val FOD_LIMIT = 2 //def limit(limit: Int): FindObservable[TResult]
  42. val FOD_SKIP = 3 //def skip(skip: Int): FindObservable[TResult]
  43. val FOD_PROJECTION = 4 //def projection(projection: Bson): FindObservable[TResult]
  44. //Sets a document describing the fields to return for all matching documents
  45. val FOD_SORT = 5 //def sort(sort: Bson): FindObservable[TResult]
  46. val FOD_PARTIAL = 6 //def partial(partial: Boolean): FindObservable[TResult]
  47. //Get partial results from a sharded cluster if one or more shards are unreachable (instead of throwing an error)
  48. val FOD_CURSORTYPE = 7 //def cursorType(cursorType: CursorType): FindObservable[TResult]
  49. //Sets the cursor type
  50. val FOD_HINT = 8 //def hint(hint: Bson): FindObservable[TResult]
  51. //Sets the hint for which index to use. A null value means no hint is set
  52. val FOD_MAX = 9 //def max(max: Bson): FindObservable[TResult]
  53. //Sets the exclusive upper bound for a specific index. A null value means no max is set
  54. val FOD_MIN = 10 //def min(min: Bson): FindObservable[TResult]
  55. //Sets the minimum inclusive lower bound for a specific index. A null value means no min is set
  56. val FOD_RETURNKEY = 11 //def returnKey(returnKey: Boolean): FindObservable[TResult]
  57. //Sets the returnKey. If true the find operation will return only the index keys in the resulting documents
  58. val FOD_SHOWRECORDID = 12 //def showRecordId(showRecordId: Boolean): FindObservable[TResult]
  59. //Sets the showRecordId. Set to true to add a field `\$recordId` to the returned documents
  60.  
  61. case class ResultOptions(
  62. optType: FOD_TYPE,
  63. bson: Option[Bson] = None,
  64. value: Int = 0){
  65. def toProto = new sdp.grpc.services.ProtoMGOResultOption(
  66. optType = this.optType,
  67. bsonParam = this.bson.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
  68. valueParam = this.value
  69. )
  70. def toFindObservable: FindObservable[Document] => FindObservable[Document] = find => {
  71. optType match {
  72. case FOD_FIRST => find
  73. case FOD_FILTER => find.filter(bson.get)
  74. case FOD_LIMIT => find.limit(value)
  75. case FOD_SKIP => find.skip(value)
  76. case FOD_PROJECTION => find.projection(bson.get)
  77. case FOD_SORT => find.sort(bson.get)
  78. case FOD_PARTIAL => find.partial(value != 0)
  79. case FOD_CURSORTYPE => find
  80. case FOD_HINT => find.hint(bson.get)
  81. case FOD_MAX => find.max(bson.get)
  82. case FOD_MIN => find.min(bson.get)
  83. case FOD_RETURNKEY => find.returnKey(value != 0)
  84. case FOD_SHOWRECORDID => find.showRecordId(value != 0)
  85.  
  86. }
  87. }
  88. }
  89. object ResultOptions {
  90. def fromProto(msg: sdp.grpc.services.ProtoMGOResultOption) = new ResultOptions(
  91. optType = msg.optType,
  92. bson = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
  93. value = msg.valueParam
  94. )
  95.  
  96. }
  97.  
  98. trait MGOCommands
  99.  
  100. object MGOCommands {
  101.  
  102. case class Count(filter: Option[Bson] = None, options: Option[Any] = None) extends MGOCommands
  103.  
  104. case class Distict(fieldName: String, filter: Option[Bson] = None) extends MGOCommands
  105.  
  106. /* org.mongodb.scala.FindObservable
  107. import com.mongodb.async.client.FindIterable
  108. val resultDocType = FindIterable[Document]
  109. val resultOption = FindObservable(resultDocType)
  110. .maxScan(...)
  111. .limit(...)
  112. .sort(...)
  113. .project(...) */
  114. case class Find(filter: Option[Bson] = None,
  115. andThen: Seq[ResultOptions] = Seq.empty[ResultOptions],
  116. firstOnly: Boolean = false) extends MGOCommands
  117.  
  118. case class Aggregate(pipeLine: Seq[Bson]) extends MGOCommands
  119.  
  120. case class MapReduce(mapFunction: String, reduceFunction: String) extends MGOCommands
  121.  
  122. case class Insert(newdocs: Seq[Document], options: Option[Any] = None) extends MGOCommands
  123.  
  124. case class Delete(filter: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
  125.  
  126. case class Replace(filter: Bson, replacement: Document, options: Option[Any] = None) extends MGOCommands
  127.  
  128. case class Update(filter: Bson, update: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
  129.  
  130. case class BulkWrite(commands: List[WriteModel[Document]], options: Option[Any] = None) extends MGOCommands
  131.  
  132. }
  133.  
  134. object MGOAdmins {
  135.  
  136. case class DropCollection(collName: String) extends MGOCommands
  137.  
  138. case class CreateCollection(collName: String, options: Option[Any] = None) extends MGOCommands
  139.  
  140. case class ListCollection(dbName: String) extends MGOCommands
  141.  
  142. case class CreateView(viewName: String, viewOn: String, pipeline: Seq[Bson], options: Option[Any] = None) extends MGOCommands
  143.  
  144. case class CreateIndex(key: Bson, options: Option[Any] = None) extends MGOCommands
  145.  
  146. case class DropIndexByName(indexName: String, options: Option[Any] = None) extends MGOCommands
  147.  
  148. case class DropIndexByKey(key: Bson, options: Option[Any] = None) extends MGOCommands
  149.  
  150. case class DropAllIndexes(options: Option[Any] = None) extends MGOCommands
  151.  
  152. }
  153.  
  154. case class MGOContext(
  155. dbName: String,
  156. collName: String,
  157. actionType: MGO_ACTION_TYPE = MGO_QUERY,
  158. action: Option[MGOCommands] = None,
  159. actionOptions: Option[Any] = None,
  160. actionTargets: Seq[String] = Nil
  161. ) {
  162. ctx =>
  163. def setDbName(name: String): MGOContext = ctx.copy(dbName = name)
  164.  
  165. def setCollName(name: String): MGOContext = ctx.copy(collName = name)
  166.  
  167. def setActionType(at: MGO_ACTION_TYPE): MGOContext = ctx.copy(actionType = at)
  168.  
  169. def setCommand(cmd: MGOCommands): MGOContext = ctx.copy(action = Some(cmd))
  170.  
  171. def toSomeProto = MGOProtoConversion.ctxToProto(this)
  172.  
  173. }
  174.  
  175. object MGOContext {
  176. def apply(db: String, coll: String) = new MGOContext(db, coll)
  177. def fromProto(proto: sdp.grpc.services.ProtoMGOContext): MGOContext =
  178. MGOProtoConversion.ctxFromProto(proto)
  179. }
  180.  
  181. case class MGOBatContext(contexts: Seq[MGOContext], tx: Boolean = false) {
  182. ctxs =>
  183. def setTx(txopt: Boolean): MGOBatContext = ctxs.copy(tx = txopt)
  184. def appendContext(ctx: MGOContext): MGOBatContext =
  185. ctxs.copy(contexts = contexts :+ ctx)
  186. }
  187.  
  188. object MGOBatContext {
  189. def apply(ctxs: Seq[MGOContext], tx: Boolean = false) = new MGOBatContext(ctxs,tx)
  190. }
  191.  
  192. type MGODate = java.util.Date
  193. def mgoDate(yyyy: Int, mm: Int, dd: Int): MGODate = {
  194. val ca = Calendar.getInstance()
  195. ca.set(yyyy,mm,dd)
  196. ca.getTime()
  197. }
  198. def mgoDateTime(yyyy: Int, mm: Int, dd: Int, hr: Int, min: Int, sec: Int): MGODate = {
  199. val ca = Calendar.getInstance()
  200. ca.set(yyyy,mm,dd,hr,min,sec)
  201. ca.getTime()
  202. }
  203. def mgoDateTimeNow: MGODate = {
  204. val ca = Calendar.getInstance()
  205. ca.getTime
  206. }
  207.  
  208. def mgoDateToString(dt: MGODate, formatString: String): String = {
  209. val fmt= new SimpleDateFormat(formatString)
  210. fmt.format(dt)
  211. }
  212.  
  213. type MGOBlob = BsonBinary
  214. type MGOArray = BsonArray
  215.  
  216. def fileToMGOBlob(fileName: String, timeOut: FiniteDuration = 10 seconds)(
  217. implicit mat: Materializer) = FileToByteArray(fileName,timeOut)
  218.  
  219. def mgoBlobToFile(blob: MGOBlob, fileName: String)(
  220. implicit mat: Materializer) = ByteArrayToFile(blob.getData,fileName)
  221.  
  222. def mgoGetStringOrNone(doc: Document, fieldName: String) = {
  223. if (doc.keySet.contains(fieldName))
  224. Some(doc.getString(fieldName))
  225. else None
  226. }
  227. def mgoGetIntOrNone(doc: Document, fieldName: String) = {
  228. if (doc.keySet.contains(fieldName))
  229. Some(doc.getInteger(fieldName))
  230. else None
  231. }
  232. def mgoGetLonggOrNone(doc: Document, fieldName: String) = {
  233. if (doc.keySet.contains(fieldName))
  234. Some(doc.getLong(fieldName))
  235. else None
  236. }
  237. def mgoGetDoubleOrNone(doc: Document, fieldName: String) = {
  238. if (doc.keySet.contains(fieldName))
  239. Some(doc.getDouble(fieldName))
  240. else None
  241. }
  242. def mgoGetBoolOrNone(doc: Document, fieldName: String) = {
  243. if (doc.keySet.contains(fieldName))
  244. Some(doc.getBoolean(fieldName))
  245. else None
  246. }
  247. def mgoGetDateOrNone(doc: Document, fieldName: String) = {
  248. if (doc.keySet.contains(fieldName))
  249. Some(doc.getDate(fieldName))
  250. else None
  251. }
  252. def mgoGetBlobOrNone(doc: Document, fieldName: String) = {
  253. if (doc.keySet.contains(fieldName))
  254. doc.get(fieldName).asInstanceOf[Option[MGOBlob]]
  255. else None
  256. }
  257. def mgoGetArrayOrNone(doc: Document, fieldName: String) = {
  258. if (doc.keySet.contains(fieldName))
  259. doc.get(fieldName).asInstanceOf[Option[MGOArray]]
  260. else None
  261. }
  262.  
  263. def mgoArrayToDocumentList(arr: MGOArray): scala.collection.immutable.List[org.bson.BsonDocument] = {
  264. (arr.getValues.asScala.toList)
  265. .asInstanceOf[scala.collection.immutable.List[org.bson.BsonDocument]]
  266. }
  267.  
  268. type MGOFilterResult = FindObservable[Document] => FindObservable[Document]
  269. }
  270.  
  271. object MGOEngine extends LogSupport {
  272.  
  273. import MGOClasses._
  274. import MGOAdmins._
  275. import MGOCommands._
  276. import sdp.result.DBOResult._
  277.  
  278. object TxUpdateMode {
  279. private def mgoTxUpdate(ctxs: MGOBatContext, observable: SingleObservable[ClientSession])(
  280. implicit client: MongoClient, ec: ExecutionContext): SingleObservable[ClientSession] = {
  281. log.info(s"mgoTxUpdate> calling ...")
  282. observable.map(clientSession => {
  283.  
  284. val transactionOptions =
  285. TransactionOptions.builder()
  286. .readConcern(ReadConcern.SNAPSHOT)
  287. .writeConcern(WriteConcern.MAJORITY).build()
  288.  
  289. clientSession.startTransaction(transactionOptions)
  290. /*
  291. val fut = Future.traverse(ctxs.contexts) { ctx =>
  292. mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
  293. }
  294. Await.ready(fut, 3 seconds) */
  295.  
  296. ctxs.contexts.foreach { ctx =>
  297. mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
  298. }
  299. clientSession
  300. })
  301. }
  302.  
  303. private def commitAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
  304. log.info(s"commitAndRetry> calling ...")
  305. observable.recoverWith({
  306. case e: MongoException if e.hasErrorLabel(MongoException.UNKNOWN_TRANSACTION_COMMIT_RESULT_LABEL) => {
  307. log.warn("commitAndRetry> UnknownTransactionCommitResult, retrying commit operation ...")
  308. commitAndRetry(observable)
  309. }
  310. case e: Exception => {
  311. log.error(s"commitAndRetry> Exception during commit ...: $e")
  312. throw e
  313. }
  314. })
  315. }
  316.  
  317. private def runTransactionAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
  318. log.info(s"runTransactionAndRetry> calling ...")
  319. observable.recoverWith({
  320. case e: MongoException if e.hasErrorLabel(MongoException.TRANSIENT_TRANSACTION_ERROR_LABEL) => {
  321. log.warn("runTransactionAndRetry> TransientTransactionError, aborting transaction and retrying ...")
  322. runTransactionAndRetry(observable)
  323. }
  324. })
  325. }
  326.  
  327. def mgoTxBatch(ctxs: MGOBatContext)(
  328. implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = {
  329.  
  330. log.info(s"mgoTxBatch> MGOBatContext: ${ctxs}")
  331.  
  332. val updateObservable: Observable[ClientSession] = mgoTxUpdate(ctxs, client.startSession())
  333. val commitTransactionObservable: SingleObservable[Completed] =
  334. updateObservable.flatMap(clientSession => clientSession.commitTransaction())
  335. val commitAndRetryObservable: SingleObservable[Completed] = commitAndRetry(commitTransactionObservable)
  336.  
  337. runTransactionAndRetry(commitAndRetryObservable)
  338.  
  339. valueToDBOResult(Completed())
  340.  
  341. }
  342. }
  343.  
  344. def mgoUpdateBatch(ctxs: MGOBatContext)(implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = {
  345. log.info(s"mgoUpdateBatch> MGOBatContext: ${ctxs}")
  346. if (ctxs.tx) {
  347. TxUpdateMode.mgoTxBatch(ctxs)
  348. } else {
  349. /*
  350. val fut = Future.traverse(ctxs.contexts) { ctx =>
  351. mgoUpdate[Completed](ctx).map(identity) }
  352.  
  353. Await.ready(fut, 3 seconds)
  354. Future.successful(new Completed) */
  355. ctxs.contexts.foreach { ctx =>
  356. mgoUpdate[Completed](ctx).map(identity) }
  357.  
  358. valueToDBOResult(Completed())
  359. }
  360.  
  361. }
  362.  
  363. def mongoStream(ctx: MGOContext)(
  364. implicit client: MongoClient, ec: ExecutionContextExecutor): Source[Document, NotUsed] = {
  365. log.info(s"mongoStream> MGOContext: ${ctx}")
  366.  
  367. def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
  368. rts.foldRight(findObj)((a,b) => a.toFindObservable(b))
  369.  
  370. val db = client.getDatabase(ctx.dbName)
  371. val coll = db.getCollection(ctx.collName)
  372. if ( ctx.action == None) {
  373. log.error(s"mongoStream> query action cannot be null!")
  374. throw new IllegalArgumentException("query action cannot be null!")
  375. }
  376. try {
  377. ctx.action.get match {
  378. case Find(None, Nil, false) => //FindObservable
  379. MongoSource(coll.find())
  380. case Find(None, Nil, true) => //FindObservable
  381. MongoSource(coll.find().first())
  382. case Find(Some(filter), Nil, false) => //FindObservable
  383. MongoSource(coll.find(filter))
  384. case Find(Some(filter), Nil, true) => //FindObservable
  385. MongoSource(coll.find(filter).first())
  386. case Find(None, sro, _) => //FindObservable
  387. val next = toResultOption(sro)
  388. MongoSource(next(coll.find[Document]()))
  389. case Find(Some(filter), sro, _) => //FindObservable
  390. val next = toResultOption(sro)
  391. MongoSource(next(coll.find[Document](filter)))
  392. case _ =>
  393. log.error(s"mongoStream> unsupported streaming query [${ctx.action.get}]")
  394. throw new RuntimeException(s"mongoStream> unsupported streaming query [${ctx.action.get}]")
  395.  
  396. }
  397. }
  398. catch { case e: Exception =>
  399. log.error(s"mongoStream> runtime error: ${e.getMessage}")
  400. throw new RuntimeException(s"mongoStream> Error: ${e.getMessage}")
  401. }
  402.  
  403. }
  404.  
  405. // T => FindIterable e.g List[Document]
  406. def mgoQuery[T](ctx: MGOContext, Converter: Option[Document => Any] = None)(implicit client: MongoClient): DBOResult[T] = {
  407. log.info(s"mgoQuery> MGOContext: ${ctx}")
  408.  
  409. val db = client.getDatabase(ctx.dbName)
  410. val coll = db.getCollection(ctx.collName)
  411.  
  412. def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
  413. rts.foldRight(findObj)((a,b) => a.toFindObservable(b))
  414.  
  415. if ( ctx.action == None) {
  416. log.error(s"mgoQuery> query action cannot be null!")
  417. Left(new IllegalArgumentException("query action cannot be null!"))
  418. }
  419. try {
  420. ctx.action.get match {
  421. /* count */
  422. case Count(Some(filter), Some(opt)) => //SingleObservable
  423. coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
  424. .toFuture().asInstanceOf[Future[T]]
  425. case Count(Some(filter), None) => //SingleObservable
  426. coll.countDocuments(filter).toFuture()
  427. .asInstanceOf[Future[T]]
  428. case Count(None, None) => //SingleObservable
  429. coll.countDocuments().toFuture()
  430. .asInstanceOf[Future[T]]
  431. /* distinct */
  432. case Distict(field, Some(filter)) => //DistinctObservable
  433. coll.distinct(field, filter).toFuture()
  434. .asInstanceOf[Future[T]]
  435. case Distict(field, None) => //DistinctObservable
  436. coll.distinct((field)).toFuture()
  437. .asInstanceOf[Future[T]]
  438. /* find */
  439. case Find(None, Nil, false) => //FindObservable
  440. if (Converter == None) coll.find().toFuture().asInstanceOf[Future[T]]
  441. else coll.find().map(Converter.get).toFuture().asInstanceOf[Future[T]]
  442. case Find(None, Nil, true) => //FindObservable
  443. if (Converter == None) coll.find().first().head().asInstanceOf[Future[T]]
  444. else coll.find().first().map(Converter.get).head().asInstanceOf[Future[T]]
  445. case Find(Some(filter), Nil, false) => //FindObservable
  446. if (Converter == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
  447. else coll.find(filter).map(Converter.get).toFuture().asInstanceOf[Future[T]]
  448. case Find(Some(filter), Nil, true) => //FindObservable
  449. if (Converter == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
  450. else coll.find(filter).first().map(Converter.get).head().asInstanceOf[Future[T]]
  451. case Find(None, sro, _) => //FindObservable
  452. val next = toResultOption(sro)
  453. if (Converter == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
  454. else next(coll.find[Document]()).map(Converter.get).toFuture().asInstanceOf[Future[T]]
  455. case Find(Some(filter), sro, _) => //FindObservable
  456. val next = toResultOption(sro)
  457. if (Converter == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
  458. else next(coll.find[Document](filter)).map(Converter.get).toFuture().asInstanceOf[Future[T]]
  459. /* aggregate AggregateObservable*/
  460. case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
  461. /* mapReduce MapReduceObservable*/
  462. case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
  463. /* list collection */
  464. case ListCollection(dbName) => //ListCollectionObservable
  465. client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
  466.  
  467. }
  468. }
  469. catch { case e: Exception =>
  470. log.error(s"mgoQuery> runtime error: ${e.getMessage}")
  471. Left(new RuntimeException(s"mgoQuery> Error: ${e.getMessage}"))
  472. }
  473. }
  474. //T => Completed, result.UpdateResult, result.DeleteResult
  475. def mgoUpdate[T](ctx: MGOContext)(implicit client: MongoClient): DBOResult[T] =
  476. try {
  477. mgoUpdateObservable[T](ctx).toFuture()
  478. }
  479. catch { case e: Exception =>
  480. log.error(s"mgoUpdate> runtime error: ${e.getMessage}")
  481. Left(new RuntimeException(s"mgoUpdate> Error: ${e.getMessage}"))
  482. }
  483.  
  484. def mgoUpdateObservable[T](ctx: MGOContext)(implicit client: MongoClient): SingleObservable[T] = {
  485. log.info(s"mgoUpdateObservable> MGOContext: ${ctx}")
  486.  
  487. val db = client.getDatabase(ctx.dbName)
  488. val coll = db.getCollection(ctx.collName)
  489. if ( ctx.action == None) {
  490. log.error(s"mgoUpdateObservable> query action cannot be null!")
  491. throw new IllegalArgumentException("mgoUpdateObservable> query action cannot be null!")
  492. }
  493. try {
  494. ctx.action.get match {
  495. /* insert */
  496. case Insert(docs, Some(opt)) => //SingleObservable[Completed]
  497. if (docs.size > 1)
  498. coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).asInstanceOf[SingleObservable[T]]
  499. else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).asInstanceOf[SingleObservable[T]]
  500. case Insert(docs, None) => //SingleObservable
  501. if (docs.size > 1) coll.insertMany(docs).asInstanceOf[SingleObservable[T]]
  502. else coll.insertOne(docs.head).asInstanceOf[SingleObservable[T]]
  503. /* delete */
  504. case Delete(filter, None, onlyOne) => //SingleObservable
  505. if (onlyOne) coll.deleteOne(filter).asInstanceOf[SingleObservable[T]]
  506. else coll.deleteMany(filter).asInstanceOf[SingleObservable[T]]
  507. case Delete(filter, Some(opt), onlyOne) => //SingleObservable
  508. if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
  509. else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
  510. /* replace */
  511. case Replace(filter, replacement, None) => //SingleObservable
  512. coll.replaceOne(filter, replacement).asInstanceOf[SingleObservable[T]]
  513. case Replace(filter, replacement, Some(opt)) => //SingleObservable
  514. coll.replaceOne(filter, replacement, opt.asInstanceOf[ReplaceOptions]).asInstanceOf[SingleObservable[T]]
  515. /* update */
  516. case Update(filter, update, None, onlyOne) => //SingleObservable
  517. if (onlyOne) coll.updateOne(filter, update).asInstanceOf[SingleObservable[T]]
  518. else coll.updateMany(filter, update).asInstanceOf[SingleObservable[T]]
  519. case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
  520. if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
  521. else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
  522. /* bulkWrite */
  523. case BulkWrite(commands, None) => //SingleObservable
  524. coll.bulkWrite(commands).asInstanceOf[SingleObservable[T]]
  525. case BulkWrite(commands, Some(opt)) => //SingleObservable
  526. coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).asInstanceOf[SingleObservable[T]]
  527. }
  528. }
  529. catch { case e: Exception =>
  530. log.error(s"mgoUpdateObservable> runtime error: ${e.getMessage}")
  531. throw new RuntimeException(s"mgoUpdateObservable> Error: ${e.getMessage}")
  532. }
  533. }
  534.  
  535. def mgoAdmin(ctx: MGOContext)(implicit client: MongoClient): DBOResult[Completed] = {
  536. log.info(s"mgoAdmin> MGOContext: ${ctx}")
  537.  
  538. val db = client.getDatabase(ctx.dbName)
  539. val coll = db.getCollection(ctx.collName)
  540. if ( ctx.action == None) {
  541. log.error(s"mgoAdmin> query action cannot be null!")
  542. Left(new IllegalArgumentException("mgoAdmin> query action cannot be null!"))
  543. }
  544. try {
  545. ctx.action.get match {
  546. /* drop collection */
  547. case DropCollection(collName) => //SingleObservable
  548. val coll = db.getCollection(collName)
  549. coll.drop().toFuture()
  550. /* create collection */
  551. case CreateCollection(collName, None) => //SingleObservable
  552. db.createCollection(collName).toFuture()
  553. case CreateCollection(collName, Some(opt)) => //SingleObservable
  554. db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture()
  555. /* list collection
  556. case ListCollection(dbName) => //ListCollectionObservable
  557. client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
  558. */
  559. /* create view */
  560. case CreateView(viewName, viewOn, pline, None) => //SingleObservable
  561. db.createView(viewName, viewOn, pline).toFuture()
  562. case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
  563. db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture()
  564. /* create index */
  565. case CreateIndex(key, None) => //SingleObservable
  566. coll.createIndex(key).toFuture().asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
  567. case CreateIndex(key, Some(opt)) => //SingleObservable
  568. coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
  569. /* drop index */
  570. case DropIndexByName(indexName, None) => //SingleObservable
  571. coll.dropIndex(indexName).toFuture()
  572. case DropIndexByName(indexName, Some(opt)) => //SingleObservable
  573. coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture()
  574. case DropIndexByKey(key, None) => //SingleObservable
  575. coll.dropIndex(key).toFuture()
  576. case DropIndexByKey(key, Some(opt)) => //SingleObservable
  577. coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture()
  578. case DropAllIndexes(None) => //SingleObservable
  579. coll.dropIndexes().toFuture()
  580. case DropAllIndexes(Some(opt)) => //SingleObservable
  581. coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture()
  582. }
  583. }
  584. catch { case e: Exception =>
  585. log.error(s"mgoAdmin> runtime error: ${e.getMessage}")
  586. throw new RuntimeException(s"mgoAdmin> Error: ${e.getMessage}")
  587. }
  588.  
  589. }
  590.  
  591. /*
  592. def mgoExecute[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
  593. val db = client.getDatabase(ctx.dbName)
  594. val coll = db.getCollection(ctx.collName)
  595. ctx.action match {
  596. /* count */
  597. case Count(Some(filter), Some(opt)) => //SingleObservable
  598. coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
  599. .toFuture().asInstanceOf[Future[T]]
  600. case Count(Some(filter), None) => //SingleObservable
  601. coll.countDocuments(filter).toFuture()
  602. .asInstanceOf[Future[T]]
  603. case Count(None, None) => //SingleObservable
  604. coll.countDocuments().toFuture()
  605. .asInstanceOf[Future[T]]
  606. /* distinct */
  607. case Distict(field, Some(filter)) => //DistinctObservable
  608. coll.distinct(field, filter).toFuture()
  609. .asInstanceOf[Future[T]]
  610. case Distict(field, None) => //DistinctObservable
  611. coll.distinct((field)).toFuture()
  612. .asInstanceOf[Future[T]]
  613. /* find */
  614. case Find(None, None, optConv, false) => //FindObservable
  615. if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
  616. else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
  617. case Find(None, None, optConv, true) => //FindObservable
  618. if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
  619. else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
  620. case Find(Some(filter), None, optConv, false) => //FindObservable
  621. if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
  622. else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
  623. case Find(Some(filter), None, optConv, true) => //FindObservable
  624. if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
  625. else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
  626. case Find(None, Some(next), optConv, _) => //FindObservable
  627. if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
  628. else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
  629. case Find(Some(filter), Some(next), optConv, _) => //FindObservable
  630. if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
  631. else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
  632. /* aggregate AggregateObservable*/
  633. case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
  634. /* mapReduce MapReduceObservable*/
  635. case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
  636. /* insert */
  637. case Insert(docs, Some(opt)) => //SingleObservable[Completed]
  638. if (docs.size > 1) coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).toFuture()
  639. .asInstanceOf[Future[T]]
  640. else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).toFuture()
  641. .asInstanceOf[Future[T]]
  642. case Insert(docs, None) => //SingleObservable
  643. if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
  644. else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
  645. /* delete */
  646. case Delete(filter, None, onlyOne) => //SingleObservable
  647. if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
  648. else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
  649. case Delete(filter, Some(opt), onlyOne) => //SingleObservable
  650. if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
  651. else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
  652. /* replace */
  653. case Replace(filter, replacement, None) => //SingleObservable
  654. coll.replaceOne(filter, replacement).toFuture().asInstanceOf[Future[T]]
  655. case Replace(filter, replacement, Some(opt)) => //SingleObservable
  656. coll.replaceOne(filter, replacement, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
  657. /* update */
  658. case Update(filter, update, None, onlyOne) => //SingleObservable
  659. if (onlyOne) coll.updateOne(filter, update).toFuture().asInstanceOf[Future[T]]
  660. else coll.updateMany(filter, update).toFuture().asInstanceOf[Future[T]]
  661. case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
  662. if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
  663. else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
  664. /* bulkWrite */
  665. case BulkWrite(commands, None) => //SingleObservable
  666. coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
  667. case BulkWrite(commands, Some(opt)) => //SingleObservable
  668. coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
  669.  
  670. /* drop collection */
  671. case DropCollection(collName) => //SingleObservable
  672. val coll = db.getCollection(collName)
  673. coll.drop().toFuture().asInstanceOf[Future[T]]
  674. /* create collection */
  675. case CreateCollection(collName, None) => //SingleObservable
  676. db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
  677. case CreateCollection(collName, Some(opt)) => //SingleObservable
  678. db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
  679. /* list collection */
  680. case ListCollection(dbName) => //ListCollectionObservable
  681. client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
  682. /* create view */
  683. case CreateView(viewName, viewOn, pline, None) => //SingleObservable
  684. db.createView(viewName, viewOn, pline).toFuture().asInstanceOf[Future[T]]
  685. case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
  686. db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
  687. /* create index */
  688. case CreateIndex(key, None) => //SingleObservable
  689. coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
  690. case CreateIndex(key, Some(opt)) => //SingleObservable
  691. coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
  692. /* drop index */
  693. case DropIndexByName(indexName, None) => //SingleObservable
  694. coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
  695. case DropIndexByName(indexName, Some(opt)) => //SingleObservable
  696. coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
  697. case DropIndexByKey(key, None) => //SingleObservable
  698. coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
  699. case DropIndexByKey(key, Some(opt)) => //SingleObservable
  700. coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
  701. case DropAllIndexes(None) => //SingleObservable
  702. coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
  703. case DropAllIndexes(Some(opt)) => //SingleObservable
  704. coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
  705. }
  706. }
  707. */
  708.  
  709. }
  710.  
  711. object MongoActionStream {
  712.  
  713. import MGOClasses._
  714.  
  715. case class StreamingInsert[A](dbName: String,
  716. collName: String,
  717. converter: A => Document,
  718. parallelism: Int = 1
  719. ) extends MGOCommands
  720.  
  721. case class StreamingDelete[A](dbName: String,
  722. collName: String,
  723. toFilter: A => Bson,
  724. parallelism: Int = 1,
  725. justOne: Boolean = false
  726. ) extends MGOCommands
  727.  
  728. case class StreamingUpdate[A](dbName: String,
  729. collName: String,
  730. toFilter: A => Bson,
  731. toUpdate: A => Bson,
  732. parallelism: Int = 1,
  733. justOne: Boolean = false
  734. ) extends MGOCommands
  735.  
  736. case class InsertAction[A](ctx: StreamingInsert[A])(
  737. implicit mongoClient: MongoClient) {
  738.  
  739. val database = mongoClient.getDatabase(ctx.dbName)
  740. val collection = database.getCollection(ctx.collName)
  741.  
  742. def performOnRow(implicit ec: ExecutionContext): Flow[A, Document, NotUsed] =
  743. Flow[A].map(ctx.converter)
  744. .mapAsync(ctx.parallelism)(doc => collection.insertOne(doc).toFuture().map(_ => doc))
  745. }
  746.  
  747. case class UpdateAction[A](ctx: StreamingUpdate[A])(
  748. implicit mongoClient: MongoClient) {
  749.  
  750. val database = mongoClient.getDatabase(ctx.dbName)
  751. val collection = database.getCollection(ctx.collName)
  752.  
  753. def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
  754. if (ctx.justOne) {
  755. Flow[A]
  756. .mapAsync(ctx.parallelism)(a =>
  757. collection.updateOne(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
  758. } else
  759. Flow[A]
  760. .mapAsync(ctx.parallelism)(a =>
  761. collection.updateMany(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
  762. }
  763.  
  764. case class DeleteAction[A](ctx: StreamingDelete[A])(
  765. implicit mongoClient: MongoClient) {
  766.  
  767. val database = mongoClient.getDatabase(ctx.dbName)
  768. val collection = database.getCollection(ctx.collName)
  769.  
  770. def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
  771. if (ctx.justOne) {
  772. Flow[A]
  773. .mapAsync(ctx.parallelism)(a =>
  774. collection.deleteOne(ctx.toFilter(a)).toFuture().map(_ => a))
  775. } else
  776. Flow[A]
  777. .mapAsync(ctx.parallelism)(a =>
  778. collection.deleteMany(ctx.toFilter(a)).toFuture().map(_ => a))
  779. }
  780.  
  781. }
  782.  
  783. object MGOHelpers {
  784.  
  785. implicit class DocumentObservable[C](val observable: Observable[Document]) extends ImplicitObservable[Document] {
  786. override val converter: (Document) => String = (doc) => doc.toJson
  787. }
  788.  
  789. implicit class GenericObservable[C](val observable: Observable[C]) extends ImplicitObservable[C] {
  790. override val converter: (C) => String = (doc) => doc.toString
  791. }
  792.  
  793. trait ImplicitObservable[C] {
  794. val observable: Observable[C]
  795. val converter: (C) => String
  796.  
  797. def results(): Seq[C] = Await.result(observable.toFuture(), 10 seconds)
  798.  
  799. def headResult() = Await.result(observable.head(), 10 seconds)
  800.  
  801. def printResults(initial: String = ""): Unit = {
  802. if (initial.length > 0) print(initial)
  803. results().foreach(res => println(converter(res)))
  804. }
  805.  
  806. def printHeadResult(initial: String = ""): Unit = println(s"${initial}${converter(headResult())}")
  807. }
  808.  
  809. def getResult[T](fut: Future[T], timeOut: Duration = 1 second): T = {
  810. Await.result(fut, timeOut)
  811. }
  812.  
  813. def getResults[T](fut: Future[Iterable[T]], timeOut: Duration = 1 second): Iterable[T] = {
  814. Await.result(fut, timeOut)
  815. }
  816.  
  817. import monix.eval.Task
  818. import monix.execution.Scheduler.Implicits.global
  819.  
  820. final class FutureToTask[A](x: => Future[A]) {
  821. def asTask: Task[A] = Task.deferFuture[A](x)
  822. }
  823.  
  824. final class TaskToFuture[A](x: => Task[A]) {
  825. def asFuture: Future[A] = x.runAsync
  826. }
  827.  
  828. }
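
上面 MGOClasses 里的日期辅助函数直接包装了 java.util.Calendar,而 Calendar 的月份字段是从 0 开始计数的(0 = 一月,11 = 十二月),调用 mgoDate/mgoDateTime 时很容易因此踩坑。下面是一段独立可运行的示意代码(无需连接 MongoDB,函数体按照上面清单中的定义复刻),用来演示这个行为:

```scala
import java.util.{Calendar, Date}
import java.text.SimpleDateFormat

// 复刻清单中的 mgoDate:注意 Calendar 的月份从 0 开始,
// 即 mm = 0 表示一月,mm = 11 表示十二月
def mgoDate(yyyy: Int, mm: Int, dd: Int): Date = {
  val ca = Calendar.getInstance()
  ca.set(yyyy, mm, dd)
  ca.getTime()
}

// 复刻清单中的 mgoDateToString:按 SimpleDateFormat 模式格式化日期
def mgoDateToString(dt: Date, formatString: String): String =
  new SimpleDateFormat(formatString).format(dt)

// mm = 11 实际得到的是 12 月
println(mgoDateToString(mgoDate(2018, 11, 25), "yyyy-MM-dd")) // prints 2018-12-25
```

也就是说,如果想表达 2018 年 12 月 25 日,传入的月份参数应该是 11 而不是 12;如果希望按自然月传参,可以在调用处自行减 1。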
