SDP(11): MongoDB-Engine Implementation
Following the MongoDB-Engine design laid out in the previous post, this installment covers the implementation and a test run. The implementation code below mostly delegates straight to the corresponding mongo-scala-driver functions; the main thing to watch out for is the conversion between Java and Scala types:
object MGOEngine {
import MGOContext._
import MGOCommands._
import MGOAdmins._
def mgoExecute[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
ctx.action match {
/* count */
case Count(Some(filter),Some(opt)) =>
coll.count(filter,opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter),None) =>
coll.count(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None,None) =>
coll.count().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field,Some(filter)) =>
coll.distinct(field,filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field,None) =>
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None,None,optConv,false) =>
if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(None,None,optConv,true) =>
if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(Some(filter),None,optConv,false) =>
if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter),None,optConv,true) =>
if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(None,Some(next),optConv,_) =>
if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter),Some(next),optConv,_) =>
if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
/* aggregate */
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce */
case MapReduce(mf,rf) => coll.mapReduce(mf,rf).toFuture().asInstanceOf[Future[T]]
/* insert */
case Insert(docs,Some(opt)) =>
if (docs.size > 1) coll.insertMany(docs,opt.asInstanceOf[InsertManyOptions]).toFuture()
.asInstanceOf[Future[T]]
else coll.insertOne(docs.head,opt.asInstanceOf[InsertOneOptions]).toFuture()
.asInstanceOf[Future[T]]
case Insert(docs,None) =>
if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
/* delete */
case Delete(filter,None,onlyOne) =>
if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
case Delete(filter,Some(opt),onlyOne) =>
if (onlyOne) coll.deleteOne(filter,opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter,opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
/* replace */
case Replace(filter,replacement,None) =>
coll.replaceOne(filter,replacement).toFuture().asInstanceOf[Future[T]]
case Replace(filter,replacement,Some(opt)) =>
coll.replaceOne(filter,replacement,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* update */
case Update(filter,update,None,onlyOne) =>
if (onlyOne) coll.updateOne(filter,update).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter,update).toFuture().asInstanceOf[Future[T]]
case Update(filter,update,Some(opt),onlyOne) =>
if (onlyOne) coll.updateOne(filter,update,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter,update,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* bulkWrite */
case BulkWrite(commands,None) =>
coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
case BulkWrite(commands,Some(opt)) =>
coll.bulkWrite(commands,opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
/* drop collection */
case DropCollection(collName) =>
val coll = db.getCollection(collName)
coll.drop().toFuture().asInstanceOf[Future[T]]
/* create collection */
case CreateCollection(collName,None) =>
db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
case CreateCollection(collName,Some(opt)) =>
db.createCollection(collName,opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) =>
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
/* create view */
case CreateView(viewName,viewOn,pline,None) =>
db.createView(viewName,viewOn,pline).toFuture().asInstanceOf[Future[T]]
case CreateView(viewName,viewOn,pline,Some(opt)) =>
db.createView(viewName,viewOn,pline,opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
/* create index */
case CreateIndex(key,None) =>
coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
case CreateIndex(key,Some(opt)) =>
coll.createIndex(key,opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
/* drop index */
case DropIndexByName(indexName, None) =>
coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
case DropIndexByName(indexName, Some(opt)) =>
coll.dropIndex(indexName,opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key,None) =>
coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key,Some(opt)) =>
coll.dropIndex(key,opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(None) =>
coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(Some(opt)) =>
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
}
}
}
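Before moving on, here is a minimal sketch of my own showing how a command is dispatched through mgoExecute. It assumes a MongoDB server at localhost:27017 and the MGOContext/MGOCommands definitions listed later in this post:

import org.mongodb.scala._
import scala.concurrent.Await
import scala.concurrent.duration._
import MGOContext._
import MGOCommands._
import MGOEngine._

object CountDemo extends App {
  // hypothetical connection string - adjust to your own environment
  implicit val client = MongoClient("mongodb://localhost:27017")

  // Count(None,None) is dispatched to coll.count(), which produces a Long,
  // so we ask for Future[Long] explicitly via the type parameter
  val countCtx = MGOContext("testdb", "po").setCommand(Count(None, None))
  println(s"po count: ${Await.result(mgoExecute[Long](countCtx), 10.seconds)}")

  client.close()
}

The asInstanceOf[Future[T]] casts inside mgoExecute are unchecked, so it is up to the caller to request a type parameter that actually matches the command being run.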
Note that every branch above returns its result as a Future[T]. We will try running these functions shortly, but first a few details: how MongoDB's Date, Blob and Array types are handled in Scala:
type MGODate = java.util.Date
def mgoDate(yyyy: Int, mm: Int, dd: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd)
ca.getTime()
}
def mgoDateTime(yyyy: Int, mm: Int, dd: Int, hr: Int, min: Int, sec: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd,hr,min,sec)
ca.getTime()
}
def mgoDateTimeNow: MGODate = {
val ca = Calendar.getInstance()
ca.getTime
}
def mgoDateToString(dt: MGODate, formatString: String): String = {
val fmt = new SimpleDateFormat(formatString)
fmt.format(dt)
}
type MGOBlob = BsonBinary
type MGOArray = BsonArray
def fileToMGOBlob(fileName: String, timeOut: FiniteDuration = 60.seconds)(
implicit mat: Materializer) = FileToByteArray(fileName,timeOut)
def mgoBlobToFile(blob: MGOBlob, fileName: String)(
implicit mat: Materializer) = ByteArrayToFile(blob.getData,fileName)
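A quick sketch of how these date helpers might be used (my own example, assuming the helpers above are in scope; note that java.util.Calendar months are 0-based, so January is 0):

  val d  = mgoDate(2018, 0, 23)                    // 2018-01-23
  val dt = mgoDateTime(2018, 0, 23, 14, 30, 0)     // 2018-01-23 14:30:00
  println(mgoDateToString(d, "yyyy-MM-dd"))
  println(mgoDateToString(dt, "yyyy-MM-dd HH:mm:ss"))
  println(mgoDateToString(mgoDateTimeNow, "yyyy-MM-dd HH:mm:ss"))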
Next come the helper functions for reading MongoDB field values:
def mgoGetStringOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getString(fieldName))
else None
}
def mgoGetIntOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getInteger(fieldName))
else None
}
def mgoGetLonggOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getLong(fieldName))
else None
}
def mgoGetDoubleOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDouble(fieldName))
else None
}
def mgoGetBoolOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getBoolean(fieldName))
else None
}
def mgoGetDateOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDate(fieldName))
else None
}
def mgoGetBlobOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOBlob]]
else None
}
def mgoGetArrayOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOArray]]
else None
}
def mgoArrayToDocumentList(arr: MGOArray): scala.collection.immutable.List[org.bson.BsonDocument] = {
(arr.getValues.asScala.toList)
.asInstanceOf[scala.collection.immutable.List[org.bson.BsonDocument]]
}
type MGOFilterResult = FindObservable[Document] => FindObservable[Document]
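Since MGOFilterResult is just a function from FindObservable[Document] to FindObservable[Document], query modifiers can be composed with ordinary andThen before being handed to a Find command. A small sketch (sortByPoNum and projectHeader are illustrative names of my own):

  import org.mongodb.scala.model.Sorts._
  import org.mongodb.scala.model.Projections._

  val sortByPoNum: MGOFilterResult   = _.sort(descending("ponum"))
  val projectHeader: MGOFilterResult = _.projection(and(include("ponum", "vendor"), excludeId()))

  // apply the sort first, then the projection
  val sortedHeader: MGOFilterResult = sortByPoNum andThen projectHeader
  // e.g. ctx.setCommand(Find(andThen = Some(sortedHeader)))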
Now let's set up the test environment, starting from a brand-new collection:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import org.mongodb.scala._
import org.mongodb.scala.connection._
import scala.collection.JavaConverters._
import com.mongodb.client.model._
import scala.util._
object MongoEngineTest extends App {
import MGOContext._
import MGOEngine._
import MGOHelpers._
import MGOCommands._
import MGOAdmins._
val clusterSettings = ClusterSettings.builder()
.hosts(List(new ServerAddress("localhost:27017")).asJava).build()
val clientSettings = MongoClientSettings.builder().clusterSettings(clusterSettings).build()
implicit val client = MongoClient(clientSettings)
implicit val system = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val ec = system.dispatcher
val ctx = MGOContext("testdb","po").setCommand(
DropCollection("po"))
println(getResult(mgoExecute(ctx)))
That test run executed the DropCollection command. Next we try inserting two documents:
val pic = fileToMGOBlob("/users/tiger/nobody.png")
val po1 = Document (
"ponum" -> "po18012301",
"vendor" -> "The smartphone compay",
"podate" -> mgoDate(,,),
"remarks" -> "urgent, rush order",
"handler" -> pic,
"podtl" -> Seq(
Document("item" -> "sony smartphone", "price" -> 2389.00, "qty" -> , "packing" -> "standard"),
Document("item" -> "ericson smartphone", "price" -> 897.00, "qty" -> , "payterm" -> "30 days")
)
) val po2 = Document (
"ponum" -> "po18022002",
"vendor" -> "The Samsung compay",
"podate" -> mgoDate(,,),
"podtl" -> Seq(
Document("item" -> "samsung galaxy s8", "price" -> 2300.00, "qty" -> , "packing" -> "standard"),
Document("item" -> "samsung galaxy s7", "price" -> 1897.00, "qty" -> , "payterm" -> "30 days"),
Document("item" -> "apple iphone7", "price" -> 6500.00, "qty" -> , "packing" -> "luxury")
)
) val optInsert = new InsertManyOptions().ordered(true)
val ctxInsert = ctx.setCommand(
Insert(Seq(po1,po2),Some(optInsert))
)
println(getResult(mgoExecute(ctxInsert)))
Note how InsertManyOptions is configured here. To support more convenient and type-safe operations, we also need conversions between Document and the application's own types:
case class PO (
ponum: String,
podate: MGODate,
vendor: String,
remarks: Option[String],
podtl: Option[MGOArray],
handler: Option[MGOBlob]
)
def toPO(doc: Document): PO = {
PO(
ponum = doc.getString("ponum"),
podate = doc.getDate("podate"),
vendor = doc.getString("vendor"),
remarks = mgoGetStringOrNone(doc,"remarks"),
podtl = mgoGetArrayOrNone(doc,"podtl"),
handler = mgoGetBlobOrNone(doc,"handler")
)
}
case class PODTL(
item: String,
price: Double,
qty: Int,
packing: Option[String],
payTerm: Option[String]
)
def toPODTL(podtl: Document): PODTL = {
PODTL(
item = podtl.getString("item"),
price = podtl.getDouble("price"),
qty = podtl.getInteger("qty"),
packing = mgoGetStringOrNone(podtl,"packing"),
payTerm = mgoGetStringOrNone(podtl,"payterm")
)
}
def showPO(po: PO) = {
println(s"po number: ${po.ponum}")
println(s"po date: ${mgoDateToString(po.podate,"yyyy-MM-dd")}")
println(s"vendor: ${po.vendor}")
if (po.remarks != None)
println(s"remarks: ${po.remarks.get}")
po.podtl match {
case Some(barr) =>
mgoArrayToDocumentList(barr)
.map { dc => toPODTL(dc)}
.foreach { doc: PODTL =>
print(s"==>Item: ${doc.item} ")
print(s"price: ${doc.price} ")
print(s"qty: ${doc.qty} ")
doc.packing.foreach(pk => print(s"packing: ${pk} "))
doc.payTerm.foreach(pt => print(s"payTerm: ${pt} "))
println("")
}
case _ =>
}
po.handler match {
case Some(blob) =>
val fileName = s"/users/tiger/${po.ponum}.png"
mgoBlobToFile(blob,fileName)
println(s"picture saved to ${fileName}")
case None => println("no picture provided")
} }
The code above uses the MongoDB field-reading helpers defined earlier. Next we test querying the documents in the po collection, demonstrating projection, sort, filter, and so on:
import org.mongodb.scala.model.Projections._
import org.mongodb.scala.model.Filters._
import org.mongodb.scala.model.Sorts._
val sort: MGOFilterResult = find => find.sort(descending("ponum"))
val proj: MGOFilterResult = find => find.projection(and(include("ponum","podate"),include("vendor"),excludeId()))
val ctxFind = ctx.setCommand(Find(andThen=Some(proj)))
val ctxFindFirst = ctx.setCommand(Find(firstOnly=true,converter = Some(toPO _)))
val ctxFindArrayItem = ctx.setCommand(
Find(filter = Some(equal("podtl.qty",)), converter = Some(toPO _))
) for {
_ <- mgoExecute[List[Document]](ctxFind).andThen {
case Success(docs) => docs.map(toPO).foreach(showPO)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
_ <- mgoExecute[PO](ctxFindFirst).andThen {
case Success(doc) => showPO(doc)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
_ <- mgoExecute[List[PO]](ctxFindArrayItem).andThen {
case Success(docs) => docs.foreach(showPO)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
} yield()
Because mgoExecute returns a Future, we can chain several operations sequentially with a for-comprehension.
The full source code for this demo follows:
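As a variation on the chain above, here is a hedged sketch showing how intermediate results can be carried forward and failures handled in one place; the Count step and the recover handler are my own additions, not part of the original demo (it assumes the implicit ExecutionContext and the ctx/ctxInsert/ctxFind values defined earlier):

  import scala.concurrent.Future
  import org.mongodb.scala.Completed

  val flow: Future[Unit] = for {
    _    <- mgoExecute[Completed](ctxInsert)                      // insert po1 and po2
    cnt  <- mgoExecute[Long](ctx.setCommand(Count(None, None)))   // then count the collection
    docs <- mgoExecute[List[Document]](ctxFind)                   // then read the documents back
  } yield {
    println(s"collection now holds $cnt documents")
    docs.map(toPO).foreach(showPO)
  }
  flow.recover { case e => println(s"pipeline failed: ${e.getMessage}") }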
build.sbt
name := "learn-mongo" version := "0.1" scalaVersion := "2.12.4" libraryDependencies := Seq(
"org.mongodb.scala" %% "mongo-scala-driver" % "2.2.1",
"com.lightbend.akka" %% "akka-stream-alpakka-mongodb" % "0.17"
)
MGOHelpers.scala
import org.mongodb.scala._
import scala.concurrent._
import scala.concurrent.duration._
object MGOHelpers {
implicit class DocumentObservable[C](val observable: Observable[Document]) extends ImplicitObservable[Document] {
override val converter: (Document) => String = (doc) => doc.toJson
}
implicit class GenericObservable[C](val observable: Observable[C]) extends ImplicitObservable[C] {
override val converter: (C) => String = (doc) => doc.toString
}
trait ImplicitObservable[C] {
val observable: Observable[C]
val converter: (C) => String
def results(): Seq[C] = Await.result(observable.toFuture(), 10.seconds)
def headResult() = Await.result(observable.head(), 10.seconds)
def printResults(initial: String = ""): Unit = {
if (initial.length > 0) print(initial)
results().foreach(res => println(converter(res)))
}
def printHeadResult(initial: String = ""): Unit = println(s"${initial}${converter(headResult())}")
}
def getResult[T](fut: Future[T], timeOut: Duration = 10.seconds): T = {
Await.result(fut,timeOut)
}
def getResults[T](fut: Future[Iterable[T]], timeOut: Duration = 10.seconds): Iterable[T] = {
Await.result(fut,timeOut)
}
}
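One caveat worth adding: getResult and getResults block the calling thread with Await, so they are only meant for quick experiments like the ones in this post. A hedged usage sketch, assuming an implicit MongoClient and the imports from MongoEngineTest are in scope:

  // blocks for at most 10 seconds waiting for the count to come back
  val total: Long = getResult(
    mgoExecute[Long](MGOContext("testdb", "po").setCommand(Count(None, None))),
    10.seconds
  )
  println(s"total documents: $total")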
FileStreaming.scala
import java.io.{InputStream, ByteArrayInputStream}
import java.nio.ByteBuffer
import java.nio.file.Paths
import akka.stream.Materializer
import akka.stream.scaladsl.{FileIO, StreamConverters}
import scala.concurrent.Await
import akka.util._
import scala.concurrent.duration._
object FileStreaming {
def FileToByteBuffer(fileName: String, timeOut: FiniteDuration = 60.seconds)(
implicit mat: Materializer):ByteBuffer = {
val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
hd ++ bs
}
(Await.result(fut, timeOut)).toByteBuffer
}
def FileToByteArray(fileName: String, timeOut: FiniteDuration = 60.seconds)(
implicit mat: Materializer): Array[Byte] = {
val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
hd ++ bs
}
(Await.result(fut, timeOut)).toArray
}
def FileToInputStream(fileName: String, timeOut: FiniteDuration = 60.seconds)(
implicit mat: Materializer): InputStream = {
val fut = FileIO.fromPath(Paths.get(fileName)).runFold(ByteString()) { case (hd, bs) =>
hd ++ bs
}
val buf = (Await.result(fut, timeOut)).toArray
new ByteArrayInputStream(buf)
}
def ByteBufferToFile(byteBuf: ByteBuffer, fileName: String)(
implicit mat: Materializer) = {
val ba = new Array[Byte](byteBuf.remaining())
byteBuf.get(ba, 0, ba.length)
val baInput = new ByteArrayInputStream(ba)
val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
source.runWith(FileIO.toPath(Paths.get(fileName)))
}
def ByteArrayToFile(bytes: Array[Byte], fileName: String)(
implicit mat: Materializer) = {
val bb = ByteBuffer.wrap(bytes)
val baInput = new ByteArrayInputStream(bytes)
val source = StreamConverters.fromInputStream(() => baInput) //ByteBufferInputStream(bytes))
source.runWith(FileIO.toPath(Paths.get(fileName)))
}
def InputStreamToFile(is: InputStream, fileName: String)(
implicit mat: Materializer) = {
val source = StreamConverters.fromInputStream(() => is)
source.runWith(FileIO.toPath(Paths.get(fileName)))
}
}
MongoEngine.scala
import java.text.SimpleDateFormat
import org.bson.conversions.Bson
import org.mongodb.scala._
import org.mongodb.scala.model._
import java.util.Calendar
import scala.collection.JavaConverters._
import FileStreaming._
import akka.stream.Materializer
import org.mongodb.scala.bson.{BsonArray, BsonBinary}
import scala.concurrent._
import scala.concurrent.duration._
object MGOContext {
trait MGOCommands
object MGOCommands {
case class Count(filter: Option[Bson], options: Option[Any]) extends MGOCommands
case class Distict(fieldName: String, filter: Option[Bson]) extends MGOCommands
/* org.mongodb.scala.FindObservable
import com.mongodb.async.client.FindIterable
val resultDocType = FindIterable[Document]
val resultOption = FindObservable(resultDocType)
.maxScan(...)
.limit(...)
.sort(...)
.project(...) */
case class Find[M](filter: Option[Bson] = None,
andThen: Option[FindObservable[Document] => FindObservable[Document]]= None,
converter: Option[Document => M] = None,
firstOnly: Boolean = false) extends MGOCommands
case class Aggregate(pipeLine: Seq[Bson]) extends MGOCommands
case class MapReduce(mapFunction: String, reduceFunction: String) extends MGOCommands
case class Insert(newdocs: Seq[Document], options: Option[Any] = None) extends MGOCommands
case class Delete(filter: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
case class Replace(filter: Bson, replacement: Document, options: Option[Any] = None) extends MGOCommands
case class Update(filter: Bson, update: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
case class BulkWrite(commands: List[WriteModel[Document]], options: Option[Any] = None) extends MGOCommands
}
object MGOAdmins {
case class DropCollection(collName: String) extends MGOCommands
case class CreateCollection(collName: String, options: Option[Any] = None) extends MGOCommands
case class ListCollection(dbName: String) extends MGOCommands
case class CreateView(viewName: String, viewOn: String, pipeline: Seq[Bson], options: Option[Any] = None) extends MGOCommands
case class CreateIndex(key: Bson, options: Option[Any] = None) extends MGOCommands
case class DropIndexByName(indexName: String, options: Option[Any] = None) extends MGOCommands
case class DropIndexByKey(key: Bson, options: Option[Any] = None) extends MGOCommands
case class DropAllIndexes(options: Option[Any] = None) extends MGOCommands
}
case class MGOContext(
dbName: String,
collName: String,
action: MGOCommands = null
) {
ctx =>
def setDbName(name: String): MGOContext = ctx.copy(dbName = name)
def setCollName(name: String): MGOContext = ctx.copy(collName = name)
def setCommand(cmd: MGOCommands): MGOContext = ctx.copy(action = cmd)
}
object MGOContext {
def apply(db: String, coll: String) = new MGOContext(db, coll)
def apply(db: String, coll: String, command: MGOCommands) =
new MGOContext(db, coll, command)
}
type MGODate = java.util.Date
def mgoDate(yyyy: Int, mm: Int, dd: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd)
ca.getTime()
}
def mgoDateTime(yyyy: Int, mm: Int, dd: Int, hr: Int, min: Int, sec: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd,hr,min,sec)
ca.getTime()
}
def mgoDateTimeNow: MGODate = {
val ca = Calendar.getInstance()
ca.getTime
}
def mgoDateToString(dt: MGODate, formatString: String): String = {
val fmt= new SimpleDateFormat(formatString)
fmt.format(dt)
}
type MGOBlob = BsonBinary
type MGOArray = BsonArray
def fileToMGOBlob(fileName: String, timeOut: FiniteDuration = 60.seconds)(
implicit mat: Materializer) = FileToByteArray(fileName,timeOut)
def mgoBlobToFile(blob: MGOBlob, fileName: String)(
implicit mat: Materializer) = ByteArrayToFile(blob.getData,fileName)
def mgoGetStringOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getString(fieldName))
else None
}
def mgoGetIntOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getInteger(fieldName))
else None
}
def mgoGetLonggOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getLong(fieldName))
else None
}
def mgoGetDoubleOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDouble(fieldName))
else None
}
def mgoGetBoolOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getBoolean(fieldName))
else None
}
def mgoGetDateOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDate(fieldName))
else None
}
def mgoGetBlobOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOBlob]]
else None
}
def mgoGetArrayOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOArray]]
else None
}
def mgoArrayToDocumentList(arr: MGOArray): scala.collection.immutable.List[org.bson.BsonDocument] = {
(arr.getValues.asScala.toList)
.asInstanceOf[scala.collection.immutable.List[org.bson.BsonDocument]]
}
type MGOFilterResult = FindObservable[Document] => FindObservable[Document]
}
object MGOEngine {
import MGOContext._
import MGOCommands._
import MGOAdmins._
def mgoExecute[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
ctx.action match {
/* count */
case Count(Some(filter),Some(opt)) =>
coll.count(filter,opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter),None) =>
coll.count(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None,None) =>
coll.count().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field,Some(filter)) =>
coll.distinct(field,filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field,None) =>
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None,None,optConv,false) =>
if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(None,None,optConv,true) =>
if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(Some(filter),None,optConv,false) =>
if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter),None,optConv,true) =>
if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(None,Some(next),optConv,_) =>
if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter),Some(next),optConv,_) =>
if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
/* aggregate */
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce */
case MapReduce(mf,rf) => coll.mapReduce(mf,rf).toFuture().asInstanceOf[Future[T]]
/* insert */
case Insert(docs,Some(opt)) =>
if (docs.size > 1) coll.insertMany(docs,opt.asInstanceOf[InsertManyOptions]).toFuture()
.asInstanceOf[Future[T]]
else coll.insertOne(docs.head,opt.asInstanceOf[InsertOneOptions]).toFuture()
.asInstanceOf[Future[T]]
case Insert(docs,None) =>
if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
/* delete */
case Delete(filter,None,onlyOne) =>
if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
case Delete(filter,Some(opt),onlyOne) =>
if (onlyOne) coll.deleteOne(filter,opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter,opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
/* replace */
case Replace(filter,replacement,None) =>
coll.replaceOne(filter,replacement).toFuture().asInstanceOf[Future[T]]
case Replace(filter,replacement,Some(opt)) =>
coll.replaceOne(filter,replacement,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* update */
case Update(filter,update,None,onlyOne) =>
if (onlyOne) coll.updateOne(filter,update).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter,update).toFuture().asInstanceOf[Future[T]]
case Update(filter,update,Some(opt),onlyOne) =>
if (onlyOne) coll.updateOne(filter,update,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter,update,opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* bulkWrite */
case BulkWrite(commands,None) =>
coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
case BulkWrite(commands,Some(opt)) =>
coll.bulkWrite(commands,opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
/* drop collection */
case DropCollection(collName) =>
val coll = db.getCollection(collName)
coll.drop().toFuture().asInstanceOf[Future[T]]
/* create collection */
case CreateCollection(collName,None) =>
db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
case CreateCollection(collName,Some(opt)) =>
db.createCollection(collName,opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) =>
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
/* create view */
case CreateView(viewName,viewOn,pline,None) =>
db.createView(viewName,viewOn,pline).toFuture().asInstanceOf[Future[T]]
case CreateView(viewName,viewOn,pline,Some(opt)) =>
db.createView(viewName,viewOn,pline,opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
/* create index */
case CreateIndex(key,None) =>
coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
case CreateIndex(key,Some(opt)) =>
coll.createIndex(key,opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
/* drop index */
case DropIndexByName(indexName, None) =>
coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
case DropIndexByName(indexName, Some(opt)) =>
coll.dropIndex(indexName,opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key,None) =>
coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key,Some(opt)) =>
coll.dropIndex(key,opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(None) =>
coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(Some(opt)) =>
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
}
}
}
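Because MGOContext (defined in MongoEngine.scala above) is an immutable case class, every set* call returns a fresh copy, so one base context can be reused for many different commands. A small sketch of my own showing the intended usage pattern (the collection name po_bak is made up for this illustration):

  import MGOContext._
  import MGOCommands._
  import MGOAdmins._

  val base     = MGOContext("testdb", "po")
  val countCtx = base.setCommand(Count(None, None))       // count the po collection
  val findCtx  = base.setCommand(Find())                  // read every document
  val dropCtx  = base.setCollName("po_bak").setCommand(DropCollection("po_bak"))
  // base itself is unchanged and can keep being reused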
MongoEngineTest.scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import org.mongodb.scala._
import org.mongodb.scala.connection._
import scala.collection.JavaConverters._
import com.mongodb.client.model._
import scala.util._
object MongoEngineTest extends App {
import MGOContext._
import MGOEngine._
import MGOHelpers._
import MGOCommands._
import MGOAdmins._
val clusterSettings = ClusterSettings.builder()
.hosts(List(new ServerAddress("localhost:27017")).asJava).build()
val clientSettings = MongoClientSettings.builder().clusterSettings(clusterSettings).build()
implicit val client = MongoClient(clientSettings)
implicit val system = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val ec = system.dispatcher
val ctx = MGOContext("testdb","po").setCommand(
DropCollection("po"))
println(getResult(mgoExecute(ctx)))
val pic = fileToMGOBlob("/users/tiger/nobody.png")
val po1 = Document (
"ponum" -> "po18012301",
"vendor" -> "The smartphone compay",
"podate" -> mgoDate(,,),
"remarks" -> "urgent, rush order",
"handler" -> pic,
"podtl" -> Seq(
Document("item" -> "sony smartphone", "price" -> 2389.00, "qty" -> , "packing" -> "standard"),
Document("item" -> "ericson smartphone", "price" -> 897.00, "qty" -> , "payterm" -> "30 days")
)
) val po2 = Document (
"ponum" -> "po18022002",
"vendor" -> "The Samsung compay",
"podate" -> mgoDate(,,),
"podtl" -> Seq(
Document("item" -> "samsung galaxy s8", "price" -> 2300.00, "qty" -> , "packing" -> "standard"),
Document("item" -> "samsung galaxy s7", "price" -> 1897.00, "qty" -> , "payterm" -> "30 days"),
Document("item" -> "apple iphone7", "price" -> 6500.00, "qty" -> , "packing" -> "luxury")
)
) val optInsert = new InsertManyOptions().ordered(true)
val ctxInsert = ctx.setCommand(
Insert(Seq(po1,po2),Some(optInsert))
)
// println(getResult(mgoExecute(ctxInsert)))
case class PO (
ponum: String,
podate: MGODate,
vendor: String,
remarks: Option[String],
podtl: Option[MGOArray],
handler: Option[MGOBlob]
)
def toPO(doc: Document): PO = {
PO(
ponum = doc.getString("ponum"),
podate = doc.getDate("podate"),
vendor = doc.getString("vendor"),
remarks = mgoGetStringOrNone(doc,"remarks"),
podtl = mgoGetArrayOrNone(doc,"podtl"),
handler = mgoGetBlobOrNone(doc,"handler")
)
}
case class PODTL(
item: String,
price: Double,
qty: Int,
packing: Option[String],
payTerm: Option[String]
)
def toPODTL(podtl: Document): PODTL = {
PODTL(
item = podtl.getString("item"),
price = podtl.getDouble("price"),
qty = podtl.getInteger("qty"),
packing = mgoGetStringOrNone(podtl,"packing"),
payTerm = mgoGetStringOrNone(podtl,"payterm")
)
}
def showPO(po: PO) = {
println(s"po number: ${po.ponum}")
println(s"po date: ${mgoDateToString(po.podate,"yyyy-MM-dd")}")
println(s"vendor: ${po.vendor}")
if (po.remarks != None)
println(s"remarks: ${po.remarks.get}")
po.podtl match {
case Some(barr) =>
mgoArrayToDocumentList(barr)
.map { dc => toPODTL(dc)}
.foreach { doc: PODTL =>
print(s"==>Item: ${doc.item} ")
print(s"price: ${doc.price} ")
print(s"qty: ${doc.qty} ")
doc.packing.foreach(pk => print(s"packing: ${pk} "))
doc.payTerm.foreach(pt => print(s"payTerm: ${pt} "))
println("")
}
case _ =>
}
po.handler match {
case Some(blob) =>
val fileName = s"/users/tiger/${po.ponum}.png"
mgoBlobToFile(blob,fileName)
println(s"picture saved to ${fileName}")
case None => println("no picture provided")
}
}
import org.mongodb.scala.model.Projections._
import org.mongodb.scala.model.Filters._
import org.mongodb.scala.model.Sorts._
val sort: MGOFilterResult = find => find.sort(descending("ponum"))
val proj: MGOFilterResult = find => find.projection(and(include("ponum","podate"),include("vendor"),excludeId()))
val ctxFind = ctx.setCommand(Find(andThen=Some(proj)))
val ctxFindFirst = ctx.setCommand(Find(firstOnly=true,converter = Some(toPO _)))
val ctxFindArrayItem = ctx.setCommand(
Find(filter = Some(equal("podtl.qty",)), converter = Some(toPO _))
) for {
_ <- mgoExecute[List[Document]](ctxFind).andThen {
case Success(docs) => docs.map(toPO).foreach(showPO)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
_ <- mgoExecute[PO](ctxFindFirst).andThen {
case Success(doc) => showPO(doc)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
_ <- mgoExecute[List[PO]](ctxFindArrayItem).andThen {
case Success(docs) => docs.foreach(showPO)
println("-------------------------------")
case Failure(e) => println(e.getMessage)
}
} yield()
scala.io.StdIn.readLine()
system.terminate()
}