Goal: write code in IDEA that performs MongoDB operations, with reasonably convenient type conversion.

  • As seen in Practice 2, building the computing environment also requires storing and computing data, so type conversion is needed; the environment here is therefore built on top of Practice 2.
  • The Runoob (菜鸟教程) tutorial gives the exact format of each MongoDB command, e.g. creating a collection.

=> So we can define:

              case class CreateCollection(name: String, options: Option[Any] = None)  // other MongoDB commands are modelled in the same way
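For instance, the shell command db.createCollection("students") maps to a value of this case class. A minimal sketch (the collection names and options below are only illustrative; the Option[Any] is cast back to a CreateCollectionOptions when the command is executed):

import org.mongodb.scala.model.CreateCollectionOptions

// plain creation
val simple = CreateCollection("students")

// capped-collection creation; the options travel as Option[Any]
val capped = CreateCollection("logs",
  Some(new CreateCollectionOptions().capped(true).sizeInBytes(1024 * 1024)))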
  • From org.mongodb.scala.model.Filters we can see that the relevant methods in Filters all return Bson. For example:

 /**
* Creates a filter that matches all documents where the value of the given field is greater than the specified value.
*
* @param fieldName the field name
* @param value the value
* @tparam TItem the value type
* @return the filter
* @see [[http://docs.mongodb.org/manual/reference/operator/query/gt \$gt]]
*/
def gt[TItem](fieldName: String, value: TItem): Bson = JFilters.gt(fieldName, value)

/**
* Creates a filter that matches all documents where the value of the given field is less than the specified value.
*
* @param fieldName the field name
* @param value the value
* @tparam TItem the value type
* @return the filter
* @see [[http://docs.mongodb.org/manual/reference/operator/query/lt \$lt]]
*/
def lt[TItem](fieldName: String, value: TItem): Bson = JFilters.lt(fieldName, value)
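A quick sketch of how these helpers are used (the field name and bounds are placeholders); the resulting Bson values can be combined and later handed to the command case classes defined below:

import org.mongodb.scala.bson.conversions.Bson
import org.mongodb.scala.model.Filters._

val over18: Bson  = gt("age", 18)          // { age: { $gt: 18 } }
val under60: Bson = lt("age", 60)          // { age: { $lt: 60 } }
val working: Bson = and(over18, under60)   // both conditions must hold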

  

  • Following the same pattern, we can define:

               case class Count(filter: Option[Bson], options: Option[Any]) 
  • Executing a command requires a database and a collection, so create MGOContext.scala:
case class MGOContext(
  dbName: String,
  collName: String,
  action: MGOCommands = null
) { ctx =>
  def setDbName(name: String): MGOContext = ctx.copy(dbName = name)
  def setCollName(name: String): MGOContext = ctx.copy(collName = name)
  def setCommand(cmd: MGOCommands): MGOContext = ctx.copy(action = cmd)
}

object MGOContext {
  def apply(db: String, coll: String) = new MGOContext(db, coll)
  def apply(
    db: String,
    coll: String,
    action: MGOCommands = null
  ): MGOContext = new MGOContext(db, coll, action)
}

 

  • Define the database operations separately in MGOCommands.scala and MGOAdmins.scala:
// MGOAdmins.scala
import org.bson.conversions.Bson

object MGOAdmins {
  case class DropCollection(collName: String) extends MGOCommands
  case class CreateCollection(collName: String, options: Option[Any] = None) extends MGOCommands
  case class ListCollection(dbName: String) extends MGOCommands
  case class CreateView(viewName: String, viewOn: String, pipeline: Seq[Bson], options: Option[Any] = None) extends MGOCommands
  case class CreateIndex(key: Bson, options: Option[Any] = None) extends MGOCommands
  case class DropIndexByName(indexName: String, options: Option[Any] = None) extends MGOCommands
  case class DropIndexByKey(key: Bson, options: Option[Any] = None) extends MGOCommands
  case class DropAllIndexes(options: Option[Any] = None) extends MGOCommands
}

// MGOCommands.scala
import org.mongodb.scala.{Document, FindObservable}
import org.mongodb.scala.bson.conversions.Bson
import org.mongodb.scala.model.WriteModel

trait MGOCommands

object MGOCommands {
  case class Count(filter: Option[Bson], options: Option[Any]) extends MGOCommands
  case class Distict(fieldName: String, filter: Option[Bson]) extends MGOCommands
  case class Find[M](filter: Option[Bson] = None,
                     andThen: Option[FindObservable[Document] => FindObservable[Document]] = None,
                     converter: Option[Document => M] = None,
                     firstOnly: Boolean = false) extends MGOCommands
  case class Aggregate(pipeLine: Seq[Bson]) extends MGOCommands
  case class MapReduce(mapFunction: String, reduceFunction: String) extends MGOCommands
  case class Insert(newdocs: Seq[Document], options: Option[Any] = None) extends MGOCommands
  case class Delete(filter: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
  case class Replace(filter: Bson, replacement: Document, options: Option[Any] = None) extends MGOCommands
  case class Update(filter: Bson, update: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
  case class BulkWrite(commands: List[WriteModel[Document]], options: Option[Any] = None) extends MGOCommands
}
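With these case classes every MongoDB operation becomes a plain value that can be attached to a context. A small sketch (collection and field names are illustrative):

import org.mongodb.scala.Document
import org.mongodb.scala.model.Filters._
import MGOCommands._

// a find command: filter on age > 18, return each matching document as a JSON string
val findAdults = Find[String](
  filter    = Some(gt("age", 18)),
  converter = Some((doc: Document) => doc.toJson())
)

val ctx = MGOContext("testdb", "users").setCommand(findAdults)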
  • Putting it all together, implement the operations in MGOEngine.scala:
import org.mongodb.scala.{Document, MongoClient}
import org.mongodb.scala.model.{BulkWriteOptions, CountOptions, CreateCollectionOptions, CreateViewOptions, DeleteOptions, DropIndexOptions, IndexOptions, InsertManyOptions, InsertOneOptions, UpdateOptions}
import scala.concurrent.Future

object MGOEngine {
  import MGOAdmins._
  import MGOCommands._

  def DAO[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
ctx.action match {
case Count(Some(filter), Some(opt)) =>
coll.count(filter, opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter), None) =>
coll.count(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None, None) =>
coll.count().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field, Some(filter)) =>
coll.distinct(field, filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field, None) =>
coll.distinct(field).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None, None, optConv, false) =>
if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(None, None, optConv, true) =>
if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, false) =>
if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, true) =>
if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(None, Some(next), optConv, _) =>
if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), Some(next), optConv, _) =>
if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
/* aggregate */
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce */
case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
/* insert */
case Insert(docs, Some(opt)) =>
if (docs.size > 1) coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).toFuture()
.asInstanceOf[Future[T]]
else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).toFuture()
.asInstanceOf[Future[T]]
case Insert(docs, None) =>
if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
/* delete */
case Delete(filter, None, onlyOne) =>
if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
case Delete(filter, Some(opt), onlyOne) =>
if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
/* replace */
case Replace(filter, replacement, None) =>
coll.replaceOne(filter, replacement).toFuture().asInstanceOf[Future[T]]
case Replace(filter, replacement, Some(opt)) =>
coll.replaceOne(filter, replacement, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* update */
case Update(filter, update, None, onlyOne) =>
if (onlyOne) coll.updateOne(filter, update).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update).toFuture().asInstanceOf[Future[T]]
case Update(filter, update, Some(opt), onlyOne) =>
if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* bulkWrite */
case BulkWrite(commands, None) =>
coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
case BulkWrite(commands, Some(opt)) =>
coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
/* drop collection */
case DropCollection(collName) =>
val coll = db.getCollection(collName)
coll.drop().toFuture().asInstanceOf[Future[T]]
/* create collection */
case CreateCollection(collName, None) =>
db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
case CreateCollection(collName, Some(opt)) =>
db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) =>
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
/* create view */
case CreateView(viewName, viewOn, pline, None) =>
db.createView(viewName, viewOn, pline).toFuture().asInstanceOf[Future[T]]
case CreateView(viewName, viewOn, pline, Some(opt)) =>
db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
/* create index */
case CreateIndex(key, None) =>
coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
case CreateIndex(key, Some(opt)) =>
coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
/* drop index */
case DropIndexByName(indexName, None) =>
coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
case DropIndexByName(indexName, Some(opt)) =>
coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, None) =>
coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, Some(opt)) =>
coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(None) =>
coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(Some(opt)) =>
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
}
}
}
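Finally, a minimal sketch of driving MGOEngine.DAO (the connection string, database and collection names are placeholders, and blocking with Await is only for demonstration):

import org.mongodb.scala.MongoClient
import scala.concurrent.Await
import scala.concurrent.duration._

object MGODemo extends App {
  implicit val client: MongoClient = MongoClient("mongodb://localhost:27017")

  // count every document in testdb.users; coll.count() yields a Future[Long]
  val ctx = MGOContext("testdb", "users").setCommand(MGOCommands.Count(None, None))
  val total = Await.result(MGOEngine.DAO[Long](ctx), 10.seconds)
  println(s"total documents: $total")

  client.close()
}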

  
