ScalikeJDBC can configure its connection pools and global system settings through configuration files. The files are parsed with the Typesafe Config library, which by default loads application.conf, application.json and application.properties from the classpath. As a first try, we can define the JDBC driver settings for the h2 and mysql databases in resource/application.conf:

# JDBC settings
db {
  h2 {
    driver="org.h2.Driver"
    url="jdbc:h2:tcp://localhost/~/slickdemo"
    user=""
    password=""
    poolInitialSize=5
    poolMaxSize=7
    poolConnectionTimeoutMillis=1000
    poolValidationQuery="select 1 as one"
    poolFactoryName="commons-dbcp"
  }
}

db.mysql.driver="com.mysql.jdbc.Driver"
db.mysql.url="jdbc:mysql://localhost:3306/testdb"
db.mysql.user="root"
db.mysql.password=""
db.mysql.poolInitialSize=5
db.mysql.poolMaxSize=7
db.mysql.poolConnectionTimeoutMillis=1000
db.mysql.poolValidationQuery="select 1 as one"
db.mysql.poolFactoryName="commons-dbcp"

# scalikejdbc Global settings
scalikejdbc.global.loggingSQLAndTime.enabled=true
scalikejdbc.global.loggingSQLAndTime.logLevel=info
scalikejdbc.global.loggingSQLAndTime.warningEnabled=true
scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis=1000
scalikejdbc.global.loggingSQLAndTime.warningLogLevel=warn
scalikejdbc.global.loggingSQLAndTime.singleLineMode=false
scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace=false
scalikejdbc.global.loggingSQLAndTime.stackTraceDepth=15
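
Since Typesafe Config loads these files from the classpath automatically, the settings can be verified without involving ScalikeJDBC at all. A minimal sketch using plain Typesafe Config (the keys match the db.h2 / db.mysql entries above):

import com.typesafe.config.ConfigFactory

// ConfigFactory.load() merges application.conf, application.json and
// application.properties found on the classpath
val conf = ConfigFactory.load()
println(conf.getString("db.h2.url"))       // jdbc:h2:tcp://localhost/~/slickdemo
println(conf.getString("db.mysql.driver")) // com.mysql.jdbc.Driver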

The h2 and mysql entries above use two different HOCON styles (a nested block vs. dotted paths). ScalikeJDBC sets up the connection pool for a given database name in the setup(dbName) method of trait DBs:

/**
 * DB configurator
 */
trait DBs { self: TypesafeConfigReader with TypesafeConfig with EnvPrefix =>

  def setup(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    val JDBCSettings(url, user, password, driver) = readJDBCSettings(dbName)
    val cpSettings = readConnectionPoolSettings(dbName)
    if (driver != null && driver.trim.nonEmpty) {
      Class.forName(driver)
    }
    ConnectionPool.add(dbName, url, user, password, cpSettings)
  }

  def setupAll(): Unit = {
    loadGlobalSettings()
    dbNames.foreach { dbName => setup(Symbol(dbName)) }
  }

  def close(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    ConnectionPool.close(dbName)
  }

  def closeAll(): Unit = {
    ConnectionPool.closeAll
  }

}

/**
 * Default DB setup executor
 */
object DBs extends DBs
  with TypesafeConfigReader
  with StandardTypesafeConfig
  with NoEnvPrefix

As we can see, setup(dbName) performs the setup for dbName, including Class.forName(driver) and ConnectionPool.add(dbName, ...). Let's first try some operations against the h2 database:

import scalikejdbc._
import scalikejdbc.config._
import org.joda.time._
import scala.util._ //Try
import scalikejdbc.TxBoundary.Try._

object JDBCConfig extends App {

  // DBs.setup/DBs.setupAll loads specified JDBC driver classes.
  // DBs.setupAll()
  DBs.setup('h2)
  DBs.setup('mysql)
  // Unlike DBs.setupAll(), DBs.setup() doesn't load configurations under global settings automatically
  DBs.loadGlobalSettings()

  val dbname = 'h2

  //clear table object
  try {
    sql"""
      drop table members
    """.execute().apply()(NamedAutoSession(dbname))
  }
  catch {
    case _: Throwable =>
  }

We could also use DBs.setupAll() to set up every database defined in the configuration file; setupAll() additionally runs loadGlobalSettings(). Next, let's carry out some actual data operations:

  //construct SQL object
  val createSQL: SQL[Nothing,NoExtractor] = SQL("""
    create table members (
      id bigint primary key auto_increment,
      name varchar(30) not null,
      description varchar(1000),
      birthday date,
      created_at timestamp not null
    )""")

  //run this SQL
  createSQL.execute().apply()(NamedAutoSession(dbname))   //autoCommit

  //data model
  case class Member(
    id: Long,
    name: String,
    description: Option[String] = None,
    birthday: Option[LocalDate] = None,
    createdAt: DateTime)

  def create(name: String, birthday: Option[LocalDate], remarks: Option[String])(implicit session: DBSession): Member = {
    val insertSQL: SQL[Nothing,NoExtractor] =
      sql"""insert into members (name, birthday, description, created_at)
            values (${name}, ${birthday}, ${remarks}, ${DateTime.now})"""
    val id: Long = insertSQL.updateAndReturnGeneratedKey.apply()
    Member(id, name, remarks, birthday, DateTime.now)
  }

  val users = List(
    ("John", new LocalDate("2008-03-01"), "youngest user"),
    ("Susan", new LocalDate("2000-11-03"), "middle aged user"),
    ("Peter", new LocalDate("1983-01-21"), "oldest user")
  )

  val result: Try[List[Member]] =
    NamedDB(dbname) localTx { implicit session =>
      Try {
        val members: List[Member] = users.map { person =>
          create(person._1, Some(person._2), Some(person._3))
        }
        members
      }
    }

  result match {
    case Success(mlist) => println(s"batch added members: $mlist")
    case Failure(err) => println(s"${err.getMessage}")
  }

  //data row converter
  val toMember = (rs: WrappedResultSet) => Member(
    id = rs.long("id"),
    name = rs.string("name"),
    description = rs.stringOpt("description"),
    birthday = rs.jodaLocalDateOpt("birthday"),
    createdAt = rs.jodaDateTime("created_at")
  )

  val selectSQL: SQL[Member,HasExtractor] = sql"""select * from members""".map(toMember)

  val members: List[Member] = NamedDB(dbname) readOnly { implicit session =>
    selectSQL.list.apply()
  }

  println(s"all members: $members")

  NamedDB(dbname).close()
}

Note that throughout this code we use the Named variants (NamedDB(...), NamedAutoSession(...)) to target a specific database connection. The configuration file above also contains a poolFactoryName property, which selects the connection-pool implementation to use. ScalikeJDBC offers commons-dbcp, commons-dbcp2 and bonecp:

poolFactoryName="commons-dbcp"
poolFactoryName="commons=dbcp2"
poolFactoryName="bonecp"

If poolFactoryName is not provided in the configuration file, commons-dbcp is used by default. Looking them up, these pool managers are all fairly dated. Slick uses HikariCP, which was still receiving updates in 2018, so below we add HikariCP support to ScalikeJDBC. First we parse the HikariCP settings with Typesafe Config to build a HikariConfig object, and then use that to construct a HikariDataSource.
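
Stripped of the configuration plumbing, the HikariCP wiring itself only takes a few calls. A minimal sketch with hard-coded values (the full reader below derives them from application.conf instead):

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import scalikejdbc.{ConnectionPool, DataSourceConnectionPool}

// build a HikariConfig by hand
val hconf = new HikariConfig()
hconf.setDriverClassName("org.h2.Driver")
hconf.setJdbcUrl("jdbc:h2:tcp://localhost/~/slickdemo")
hconf.setMaximumPoolSize(10)

// HikariDataSource is a javax.sql.DataSource, so it plugs into
// ScalikeJDBC through DataSourceConnectionPool
val ds = new HikariDataSource(hconf)
ConnectionPool.add('h2, new DataSourceConnectionPool(ds))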

Here is the configuration-parsing code:

package configdbs

import scala.collection.mutable
import scala.concurrent.duration.Duration
import scala.language.implicitConversions
import com.typesafe.config._
import java.util.concurrent.TimeUnit
import java.util.Properties
import scalikejdbc.config._
import com.typesafe.config.Config
import com.zaxxer.hikari._
import scalikejdbc.ConnectionPoolFactoryRepository

/** Extension methods to make Typesafe Config easier to use */
class ConfigExtensionMethods(val c: Config) extends AnyVal {
  import scala.collection.JavaConverters._

  def getBooleanOr(path: String, default: => Boolean = false) = if(c.hasPath(path)) c.getBoolean(path) else default
  def getIntOr(path: String, default: => Int = 0) = if(c.hasPath(path)) c.getInt(path) else default
  def getStringOr(path: String, default: => String = null) = if(c.hasPath(path)) c.getString(path) else default
  def getConfigOr(path: String, default: => Config = ConfigFactory.empty()) = if(c.hasPath(path)) c.getConfig(path) else default

  def getMillisecondsOr(path: String, default: => Long = 0L) = if(c.hasPath(path)) c.getDuration(path, TimeUnit.MILLISECONDS) else default
  def getDurationOr(path: String, default: => Duration = Duration.Zero) =
    if(c.hasPath(path)) Duration(c.getDuration(path, TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS) else default

  def getPropertiesOr(path: String, default: => Properties = null): Properties =
    if(c.hasPath(path)) new ConfigExtensionMethods(c.getConfig(path)).toProperties else default

  def toProperties: Properties = {
    def toProps(m: mutable.Map[String, ConfigValue]): Properties = {
      val props = new Properties(null)
      m.foreach { case (k, cv) =>
        val v =
          if(cv.valueType() == ConfigValueType.OBJECT) toProps(cv.asInstanceOf[ConfigObject].asScala)
          else if(cv.unwrapped eq null) null
          else cv.unwrapped.toString
        if(v ne null) props.put(k, v)
      }
      props
    }
    toProps(c.root.asScala)
  }

  def getBooleanOpt(path: String): Option[Boolean] = if(c.hasPath(path)) Some(c.getBoolean(path)) else None
  def getIntOpt(path: String): Option[Int] = if(c.hasPath(path)) Some(c.getInt(path)) else None
  def getStringOpt(path: String) = Option(getStringOr(path))
  def getPropertiesOpt(path: String) = Option(getPropertiesOr(path))
}

object ConfigExtensionMethods {
  @inline implicit def configExtensionMethods(c: Config): ConfigExtensionMethods = new ConfigExtensionMethods(c)
}

trait HikariConfigReader extends TypesafeConfigReader {
  self: TypesafeConfig =>   // with TypesafeConfigReader => //NoEnvPrefix =>

  import ConfigExtensionMethods.configExtensionMethods

  def getFactoryName(dbName: Symbol): String = {
    val c: Config = config.getConfig(envPrefix + "db." + dbName.name)
    c.getStringOr("poolFactoryName", ConnectionPoolFactoryRepository.COMMONS_DBCP)
  }

  def hikariCPConfig(dbName: Symbol): HikariConfig = {

    val hconf = new HikariConfig()
    val c: Config = config.getConfig(envPrefix + "db." + dbName.name)

    // Connection settings
    if (c.hasPath("dataSourceClass")) {
      hconf.setDataSourceClassName(c.getString("dataSourceClass"))
    } else {
      Option(c.getStringOr("driverClassName", c.getStringOr("driver"))).map(hconf.setDriverClassName _)
    }
    hconf.setJdbcUrl(c.getStringOr("url", null))
    c.getStringOpt("user").foreach(hconf.setUsername)
    c.getStringOpt("password").foreach(hconf.setPassword)
    c.getPropertiesOpt("properties").foreach(hconf.setDataSourceProperties)

    // Pool configuration
    hconf.setConnectionTimeout(c.getMillisecondsOr("connectionTimeout", 1000))
    hconf.setValidationTimeout(c.getMillisecondsOr("validationTimeout", 1000))
    hconf.setIdleTimeout(c.getMillisecondsOr("idleTimeout", 600000))
    hconf.setMaxLifetime(c.getMillisecondsOr("maxLifetime", 1800000))
    hconf.setLeakDetectionThreshold(c.getMillisecondsOr("leakDetectionThreshold", 0))
    hconf.setInitializationFailFast(c.getBooleanOr("initializationFailFast", false))
    c.getStringOpt("connectionTestQuery").foreach(hconf.setConnectionTestQuery)
    c.getStringOpt("connectionInitSql").foreach(hconf.setConnectionInitSql)
    val numThreads = c.getIntOr("numThreads", 20)
    hconf.setMaximumPoolSize(c.getIntOr("maxConnections", numThreads * 5))
    hconf.setMinimumIdle(c.getIntOr("minConnections", numThreads))
    hconf.setPoolName(c.getStringOr("poolName", dbName.name))
    hconf.setRegisterMbeans(c.getBooleanOr("registerMbeans", false))

    // Equivalent of ConnectionPreparer
    hconf.setReadOnly(c.getBooleanOr("readOnly", false))
    c.getStringOpt("isolation").map("TRANSACTION_" + _).foreach(hconf.setTransactionIsolation)
    hconf.setCatalog(c.getStringOr("catalog", null))

    hconf
  }
}

The hikariCPConfig function returns hconf. Next we need to modify DBs.setup so that it calls the functions in HikariConfigReader to build a HikariDataSource along with the related configuration parameters:

import scalikejdbc._

trait ConfigDBs {
  self: TypesafeConfigReader with TypesafeConfig with HikariConfigReader =>

  def setup(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    getFactoryName(dbName) match {
      case "hikaricp" => {
        val hconf = hikariCPConfig(dbName)
        val hikariCPSource = new HikariDataSource(hconf)
        if (hconf.getDriverClassName != null && hconf.getDriverClassName.trim.nonEmpty) {
          Class.forName(hconf.getDriverClassName)
        }
        ConnectionPool.add(dbName, new DataSourceConnectionPool(hikariCPSource))
      }
      case _ => {
        val JDBCSettings(url, user, password, driver) = readJDBCSettings(dbName)
        val cpSettings = readConnectionPoolSettings(dbName)
        if (driver != null && driver.trim.nonEmpty) {
          Class.forName(driver)
        }
        ConnectionPool.add(dbName, url, user, password, cpSettings)
      }
    }
  }

  def setupAll(): Unit = {
    loadGlobalSettings()
    dbNames.foreach { dbName => setup(Symbol(dbName)) }
  }

  def close(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    ConnectionPool.close(dbName)
  }

  def closeAll(): Unit = {
    ConnectionPool.closeAll
  }

}

object ConfigDBs extends ConfigDBs
  with TypesafeConfigReader
  with StandardTypesafeConfig
  with HikariConfigReader

case class ConfigDBsWithEnv(envValue: String) extends ConfigDBs
  with TypesafeConfigReader
  with StandardTypesafeConfig
  with HikariConfigReader
  with EnvPrefix {

  override val env = Option(envValue)
}

We added a ConfigDBs object to replace the original DBs object; ConfigDBs.setup(dbName) implements the HikariCP wiring and configuration. ConfigDBsWithEnv additionally supports wrapping the settings in an extra path prefix in the configuration file:

dev {
  db {
    h2 {
      driver = "org.h2.Driver"
      url = "jdbc:h2:tcp://localhost/~/slickdemo"
      user = ""
      password = ""
      poolFactoryName = "hikaricp"
      numThreads = 10
      maxConnections = 50
      minConnections = 10
      keepAliveConnection = true
    }
    mysql {
      driver = "com.mysql.jdbc.Driver"
      url = "jdbc:mysql://localhost:3306/testdb"
      user = "root"
      password = ""
      poolInitialSize = 5
      poolMaxSize = 7
      poolConnectionTimeoutMillis = 1000
      poolValidationQuery = "select 1 as one"
      poolFactoryName = "bonecp"
    }
  }

  # scalikejdbc Global settings
  scalikejdbc.global.loggingSQLAndTime.enabled = true
  scalikejdbc.global.loggingSQLAndTime.logLevel = info
  scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
  scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
  scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
  scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
  scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
  scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 15
}
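
With the env prefix in place, the same database symbol resolves against different configuration paths. A quick usage sketch (assuming the dev block above is on the classpath):

import configdbs._

// EnvPrefix makes setup() read dev.db.h2.* instead of db.h2.*,
// so this pool is created through HikariCP (poolFactoryName = "hikaricp")
ConfigDBsWithEnv("dev").setup('h2)

// or set up every database defined under the dev prefix at once
ConfigDBsWithEnv("dev").setupAll()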

Now here is the test code with HikariCP enabled:

import configdbs._
import scalikejdbc._
import org.joda.time._
import scala.util._ //Try
import scalikejdbc.TxBoundary.Try._

object ConfigureDBs extends App {

  ConfigDBsWithEnv("dev").setupAll()

  val dbname = 'h2

  //clear table object
  try {
    sql"""
      drop table members
    """.execute().apply()(NamedAutoSession(dbname))
  }
  catch {
    case _: Throwable =>
  }

  //construct SQL object
  val createSQL: SQL[Nothing,NoExtractor] = SQL("""
    create table members (
      id bigint primary key auto_increment,
      name varchar(30) not null,
      description varchar(1000),
      birthday date,
      created_at timestamp not null
    )""")

  //run this SQL
  createSQL.execute().apply()(NamedAutoSession(dbname))   //autoCommit

  //data model
  case class Member(
    id: Long,
    name: String,
    description: Option[String] = None,
    birthday: Option[LocalDate] = None,
    createdAt: DateTime)

  def create(name: String, birthday: Option[LocalDate], remarks: Option[String])(implicit session: DBSession): Member = {
    val insertSQL: SQL[Nothing,NoExtractor] =
      sql"""insert into members (name, birthday, description, created_at)
            values (${name}, ${birthday}, ${remarks}, ${DateTime.now})"""
    val id: Long = insertSQL.updateAndReturnGeneratedKey.apply()
    Member(id, name, remarks, birthday, DateTime.now)
  }

  val users = List(
    ("John", new LocalDate("2008-03-01"), "youngest user"),
    ("Susan", new LocalDate("2000-11-03"), "middle aged user"),
    ("Peter", new LocalDate("1983-01-21"), "oldest user")
  )

  val result: Try[List[Member]] =
    NamedDB(dbname) localTx { implicit session =>
      Try {
        val members: List[Member] = users.map { person =>
          create(person._1, Some(person._2), Some(person._3))
        }
        members
      }
    }

  result match {
    case Success(mlist) => println(s"batch added members: $mlist")
    case Failure(err) => println(s"${err.getMessage}")
  }

  //data row converter
  val toMember = (rs: WrappedResultSet) => Member(
    id = rs.long("id"),
    name = rs.string("name"),
    description = rs.stringOpt("description"),
    birthday = rs.jodaLocalDateOpt("birthday"),
    createdAt = rs.jodaDateTime("created_at")
  )

  val selectSQL: SQL[Member,HasExtractor] = sql"""select * from members""".map(toMember)

  val members: List[Member] = NamedDB(dbname) readOnly { implicit session =>
    selectSQL.list.apply()
  }

  println(s"all members: $members")

  NamedDB(dbname).close()
}

It runs fine!

Below is the full demo source code for this discussion:

build.sbt

name := "learn-scalikeJDBC"

version := "0.1"

scalaVersion := "2.12.4"

// Scala 2.10, 2.11, 2.12
libraryDependencies ++= Seq(
"org.scalikejdbc" %% "scalikejdbc" % "3.1.0",
"org.scalikejdbc" %% "scalikejdbc-test" % "3.1.0" % "test",
"org.scalikejdbc" %% "scalikejdbc-config" % "3.1.0",
"com.h2database" % "h2" % "1.4.196",
"mysql" % "mysql-connector-java" % "6.0.6",
"org.postgresql" % "postgresql" % "9.4-1205-jdbc42",
"commons-dbcp" % "commons-dbcp" % "1.4",
"org.apache.tomcat" % "tomcat-jdbc" % "9.0.2",
"com.zaxxer" % "HikariCP" % "2.7.4",
"com.jolbox" % "bonecp" % "0.8.0.RELEASE",
"ch.qos.logback" % "logback-classic" % "1.2.3"
)

resource/application.conf

# JDBC settings
test {
  db {
    h2 {
      driver = "org.h2.Driver"
      url = "jdbc:h2:tcp://localhost/~/slickdemo"
      user = ""
      password = ""
      poolInitialSize = 5
      poolMaxSize = 7
      poolConnectionTimeoutMillis = 1000
      poolValidationQuery = "select 1 as one"
      poolFactoryName = "commons-dbcp2"
    }
  }

  db.mysql.driver = "com.mysql.jdbc.Driver"
  db.mysql.url = "jdbc:mysql://localhost:3306/testdb"
  db.mysql.user = "root"
  db.mysql.password = ""
  db.mysql.poolInitialSize = 5
  db.mysql.poolMaxSize = 7
  db.mysql.poolConnectionTimeoutMillis = 1000
  db.mysql.poolValidationQuery = "select 1 as one"
  db.mysql.poolFactoryName = "bonecp"

  # scalikejdbc Global settings
  scalikejdbc.global.loggingSQLAndTime.enabled = true
  scalikejdbc.global.loggingSQLAndTime.logLevel = info
  scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
  scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
  scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
  scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
  scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
  scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 15
}
dev {
  db {
    h2 {
      driver = "org.h2.Driver"
      url = "jdbc:h2:tcp://localhost/~/slickdemo"
      user = ""
      password = ""
      poolFactoryName = "hikaricp"
      numThreads = 10
      maxConnections = 50
      minConnections = 10
      keepAliveConnection = true
    }
    mysql {
      driver = "com.mysql.jdbc.Driver"
      url = "jdbc:mysql://localhost:3306/testdb"
      user = "root"
      password = ""
      poolInitialSize = 5
      poolMaxSize = 7
      poolConnectionTimeoutMillis = 1000
      poolValidationQuery = "select 1 as one"
      poolFactoryName = "bonecp"
    }
  }

  # scalikejdbc Global settings
  scalikejdbc.global.loggingSQLAndTime.enabled = true
  scalikejdbc.global.loggingSQLAndTime.logLevel = info
  scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
  scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
  scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
  scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
  scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
  scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 15
}

HikariConfig.scala

package configdbs

import scala.collection.mutable
import scala.concurrent.duration.Duration
import scala.language.implicitConversions
import com.typesafe.config._
import java.util.concurrent.TimeUnit
import java.util.Properties
import scalikejdbc.config._
import com.typesafe.config.Config
import com.zaxxer.hikari._
import scalikejdbc.ConnectionPoolFactoryRepository

/** Extension methods to make Typesafe Config easier to use */
class ConfigExtensionMethods(val c: Config) extends AnyVal {
  import scala.collection.JavaConverters._

  def getBooleanOr(path: String, default: => Boolean = false) = if(c.hasPath(path)) c.getBoolean(path) else default
  def getIntOr(path: String, default: => Int = 0) = if(c.hasPath(path)) c.getInt(path) else default
  def getStringOr(path: String, default: => String = null) = if(c.hasPath(path)) c.getString(path) else default
  def getConfigOr(path: String, default: => Config = ConfigFactory.empty()) = if(c.hasPath(path)) c.getConfig(path) else default

  def getMillisecondsOr(path: String, default: => Long = 0L) = if(c.hasPath(path)) c.getDuration(path, TimeUnit.MILLISECONDS) else default
  def getDurationOr(path: String, default: => Duration = Duration.Zero) =
    if(c.hasPath(path)) Duration(c.getDuration(path, TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS) else default

  def getPropertiesOr(path: String, default: => Properties = null): Properties =
    if(c.hasPath(path)) new ConfigExtensionMethods(c.getConfig(path)).toProperties else default

  def toProperties: Properties = {
    def toProps(m: mutable.Map[String, ConfigValue]): Properties = {
      val props = new Properties(null)
      m.foreach { case (k, cv) =>
        val v =
          if(cv.valueType() == ConfigValueType.OBJECT) toProps(cv.asInstanceOf[ConfigObject].asScala)
          else if(cv.unwrapped eq null) null
          else cv.unwrapped.toString
        if(v ne null) props.put(k, v)
      }
      props
    }
    toProps(c.root.asScala)
  }

  def getBooleanOpt(path: String): Option[Boolean] = if(c.hasPath(path)) Some(c.getBoolean(path)) else None
  def getIntOpt(path: String): Option[Int] = if(c.hasPath(path)) Some(c.getInt(path)) else None
  def getStringOpt(path: String) = Option(getStringOr(path))
  def getPropertiesOpt(path: String) = Option(getPropertiesOr(path))
}

object ConfigExtensionMethods {
  @inline implicit def configExtensionMethods(c: Config): ConfigExtensionMethods = new ConfigExtensionMethods(c)
}

trait HikariConfigReader extends TypesafeConfigReader {
  self: TypesafeConfig =>   // with TypesafeConfigReader => //NoEnvPrefix =>

  import ConfigExtensionMethods.configExtensionMethods

  def getFactoryName(dbName: Symbol): String = {
    val c: Config = config.getConfig(envPrefix + "db." + dbName.name)
    c.getStringOr("poolFactoryName", ConnectionPoolFactoryRepository.COMMONS_DBCP)
  }

  def hikariCPConfig(dbName: Symbol): HikariConfig = {

    val hconf = new HikariConfig()
    val c: Config = config.getConfig(envPrefix + "db." + dbName.name)

    // Connection settings
    if (c.hasPath("dataSourceClass")) {
      hconf.setDataSourceClassName(c.getString("dataSourceClass"))
    } else {
      Option(c.getStringOr("driverClassName", c.getStringOr("driver"))).map(hconf.setDriverClassName _)
    }
    hconf.setJdbcUrl(c.getStringOr("url", null))
    c.getStringOpt("user").foreach(hconf.setUsername)
    c.getStringOpt("password").foreach(hconf.setPassword)
    c.getPropertiesOpt("properties").foreach(hconf.setDataSourceProperties)

    // Pool configuration
    hconf.setConnectionTimeout(c.getMillisecondsOr("connectionTimeout", 1000))
    hconf.setValidationTimeout(c.getMillisecondsOr("validationTimeout", 1000))
    hconf.setIdleTimeout(c.getMillisecondsOr("idleTimeout", 600000))
    hconf.setMaxLifetime(c.getMillisecondsOr("maxLifetime", 1800000))
    hconf.setLeakDetectionThreshold(c.getMillisecondsOr("leakDetectionThreshold", 0))
    hconf.setInitializationFailFast(c.getBooleanOr("initializationFailFast", false))
    c.getStringOpt("connectionTestQuery").foreach(hconf.setConnectionTestQuery)
    c.getStringOpt("connectionInitSql").foreach(hconf.setConnectionInitSql)
    val numThreads = c.getIntOr("numThreads", 20)
    hconf.setMaximumPoolSize(c.getIntOr("maxConnections", numThreads * 5))
    hconf.setMinimumIdle(c.getIntOr("minConnections", numThreads))
    hconf.setPoolName(c.getStringOr("poolName", dbName.name))
    hconf.setRegisterMbeans(c.getBooleanOr("registerMbeans", false))

    // Equivalent of ConnectionPreparer
    hconf.setReadOnly(c.getBooleanOr("readOnly", false))
    c.getStringOpt("isolation").map("TRANSACTION_" + _).foreach(hconf.setTransactionIsolation)
    hconf.setCatalog(c.getStringOr("catalog", null))

    hconf
  }
}

import scalikejdbc._

trait ConfigDBs {
  self: TypesafeConfigReader with TypesafeConfig with HikariConfigReader =>

  def setup(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    getFactoryName(dbName) match {
      case "hikaricp" => {
        val hconf = hikariCPConfig(dbName)
        val hikariCPSource = new HikariDataSource(hconf)
        if (hconf.getDriverClassName != null && hconf.getDriverClassName.trim.nonEmpty) {
          Class.forName(hconf.getDriverClassName)
        }
        ConnectionPool.add(dbName, new DataSourceConnectionPool(hikariCPSource))
      }
      case _ => {
        val JDBCSettings(url, user, password, driver) = readJDBCSettings(dbName)
        val cpSettings = readConnectionPoolSettings(dbName)
        if (driver != null && driver.trim.nonEmpty) {
          Class.forName(driver)
        }
        ConnectionPool.add(dbName, url, user, password, cpSettings)
      }
    }
  }

  def setupAll(): Unit = {
    loadGlobalSettings()
    dbNames.foreach { dbName => setup(Symbol(dbName)) }
  }

  def close(dbName: Symbol = ConnectionPool.DEFAULT_NAME): Unit = {
    ConnectionPool.close(dbName)
  }

  def closeAll(): Unit = {
    ConnectionPool.closeAll
  }

}

object ConfigDBs extends ConfigDBs
  with TypesafeConfigReader
  with StandardTypesafeConfig
  with HikariConfigReader

case class ConfigDBsWithEnv(envValue: String) extends ConfigDBs
  with TypesafeConfigReader
  with StandardTypesafeConfig
  with HikariConfigReader
  with EnvPrefix {

  override val env = Option(envValue)
}

ConfigDBs.scala

import configdbs._
import scalikejdbc._
import org.joda.time._
import scala.util._ //Try
import scalikejdbc.TxBoundary.Try._

object ConfigureDBs extends App {

  ConfigDBsWithEnv("dev").setupAll()

  val dbname = 'mysql

  //clear table object
  try {
    sql"""
      drop table members
    """.execute().apply()(NamedAutoSession(dbname))
  }
  catch {
    case _: Throwable =>
  }

  //construct SQL object
  val createSQL: SQL[Nothing,NoExtractor] = SQL("""
    create table members (
      id bigint primary key auto_increment,
      name varchar(30) not null,
      description varchar(1000),
      birthday date,
      created_at timestamp not null
    )""")

  //run this SQL
  createSQL.execute().apply()(NamedAutoSession(dbname))   //autoCommit

  //data model
  case class Member(
    id: Long,
    name: String,
    description: Option[String] = None,
    birthday: Option[LocalDate] = None,
    createdAt: DateTime)

  def create(name: String, birthday: Option[LocalDate], remarks: Option[String])(implicit session: DBSession): Member = {
    val insertSQL: SQL[Nothing,NoExtractor] =
      sql"""insert into members (name, birthday, description, created_at)
            values (${name}, ${birthday}, ${remarks}, ${DateTime.now})"""
    val id: Long = insertSQL.updateAndReturnGeneratedKey.apply()
    Member(id, name, remarks, birthday, DateTime.now)
  }

  val users = List(
    ("John", new LocalDate("2008-03-01"), "youngest user"),
    ("Susan", new LocalDate("2000-11-03"), "middle aged user"),
    ("Peter", new LocalDate("1983-01-21"), "oldest user")
  )

  val result: Try[List[Member]] =
    NamedDB(dbname) localTx { implicit session =>
      Try {
        val members: List[Member] = users.map { person =>
          create(person._1, Some(person._2), Some(person._3))
        }
        members
      }
    }

  result match {
    case Success(mlist) => println(s"batch added members: $mlist")
    case Failure(err) => println(s"${err.getMessage}")
  }

  //data row converter
  val toMember = (rs: WrappedResultSet) => Member(
    id = rs.long("id"),
    name = rs.string("name"),
    description = rs.stringOpt("description"),
    birthday = rs.jodaLocalDateOpt("birthday"),
    createdAt = rs.jodaDateTime("created_at")
  )

  val selectSQL: SQL[Member,HasExtractor] = sql"""select * from members""".map(toMember)

  val members: List[Member] = NamedDB(dbname) readOnly { implicit session =>
    selectSQL.list.apply()
  }

  println(s"all members: $members")

  NamedDB(dbname).close()
}
