akka-typed(7) - cluster:sharding (cluster sharding)
While working with akka-typed I have found that many things are simplified and much more convenient. Supervision, for example: just wrap a Behavior with Behaviors.supervise() and the actor's SupervisorStrategy.restartWithBackoff strategy is easily in place. Clustered group routers are also very easy to use, and then there is cluster sharding. Below we walk through a concrete example of how to use cluster-sharding.
First, sharding refers to the mechanism for constructing a certain kind of actor — an entity — across multiple nodes of a cluster. Entities are created dynamically: the ClusterSharding system decides, based on the load of the nodes, on which node an entity is constructed, and it provides a ShardRegion, which acts as both the construction facility and the message intermediary for entities of that type. In other words, we can direct the same kind of computation to any entity via its entityId, but which cluster node that entity lives on cannot be determined manually; it is entirely guided by ClusterSharding. Let's first design an actor with simple functionality and examine the details of how it works as an entity:
object Counter {
sealed trait Command extends CborSerializable
case object Increment extends Command
final case class GetValue(replyTo: ActorRef[Response]) extends Command
case object StopCounter extends Command
private case object Idle extends Command

sealed trait Response extends CborSerializable
case class SubTtl(entityId: String, ttl: Int) extends Response

val TypeKey = EntityTypeKey[Command]("Counter")

def apply(nodeAddress: String, entityContext: EntityContext[Command]): Behavior[Command] = {
Behaviors.setup { ctx =>
def updated(value: Int): Behavior[Command] = {
Behaviors.receiveMessage[Command] {
case Increment =>
ctx.log.info("******************{} counting at {},{}",ctx.self.path,nodeAddress,entityContext.entityId)
updated(value + 1)
case GetValue(replyTo) =>
ctx.log.info("******************{} get value at {},{}",ctx.self.path,nodeAddress,entityContext.entityId)
replyTo ! SubTtl(entityContext.entityId,value)
Behaviors.same
case Idle =>
entityContext.shard ! ClusterSharding.Passivate(ctx.self)
Behaviors.same
case StopCounter =>
Behaviors.stopped(() => ctx.log.info("************{} stopping ... passivated for idling.", entityContext.entityId))
}
}
ctx.setReceiveTimeout(30.seconds, Idle)
updated(0)
}
}
}
The cluster-sharding mechanism works like this: a ShardRegion for a given EntityType is constructed and deployed on every node (or on designated nodes). The system can then construct entities of that type on any node where a ShardRegion has been deployed, and the ClusterSharding system routes each message to the correct recipient based on the entityId. Now let's see how the ShardRegion deployment is implemented:
object EntityManager {
sealed trait Command
case class AddOne(counterId: String) extends Command
case class GetSum(counterId: String) extends Command
case class WrappedTotal(res: Counter.Response) extends Command

def apply(): Behavior[Command] = Behaviors.setup { ctx =>
val cluster = Cluster(ctx.system)
val sharding = ClusterSharding(ctx.system)
val entityType = Entity(Counter.TypeKey) { entityContext =>
Counter(cluster.selfMember.address.toString,entityContext)
}.withStopMessage(Counter.StopCounter)
sharding.init(entityType)

val counterRef: ActorRef[Counter.Response] = ctx.messageAdapter(ref => WrappedTotal(ref))

Behaviors.receiveMessage[Command] {
case AddOne(cid) =>
val entityRef: EntityRef[Counter.Command] = sharding.entityRefFor(Counter.TypeKey, cid)
entityRef ! Counter.Increment
Behaviors.same
case GetSum(cid) =>
val entityRef: EntityRef[Counter.Command] = sharding.entityRefFor(Counter.TypeKey, cid)
entityRef ! Counter.GetValue(counterRef)
Behaviors.same
case WrappedTotal(ttl) => ttl match {
case Counter.SubTtl(eid,subttl) =>
ctx.log.info("***********************{} total: {} ",eid,subttl)
}
Behaviors.same
}
}
}
It couldn't be simpler: a single function call, sharding.init(entityType), completes the shard deployment on a node — this is how the system constructs the ShardRegion. The entityType describes a particular kind of actor template; let's look at its construction function:
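Note that sharding.init also returns an ActorRef for the local ShardRegion, which accepts messages wrapped in a ShardingEnvelope. As an alternative to entityRefFor, a message can therefore be routed by pairing it with the target entityId. A minimal sketch, assuming it sits inside the EntityManager setup block above where sharding and entityType are in scope (region is my own name for the returned reference):

```scala
// sharding.init returns the ShardRegion's ActorRef (sketch; reuses the
// Counter and entityType definitions from the EntityManager code above)
val region: ActorRef[ShardingEnvelope[Counter.Command]] =
  sharding.init(entityType)

// ClusterSharding extracts entityId "9013" from the envelope and routes
// Increment to that entity, wherever it currently lives in the cluster
region ! ShardingEnvelope("9013", Counter.Increment)
```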
object Entity {
/**
* Defines how the entity should be created. Used in [[ClusterSharding#init]]. More optional
* settings can be defined using the `with` methods of the returned [[Entity]].
*
* @param typeKey A key that uniquely identifies the type of entity in this cluster
* @param createBehavior Create the behavior for an entity given a [[EntityContext]] (includes entityId)
* @tparam M The type of message the entity accepts
*/
def apply[M](typeKey: EntityTypeKey[M])(
createBehavior: EntityContext[M] => Behavior[M]): Entity[M, ShardingEnvelope[M]] =
new Entity(createBehavior, typeKey, None, Props.empty, None, None, None, None, None)
}
This function takes an EntityTypeKey and a Behavior-construction function, createBehavior, and produces an Entity instance. The Entity type is defined as follows:
final class Entity[M, E] private[akka] (
val createBehavior: EntityContext[M] => Behavior[M],
val typeKey: EntityTypeKey[M],
val stopMessage: Option[M],
val entityProps: Props,
val settings: Option[ClusterShardingSettings],
val messageExtractor: Option[ShardingMessageExtractor[E, M]],
val allocationStrategy: Option[ShardAllocationStrategy],
val role: Option[String],
val dataCenter: Option[DataCenter]) {

/**
* [[akka.actor.typed.Props]] of the entity actors, such as dispatcher settings.
*/
def withEntityProps(newEntityProps: Props): Entity[M, E] =
copy(entityProps = newEntityProps)

/**
* Additional settings, typically loaded from configuration.
*/
def withSettings(newSettings: ClusterShardingSettings): Entity[M, E] =
copy(settings = Option(newSettings))

/**
* Message sent to an entity to tell it to stop, e.g. when rebalanced or passivated.
* If this is not defined it will be stopped automatically.
* It can be useful to define a custom stop message if the entity needs to perform
* some asynchronous cleanup or interactions before stopping.
*/
def withStopMessage(newStopMessage: M): Entity[M, E] =
copy(stopMessage = Option(newStopMessage))

/**
*
* If a `messageExtractor` is not specified the messages are sent to the entities by wrapping
* them in [[ShardingEnvelope]] with the entityId of the recipient actor. That envelope
* is used by the [[HashCodeMessageExtractor]] for extracting entityId and shardId. The number of
* shards is then defined by `numberOfShards` in `ClusterShardingSettings`, which by default
* is configured with `akka.cluster.sharding.number-of-shards`.
*/
def withMessageExtractor[Envelope](newExtractor: ShardingMessageExtractor[Envelope, M]): Entity[M, Envelope] =
new Entity(
createBehavior,
typeKey,
stopMessage,
entityProps,
settings,
Option(newExtractor),
allocationStrategy,
role,
dataCenter)

/**
* Allocation strategy which decides on which nodes to allocate new shards,
* [[ClusterSharding#defaultShardAllocationStrategy]] is used if this is not specified.
*/
def withAllocationStrategy(newAllocationStrategy: ShardAllocationStrategy): Entity[M, E] =
copy(allocationStrategy = Option(newAllocationStrategy))

/**
* Run the Entity actors on nodes with the given role.
*/
def withRole(newRole: String): Entity[M, E] = copy(role = Some(newRole))

/**
* The data center of the cluster nodes where the cluster sharding is running.
* If the dataCenter is not specified then the same data center as current node. If the given
* dataCenter does not match the data center of the current node the `ShardRegion` will be started
* in proxy mode.
*/
def withDataCenter(newDataCenter: DataCenter): Entity[M, E] = copy(dataCenter = Some(newDataCenter))

private def copy(
createBehavior: EntityContext[M] => Behavior[M] = createBehavior,
typeKey: EntityTypeKey[M] = typeKey,
stopMessage: Option[M] = stopMessage,
entityProps: Props = entityProps,
settings: Option[ClusterShardingSettings] = settings,
allocationStrategy: Option[ShardAllocationStrategy] = allocationStrategy,
role: Option[String] = role,
dataCenter: Option[DataCenter] = dataCenter): Entity[M, E] = {
new Entity(
createBehavior,
typeKey,
stopMessage,
entityProps,
settings,
messageExtractor,
allocationStrategy,
role,
dataCenter)
}
}
It contains a number of methods for controlling how an Entity is constructed and how it operates.
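For example, Counter entities could be kept off the front node by combining these builder methods — a hypothetical variation on the EntityManager code above, not part of the original example:

```scala
// restrict Counter entities to cluster nodes that carry the "shard" role
val entityType = Entity(Counter.TypeKey) { entityContext =>
    Counter(cluster.selfMember.address.toString, entityContext)
  }
  .withStopMessage(Counter.StopCounter) // graceful stop on passivation/rebalance
  .withRole("shard")                    // allocate shards only on role=shard nodes
sharding.init(entityType)
```

On nodes without the "shard" role the ShardRegion then starts in proxy mode, forwarding messages but hosting no entities.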
Next, we deploy this EntityManager as the RootBehavior onto several nodes:
object ClusterShardingApp {
def main(args: Array[String]): Unit = {
if (args.isEmpty) {
startup("shard", 25251)
startup("shard", 25252)
startup("shard", 25253)
startup("front", 25254)
} else {
require(args.size == 2, "Usage: role port")
startup(args(0), args(1).toInt)
}
}

def startup(role: String, port: Int): Unit = {
// Override the configuration of the port when specified as program argument
val config = ConfigFactory
.parseString(s"""
akka.remote.artery.canonical.port=$port
akka.cluster.roles = [$role]
""")
.withFallback(ConfigFactory.load("cluster"))

val entityManager = ActorSystem[EntityManager.Command](EntityManager(), "ClusterSystem", config)
...
}
Altogether three role=shard nodes and one front node are set up.
On the front node we send messages to the entities with entityIds 9013, 9014, 9015 and 9016:
def startup(role: String, port: Int): Unit = {
// Override the configuration of the port when specified as program argument
val config = ConfigFactory
.parseString(s"""
akka.remote.artery.canonical.port=$port
akka.cluster.roles = [$role]
""")
.withFallback(ConfigFactory.load("cluster"))

val entityManager = ActorSystem[EntityManager.Command](EntityManager(), "ClusterSystem", config)
if (role == "front") {
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9016")
entityManager ! EntityManager.GetSum("9013")
entityManager ! EntityManager.GetSum("9014")
entityManager ! EntityManager.GetSum("9015")
entityManager ! EntityManager.GetSum("9016")
}
Here is part of the resulting output:
::10.073 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/786/9014 counting at akka://ClusterSystem@127.0.0.1:25253,9014
::10.106 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/786/9014 counting at akka://ClusterSystem@127.0.0.1:25253,9014
::10.106 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/786/9014 counting at akka://ClusterSystem@127.0.0.1:25253,9014
::10.106 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/785/9013 counting at akka://ClusterSystem@127.0.0.1:25251,9013
::10.107 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/785/9013 counting at akka://ClusterSystem@127.0.0.1:25251,9013
::10.107 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/785/9013 counting at akka://ClusterSystem@127.0.0.1:25251,9013
::10.107 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/785/9013 counting at akka://ClusterSystem@127.0.0.1:25251,9013
::10.109 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/787/9015 counting at akka://ClusterSystem@127.0.0.1:25254,9015
::10.110 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/787/9015 counting at akka://ClusterSystem@127.0.0.1:25254,9015
::10.110 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/787/9015 counting at akka://ClusterSystem@127.0.0.1:25254,9015
::10.110 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/787/9015 get value at akka://ClusterSystem@127.0.0.1:25254,9015
::10.112 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.EntityManager$ - *********************** total:
::10.149 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/786/9014 get value at akka://ClusterSystem@127.0.0.1:25253,9014
::10.149 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/785/9013 get value at akka://ClusterSystem@127.0.0.1:25251,9013
::10.169 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.EntityManager$ - *********************** total:
::10.169 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.EntityManager$ - *********************** total:
::10.171 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/788/9016 counting at akka://ClusterSystem@127.0.0.1:25251,9016
::10.171 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ******************akka://ClusterSystem/system/sharding/Counter/788/9016 get value at akka://ClusterSystem@127.0.0.1:25251,9016
::10.172 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.EntityManager$ - *********************** total:
::32.176 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ************ stopping ... passivated for idling.
::52.529 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ************ stopping ... passivated for idling.
::52.658 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ************ stopping ... passivated for idling.
::52.662 [ClusterSystem-akka.actor.default-dispatcher-] INFO com.learn.akka.Counter$ - ************ stopping ... passivated for idling.
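Incidentally, the shard ids visible in the actor paths above (785 through 788 for entities 9013 through 9016) follow from the default HashCodeMessageExtractor described in the Entity docs earlier: roughly abs(entityId.hashCode) modulo the configured number of shards. A quick sketch (shardId is my own helper name; 1000 is the default akka.cluster.sharding.number-of-shards):

```scala
// simplified sketch of the default shard-id derivation used by
// HashCodeMessageExtractor: abs(hashCode) modulo number-of-shards
def shardId(entityId: String, numberOfShards: Int = 1000): String =
  (math.abs(entityId.hashCode) % numberOfShards).toString

println(shardId("9013")) // 785, matching /sharding/Counter/785/9013 above
println(shardId("9016")) // 788, matching /sharding/Counter/788/9016 above
```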
Below is the complete source code for this demonstration:
ClusterSharding.scala
package com.learn.akka
import scala.concurrent.duration._
import akka.actor.typed._
import akka.actor.typed.scaladsl._
import akka.cluster.sharding.typed.scaladsl.EntityContext
import akka.cluster.sharding.typed.scaladsl.Entity
import akka.persistence.typed.PersistenceId
//#sharding-extension
import akka.cluster.sharding.typed.ShardingEnvelope
import akka.cluster.sharding.typed.scaladsl.ClusterSharding
import akka.cluster.sharding.typed.scaladsl.EntityTypeKey
import akka.cluster.sharding.typed.scaladsl.EntityRef
import com.typesafe.config.ConfigFactory
import akka.cluster.typed.Cluster
//#counter
object Counter {
sealed trait Command extends CborSerializable
case object Increment extends Command
final case class GetValue(replyTo: ActorRef[Response]) extends Command
case object StopCounter extends Command
private case object Idle extends Command

sealed trait Response extends CborSerializable
case class SubTtl(entityId: String, ttl: Int) extends Response

val TypeKey = EntityTypeKey[Command]("Counter")

def apply(nodeAddress: String, entityContext: EntityContext[Command]): Behavior[Command] = {
Behaviors.setup { ctx =>
def updated(value: Int): Behavior[Command] = {
Behaviors.receiveMessage[Command] {
case Increment =>
ctx.log.info("******************{} counting at {},{}",ctx.self.path,nodeAddress,entityContext.entityId)
updated(value + 1)
case GetValue(replyTo) =>
ctx.log.info("******************{} get value at {},{}",ctx.self.path,nodeAddress,entityContext.entityId)
replyTo ! SubTtl(entityContext.entityId,value)
Behaviors.same
case Idle =>
entityContext.shard ! ClusterSharding.Passivate(ctx.self)
Behaviors.same
case StopCounter =>
Behaviors.stopped(() => ctx.log.info("************{} stopping ... passivated for idling.", entityContext.entityId))
}
}
ctx.setReceiveTimeout(30.seconds, Idle)
updated(0)
}
}
}
object EntityManager {
sealed trait Command
case class AddOne(counterId: String) extends Command
case class GetSum(counterId: String) extends Command
case class WrappedTotal(res: Counter.Response) extends Command

def apply(): Behavior[Command] = Behaviors.setup { ctx =>
val cluster = Cluster(ctx.system)
val sharding = ClusterSharding(ctx.system)
val entityType = Entity(Counter.TypeKey) { entityContext =>
Counter(cluster.selfMember.address.toString,entityContext)
}.withStopMessage(Counter.StopCounter)
sharding.init(entityType)

val counterRef: ActorRef[Counter.Response] = ctx.messageAdapter(ref => WrappedTotal(ref))

Behaviors.receiveMessage[Command] {
case AddOne(cid) =>
val entityRef: EntityRef[Counter.Command] = sharding.entityRefFor(Counter.TypeKey, cid)
entityRef ! Counter.Increment
Behaviors.same
case GetSum(cid) =>
val entityRef: EntityRef[Counter.Command] = sharding.entityRefFor(Counter.TypeKey, cid)
entityRef ! Counter.GetValue(counterRef)
Behaviors.same
case WrappedTotal(ttl) => ttl match {
case Counter.SubTtl(eid,subttl) =>
ctx.log.info("***********************{} total: {} ",eid,subttl)
}
Behaviors.same
}
}
}

object ClusterShardingApp {
def main(args: Array[String]): Unit = {
if (args.isEmpty) {
startup("shard", 25251)
startup("shard", 25252)
startup("shard", 25253)
startup("front", 25254)
} else {
require(args.size == 2, "Usage: role port")
startup(args(0), args(1).toInt)
}
}

def startup(role: String, port: Int): Unit = {
// Override the configuration of the port when specified as program argument
val config = ConfigFactory
.parseString(s"""
akka.remote.artery.canonical.port=$port
akka.cluster.roles = [$role]
""")
.withFallback(ConfigFactory.load("cluster"))

val entityManager = ActorSystem[EntityManager.Command](EntityManager(), "ClusterSystem", config)
if (role == "front") {
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9013")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9014")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9015")
entityManager ! EntityManager.AddOne("9016")
entityManager ! EntityManager.GetSum("9013")
entityManager ! EntityManager.GetSum("9014")
entityManager ! EntityManager.GetSum("9015")
entityManager ! EntityManager.GetSum("9016")
}
}
}
cluster.conf
akka {
actor {
provider = cluster

serialization-bindings {
"com.learn.akka.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
}
}
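Two sharding settings related to this example are worth knowing about. Neither appears in the project's cluster.conf above, and the values shown here are illustrative (assumed from the Akka 2.6 defaults):

```
akka.cluster.sharding {
  # number of shards entityIds are hashed into by the default extractor
  number-of-shards = 1000
  # built-in idle passivation, an alternative to the manual
  # receive-timeout + Passivate approach used in Counter above
  passivate-idle-entity-after = 120s
}
```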