Part B: Key/value service with log compaction

Do a git pull to get the latest lab software.

As things stand now with your lab code, a rebooting server replays the complete Raft log in order to restore its state. However, it's not practical for a long-running server to remember the complete Raft log forever. Instead, you'll modify Raft and kvserver to cooperate to save space: from time to time kvserver will persistently store a "snapshot" of its current state, and Raft will discard log entries that precede the snapshot. When a server restarts (or falls far behind the leader and must catch up), the server first installs a snapshot and then replays log entries from after the point at which the snapshot was created. Section 7 of the extended Raft paper outlines the scheme; you will have to design the details.

You should spend some time figuring out what the interface will be between your Raft library and your service so that your Raft library can discard log entries. Think about how your Raft will operate while storing only the tail of the log, and how it will discard old log entries. You should discard them in a way that allows the Go garbage collector to free and re-use the memory; this requires that there be no reachable references (pointers) to the discarded log entries.
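
To make the garbage-collection point concrete, here is a minimal sketch of a GC-friendly trim, assuming the snapshottedIndex field and getRelativeLogIndex helper that appear in the raft.go code later in this post (the method name is illustrative, not part of the lab API). Copying into a fresh slice leaves no references to the discarded entries, so the old backing array can be reclaimed; note that the posted ReplaceLogWithSnapshot below simply re-slices, which keeps the old array reachable.

// Illustrative sketch: discard all entries up to absolute index "index",
// keeping the entry at "index" as the guard at rf.log[0]. Copying into a
// new slice leaves no references to the discarded entries, so Go's GC
// can free the old backing array.
func (rf *Raft) trimLogThrough(index int) {
    start := rf.getRelativeLogIndex(index)
    trimmed := make([]LogEntry, len(rf.log)-start)
    copy(trimmed, rf.log[start:])
    rf.log = trimmed
    rf.snapshottedIndex = index
}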

The tester passes maxraftstate to your StartKVServer(). maxraftstate indicates the maximum allowed size of your persistent Raft state in bytes (including the log, but not including snapshots). You should compare maxraftstate to persister.RaftStateSize(). Whenever your key/value server detects that the Raft state size is approaching this threshold, it should save a snapshot, and tell the Raft library that it has snapshotted, so that Raft can discard old log entries. If maxraftstate is -1, you do not have to snapshot.
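
A hedged sketch of that size check, assuming the GetRaftStateSize() accessor added to Raft in the code below; the method name needSnapshot and the 90% headroom factor are illustrative choices, not part of the lab API.

func (kv *KVServer) needSnapshot() bool {
    if kv.maxraftstate == -1 {
        return false // tester disabled snapshotting
    }
    // snapshot a little before the limit so the trimmed state lands under it
    return kv.rf.GetRaftStateSize() >= kv.maxraftstate*9/10
}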

Your raft.go probably keeps the entire log in a Go slice. Modify it so that it can be given a log index, discard the entries before that index, and continue operating while storing only log entries after that index. Make sure you pass all the Raft tests after making these changes.
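
One common bookkeeping scheme for a trimmed log (the same one used by getRelativeLogIndex/getAbsoluteLogIndex in raft.go below): record the absolute index covered by the snapshot and translate between absolute Raft indices and positions in the in-memory slice. The helper names here are illustrative.

// rf.log[0] acts as a guard entry for the last snapshotted index, so
// absolute index i lives at rf.log[i - rf.snapshottedIndex].
func (rf *Raft) relIndex(absolute int) int { return absolute - rf.snapshottedIndex }
func (rf *Raft) absIndex(relative int) int { return relative + rf.snapshottedIndex }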

Modify your kvserver so that it detects when the persisted Raft state grows too large, and then hands a snapshot to Raft and tells Raft that it can discard old log entries. Raft should save each snapshot with persister.SaveStateAndSnapshot() (don't use files). A kvserver instance should restore the snapshot from the persister when it re-starts.
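
A sketch of the restart path, assuming the snapshot was gob-encoded as (db, lastAppliedRequestId) the way takeSnapshot() in server.go below writes it; restoreFromSnapshot is an illustrative name for what StartKVServer() could call with persister.ReadSnapshot().

func (kv *KVServer) restoreFromSnapshot(data []byte) {
    if data == nil || len(data) < 1 {
        return // no snapshot has been taken yet
    }
    r := bytes.NewBuffer(data)
    d := labgob.NewDecoder(r)
    var db map[string]string
    var lastApplied map[int64]int
    if d.Decode(&db) != nil || d.Decode(&lastApplied) != nil {
        return // decode failed; start from an empty state
    }
    kv.db = db
    kv.lastAppliedRequestId = lastApplied
}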

  • You can test your Raft and kvserver's ability to operate with a trimmed log, and its ability to re-start from the combination of a kvserver snapshot and persisted Raft state, by running the Lab 3A tests while artificially setting maxraftstate to 1.
  • Think about when a kvserver should snapshot its state and what should be included in the snapshot. Raft must store each snapshot in the persister object using SaveStateAndSnapshot(), along with corresponding Raft state. You can read the latest stored snapshot using ReadSnapshot().
  • Your kvserver must be able to detect duplicated operations in the log across checkpoints, so any state you are using to detect them must be included in the snapshots. Remember to capitalize all fields of structures stored in the snapshot (a snapshot-encoding sketch follows this list).
  • You are allowed to add methods to your Raft so that kvserver can manage the process of trimming the Raft log and manage kvserver snapshots.
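
A sketch of what the snapshot should contain, matching takeSnapshot() in server.go below: both the key/value map and the per-client duplicate-detection table are encoded, and every field crossing labgob must be exported; snapshotBytes is an illustrative name.

func (kv *KVServer) snapshotBytes() []byte {
    w := new(bytes.Buffer)
    e := labgob.NewEncoder(w)
    e.Encode(kv.db)                   // the key/value state itself
    e.Encode(kv.lastAppliedRequestId) // dedup state must survive the snapshot
    return w.Bytes()
}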

Modify your Raft leader code to send an InstallSnapshot RPC to a follower when the leader has discarded the log entries the follower needs. When a follower receives an InstallSnapshot RPC, your Raft code will need to send the included snapshot to its kvserver. You can use the applyCh for this purpose, by adding new fields to ApplyMsg. Your solution is complete when it passes all of the Lab 3 tests.
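
This is the ApplyMsg shape used in the raft.go code below: a snapshot rides on applyCh with CommandValid set to false and the raw snapshot bytes in an extra field, so the kvserver can tell it apart from an ordinary committed command.

type ApplyMsg struct {
    CommandValid bool
    Command      interface{}
    CommandIndex int

    CommandData []byte // 3B: nil for ordinary commands, snapshot bytes otherwise
}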

The maxraftstate limit applies to the GOB-encoded bytes your Raft passes to persister.SaveRaftState().

  • You should send the entire snapshot in a single InstallSnapshot RPC. You do not have to implement Figure 13's offset mechanism for splitting up the snapshot.
  • Make sure you pass TestSnapshotRPC before moving on to the other Snapshot tests.
  • A reasonable amount of time to take for the Lab 3 tests is 400 seconds of real time and 700 seconds of CPU time. Further, go test -run TestSnapshotSize should take less than 20 seconds of real time.

Your code should pass the 3B tests (as in the example output below) as well as the 3A tests.

$ go test -run 3B

Test: InstallSnapshot RPC (3B) ...

... Passed --   1.5  3   163   63

Test: snapshot size is reasonable (3B) ...

... Passed --   0.4  3  2407  800

Test: restarts, snapshots, one client (3B) ...

... Passed --  19.2  5 123372 24718

Test: restarts, snapshots, many clients (3B) ...

... Passed --  18.9  5 127387 58305

Test: unreliable net, snapshots, many clients (3B) ...

... Passed --  16.3  5  4485 1053

Test: unreliable net, restarts, snapshots, many clients (3B) ...

... Passed --  20.7  5  4802 1005

Test: unreliable net, restarts, partitions, snapshots, many clients (3B) ...

... Passed --  27.1  5  3281  535

Test: unreliable net, restarts, partitions, snapshots, many clients, linearizability checks (3B) ...

... Passed --  25.0  7 11344  748

PASS

ok      kvraft  129.114s

My homework code: raft.go

package raft

//
// this is an outline of the API that raft must expose to
// the service (or tester). see comments below for
// each of these functions for more details.
//
// rf = Make(...)
// create a new Raft server.
// rf.Start(command interface{}) (index, term, isleader)
// start agreement on a new log entry
// rf.GetState() (term, isLeader)
// ask a Raft for its current term, and whether it thinks it is leader
// ApplyMsg
// each time a new entry is committed to the log, each Raft peer
// should send an ApplyMsg to the service (or tester)
// in the same server.
//

import (
    "labrpc"
    "math/rand"
    "sync"
    "time"
)

import "bytes"
import "labgob"

//
// as each Raft peer becomes aware that successive log entries are
// committed, the peer should send an ApplyMsg to the service (or
// tester) on the same server, via the applyCh passed to Make(). set
// CommandValid to true to indicate that the ApplyMsg contains a newly
// committed log entry.
//
// in Lab 3 you'll want to send other kinds of messages (e.g.,
// snapshots) on the applyCh; at that point you can add fields to
// ApplyMsg, but set CommandValid to false for these other uses.
//
type ApplyMsg struct {
    CommandValid bool
    Command      interface{}
    CommandIndex int

    // to send kv snapshot to kv server
    CommandData []byte // 3B
}

type LogEntry struct {
    Command interface{}
    Term    int
}

const (
    Follower  int = 0
    Candidate int = 1
    Leader    int = 2

    HEART_BEAT_TIMEOUT = 100 // heartbeat interval; the lab allows at most ~10 heartbeats per second, so 100ms
)

//
// A Go object implementing a single Raft peer.
//
type Raft struct {
    mu        sync.Mutex          // Lock to protect shared access to this peer's state
    peers     []*labrpc.ClientEnd // RPC end points of all peers
    persister *Persister          // Object to hold this peer's persisted state
    me        int                 // this peer's index into peers[]

    // Your data here (2A, 2B, 2C).
    // Look at the paper's Figure 2 for a description of what
    // state a Raft server must maintain.
    electionTimer  *time.Timer   // election timer
    heartbeatTimer *time.Timer   // heartbeat timer
    state          int           // current role (Follower/Candidate/Leader)
    voteCount      int           // votes received in the current election
    applyCh        chan ApplyMsg // channel for delivering committed entries

    snapshottedIndex int // 3B: absolute index of the last entry covered by the snapshot

    // Persistent state on all servers:
    currentTerm int        // latest term server has seen (initialized to 0 on first boot, increases monotonically)
    votedFor    int        // candidateId that received vote in current term (or null if none)
    log         []LogEntry // log entries; each entry contains command for state machine, and term when entry was received by leader (first index is 1)

    // Volatile state on all servers:
    commitIndex int // index of highest log entry known to be committed (initialized to 0, increases monotonically)
    lastApplied int // index of highest log entry applied to state machine (initialized to 0, increases monotonically)

    // Volatile state on leaders (reinitialized after election):
    nextIndex  []int // for each server, index of the next log entry to send to that server (initialized to leader last log index + 1)
    matchIndex []int // for each server, index of highest log entry known to be replicated on server (initialized to 0, increases monotonically)
}

// return currentTerm and whether this server
// believes it is the leader.
func (rf *Raft) GetState() (int, bool) {
    var term int
    var isleader bool
    // Your code here (2A).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    term = rf.currentTerm
    isleader = rf.state == Leader
    return term, isleader
}

func (rf *Raft) encodeRaftState() []byte {
    w := new(bytes.Buffer)
    e := labgob.NewEncoder(w)
    e.Encode(rf.currentTerm)
    e.Encode(rf.votedFor)
    e.Encode(rf.log)
    e.Encode(rf.snapshottedIndex)
    return w.Bytes()
}

func (rf *Raft) persist() {
    // Your code here (2C).
    // Example:
    // w := new(bytes.Buffer)
    // e := labgob.NewEncoder(w)
    // e.Encode(rf.currentTerm)
    // e.Encode(rf.votedFor)
    // e.Encode(rf.log)
    // e.Encode(rf.snapshottedIndex)
    // data := w.Bytes()
    rf.persister.SaveRaftState(rf.encodeRaftState())
}

func (rf *Raft) GetRaftStateSize() int {
    return rf.persister.RaftStateSize()
}

//
// restore previously persisted state.
//
func (rf *Raft) readPersist(data []byte) {
    if data == nil || len(data) < 1 { // bootstrap without any state?
        return
    }
    // Your code here (2C).
    // Example:
    r := bytes.NewBuffer(data)
    d := labgob.NewDecoder(r)
    var currentTerm int
    var votedFor int
    var log []LogEntry
    var snapshottedIndex int
    if d.Decode(&currentTerm) != nil ||
        d.Decode(&votedFor) != nil ||
        d.Decode(&log) != nil ||
        d.Decode(&snapshottedIndex) != nil {
        // error...
        panic("fail to decode state")
    } else {
        rf.currentTerm = currentTerm
        rf.votedFor = votedFor
        rf.log = log
        rf.snapshottedIndex = snapshottedIndex
        // for lab 3b, we need to set them at the first index
        // i.e., 0 if snapshot is disabled
        rf.commitIndex = snapshottedIndex
        rf.lastApplied = snapshottedIndex
    }
}

//
// example RequestVote RPC arguments structure.
// field names must start with capital letters!
//
type RequestVoteArgs struct {
    // Your data here (2A, 2B).
    Term         int // candidate's term
    CandidateId  int // candidate requesting vote
    LastLogIndex int // index of candidate's last log entry (§5.4)
    LastLogTerm  int // term of candidate's last log entry (§5.4)
}

//
// example RequestVote RPC reply structure.
// field names must start with capital letters!
//
type RequestVoteReply struct {
    // Your data here (2A).
    Term        int  // currentTerm, for candidate to update itself
    VoteGranted bool // true means candidate received vote
}

//
// example RequestVote RPC handler.
//
func (rf *Raft) RequestVote(args *RequestVoteArgs, reply *RequestVoteReply) {
    // Your code here (2A, 2B).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    defer rf.persist() // state may change below, so persist on the way out
    DPrintf("Candidate[raft%v][term:%v] request vote: raft%v[%v] 's term%v\n", args.CandidateId, args.Term, rf.me, rf.state, rf.currentTerm)
    if args.Term < rf.currentTerm ||
        (args.Term == rf.currentTerm && rf.votedFor != -1 && rf.votedFor != args.CandidateId) {
        reply.Term = rf.currentTerm
        reply.VoteGranted = false
        return
    }

    if args.Term > rf.currentTerm {
        rf.currentTerm = args.Term
        rf.switchStateTo(Follower)
    }

    // 2B: candidate's vote should be at least up-to-date as receiver's log
    // "up-to-date" is defined in thesis 5.4.1
    lastLogIndex := len(rf.log) - 1
    if args.LastLogTerm < rf.log[lastLogIndex].Term ||
        (args.LastLogTerm == rf.log[lastLogIndex].Term &&
            args.LastLogIndex < rf.getAbsoluteLogIndex(lastLogIndex)) {
        // Receiver is more up-to-date, does not grant vote
        reply.Term = rf.currentTerm
        reply.VoteGranted = false
        return
    }

    rf.votedFor = args.CandidateId
    reply.Term = rf.currentTerm
    reply.VoteGranted = true
    // reset timer after grant vote
    rf.electionTimer.Reset(randTimeDuration())
}

type AppendEntriesArgs struct {
    Term         int        // leader's term
    LeaderId     int        // so follower can redirect clients
    PrevLogIndex int        // index of log entry immediately preceding new ones
    PrevLogTerm  int        // term of prevLogIndex entry
    Entries      []LogEntry // log entries to store (empty for heartbeat; may send more than one for efficiency)
    LeaderCommit int        // leader's commitIndex
}

type AppendEntriesReply struct {
    Term    int  // currentTerm, for leader to update itself
    Success bool // true if follower contained entry matching prevLogIndex and prevLogTerm

    // Figure 8: A time sequence showing why a leader cannot determine commitment using log entries from older terms. In
    // (a) S1 is leader and partially replicates the log entry at index
    // 2. In (b) S1 crashes; S5 is elected leader for term 3 with votes
    // from S3, S4, and itself, and accepts a different entry at log
    // index 2. In (c) S5 crashes; S1 restarts, is elected leader, and
    // continues replication. At this point, the log entry from term 2
    // has been replicated on a majority of the servers, but it is not
    // committed. If S1 crashes as in (d), S5 could be elected leader
    // (with votes from S2, S3, and S4) and overwrite the entry with
    // its own entry from term 3. However, if S1 replicates an entry from its current term on a majority of the servers before
    // crashing, as in (e), then this entry is committed (S5 cannot
    // win an election). At this point all preceding entries in the log
    // are committed as well.
    ConflictTerm  int // 2C
    ConflictIndex int // 2C
}

func (rf *Raft) AppendEntries(args *AppendEntriesArgs, reply *AppendEntriesReply) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    defer rf.persist() // state may change below, so persist on the way out
    DPrintf("leader[raft%v][term:%v] beat term:%v [raft%v][%v]\n", args.LeaderId, args.Term, rf.currentTerm, rf.me, rf.state)
    reply.Success = true

    // 1. Reply false if term < currentTerm (§5.1)
    if args.Term < rf.currentTerm {
        reply.Success = false
        reply.Term = rf.currentTerm
        return
    }
    // If RPC request or response contains term T > currentTerm: set currentTerm = T, convert to follower (§5.1)
    if args.Term > rf.currentTerm {
        rf.currentTerm = args.Term
        rf.switchStateTo(Follower)
    }

    // reset election timer even log does not match
    // args.LeaderId is the current term's Leader
    rf.electionTimer.Reset(randTimeDuration())

    if args.PrevLogIndex <= rf.snapshottedIndex {
        reply.Success = true

        // sync log if needed
        if args.PrevLogIndex+len(args.Entries) > rf.snapshottedIndex {
            // if snapshottedIndex == prevLogIndex, all log entries should be added.
            startIdx := rf.snapshottedIndex - args.PrevLogIndex
            // only keep the last snapshotted one
            rf.log = rf.log[:1]
            rf.log = append(rf.log, args.Entries[startIdx:]...)
        }

        return
    }

    // 2. Reply false if log doesn't contain an entry at prevLogIndex
    // whose term matches prevLogTerm (§5.3)
    lastLogIndex := rf.getAbsoluteLogIndex(len(rf.log) - 1)
    if lastLogIndex < args.PrevLogIndex {
        reply.Success = false
        reply.Term = rf.currentTerm
        // optimistically thinks receiver's log matches with Leader's as a subset
        reply.ConflictIndex = len(rf.log)
        // no conflict term
        reply.ConflictTerm = -1
        return
    }

    // 3. If an existing entry conflicts with a new one (same index
    // but different terms), delete the existing entry and all that
    // follow it (§5.3)
    if rf.log[rf.getRelativeLogIndex(args.PrevLogIndex)].Term != args.PrevLogTerm {
        reply.Success = false
        reply.Term = rf.currentTerm
        // receiver's log in certain term unmatches Leader's log
        reply.ConflictTerm = rf.log[rf.getRelativeLogIndex(args.PrevLogIndex)].Term

        // expecting Leader to check the former term
        // so set ConflictIndex to the first one of entries in ConflictTerm
        conflictIndex := args.PrevLogIndex
        // apparently, since rf.log[0] are ensured to match among all servers
        // ConflictIndex must be > 0, safe to minus 1
        for rf.log[rf.getRelativeLogIndex(conflictIndex-1)].Term == reply.ConflictTerm {
            conflictIndex--
            if conflictIndex == rf.snapshottedIndex+1 {
                // this may happen after snapshot,
                // because the term of the first log may be the current term
                // before lab 3b this is not going to happen, since rf.log[0].Term = 0
                break
            }
        }
        reply.ConflictIndex = conflictIndex
        return
    }

    // 4. Append any new entries not already in the log
    // compare from rf.log[args.PrevLogIndex + 1]
    unmatch_idx := -1
    for idx := range args.Entries {
        if len(rf.log) < rf.getRelativeLogIndex(args.PrevLogIndex+2+idx) ||
            rf.log[rf.getRelativeLogIndex(args.PrevLogIndex+1+idx)].Term != args.Entries[idx].Term {
            // unmatch log found
            unmatch_idx = idx
            break
        }
    }

    if unmatch_idx != -1 {
        // there are unmatch entries
        // truncate unmatch Follower entries, and apply Leader entries
        rf.log = rf.log[:rf.getRelativeLogIndex(args.PrevLogIndex+1+unmatch_idx)]
        rf.log = append(rf.log, args.Entries[unmatch_idx:]...)
    }

    // 5. If leaderCommit > commitIndex, set commitIndex = min(leaderCommit, index of last new entry)
    if args.LeaderCommit > rf.commitIndex {
        rf.setCommitIndex(min(args.LeaderCommit, rf.getAbsoluteLogIndex(len(rf.log)-1)))
    }

    reply.Success = true
}

//
// example code to send a RequestVote RPC to a server.
// server is the index of the target server in rf.peers[].
// expects RPC arguments in args.
// fills in *reply with RPC reply, so caller should
// pass &reply.
// the types of the args and reply passed to Call() must be
// the same as the types of the arguments declared in the
// handler function (including whether they are pointers).
//
// The labrpc package simulates a lossy network, in which servers
// may be unreachable, and in which requests and replies may be lost.
// Call() sends a request and waits for a reply. If a reply arrives
// within a timeout interval, Call() returns true; otherwise
// Call() returns false. Thus Call() may not return for a while.
// A false return can be caused by a dead server, a live server that
// can't be reached, a lost request, or a lost reply.
//
// Call() is guaranteed to return (perhaps after a delay) *except* if the
// handler function on the server side does not return. Thus there
// is no need to implement your own timeouts around Call().
//
// look at the comments in ../labrpc/labrpc.go for more details.
//
// if you're having trouble getting RPC to work, check that you've
// capitalized all field names in structs passed over RPC, and
// that the caller passes the address of the reply struct with &, not
// the struct itself.
//
func (rf *Raft) sendRequestVote(server int, args *RequestVoteArgs, reply *RequestVoteReply) bool {
    ok := rf.peers[server].Call("Raft.RequestVote", args, reply)
    return ok
}

func (rf *Raft) sendAppendEntries(server int, args *AppendEntriesArgs, reply *AppendEntriesReply) bool {
    ok := rf.peers[server].Call("Raft.AppendEntries", args, reply)
    return ok
}

//
// the service using Raft (e.g. a k/v server) wants to start
// agreement on the next command to be appended to Raft's log. if this
// server isn't the leader, returns false. otherwise start the
// agreement and return immediately. there is no guarantee that this
// command will ever be committed to the Raft log, since the leader
// may fail or lose an election. even if the Raft instance has been killed,
// this function should return gracefully.
//
// the first return value is the index that the command will appear at
// if it's ever committed. the second return value is the current
// term. the third return value is true if this server believes it is
// the leader.
//
func (rf *Raft) Start(command interface{}) (int, int, bool) {
    index := -1
    term := -1
    isLeader := true

    // Your code here (2B).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    term = rf.currentTerm
    isLeader = rf.state == Leader
    if isLeader {
        rf.log = append(rf.log, LogEntry{Command: command, Term: term})
        rf.persist() // log changed, persist it
        index = rf.getAbsoluteLogIndex(len(rf.log) - 1)
        rf.matchIndex[rf.me] = index
        rf.nextIndex[rf.me] = index + 1
        rf.heartbeats()
    }

    return index, term, isLeader
}

//
// the tester calls Kill() when a Raft instance won't
// be needed again. you are not required to do anything
// in Kill(), but it might be convenient to (for example)
// turn off debug output from this instance.
//
func (rf *Raft) Kill() {
    // Your code here, if desired.
}

//
// the service or tester wants to create a Raft server. the ports
// of all the Raft servers (including this one) are in peers[]. this
// server's port is peers[me]. all the servers' peers[] arrays
// have the same order. persister is a place for this server to
// save its persistent state, and also initially holds the most
// recent saved state, if any. applyCh is a channel on which the
// tester or service expects Raft to send ApplyMsg messages.
// Make() must return quickly, so it should start goroutines
// for any long-running work.
//
func Make(peers []*labrpc.ClientEnd, me int,
    persister *Persister, applyCh chan ApplyMsg) *Raft {
    rf := &Raft{}
    rf.peers = peers
    rf.persister = persister
    rf.me = me

    // Your initialization code here (2A, 2B, 2C).
    rf.state = Follower
    rf.votedFor = -1
    rf.heartbeatTimer = time.NewTimer(HEART_BEAT_TIMEOUT * time.Millisecond)
    rf.electionTimer = time.NewTimer(randTimeDuration())

    rf.applyCh = applyCh
    rf.log = make([]LogEntry, 1) // start from index 1

    // initialize from state persisted before a crash
    rf.mu.Lock()
    rf.readPersist(persister.ReadRaftState())
    rf.mu.Unlock()

    rf.nextIndex = make([]int, len(rf.peers))
    // for persist
    for i := range rf.nextIndex {
        // initialized to leader last log index + 1
        rf.nextIndex[i] = len(rf.log)
    }
    rf.matchIndex = make([]int, len(rf.peers))

    // drive the background logic from the two timers
    go func() {
        for {
            select {
            case <-rf.electionTimer.C:
                rf.mu.Lock()
                switch rf.state {
                case Follower:
                    rf.switchStateTo(Candidate)
                case Candidate:
                    rf.startElection()
                }
                rf.mu.Unlock()

            case <-rf.heartbeatTimer.C:
                rf.mu.Lock()
                if rf.state == Leader {
                    rf.heartbeats()
                    rf.heartbeatTimer.Reset(HEART_BEAT_TIMEOUT * time.Millisecond)
                }
                rf.mu.Unlock()
            }
        }
    }()

    return rf
}

func randTimeDuration() time.Duration {
    return time.Duration(HEART_BEAT_TIMEOUT*3+rand.Intn(HEART_BEAT_TIMEOUT)) * time.Millisecond
}

// switch role; the caller must hold the lock
func (rf *Raft) switchStateTo(state int) {
    if state == rf.state {
        return
    }
    DPrintf("Term %d: server %d convert from %v to %v\n", rf.currentTerm, rf.me, rf.state, state)
    rf.state = state
    switch state {
    case Follower:
        rf.heartbeatTimer.Stop()
        rf.electionTimer.Reset(randTimeDuration())
        rf.votedFor = -1
    case Candidate:
        // start an election immediately after becoming a candidate
        rf.startElection()

    case Leader:
        // initialized to leader last log index + 1
        for i := range rf.nextIndex {
            rf.nextIndex[i] = rf.getAbsoluteLogIndex(len(rf.log))
        }
        for i := range rf.matchIndex {
            rf.matchIndex[i] = rf.snapshottedIndex // 3B
        }

        rf.electionTimer.Stop()
        rf.heartbeats()
        rf.heartbeatTimer.Reset(HEART_BEAT_TIMEOUT * time.Millisecond)
    }
}

// send heartbeats to all peers; the caller must hold the lock
func (rf *Raft) heartbeats() {
    for i := range rf.peers {
        if i != rf.me {
            go rf.heartbeat(i)
        }
    }
}

func (rf *Raft) heartbeat(server int) {
    rf.mu.Lock()
    if rf.state != Leader {
        rf.mu.Unlock()
        return
    }

    prevLogIndex := rf.nextIndex[server] - 1

    if prevLogIndex < rf.snapshottedIndex {
        // leader has discarded log entries the follower needs
        // send snapshot to follower and retry later
        rf.mu.Unlock()
        rf.syncSnapshotWith(server)
        return
    }

    // use deep copy to avoid race condition
    // when override log in AppendEntries()
    entries := make([]LogEntry, len(rf.log[rf.getRelativeLogIndex(prevLogIndex+1):]))
    copy(entries, rf.log[rf.getRelativeLogIndex(prevLogIndex+1):])

    args := AppendEntriesArgs{
        Term:         rf.currentTerm,
        LeaderId:     rf.me,
        PrevLogIndex: prevLogIndex,
        PrevLogTerm:  rf.log[rf.getRelativeLogIndex(prevLogIndex)].Term,
        Entries:      entries,
        LeaderCommit: rf.commitIndex,
    }
    rf.mu.Unlock()

    var reply AppendEntriesReply
    if rf.sendAppendEntries(server, &args, &reply) {
        rf.mu.Lock()
        defer rf.mu.Unlock()
        if rf.state != Leader {
            return
        }
        // If last log index ≥ nextIndex for a follower: send
        // AppendEntries RPC with log entries starting at nextIndex
        // • If successful: update nextIndex and matchIndex for
        // follower (§5.3)
        // • If AppendEntries fails because of log inconsistency:
        // decrement nextIndex and retry (§5.3)
        if reply.Success {
            // successfully replicated args.Entries
            rf.matchIndex[server] = args.PrevLogIndex + len(args.Entries)
            rf.nextIndex[server] = rf.matchIndex[server] + 1

            // If there exists an N such that N > commitIndex, a majority
            // of matchIndex[i] ≥ N, and log[N].term == currentTerm:
            // set commitIndex = N (§5.3, §5.4).
            for N := rf.getAbsoluteLogIndex(len(rf.log) - 1); N > rf.commitIndex; N-- {
                count := 0
                for _, matchIndex := range rf.matchIndex {
                    if matchIndex >= N {
                        count += 1
                    }
                }

                if count > len(rf.peers)/2 {
                    // most of nodes agreed on rf.log[i]
                    rf.setCommitIndex(N)
                    break
                }
            }

        } else {
            if reply.Term > rf.currentTerm {
                rf.currentTerm = reply.Term
                rf.switchStateTo(Follower)
                rf.persist() // term changed, persist it
            } else {
                // reaching this branch means the follower's log conflicts,
                // so roll nextIndex back using the fast-backup hints
                rf.nextIndex[server] = reply.ConflictIndex

                // if term found, override it to
                // the first entry after entries in ConflictTerm
                if reply.ConflictTerm != -1 {
                    for i := args.PrevLogIndex; i >= rf.snapshottedIndex+1; i-- {
                        if rf.log[rf.getRelativeLogIndex(i-1)].Term == reply.ConflictTerm {
                            // in next trial, check if log entries in ConflictTerm matches
                            rf.nextIndex[server] = i
                            break
                        }
                    }
                }
                // retrying right away is not clearly better than waiting for
                // the next heartbeat, but it helps for 3B
                go rf.heartbeat(server)
            }
        }
        // rf.mu.Unlock()
    }
}

// start an election; the caller must hold the lock
func (rf *Raft) startElection() {

    // DPrintf("raft%v is starting election\n", rf.me)
    rf.currentTerm += 1
    rf.votedFor = rf.me // vote for me
    rf.persist()        // term/vote changed, persist it
    rf.voteCount = 1
    rf.electionTimer.Reset(randTimeDuration())

    for i := range rf.peers {
        if i != rf.me {
            go func(peer int) {
                rf.mu.Lock()
                lastLogIndex := len(rf.log) - 1
                args := RequestVoteArgs{
                    Term:         rf.currentTerm,
                    CandidateId:  rf.me,
                    LastLogIndex: rf.getAbsoluteLogIndex(lastLogIndex),
                    LastLogTerm:  rf.log[lastLogIndex].Term,
                }
                // DPrintf("raft%v[%v] is sending RequestVote RPC to raft%v\n", rf.me, rf.state, peer)
                rf.mu.Unlock()
                var reply RequestVoteReply
                if rf.sendRequestVote(peer, &args, &reply) {
                    rf.mu.Lock()
                    defer rf.mu.Unlock()
                    if reply.Term > rf.currentTerm {
                        rf.currentTerm = reply.Term
                        rf.switchStateTo(Follower)
                        rf.persist() // term changed, persist it
                    }
                    if reply.VoteGranted && rf.state == Candidate {
                        rf.voteCount++
                        if rf.voteCount > len(rf.peers)/2 {
                            rf.switchStateTo(Leader)
                        }
                    }
                }
            }(i)
        }
    }
}

//
// several setters, should be called with a lock
//
func (rf *Raft) setCommitIndex(commitIndex int) {
    rf.commitIndex = commitIndex
    // apply all entries between lastApplied and committed
    // should be called after commitIndex updated
    if rf.commitIndex > rf.lastApplied {
        DPrintf("%v apply from index %d to %d", rf, rf.lastApplied+1, rf.commitIndex)
        entriesToApply := append([]LogEntry{}, rf.log[rf.getRelativeLogIndex(rf.lastApplied+1):rf.getRelativeLogIndex(rf.commitIndex+1)]...)

        go func(startIdx int, entries []LogEntry) {
            for idx, entry := range entries {
                var msg ApplyMsg
                msg.CommandValid = true
                msg.Command = entry.Command
                msg.CommandIndex = startIdx + idx
                rf.applyCh <- msg
                // do not forget to update lastApplied index
                // this is another goroutine, so protect it with lock
                rf.mu.Lock()
                if rf.lastApplied < msg.CommandIndex {
                    rf.lastApplied = msg.CommandIndex
                }
                rf.mu.Unlock()
            }
        }(rf.lastApplied+1, entriesToApply)
    }
}

func min(x, y int) int {
    if x < y {
        return x
    } else {
        return y
    }
}

// 3B
func (rf *Raft) ReplaceLogWithSnapshot(appliedIndex int, kvSnapshot []byte) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    if appliedIndex <= rf.snapshottedIndex {
        return
    }
    // truncate log, keep snapshottedIndex as a guard at rf.log[0]
    // because it must be committed and applied
    rf.log = rf.log[rf.getRelativeLogIndex(appliedIndex):]
    rf.snapshottedIndex = appliedIndex
    rf.persister.SaveStateAndSnapshot(rf.encodeRaftState(), kvSnapshot)

    // update for other nodes
    for i := range rf.peers {
        if i == rf.me {
            continue
        }
        go rf.syncSnapshotWith(i)
    }
}

// invoked by the Leader to sync the snapshot with one follower
func (rf *Raft) syncSnapshotWith(server int) {
    rf.mu.Lock()
    if rf.state != Leader {
        rf.mu.Unlock()
        return
    }
    args := InstallSnapshotArgs{
        Term:              rf.currentTerm,
        LeaderId:          rf.me,
        LastIncludedIndex: rf.snapshottedIndex,
        LastIncludedTerm:  rf.log[0].Term,
        Data:              rf.persister.ReadSnapshot(),
    }
    DPrintf("%v sync snapshot with server %d for index %d, last snapshotted = %d",
        rf, server, args.LastIncludedIndex, rf.snapshottedIndex)
    rf.mu.Unlock()

    var reply InstallSnapshotReply

    if rf.sendInstallSnapshot(server, &args, &reply) {
        rf.mu.Lock()
        if reply.Term > rf.currentTerm {
            rf.currentTerm = reply.Term
            rf.switchStateTo(Follower)
            rf.persist()
        } else {
            if rf.matchIndex[server] < args.LastIncludedIndex {
                rf.matchIndex[server] = args.LastIncludedIndex
            }
            rf.nextIndex[server] = rf.matchIndex[server] + 1
        }
        rf.mu.Unlock()
    }
}

func (rf *Raft) getRelativeLogIndex(index int) int {
    // index of rf.log
    return index - rf.snapshottedIndex
}

func (rf *Raft) getAbsoluteLogIndex(index int) int {
    // index of log including snapshotted ones
    return index + rf.snapshottedIndex
}

type InstallSnapshotArgs struct {
    // do not need to implement "chunk"
    // remove "offset" and "done"
    Term              int    // 3B
    LeaderId          int    // 3B
    LastIncludedIndex int    // 3B
    LastIncludedTerm  int    // 3B
    Data              []byte // 3B
}

type InstallSnapshotReply struct {
    Term int // 3B
}

func (rf *Raft) InstallSnapshot(args *InstallSnapshotArgs, reply *InstallSnapshotReply) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    // we do not need to call rf.persist() in this function
    // because rf.persister.SaveStateAndSnapshot() is called

    reply.Term = rf.currentTerm
    if args.Term < rf.currentTerm || args.LastIncludedIndex < rf.snapshottedIndex {
        return
    }

    if args.Term > rf.currentTerm {
        rf.currentTerm = args.Term
        rf.switchStateTo(Follower)
        // do not return here.
    }

    // step 2, 3, 4 is skipped because we simplify the "offset"

    // 6. if existing log entry has same index and term with
    // last log entry in snapshot, retain log entries following it
    lastIncludedRelativeIndex := rf.getRelativeLogIndex(args.LastIncludedIndex)
    if len(rf.log) > lastIncludedRelativeIndex &&
        rf.log[lastIncludedRelativeIndex].Term == args.LastIncludedTerm {
        rf.log = rf.log[lastIncludedRelativeIndex:]
    } else {
        // 7. discard entire log
        rf.log = []LogEntry{{Term: args.LastIncludedTerm, Command: nil}}
    }
    // 5. save snapshot file, discard any existing snapshot
    rf.snapshottedIndex = args.LastIncludedIndex
    // IMPORTANT: update commitIndex and lastApplied because after sync snapshot,
    // it has at least applied all logs before snapshottedIndex
    if rf.commitIndex < rf.snapshottedIndex {
        rf.commitIndex = rf.snapshottedIndex
    }
    if rf.lastApplied < rf.snapshottedIndex {
        rf.lastApplied = rf.snapshottedIndex
    }

    rf.persister.SaveStateAndSnapshot(rf.encodeRaftState(), args.Data)

    if rf.lastApplied > rf.snapshottedIndex {
        // snapshot is older than kv's db
        // if we install snapshot on kvserver, linearizability will break
        return
    }

    installSnapshotCommand := ApplyMsg{
        CommandIndex: rf.snapshottedIndex,
        Command:      "InstallSnapshot",
        CommandValid: false,
        CommandData:  rf.persister.ReadSnapshot(),
    }
    go func(msg ApplyMsg) {
        rf.applyCh <- msg
    }(installSnapshotCommand)
}

func (rf *Raft) sendInstallSnapshot(server int, args *InstallSnapshotArgs, reply *InstallSnapshotReply) bool {
    ok := rf.peers[server].Call("Raft.InstallSnapshot", args, reply)
    return ok
}

server.go

package raftkv

import (
    "bytes"
    "labgob"
    "labrpc"
    "log"
    "raft"
    "sync"
    "time"
)

const Debug = 0

func DPrintf(format string, a ...interface{}) (n int, err error) {
    if Debug > 0 {
        log.Printf(format, a...)
    }
    return
}

type Op struct {
    // Your definitions here.
    // Field names must start with capital letters,
    // otherwise RPC will break.
    Key   string
    Value string
    Name  string

    ClientId  int64
    RequestId int
}

type KVServer struct {
    mu      sync.Mutex
    me      int
    rf      *raft.Raft
    applyCh chan raft.ApplyMsg

    maxraftstate int // snapshot if log grows this big

    // Your definitions here.
    db                   map[string]string         // 3A
    dispatcher           map[int]chan Notification // 3A
    lastAppliedRequestId map[int64]int             // 3A

    appliedRaftLogIndex int // 3B
}

// 3B
func (kv *KVServer) shouldTakeSnapshot() bool {
    if kv.maxraftstate == -1 {
        return false
    }

    if kv.rf.GetRaftStateSize() >= kv.maxraftstate {
        return true
    }
    return false
}

func (kv *KVServer) takeSnapshot() {
    w := new(bytes.Buffer)
    e := labgob.NewEncoder(w)
    kv.mu.Lock()
    e.Encode(kv.db)
    e.Encode(kv.lastAppliedRequestId)
    appliedRaftLogIndex := kv.appliedRaftLogIndex
    kv.mu.Unlock()

    kv.rf.ReplaceLogWithSnapshot(appliedRaftLogIndex, w.Bytes())
}

// 3A
type Notification struct {
    ClientId  int64
    RequestId int
}

func (kv *KVServer) Get(args *GetArgs, reply *GetReply) {
    // Your code here.
    op := Op{
        Key:       args.Key,
        Name:      "Get",
        ClientId:  args.ClientId,
        RequestId: args.RequestId,
    }

    // wait for being applied
    // or leader changed (log is overridden, and never gets applied)
    reply.WrongLeader = kv.waitApplying(op, 500*time.Millisecond)

    if reply.WrongLeader == false {
        kv.mu.Lock()
        value, ok := kv.db[args.Key]
        kv.mu.Unlock()
        if ok {
            reply.Value = value
            return
        }
        // not found
        reply.Err = ErrNoKey
    }
}

func (kv *KVServer) PutAppend(args *PutAppendArgs, reply *PutAppendReply) {
    // Your code here.
    op := Op{
        Key:       args.Key,
        Value:     args.Value,
        Name:      args.Op,
        ClientId:  args.ClientId,
        RequestId: args.RequestId,
    }

    // wait for being applied
    // or leader changed (log is overridden, and never gets applied)
    reply.WrongLeader = kv.waitApplying(op, 500*time.Millisecond)
}

//
// the tester calls Kill() when a KVServer instance won't
// be needed again. you are not required to do anything
// in Kill(), but it might be convenient to (for example)
// turn off debug output from this instance.
//
func (kv *KVServer) Kill() {
    kv.rf.Kill()
    // Your code here, if desired.
}

//
// servers[] contains the ports of the set of
// servers that will cooperate via Raft to
// form the fault-tolerant key/value service.
// me is the index of the current server in servers[].
// the k/v server should store snapshots through the underlying Raft
// implementation, which should call persister.SaveStateAndSnapshot() to
// atomically save the Raft state along with the snapshot.
// the k/v server should snapshot when Raft's saved state exceeds maxraftstate bytes,
// in order to allow Raft to garbage-collect its log. if maxraftstate is -1,
// you don't need to snapshot.
// StartKVServer() must return quickly, so it should start goroutines
// for any long-running work.
//
func StartKVServer(servers []*labrpc.ClientEnd, me int, persister *raft.Persister, maxraftstate int) *KVServer {
    // call labgob.Register on structures you want
    // Go's RPC library to marshall/unmarshall.
    labgob.Register(Op{})

    kv := new(KVServer)
    kv.me = me
    kv.maxraftstate = maxraftstate

    // You may need initialization code here.
    kv.db = make(map[string]string)
    kv.dispatcher = make(map[int]chan Notification)
    kv.lastAppliedRequestId = make(map[int64]int)

    kv.applyCh = make(chan raft.ApplyMsg)
    kv.rf = raft.Make(servers, me, persister, kv.applyCh)

    // 3B: recover from snapshot
    snapshot := persister.ReadSnapshot()
    kv.installSnapshot(snapshot)

    // You may need initialization code here.
    go func() {
        for msg := range kv.applyCh {
            if msg.CommandValid == false {
                // 3B
                switch msg.Command.(string) {
                case "InstallSnapshot":
                    kv.installSnapshot(msg.CommandData)
                }
                continue
            }

            op := msg.Command.(Op)
            DPrintf("kvserver %d start applying command %s at index %d, request id %d, client id %d",
                kv.me, op.Name, msg.CommandIndex, op.RequestId, op.ClientId)
            kv.mu.Lock()
            if kv.isDuplicateRequest(op.ClientId, op.RequestId) {
                kv.mu.Unlock()
                continue
            }
            switch op.Name {
            case "Put":
                kv.db[op.Key] = op.Value
            case "Append":
                kv.db[op.Key] += op.Value
                // Get() does not need to modify db, skip
            }
            kv.lastAppliedRequestId[op.ClientId] = op.RequestId
            // 3B
            kv.appliedRaftLogIndex = msg.CommandIndex

            if ch, ok := kv.dispatcher[msg.CommandIndex]; ok {
                notify := Notification{
                    ClientId:  op.ClientId,
                    RequestId: op.RequestId,
                }
                ch <- notify
            }

            kv.mu.Unlock()
            DPrintf("kvserver %d applied command %s at index %d, request id %d, client id %d",
                kv.me, op.Name, msg.CommandIndex, op.RequestId, op.ClientId)
        }
    }()

    return kv
}

// should be called with lock
func (kv *KVServer) isDuplicateRequest(clientId int64, requestId int) bool {
    appliedRequestId, ok := kv.lastAppliedRequestId[clientId]
    if ok == false || requestId > appliedRequestId {
        return false
    }
    return true
}

func (kv *KVServer) waitApplying(op Op, timeout time.Duration) bool {
    // return common part of GetReply and PutAppendReply
    // i.e., WrongLeader
    index, _, isLeader := kv.rf.Start(op)
    if isLeader == false {
        return true
    }

    // 3B
    if kv.shouldTakeSnapshot() {
        kv.takeSnapshot()
    }

    var wrongLeader bool

    kv.mu.Lock()
    if _, ok := kv.dispatcher[index]; !ok {
        kv.dispatcher[index] = make(chan Notification, 1)
    }
    ch := kv.dispatcher[index]
    kv.mu.Unlock()
    select {
    case notify := <-ch:
        if notify.ClientId != op.ClientId || notify.RequestId != op.RequestId {
            // leader has changed
            wrongLeader = true
        } else {
            wrongLeader = false
        }

    case <-time.After(timeout):
        kv.mu.Lock()
        if kv.isDuplicateRequest(op.ClientId, op.RequestId) {
            wrongLeader = false
        } else {
            wrongLeader = true
        }
        kv.mu.Unlock()
    }
    DPrintf("kvserver %d got %s() RPC, insert op %+v at %d, reply WrongLeader = %v",
        kv.me, op.Name, op, index, wrongLeader)

    kv.mu.Lock()
    delete(kv.dispatcher, index)
    kv.mu.Unlock()
    return wrongLeader
}

// 3B
func (kv *KVServer) installSnapshot(snapshot []byte) {
    kv.mu.Lock()
    defer kv.mu.Unlock()
    if snapshot != nil {
        r := bytes.NewBuffer(snapshot)
        d := labgob.NewDecoder(r)
        if d.Decode(&kv.db) != nil ||
            d.Decode(&kv.lastAppliedRequestId) != nil {
            DPrintf("kvserver %d fails to recover from snapshot", kv.me)
        }
    }
}

go test -run 3B

go test -race -run 3B
