Part 2B

We want Raft to keep a consistent, replicated log of operations. A call to Start() at the leader starts the process of adding a new operation to the log; the leader sends the new operation to the other servers in AppendEntries RPCs.

Implement the leader and follower code to append new log entries. This will involve implementing Start(), completing the AppendEntries RPC structs, sending them, fleshing out the AppendEntry RPC handler, and advancing the commitIndex at the leader. Your first goal should be to pass the TestBasicAgree2B() test (in test_test.go). Once you have that working, you should get all the 2B tests to pass (go test -run 2B).
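
The trickiest of those steps is advancing the commitIndex at the leader. Below is a minimal, self-contained sketch of just that rule from Figure 2 ("if there exists an N such that N > commitIndex, a majority of matchIndex[i] ≥ N, and log[N].term == currentTerm, set commitIndex = N"). The function name advanceCommitIndex and its arguments are illustrative only, not part of the lab skeleton; the real implementation operates on the Raft struct under its mutex, as in the homework code further down.

package main

import "fmt"

// advanceCommitIndex is a stand-alone sketch of the leader's commit rule.
// logTerms[i] is the term of log entry i (index 0 is the dummy entry);
// matchIndex has one slot per peer, including the leader itself.
func advanceCommitIndex(matchIndex []int, logTerms []int, currentTerm, commitIndex int) int {
    for n := len(logTerms) - 1; n > commitIndex; n-- {
        if logTerms[n] != currentTerm {
            // only entries from the leader's current term may be committed by counting (§5.4.2)
            continue
        }
        count := 0
        for _, m := range matchIndex {
            if m >= n {
                count++
            }
        }
        if count > len(matchIndex)/2 {
            return n // a majority stores entry n, so everything up to n is committed
        }
    }
    return commitIndex // nothing new can be committed yet
}

func main() {
    logTerms := []int{0, 1, 1, 2}      // dummy entry, two term-1 entries, one term-2 entry
    matchIndex := []int{3, 3, 2, 1, 3} // 5 peers; peer 0 is the leader
    fmt.Println(advanceCommitIndex(matchIndex, logTerms, 2, 0)) // prints 3
}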

  • You will need to implement the election restriction (section 5.4.1 in the paper): do not grant a vote to a candidate whose log is not at least as up-to-date as your own.
  • One way to fail the early Lab 2B tests is to hold un-needed elections, that is, an election even though the current leader is alive and can talk to all peers. This can prevent agreement in situations where the tester believes agreement is possible. Bugs in election timer management, or not sending out heartbeats immediately after winning an election, can cause un-needed elections.
  • You may need to write code that waits for certain events to occur. Do not write loops that execute continuously without pausing, since that will slow your implementation enough that it fails tests. You can wait efficiently with Go's channels, or Go's condition variables, or (if all else fails) by inserting a time.Sleep(10 * time.Millisecond) in each loop iteration (a small sketch follows this list).
  • Give yourself time to rewrite your implementation in light of lessons learned about structuring concurrent code. In later labs you'll thank yourself for having Raft code that's as clear and clean as possible. For ideas, you can re-visit our structure, locking, and guide pages.
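
On the third bullet above (waiting without busy loops): here is a minimal, self-contained sketch of blocking on a sync.Cond until an event happens, instead of spinning. The state struct and waitForCommit name are illustrative only; the homework code below takes the other suggested route and blocks on timer channels in a select.

package main

import (
    "fmt"
    "sync"
    "time"
)

type state struct {
    mu        sync.Mutex
    cond      *sync.Cond
    committed bool
}

// waitForCommit blocks until committed becomes true, without burning CPU.
func (s *state) waitForCommit() {
    s.mu.Lock()
    defer s.mu.Unlock()
    for !s.committed { // always re-check the condition after every wakeup
        s.cond.Wait() // releases the lock while blocked, reacquires it on wakeup
    }
}

func main() {
    s := &state{}
    s.cond = sync.NewCond(&s.mu)

    go func() {
        time.Sleep(50 * time.Millisecond)
        s.mu.Lock()
        s.committed = true
        s.mu.Unlock()
        s.cond.Broadcast() // wake every waiter so it re-checks the condition
    }()

    s.waitForCommit()
    fmt.Println("event observed without spinning")
}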

The tests for upcoming labs may fail your code if it runs too slowly. You can check how much real time and CPU time your solution uses with the time command. Here's some typical output for Lab 2B:

$ time go test -run 2B
Test (2B): basic agreement ...
  ... Passed --   0.5  5   28    3
Test (2B): agreement despite follower disconnection ...
  ... Passed --   3.9  3   69    7
Test (2B): no agreement if too many followers disconnect ...
  ... Passed --   3.5  5  144    4
Test (2B): concurrent Start()s ...
  ... Passed --   0.7  3   12    6
Test (2B): rejoin of partitioned leader ...
  ... Passed --   4.3  3  106    4
Test (2B): leader backs up quickly over incorrect follower logs ...
  ... Passed --  23.0  5 1302  102
Test (2B): RPC counts aren't too high ...
  ... Passed --   2.2  3   30   12
PASS
ok      raft    38.029s

real    0m38.511s
user    0m1.460s
sys     0m0.901s
$

The "ok raft 38.029s" means that Go measured the time taken for the 2B tests to be 38.029 seconds of real (wall-clock) time. The "user 0m1.460s" means that the code consumed 1.460 seconds of CPU time, or time spent actually executing instructions (rather than waiting or sleeping). If your solution uses much more than a minute of real time for the 2B tests, or much more than 5 seconds of CPU time, you may run into trouble later on. Look for time spent sleeping or waiting for RPC timeouts, loops that run without sleeping or waiting for conditions or channel messages, or large numbers of RPCs sent.

Be sure you pass the 2A and 2B tests before submitting Part 2B.

my homework code:

package raft

//
// this is an outline of the API that raft must expose to
// the service (or tester). see comments below for
// each of these functions for more details.
//
// rf = Make(...)
//   create a new Raft server.
// rf.Start(command interface{}) (index, term, isleader)
//   start agreement on a new log entry
// rf.GetState() (term, isLeader)
//   ask a Raft for its current term, and whether it thinks it is leader
// ApplyMsg
//   each time a new entry is committed to the log, each Raft peer
//   should send an ApplyMsg to the service (or tester)
//   in the same server.
//

import (
    "labrpc"
    "math/rand"
    "sync"
    "time"
)

// import "bytes"
// import "labgob"

//
// as each Raft peer becomes aware that successive log entries are
// committed, the peer should send an ApplyMsg to the service (or
// tester) on the same server, via the applyCh passed to Make(). set
// CommandValid to true to indicate that the ApplyMsg contains a newly
// committed log entry.
//
// in Lab 3 you'll want to send other kinds of messages (e.g.,
// snapshots) on the applyCh; at that point you can add fields to
// ApplyMsg, but set CommandValid to false for these other uses.
//
type ApplyMsg struct {
    CommandValid bool
    Command      interface{}
    CommandIndex int
}

type LogEntry struct {
    Command interface{}
    Term    int
}

const (
    Follower  int = 0
    Candidate int = 1
    Leader    int = 2

    HEART_BEAT_TIMEOUT = 100 // heartbeat interval in ms: the tester requires at most 10 heartbeats per second, so one every 100ms
)

//
// A Go object implementing a single Raft peer.
//
type Raft struct {
    mu        sync.Mutex          // Lock to protect shared access to this peer's state
    peers     []*labrpc.ClientEnd // RPC end points of all peers
    persister *Persister          // Object to hold this peer's persisted state
    me        int                 // this peer's index into peers[]

    // Your data here (2A, 2B, 2C).
    // Look at the paper's Figure 2 for a description of what
    // state a Raft server must maintain.
    electionTimer  *time.Timer   // election timer
    heartbeatTimer *time.Timer   // heartbeat timer
    state          int           // role (Follower, Candidate, or Leader)
    voteCount      int           // votes received in the current election
    applyCh        chan ApplyMsg // channel for delivering committed entries

    currentTerm int        // latest term server has seen (initialized to 0 on first boot, increases monotonically)
    votedFor    int        // candidateId that received vote in current term (or null if none)
    log         []LogEntry // log entries; each entry contains command for state machine, and term when entry was received by leader (first index is 1)

    // Volatile state on all servers:
    commitIndex int // index of highest log entry known to be committed (initialized to 0, increases monotonically)
    lastApplied int // index of highest log entry applied to state machine (initialized to 0, increases monotonically)

    // Volatile state on leaders (reinitialized after election):
    nextIndex  []int // for each server, index of the next log entry to send to that server (initialized to leader last log index + 1)
    matchIndex []int // for each server, index of highest log entry known to be replicated on server (initialized to 0, increases monotonically)
}

// return currentTerm and whether this server
// believes it is the leader.
func (rf *Raft) GetState() (int, bool) {
    var term int
    var isleader bool
    // Your code here (2A).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    term = rf.currentTerm
    isleader = rf.state == Leader
    return term, isleader
}

func (rf *Raft) persist() {
    // Your code here (2C).
    // Example:
    // w := new(bytes.Buffer)
    // e := labgob.NewEncoder(w)
    // e.Encode(rf.xxx)
    // e.Encode(rf.yyy)
    // data := w.Bytes()
    // rf.persister.SaveRaftState(data)
}

//
// restore previously persisted state.
//
func (rf *Raft) readPersist(data []byte) {
    if data == nil || len(data) < 1 { // bootstrap without any state?
        return
    }
    // Your code here (2C).
    // Example:
    // r := bytes.NewBuffer(data)
    // d := labgob.NewDecoder(r)
    // var xxx
    // var yyy
    // if d.Decode(&xxx) != nil ||
    //    d.Decode(&yyy) != nil {
    //   error...
    // } else {
    //   rf.xxx = xxx
    //   rf.yyy = yyy
    // }
}

//
// example RequestVote RPC arguments structure.
// field names must start with capital letters!
//
type RequestVoteArgs struct {
    // Your data here (2A, 2B).
    Term         int // candidate's term
    CandidateId  int // candidate requesting vote
    LastLogIndex int // index of candidate's last log entry (§5.4)
    LastLogTerm  int // term of candidate's last log entry (§5.4)
}

//
// example RequestVote RPC reply structure.
// field names must start with capital letters!
//
type RequestVoteReply struct {
    // Your data here (2A).
    Term        int  // currentTerm, for candidate to update itself
    VoteGranted bool // true means candidate received vote
}

//
// example RequestVote RPC handler.
//
func (rf *Raft) RequestVote(args *RequestVoteArgs, reply *RequestVoteReply) {
    // Your code here (2A, 2B).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    DPrintf("Candidate[raft%v][term:%v] request vote: raft%v[%v] 's term%v\n", args.CandidateId, args.Term, rf.me, rf.state, rf.currentTerm)
    if args.Term < rf.currentTerm ||
        (args.Term == rf.currentTerm && rf.votedFor != -1 && rf.votedFor != args.CandidateId) {
        reply.Term = rf.currentTerm
        reply.VoteGranted = false
        return
    }

    if args.Term > rf.currentTerm {
        rf.currentTerm = args.Term
        rf.switchStateTo(Follower)
    }

    // 2B: candidate's log should be at least as up-to-date as receiver's log
    // "up-to-date" is defined in thesis 5.4.1
    lastLogIndex := len(rf.log) - 1
    if args.LastLogTerm < rf.log[lastLogIndex].Term ||
        (args.LastLogTerm == rf.log[lastLogIndex].Term &&
            args.LastLogIndex < lastLogIndex) {
        // Receiver is more up-to-date, does not grant vote
        reply.Term = rf.currentTerm
        reply.VoteGranted = false
        return
    }

    rf.votedFor = args.CandidateId
    reply.Term = rf.currentTerm
    reply.VoteGranted = true
    // reset timer after granting the vote
    rf.electionTimer.Reset(randTimeDuration())
}

type AppendEntriesArgs struct {
    Term         int        // leader's term
    LeaderId     int        // so follower can redirect clients
    PrevLogIndex int        // index of log entry immediately preceding new ones
    PrevLogTerm  int        // term of prevLogIndex entry
    Entries      []LogEntry // log entries to store (empty for heartbeat; may send more than one for efficiency)
    LeaderCommit int        // leader's commitIndex
}

type AppendEntriesReply struct {
    Term    int  // currentTerm, for leader to update itself
    Success bool // true if follower contained entry matching prevLogIndex and prevLogTerm
}

func (rf *Raft) AppendEntries(args *AppendEntriesArgs, reply *AppendEntriesReply) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    DPrintf("leader[raft%v][term:%v] beat term:%v [raft%v][%v]\n", args.LeaderId, args.Term, rf.currentTerm, rf.me, rf.state)
    reply.Success = true

    // 1. Reply false if term < currentTerm (§5.1)
    if args.Term < rf.currentTerm {
        reply.Success = false
        reply.Term = rf.currentTerm
        return
    }
    // If RPC request or response contains term T > currentTerm:
    // set currentTerm = T, convert to follower (§5.1)
    if args.Term > rf.currentTerm {
        rf.currentTerm = args.Term
        rf.switchStateTo(Follower)
    }

    // reset the election timer even if the log does not match:
    // args.LeaderId is the current term's leader
    rf.electionTimer.Reset(randTimeDuration())

    // 2. Reply false if log doesn't contain an entry at prevLogIndex
    //    whose term matches prevLogTerm (§5.3)
    lastLogIndex := len(rf.log) - 1
    if lastLogIndex < args.PrevLogIndex {
        reply.Success = false
        reply.Term = rf.currentTerm
        return
    }

    // 3. If an existing entry conflicts with a new one (same index
    //    but different terms), delete the existing entry and all that
    //    follow it (§5.3)
    if rf.log[args.PrevLogIndex].Term != args.PrevLogTerm {
        reply.Success = false
        reply.Term = rf.currentTerm
        return
    }

    // 4. Append any new entries not already in the log,
    //    comparing from rf.log[args.PrevLogIndex + 1]
    unmatch_idx := -1
    for idx := range args.Entries {
        if len(rf.log) < (args.PrevLogIndex+2+idx) ||
            rf.log[args.PrevLogIndex+1+idx].Term != args.Entries[idx].Term {
            // unmatched log entry found
            unmatch_idx = idx
            break
        }
    }

    if unmatch_idx != -1 {
        // there are unmatched entries:
        // truncate the follower's conflicting suffix, then append the leader's entries
        rf.log = rf.log[:args.PrevLogIndex+1+unmatch_idx]
        rf.log = append(rf.log, args.Entries[unmatch_idx:]...)
    }

    // 5. If leaderCommit > commitIndex, set commitIndex = min(leaderCommit, index of last new entry)
    if args.LeaderCommit > rf.commitIndex {
        rf.setCommitIndex(min(args.LeaderCommit, len(rf.log)-1))
    }

    reply.Success = true
}

//
// example code to send a RequestVote RPC to a server.
// server is the index of the target server in rf.peers[].
// expects RPC arguments in args.
// fills in *reply with RPC reply, so caller should
// pass &reply.
// the types of the args and reply passed to Call() must be
// the same as the types of the arguments declared in the
// handler function (including whether they are pointers).
//
// The labrpc package simulates a lossy network, in which servers
// may be unreachable, and in which requests and replies may be lost.
// Call() sends a request and waits for a reply. If a reply arrives
// within a timeout interval, Call() returns true; otherwise
// Call() returns false. Thus Call() may not return for a while.
// A false return can be caused by a dead server, a live server that
// can't be reached, a lost request, or a lost reply.
//
// Call() is guaranteed to return (perhaps after a delay) *except* if the
// handler function on the server side does not return. Thus there
// is no need to implement your own timeouts around Call().
//
// look at the comments in ../labrpc/labrpc.go for more details.
//
// if you're having trouble getting RPC to work, check that you've
// capitalized all field names in structs passed over RPC, and
// that the caller passes the address of the reply struct with &, not
// the struct itself.
//
func (rf *Raft) sendRequestVote(server int, args *RequestVoteArgs, reply *RequestVoteReply) bool {
    ok := rf.peers[server].Call("Raft.RequestVote", args, reply)
    return ok
}

func (rf *Raft) sendAppendEntries(server int, args *AppendEntriesArgs, reply *AppendEntriesReply) bool {
    ok := rf.peers[server].Call("Raft.AppendEntries", args, reply)
    return ok
}

//
// the service using Raft (e.g. a k/v server) wants to start
// agreement on the next command to be appended to Raft's log. if this
// server isn't the leader, returns false. otherwise start the
// agreement and return immediately. there is no guarantee that this
// command will ever be committed to the Raft log, since the leader
// may fail or lose an election. even if the Raft instance has been killed,
// this function should return gracefully.
//
// the first return value is the index that the command will appear at
// if it's ever committed. the second return value is the current
// term. the third return value is true if this server believes it is
// the leader.
//
func (rf *Raft) Start(command interface{}) (int, int, bool) {
    index := -1
    term := -1
    isLeader := true

    // Your code here (2B).
    rf.mu.Lock()
    defer rf.mu.Unlock()
    term = rf.currentTerm
    isLeader = rf.state == Leader
    if isLeader {
        // append the command to the leader's own log; followers receive it
        // on the next heartbeat / AppendEntries round
        rf.log = append(rf.log, LogEntry{Command: command, Term: term})
        index = len(rf.log) - 1
        rf.matchIndex[rf.me] = index
        rf.nextIndex[rf.me] = index + 1
    }

    return index, term, isLeader
}

//
// the tester calls Kill() when a Raft instance won't
// be needed again. you are not required to do anything
// in Kill(), but it might be convenient to (for example)
// turn off debug output from this instance.
//
func (rf *Raft) Kill() {
    // Your code here, if desired.
}

//
// the service or tester wants to create a Raft server. the ports
// of all the Raft servers (including this one) are in peers[]. this
// server's port is peers[me]. all the servers' peers[] arrays
// have the same order. persister is a place for this server to
// save its persistent state, and also initially holds the most
// recent saved state, if any. applyCh is a channel on which the
// tester or service expects Raft to send ApplyMsg messages.
// Make() must return quickly, so it should start goroutines
// for any long-running work.
//
func Make(peers []*labrpc.ClientEnd, me int,
    persister *Persister, applyCh chan ApplyMsg) *Raft {
    rf := &Raft{}
    rf.peers = peers
    rf.persister = persister
    rf.me = me

    // Your initialization code here (2A, 2B, 2C).
    rf.state = Follower
    rf.votedFor = -1
    rf.heartbeatTimer = time.NewTimer(HEART_BEAT_TIMEOUT * time.Millisecond)
    rf.electionTimer = time.NewTimer(randTimeDuration())

    rf.applyCh = applyCh
    rf.log = make([]LogEntry, 1) // start from index 1; index 0 holds a dummy entry

    rf.nextIndex = make([]int, len(rf.peers))
    rf.matchIndex = make([]int, len(rf.peers))

    // background goroutine rewritten around the two timers
    go func() {
        for {
            select {
            case <-rf.electionTimer.C:
                rf.mu.Lock()
                switch rf.state {
                case Follower:
                    rf.switchStateTo(Candidate)
                case Candidate:
                    rf.startElection()
                }
                rf.mu.Unlock()

            case <-rf.heartbeatTimer.C:
                rf.mu.Lock()
                if rf.state == Leader {
                    rf.heartbeats()
                    rf.heartbeatTimer.Reset(HEART_BEAT_TIMEOUT * time.Millisecond)
                }
                rf.mu.Unlock()
            }
        }
    }()

    // initialize from state persisted before a crash
    rf.readPersist(persister.ReadRaftState())

    return rf
}

// random election timeout, several times the heartbeat interval
func randTimeDuration() time.Duration {
    return time.Duration(HEART_BEAT_TIMEOUT*3+rand.Intn(HEART_BEAT_TIMEOUT)) * time.Millisecond
}

// switch role; the caller must hold the lock
func (rf *Raft) switchStateTo(state int) {
    if state == rf.state {
        return
    }
    DPrintf("Term %d: server %d convert from %v to %v\n", rf.currentTerm, rf.me, rf.state, state)
    rf.state = state
    switch state {
    case Follower:
        rf.heartbeatTimer.Stop()
        rf.electionTimer.Reset(randTimeDuration())
        rf.votedFor = -1

    case Candidate:
        // start an election immediately after becoming a candidate
        rf.startElection()

    case Leader:
        // initialized to leader last log index + 1
        for i := range rf.nextIndex {
            rf.nextIndex[i] = len(rf.log)
        }
        for i := range rf.matchIndex {
            rf.matchIndex[i] = 0
        }

        rf.electionTimer.Stop()
        rf.heartbeats()
        rf.heartbeatTimer.Reset(HEART_BEAT_TIMEOUT * time.Millisecond)
    }
}

// send heartbeats to all peers; the caller must hold the lock
func (rf *Raft) heartbeats() {
    for i := range rf.peers {
        if i != rf.me {
            go rf.heartbeat(i)
        }
    }
}

func (rf *Raft) heartbeat(server int) {
    rf.mu.Lock()
    if rf.state != Leader {
        rf.mu.Unlock()
        return
    }

    prevLogIndex := rf.nextIndex[server] - 1

    // use a deep copy to avoid a race condition
    // when the log is overwritten in AppendEntries()
    entries := make([]LogEntry, len(rf.log[prevLogIndex+1:]))
    copy(entries, rf.log[prevLogIndex+1:])

    args := AppendEntriesArgs{
        Term:         rf.currentTerm,
        LeaderId:     rf.me,
        PrevLogIndex: prevLogIndex,
        PrevLogTerm:  rf.log[prevLogIndex].Term,
        Entries:      entries,
        LeaderCommit: rf.commitIndex,
    }
    rf.mu.Unlock()

    var reply AppendEntriesReply
    if rf.sendAppendEntries(server, &args, &reply) {
        rf.mu.Lock()
        if rf.state != Leader {
            rf.mu.Unlock()
            return
        }
        // If last log index ≥ nextIndex for a follower: send
        // AppendEntries RPC with log entries starting at nextIndex
        // • If successful: update nextIndex and matchIndex for
        //   follower (§5.3)
        // • If AppendEntries fails because of log inconsistency:
        //   decrement nextIndex and retry (§5.3)
        if reply.Success {
            // successfully replicated args.Entries
            rf.matchIndex[server] = args.PrevLogIndex + len(args.Entries)
            rf.nextIndex[server] = rf.matchIndex[server] + 1

            // If there exists an N such that N > commitIndex, a majority
            // of matchIndex[i] ≥ N, and log[N].term == currentTerm:
            // set commitIndex = N (§5.3, §5.4).
            for N := len(rf.log) - 1; N > rf.commitIndex; N-- {
                count := 0
                for _, matchIndex := range rf.matchIndex {
                    if matchIndex >= N {
                        count += 1
                    }
                }

                if count > len(rf.peers)/2 {
                    // a majority of nodes agree on rf.log[N]
                    rf.setCommitIndex(N)
                    break
                }
            }

        } else {
            if reply.Term > rf.currentTerm {
                rf.currentTerm = reply.Term
                rf.switchStateTo(Follower)
            } else {
                // reaching this branch means the follower rejected because of a log
                // inconsistency, so back nextIndex up and retry on the next heartbeat
                rf.nextIndex[server] = args.PrevLogIndex - 1
            }
        }
        rf.mu.Unlock()
    }
}

// start an election; the caller must hold the lock
func (rf *Raft) startElection() {

    // DPrintf("raft%v is starting election\n", rf.me)
    rf.currentTerm += 1
    rf.votedFor = rf.me // vote for myself
    rf.voteCount = 1
    rf.electionTimer.Reset(randTimeDuration())

    for i := range rf.peers {
        if i != rf.me {
            go func(peer int) {
                rf.mu.Lock()
                lastLogIndex := len(rf.log) - 1
                args := RequestVoteArgs{
                    Term:         rf.currentTerm,
                    CandidateId:  rf.me,
                    LastLogIndex: lastLogIndex,
                    LastLogTerm:  rf.log[lastLogIndex].Term,
                }
                // DPrintf("raft%v[%v] is sending RequestVote RPC to raft%v\n", rf.me, rf.state, peer)
                rf.mu.Unlock()
                var reply RequestVoteReply
                if rf.sendRequestVote(peer, &args, &reply) {
                    rf.mu.Lock()
                    defer rf.mu.Unlock()
                    if reply.Term > rf.currentTerm {
                        rf.currentTerm = reply.Term
                        rf.switchStateTo(Follower)
                    }
                    if reply.VoteGranted && rf.state == Candidate {
                        rf.voteCount++
                        if rf.voteCount > len(rf.peers)/2 {
                            rf.switchStateTo(Leader)
                        }
                    }
                }
            }(i)
        }
    }
}

//
// several setters, should be called with the lock held
//
func (rf *Raft) setCommitIndex(commitIndex int) {
    rf.commitIndex = commitIndex
    // apply all entries between lastApplied and committed;
    // should be called after commitIndex is updated
    if rf.commitIndex > rf.lastApplied {
        DPrintf("%v apply from index %d to %d", rf, rf.lastApplied+1, rf.commitIndex)
        entriesToApply := append([]LogEntry{}, rf.log[rf.lastApplied+1:rf.commitIndex+1]...)

        go func(startIdx int, entries []LogEntry) {
            for idx, entry := range entries {
                var msg ApplyMsg
                msg.CommandValid = true
                msg.Command = entry.Command
                msg.CommandIndex = startIdx + idx
                rf.applyCh <- msg
                // do not forget to update the lastApplied index;
                // this is another goroutine, so protect it with the lock
                rf.mu.Lock()
                if rf.lastApplied < msg.CommandIndex {
                    rf.lastApplied = msg.CommandIndex
                }
                rf.mu.Unlock()
            }
        }(rf.lastApplied+1, entriesToApply)
    }
}

func min(x, y int) int {
    if x < y {
        return x
    } else {
        return y
    }
}

Run the tests with go test -run 2B, and again with the race detector: go test -race -run 2B.
