An analysis of the core BitTorrent (BT) code in PeerSim

First, an overview of how the simulated BT network runs:

At the start, a node joins the network and contacts the tracker node, which returns a list of neighbors.
With this list, the node sends a BITFIELD message to each neighbor to learn which pieces they hold. It then decides which piece it needs and sends an INTERESTED message to the neighbors that own that piece. Based on each neighbor's transfer performance over the last 20 seconds, the local node picks the top three, unchokes them, and sends them block requests.
When a request arrives, the node replies with a PIECE message. Note that incoming requests are placed in a queue and served one by one in order, but only while fewer than 10 pieces are being uploaded; beyond that limit the remaining requests wait in the queue.
Whenever a node finishes downloading a piece, it sends a HAVE message to all its neighbors, announcing that it now has a piece to share.

Below is BitTorrent.java, with my own comments added.
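The choking step described above (ranking neighbors by their transfer rate over the last 20 seconds and unchoking the top three) is the heart of the algorithm. Before diving into the listing, here is a minimal standalone sketch of that selection. The class and method names are my own for illustration; they are not part of the PeerSim module, which keeps this state in its byBandwidth array instead.

```java
import java.util.*;
import java.util.stream.Collectors;

public class UnchokeSketch {
    // peerRates maps a peer ID to the bytes downloaded from it in the last 20 s.
    // Returns the IDs of the top-3 peers by rate: the ones to unchoke.
    static List<Integer> pickUnchoked(Map<Integer, Integer> peerRates) {
        return peerRates.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue()) // best rate first
                .limit(3)                                      // three unchoke slots
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<Integer, Integer> rates = new HashMap<>();
        rates.put(101, 400); rates.put(102, 900);
        rates.put(103, 150); rates.put(104, 700);
        System.out.println(pickUnchoked(rates)); // [102, 104, 101]
    }
}
```

In the real code the ranking happens in the CHOKE_TIME handler, which additionally falls back to random "optimistic" picks when fewer than three interested peers are known.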
/*
* Copyright (c) 2007-2008 Fabrizio Frioli, Michele Pedrolli
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
* --
*
* Please send your questions/suggestions to:
* {fabrizio.frioli, michele.pedrolli} at studenti dot unitn dot it
*
 */

package peersim.bittorrent;

import peersim.core.*;
import peersim.config.*;
import peersim.edsim.*;
import peersim.transport.*;

/**
* This is the class that implements the BitTorrent module for Peersim
*/
public class BitTorrent implements EDProtocol {
// Parameter names used to read values from the configuration file
/**
* The size in Megabytes of the file being shared.
* @config
*/
private static final String PAR_SIZE="file_size";
/**
* The Transport used by the the protocol.
* @config
*/
private static final String PAR_TRANSPORT="transport";
/**
* The maximum number of neighbor that a node can have.
* @config
*/
private static final String PAR_SWARM="max_swarm_size";
/**
* The maximum number of peers returned by the tracker when a new
* set of peers is requested through a <tt>TRACKER</tt> message.
* @config
*/
private static final String PAR_PEERSET_SIZE="peerset_size";
/**
* Defines how much the network can grow with respect to the <tt>network.size</tt>
* when {@link NetworkDynamics} is used.
* @config
*/
private static final String PAR_MAX_GROWTH="max_growth";
/**
* Is the number of requests of the same block sent to different peers.
* @config
*/
private static final String PAR_DUP_REQ = "duplicated_requests";

// Numeric codes for the message and event types handled in processEvent().

/**
 * KEEP_ALIVE message.
 * @see SimpleEvent#type "Event types"
 */
private static final int KEEP_ALIVE = 1;

/**
 * CHOKE message.
 * @see SimpleEvent#type "Event types"
 */
private static final int CHOKE = 2;

/**
 * UNCHOKE message.
 * @see SimpleEvent#type "Event types"
 */
private static final int UNCHOKE = 3;

/**
 * INTERESTED message.
 * @see SimpleEvent#type "Event types"
 */
private static final int INTERESTED = 4;

/**
 * NOT_INTERESTED message.
 * @see SimpleEvent#type "Event types"
 */
private static final int NOT_INTERESTED = 5;

/**
 * HAVE message.
 * @see SimpleEvent#type "Event types"
 */
private static final int HAVE = 6;

/**
 * BITFIELD message.
 * @see SimpleEvent#type "Event types"
 */
private static final int BITFIELD = 7;

/**
 * REQUEST message.
 * @see SimpleEvent#type "Event types"
 */
private static final int REQUEST = 8;

/**
 * PIECE message.
 * @see SimpleEvent#type "Event types"
 */
private static final int PIECE = 9;

/**
 * CANCEL message.
 * @see SimpleEvent#type "Event types"
 */
private static final int CANCEL = 10;

/**
 * TRACKER message.
 * @see SimpleEvent#type "Event types"
 */
private static final int TRACKER = 11;

/**
 * PEERSET message.
 * @see SimpleEvent#type "Event types"
 */
private static final int PEERSET = 12;

/**
 * CHOKE_TIME event.
 * @see SimpleEvent#type "Event types"
 */
private static final int CHOKE_TIME = 13;

/**
 * OPTUNCHK_TIME event.
 * @see SimpleEvent#type "Event types"
 */
private static final int OPTUNCHK_TIME = 14;

/**
 * ANTISNUB_TIME event.
 * @see SimpleEvent#type "Event types"
 */
private static final int ANTISNUB_TIME = 15;

/**
 * CHECKALIVE_TIME event.
 * @see SimpleEvent#type "Event types"
 */
private static final int CHECKALIVE_TIME = 16;

/**
 * TRACKERALIVE_TIME event.
 * @see SimpleEvent#type "Event types"
 */
private static final int TRACKERALIVE_TIME = 17;

/**
 * DOWNLOAD_COMPLETED event.
 * @see SimpleEvent#type "Event types"
 */
private static final int DOWNLOAD_COMPLETED = 18;

// A long list of state variables follows; refer back to it while reading processEvent().

/**
 * The maximum connection speed of the local node.
*/
int maxBandwidth; // the maximum bandwidth (connection speed) of the local node

/**
 * Stores the neighbors ordered by ID.
 * @see Element
 */
private peersim.bittorrent.Element byPeer[]; // neighbors ordered by node ID

/**
 * Contains the neighbors ordered by bandwidth as needed by the unchoking
 * algorithm.
 */
private peersim.bittorrent.Element byBandwidth[]; // neighbors ordered by bandwidth

/**
 * The Neighbors list.
 */
private Neighbor cache[]; // the neighbor list; used throughout the protocol

/**
 * Reference to the neighbors that unchoked the local node.
 */
private boolean unchokedBy[]; // neighbors that have unchoked the local node

/**
 * Number of neighbors in the cache. When it decreases under 20, a new peerset
 * is requested from the tracker.
 */
private int nNodes = 0; // when this drops below 20, a new request is sent to the tracker

/**
 * Maximum number of nodes in the network.
 */
private int nMaxNodes;

/**
 * The status of the local peer. 0 means that the current peer is a leecher, 1 a seeder.
 */
private int peerStatus; // 0 = leecher, 1 = seeder

/**
 * Defines how much the network can grow with respect to the <tt>network.size</tt>
 * when {@link NetworkDynamics} is used.
 */
public int maxGrowth;

/**
 * File status of the local node. Contains the blocks owned by the local node.
 */
private int status[]; // blocks owned by the local node

/**
 * Current number of Bitfield requests sent. It must be taken into account
 * before sending another one.
 */
private int nBitfieldSent = 0;

/**
 * Current number of pieces in upload from the local peer.
 */
public int nPiecesUp = 0;

/**
 * Current number of pieces in download to the local peer.
 */
public int nPiecesDown = 0;

/**
 * Current number of pieces completed.
 */
private int nPieceCompleted = 0;

/**
 * Current downloading piece ID, the previous lastInterested piece.
 */
int currentPiece = -1; // ID of the piece currently being downloaded

/**
 * Used to compute the average download rates in the choking algorithm. Stores the
 * number of <tt>CHOKE</tt> events.
 */
int n_choke_time = 0;

/**
 * Used to send the <tt>TRACKER</tt> message when the local node has 20 neighbors
 * for the first time.
 */
boolean lock = false;

/**
 * Number of peers interested in my pieces.
 */
int numInterestedPeers = 0;

/**
 * Last piece for which the local node sent an <tt>INTERESTED</tt> message.
 */
int lastInterested = -1;

/**
 * The status of the current piece in download. Length 16; every time the local node
 * receives a PIECE message, it updates the corresponding block's cell. The cell
 * contains the ID for that block of that piece. If an already owned
 * block is received it is discarded.
 */
private int pieceStatus[];

/**
 * Length of the file, stored as a number of pieces (256KB each).
 */
int nPieces; // file length, measured in pieces

/**
 * Contains the neighbors' status of the file. Every row represents a
 * node and each cell has value 0 if the neighbor doesn't
 * have the piece, 1 otherwise. It has {@link #swarmSize} rows and {@link #nPieces}
 * columns.
 */
int [][]swarm; // file status of the neighbors: one row per node, one column per piece

/**
 * The summation of the swarm's rows. Calculated every time a {@link #BITFIELD} message
 * is received and updated every time a HAVE message is received.
 */
int rarestPieceSet[]; // the rarest-first piece set

/**
 * The five pending block requests.
 */
int pendingRequest[]; // pending block requests

/**
 * The maximum swarm size (default is 80).
 */
int swarmSize;

/**
 * The size of the peerset. This is the number of "friends" nodes
 * sent from the tracker to each new node (default: 50).
 */
int peersetSize;

/**
 * The ID of the current node.
 */
private long thisNodeID;

/**
 * Number of duplicated requests as specified in the configuration file.
 * @see BitTorrent#PAR_DUP_REQ
 */
private int numberOfDuplicatedRequests;

/**
 * The queue where the requests to serve are stored.
 * The default dimension of the queue is 20.
 */
Queue requestToServe = null;

/**
 * The queue where the out-of-sequence incoming pieces are stored
 * waiting for the right moment to be processed.
 * The default dimension of the queue is 100.
 */
Queue incomingPieces = null;

/**
 * The Transport ID.
 * @see BitTorrent#PAR_TRANSPORT
 */
int tid;

/**
 * The reference to the tracker node. If equal to <tt>null</tt>, the local
 * node is the tracker.
 */
private Node tracker = null;

/**
 * The default constructor. Reads the configuration file and initializes the
 * configuration parameters.
 * @param prefix the component prefix declared in the configuration file
 */
public BitTorrent(String prefix){ // Used for the tracker's protocol
tid = Configuration.getPid(prefix+"."+PAR_TRANSPORT);
nPieces = (int)((Configuration.getInt(prefix+"."+PAR_SIZE))*1000000/256000); // file size is given in MB; each piece is 256KB
swarmSize = (int)Configuration.getInt(prefix+"."+PAR_SWARM);
peersetSize = (int)Configuration.getInt(prefix+"."+PAR_PEERSET_SIZE);
numberOfDuplicatedRequests = (int)Configuration.getInt(prefix+"."+PAR_DUP_REQ);
maxGrowth = (int)Configuration.getInt(prefix+"."+PAR_MAX_GROWTH);
nMaxNodes = Network.getCapacity()-1;
}

/**
* Gets the reference to the tracker node.
* @return the reference to the tracker
*/
public Node getTracker(){
return tracker;
}

/**
* Gets the number of neighbors currently stored in the cache of the local node.
* @return the number of neighbors in the cache
*/
public int getNNodes(){
return this.nNodes;
}

/**
* Sets the reference to the tracker node.
* @param t the tracker node
*/
public void setTracker(Node t){
tracker = t;
}

/**
* Sets the ID of the local node.
* @param id the ID of the node
*/
public void setThisNodeID(long id) {
this.thisNodeID = id;
}

/**
* Gets the ID of the local node.
* @return the ID of the local node
*/
public long getThisNodeID(){
return this.thisNodeID;
}

/**
* Gets the file status of the local node.
* @return the file status of the local node
*/
public int[] getFileStatus(){
return this.status;
}

/**
* Initializes the tracker node. This method
* only performs the initialization of the tracker's cache.
*/
public void initializeTracker() {
cache = new Neighbor[nMaxNodes+maxGrowth];
for(int i=0; i<nMaxNodes+maxGrowth; i++){
cache[i]= new Neighbor();
}
}

/**
* <p>Checks the number of neighbors and if it is equal to 20
* sends a TRACKER messages to the tracker, asking for a new
* peer set.</p>
*
* <p>This method *must* be called after every call of {@link #removeNeighbor}
* in {@link #processEvent}.
* </p>
*/
private void processNeighborListSize(Node node, int pid) {
if (nNodes==20) {
Object ev;
long latency;
ev = new SimpleMsg(TRACKER, node);
Node tracker = ((BitTorrent)node.getProtocol(pid)).tracker;
if(tracker != null){
latency = ((Transport)node.getProtocol(tid)).getLatency(node, tracker);
EDSimulator.add(latency,ev,tracker,pid);
}
}
}

/**
* The standard method that processes incoming events.
* @param node reference to the local node for which the event is going to be processed
* @param pid BitTorrent's protocol id
* @param event the event to process
*/
public void processEvent(Node node, int pid, Object event){ // the core method: dispatches all message and event types; check the manual for details
Object ev;
long latency;
switch(((SimpleEvent)event).getType()){

case KEEP_ALIVE: // 1, KEEP_ALIVE message.
{
Node sender = ((IntMsg)event).getSender();
int isResponse = ((IntMsg)event).getInt();
//System.out.println("process, keep_alive: sender is "+sender.getID()+", local is "+node.getID());
Element e = search(sender.getID());
if(e!= null){ //if I know the sender
cache[e.peer].isAlive();
if(isResponse==0 && alive(sender)){
Object msg = new IntMsg(KEEP_ALIVE,node,1);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
EDSimulator.add(latency,msg,sender,pid);
cache[e.peer].justSent();
}
}
else{
System.err.println("despite it should never happen, it happened");
ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
nBitfieldSent++;
}
};break;

case CHOKE: // 2, CHOKE message.
{
Node sender = ((SimpleMsg)event).getSender();
//System.out.println("process, choke: sender is "+sender.getID()+", local is "+node.getID());
Element e = search(sender.getID());
if(e!= null){ //if I know the sender
cache[e.peer].isAlive();
unchokedBy[e.peer]= false; // I'm choked by it
}
else{
System.err.println("despite it should never happen, it happened");
ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
nBitfieldSent++;
}
};break;

case UNCHOKE: // 3, UNCHOKE message.
{
Node sender = ((SimpleMsg)event).getSender();
//System.out.println("process, unchoke: sender is "+sender.getID()+", local is "+node.getID());
Element e = search(sender.getID());
if(e != null){ // If I know the sender
int senderIndex = e.peer;
cache[senderIndex].isAlive();
/* I send to it some of the pending requests not yet satisfied. */
int t = numberOfDuplicatedRequests;
for(int i=4; i>=0 && t>0; i--){
if(pendingRequest[i]==-1)
break;
if(alive(cache[senderIndex].node) && swarm[senderIndex][decode(pendingRequest[i],0)]==1){ //If the sender has that piece
ev = new IntMsg(REQUEST, node,pendingRequest[i] );
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev, sender,pid);
cache[senderIndex].justSent();
}
if(!alive(cache[senderIndex].node)){
System.out.println("unchoke1 rm neigh "+ cache[i].node.getID() );
removeNeighbor(cache[senderIndex].node);
processNeighborListSize(node,pid);
return;
}
t--;
}
// I request missing blocks to fill the queue
int block = getBlock();
int piece;
while(block != -2){ // while still available request to send
if(block < 0){ // No more block to request for the current piece
piece = getPiece();
if(piece == -1){ // no more piece to request
break;
}
for(int j=0; j<swarmSize; j++){ // send the interested message to those
// nodes which have that piece
lastInterested = piece;
if(alive(cache[j].node) && swarm[j][piece]==1){
ev = new IntMsg(INTERESTED, node, lastInterested);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[j].node);
EDSimulator.add(latency,ev,cache[j].node,pid);
cache[j].justSent();
}
if(!alive(cache[j].node)){
//System.out.println("unchoke2 rm neigh "+ cache[j].node.getID() );
removeNeighbor(cache[j].node);
processNeighborListSize(node,pid);
}
}
block = getBlock();
}
else{ // block value referred to a real block
if(alive(cache[senderIndex].node) && swarm[senderIndex][decode(block,0)]==1 && addRequest(block)){ // The sender has that block
ev = new IntMsg(REQUEST, node, block);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
cache[senderIndex].justSent();
}
else{
if(!alive(cache[senderIndex].node)){
System.out.println("unchoke3 rm neigh "+ cache[senderIndex].node.getID() );
removeNeighbor(cache[senderIndex].node);
processNeighborListSize(node,pid);
}
return;
}
block = getBlock();
}
}
unchokedBy[senderIndex] = true; // I add the sender to the list
}
else // It should never happen.
{
System.err.println("despite it should never happen, it happened");
for(int i=0; i<swarmSize; i++)
if(cache[i].node !=null)
System.err.println(cache[i].node.getID());
ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
nBitfieldSent++;
}
};break;

case INTERESTED: // 4, INTERESTED message.
{
numInterestedPeers++;
Node sender = ((IntMsg)event).getSender();
//System.out.println("process, interested: sender is "+sender.getID()+", local is "+node.getID());
int value = ((IntMsg)event).getInt();
Element e = search(sender.getID());
if(e!=null){
cache[e.peer].isAlive();
cache[e.peer].interested = value;
}
else{
System.err.println("despite it should never happen, it happened");
ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
nBitfieldSent++;
}
}; break;

case NOT_INTERESTED: // 5, NOT_INTERESTED message.
{
numInterestedPeers--;
Node sender = ((IntMsg)event).getSender();
//System.out.println("process, not_interested: sender is "+sender.getID()+", local is "+node.getID());
int value = ((IntMsg)event).getInt();
Element e = search(sender.getID());
if(e!=null){
cache[e.peer].isAlive();
if(cache[e.peer].interested == value)
cache[e.peer].interested = -1; // not interested
}
}; break;

case HAVE: // 6, HAVE message.
{
Node sender = ((IntMsg)event).getSender();
//System.out.println("process, have: sender is "+sender.getID()+", local is "+node.getID());
int piece = ((IntMsg)event).getInt();
Element e = search(sender.getID());
if(e!=null){
cache[e.peer].isAlive();
swarm[e.peer][piece]=1;
rarestPieceSet[piece]++;
boolean isSeeder = true;
for(int i=0; i<nPieces; i++){
isSeeder = isSeeder && (swarm[e.peer][i]==1);
}
e.isSeeder = isSeeder;
}
else{
System.err.println("despite it should never happen, it happened");
ev = new BitfieldMsg(BITFIELD, true, false, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
nBitfieldSent++;
}
}; break;

case BITFIELD: // 7, BITFIELD message.
{
Node sender = ((BitfieldMsg)event).getSender();
int []fileStatus = ((BitfieldMsg)event).getArray();
/*Response with NACK*/
if(!((BitfieldMsg)event).isRequest && !((BitfieldMsg)event).ack){
Element e = search(sender.getID());
if(e == null) // if is a response with nack that follows a request
nBitfieldSent--;
// otherwise is a response with ack that follows a duplicate
// insertion attempt
//System.out.println("process, bitfield_resp_nack: sender is "+sender.getID()+", local is "+node.getID());
return;
}
/*Request with NACK*/
if(((BitfieldMsg)event).isRequest && !((BitfieldMsg)event).ack){
//System.out.println("process, bitfield_req_nack: sender is "+sender.getID()+", local is "+node.getID());
if(alive(sender)){
Element e = search(sender.getID());
ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
cache[e.peer].justSent();
}
}
/*Response with ACK*/
if(!((BitfieldMsg)event).isRequest && ((BitfieldMsg)event).ack){
nBitfieldSent--;
//System.out.println("process, bitfield_resp_ack: sender is "+sender.getID()+", local is "+node.getID());
if(alive(sender)){
if(addNeighbor(sender)){
Element e = search(sender.getID());
cache[e.peer].isAlive();
swarm[e.peer] = fileStatus;
boolean isSeeder = true;
for(int i=0; i<nPieces; i++){
rarestPieceSet[i]+= fileStatus[i];
isSeeder = isSeeder && (fileStatus[i]==1);
}
e.isSeeder = isSeeder;
if(nNodes==10 && !lock){ // I begin to request pieces
lock = true;
int piece = getPiece();
if(piece == -1)
return;
lastInterested = piece;
currentPiece = lastInterested;
ev = new IntMsg(INTERESTED, node, lastInterested);
for(int i=0; i<swarmSize; i++){ // send the interested message to those
// nodes which have that piece
if(alive(cache[i].node) && swarm[i][piece]==1){
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
}
}
}
}
else
System.out.println("Sender "+sender.getID()+" not alive");
}
/*Request with ACK*/
if(((BitfieldMsg)event).isRequest && ((BitfieldMsg)event).ack){
//System.out.println("process, bitfield_req_ack: sender is "+sender.getID()+", local is "+node.getID());
if(alive(sender)){
if(addNeighbor(sender)){
Element e = search(sender.getID());
cache[e.peer].isAlive();
swarm[e.peer] = fileStatus;
boolean isSeeder = true;
for(int i=0; i<nPieces; i++){
rarestPieceSet[i]+= fileStatus[i]; // I update the rarestPieceSet with the pieces of the new node
isSeeder = isSeeder && (fileStatus[i]==1); // I check if the new node is a seeder
}
e.isSeeder = isSeeder;
ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
cache[e.peer].justSent();
if(nNodes==10 && !lock){ // I begin to request pieces
int piece = getPiece();
if(piece == -1)
return;
lastInterested = piece;
currentPiece = lastInterested;
ev = new IntMsg(INTERESTED, node, lastInterested);
for(int i=0; i<swarmSize; i++){ // send the interested message to those
// nodes which have that piece
if(alive(cache[i].node) && swarm[i][piece]==1){
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
}
}
}
else {
Element e;
if((e = search(sender.getID()))!=null){ // The sender was already in the cache
cache[e.peer].isAlive();
ev = new BitfieldMsg(BITFIELD, false, true, node, status, nPieces); //response with ack
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
cache[e.peer].justSent();
}
else{ // Was not be possible add the sender (nBitfield+nNodes > swarmSize)
ev = new BitfieldMsg(BITFIELD, false, false, node, status, nPieces); //response with nack
latency = ((Transport)node.getProtocol(tid)).getLatency(node,sender);
EDSimulator.add(latency,ev,sender,pid);
}
}
}
else
System.out.println("Sender "+sender.getID()+" not alive");
}
};break;

case REQUEST: // 8, REQUEST message.
{
Object evnt;
Node sender = ((IntMsg)event).getSender();
int value = ((IntMsg)event).getInt();
Element e;
BitTorrent senderP;
int remoteRate;
int localRate;
int bandwidth;
int downloadTime;
e = search(sender.getID());
if (e==null)
return;
cache[e.peer].isAlive();
requestToServe.enqueue(value, sender);
/* I serve the enqueued requests until 10 uploading pieces or an empty queue */
while(!requestToServe.empty() && nPiecesUp < 10){
Request req = requestToServe.dequeue();
e = search(req.sender.getID());
if(e!=null && alive(req.sender)){
ev = new IntMsg(PIECE, node, req.id);
nPiecesUp++;
e.valueUP++;
senderP = ((BitTorrent)req.sender.getProtocol(pid));
senderP.nPiecesDown++;
remoteRate = senderP.maxBandwidth/(senderP.nPiecesUp + senderP.nPiecesDown);
localRate = maxBandwidth/(nPiecesUp + nPiecesDown);
bandwidth = Math.min(remoteRate, localRate);
downloadTime = ((16*8)/(bandwidth))*1000; // a 16KB block over the available bandwidth, in milliseconds
latency = ((Transport)node.getProtocol(tid)).getLatency(node,req.sender);
EDSimulator.add(latency+downloadTime,ev,req.sender,pid);
cache[e.peer].justSent();
/*I send to me an event to indicate that the download is completed.
This prevent that, when the receiver death occurres, my value nPiecesUp
doesn't decrease.*/
evnt = new SimpleMsg(DOWNLOAD_COMPLETED, req.sender);
EDSimulator.add(latency+downloadTime,evnt,node,pid);
}
}
}; break;

case PIECE: // 9, PIECE message.
{
Node sender = ((IntMsg)event).getSender();
/* Set the correct value for the local uploading and remote
downloading number of pieces */
nPiecesDown--;
if(peerStatus == 1) // I'm a seeder; skip the rest to save CPU cycles
return;
//System.out.println("process, piece: sender is "+sender.getID()+", local is "+node.getID());
Element e = search(sender.getID());
if(e==null){ // I can't accept a piece I was not waiting for
return;
}
e.valueDOWN++;
cache[e.peer].isAlive();
int value = ((IntMsg)event).getInt();
int piece = decode(value,0);
int block = decode(value,1);
/* If the block has not been already downloaded and it belongs to
the current downloading piece.*/
if(piece == currentPiece && decode(pieceStatus[block],0)!= piece){
pieceStatus[block] = value;
status[piece]++;
removeRequest(value);
requestNextBlocks(node, pid, e.peer);
}
else{ // Either a future piece or an owned piece
if(piece!=currentPiece && status[piece]!=16){ // Piece not owned, will be considered later
incomingPieces.enqueue(value, sender);
}
}
ev = new IntMsg(CANCEL, node, value);
/* I send a CANCEL to all nodes to which I previously requested the block*/
for(int i=0; i<swarmSize; i++){
if(alive(cache[i].node) && unchokedBy[i]==true && swarm[i][piece]==1 && cache[i].node != sender){
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
}

if(status[currentPiece]==16){ // if the piece is completed (all 16 blocks), I change the currentPiece to the next wanted
nPieceCompleted++;
ev = new IntMsg(HAVE, node, currentPiece);
for(int i=0; i<swarmSize; i++){ // I send the HAVE for the piece
if(alive(cache[i].node)){
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
if(!alive(cache[i].node)){
//System.out.println("piece3 rm neigh "+ cache[i].node.getID() );
removeNeighbor(cache[i].node);
processNeighborListSize(node,pid);
}
}
ev = new IntMsg(NOT_INTERESTED, node, currentPiece);
for(int i=0; i<swarmSize; i++){ // I send the NOT_INTERESTED to the peers I sent an INTERESTED to
if(swarm[i][piece]==1 && alive(cache[i].node)){
latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
if(!alive(cache[i].node)){
//System.out.println("piece4 rm neigh "+ cache[i].node.getID() );
removeNeighbor(cache[i].node);
processNeighborListSize(node,pid);
}
}
if(nPieceCompleted == nPieces){
System.out.println("FILE COMPLETED for peer "+node.getID());
this.peerStatus = 1;
}

/* I set the currentPiece to the lastInterested. Then I extract
the queued received blocks */
currentPiece = lastInterested;
int m = incomingPieces.dim;
while(m > 0){ // I process the queue
m--;
Request temp = incomingPieces.dequeue();
int p = decode(temp.id,0); // piece id
int b = decode(temp.id,1); // block id
Element s = search(temp.sender.getID());
if(s==null) // if the node that sent the block in the queue is dead
continue;
if(p==currentPiece && decode(pieceStatus[b],0)!= p){
pieceStatus[b] = temp.id;
status[p]++;
removeRequest(temp.id);
requestNextBlocks(node, pid, s.peer);
}
else{ // The piece not currently desired will be moved to the tail
if(p!= currentPiece) // If not a duplicate block but belongs to another piece
incomingPieces.enqueue(temp.id,temp.sender);
else // duplicate block
requestNextBlocks(node, pid, s.peer);
}
}
}
}; break;

case CANCEL: // 10, CANCEL message.
{
Node sender = ((IntMsg)event).getSender();
int value = ((IntMsg)event).getInt();
requestToServe.remove(sender, value);
};break;

case PEERSET: // 12, PEERSET message.
{
Node sender = ((PeerSetMsg)event).getSender();
//System.out.println("process, peerset: sender is "+sender.getID()+", local is "+node.getID());
Neighbor n[] = ((PeerSetMsg)event).getPeerSet();
for(int i=0; i<peersetSize; i++){
if( n[i]!=null && alive(n[i].node) && search(n[i].node.getID())==null && nNodes+nBitfieldSent < swarmSize-2) {
ev = new BitfieldMsg(BITFIELD, true, true, node, status, nPieces);
latency = ((Transport)node.getProtocol(tid)).getLatency(node,n[i].node);
EDSimulator.add(latency,ev,n[i].node,pid);
nBitfieldSent++;
// Here I should call the Neighbor.justSent(), but here
// the node is not yet in the cache.
}
}
}; break;

case TRACKER: // 11, TRACKER message.
{
int j=0;
Node sender = ((SimpleMsg)event).getSender();
//System.out.println("process, tracker: sender is "+sender.getID()+", local is "+node.getID());
if(!alive(sender))
return;
Neighbor tmp[] = new Neighbor[peersetSize];
int k=0;
if(nNodes <= peersetSize){
for(int i=0; i< nMaxNodes+maxGrowth; i++){
if(cache[i].node != null && cache[i].node.getID()!= sender.getID()){
tmp[k]=cache[i];
k++;
}
}
ev = new PeerSetMsg(PEERSET, tmp, node);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
EDSimulator.add(latency,ev,sender,pid);
return;
}

while(j < peersetSize){
int i = CommonState.r.nextInt(nMaxNodes+maxGrowth);
for (int z=0; z<j; z++){
if(cache[i].node==null || tmp[z].node.getID() == cache[i].node.getID() || cache[i].node.getID() == sender.getID()){
z=0;
i= CommonState.r.nextInt(nMaxNodes+maxGrowth);
}
}
if(cache[i].node != null){
tmp[j] = cache[i];
j++;
}
}
ev = new PeerSetMsg(PEERSET, tmp, node);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, sender);
EDSimulator.add(latency,ev,sender,pid);
}; break;

case CHOKE_TIME: // 13, CHOKE_TIME event, every 10 secs.
{
n_choke_time++;
ev = new SimpleEvent(CHOKE_TIME);
EDSimulator.add(10000,ev,node,pid);
int j=0;
/*I copy the interested nodes in the byBandwidth array*/
for(int i=0; i<swarmSize && byPeer[i].peer != -1; i++){
if(cache[byPeer[i].peer].interested > 0){
byBandwidth[j]=byPeer[i]; //shallow copy
j++;
}
}
/* It ensures that in the next 20 sec, if there are fewer nodes interested
than now, those in surplus will not be ordered. */
for(;j<swarmSize;j++){
byBandwidth[j]=null;
}
sortByBandwidth();
int optimistic = 3;
int luckies[] = new int[3];
try{ // It takes the first three neighbors
luckies[0] = byBandwidth[0].peer;
optimistic--;
luckies[1] = byBandwidth[1].peer;
optimistic--;
luckies[2] = byBandwidth[2].peer;
}
catch(NullPointerException e){ // If there are not enough peers in byBandwidth it chooses the others randomly
for(int z = optimistic; z>0; z--){
int lucky = CommonState.r.nextInt(nNodes);
while(cache[byPeer[lucky].peer].status == 1 && alive(cache[byPeer[lucky].peer].node) &&
cache[byPeer[lucky].peer].interested == 0) // until the lucky peer is already unchoked or not interested
lucky = CommonState.r.nextInt(nNodes);
luckies[3-z]= byPeer[lucky].peer;
}
}
for(int i=0; i<swarmSize; i++){ // I perform the chokes and the unchokes
if((i==luckies[0] || i==luckies[1] || i==luckies[2]) && alive(cache[i].node) && cache[i].status != 2){ // the unchokes
cache[i].status = 1;
ev = new SimpleMsg(UNCHOKE, node);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
//System.out.println("average time, unchoked: "+cache[i].node.getID());
}
else{ // the chokes
if(alive(cache[i].node) && (cache[i].status == 1 || cache[i].status == 2)){
cache[i].status = 0;
ev = new SimpleMsg(CHOKE, node);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
EDSimulator.add(latency,ev,cache[i].node,pid);
cache[i].justSent();
}
}
}

if(n_choke_time%2==0){ // every 20 secs. Used in computing the average download rates
for(int i=0; i<nNodes; i++){
if(this.peerStatus == 0){ // I'm a leecher
byPeer[i].head20 = byPeer[i].valueDOWN;
}
else{
byPeer[i].head20 = byPeer[i].valueUP;
}
}
}
}; break;

case OPTUNCHK_TIME: // 14, OPTUNCHK_TIME event, every 30 secs.
{
//System.out.println("process, optunchk_time");
ev = new SimpleEvent(OPTUNCHK_TIME);
EDSimulator.add(30000,ev,node,pid);
int lucky = CommonState.r.nextInt(nNodes);
while(cache[byPeer[lucky].peer].status == 1) // until the lucky peer is already unchoked
lucky = CommonState.r.nextInt(nNodes);
if(!alive(cache[byPeer[lucky].peer].node))
return;
cache[byPeer[lucky].peer].status = 1;
Object msg = new SimpleMsg(UNCHOKE,node);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[byPeer[lucky].peer].node);
EDSimulator.add(latency,msg,cache[byPeer[lucky].peer].node,pid);
cache[byPeer[lucky].peer].justSent();
}; break;

case ANTISNUB_TIME: // 15, ANTISNUB_TIME event, every minute.
{
if(this.peerStatus == 1) // I'm a seeder, I don't update the event
return;
//System.out.println("process, antisnub_time");
for(int i=0; i<nNodes; i++){
if(byPeer[i].valueDOWN > 0 && (byPeer[i].valueDOWN - byPeer[i].head60)==0){ // No blocks downloaded in 1 min
cache[byPeer[i].peer].status = 2; // I'm snubbed by it
}
byPeer[i].head60 = byPeer[i].valueDOWN;
}
ev = new SimpleEvent(ANTISNUB_TIME);
EDSimulator.add(60000,ev,node,pid);
long time = CommonState.getTime();
}; break;

case CHECKALIVE_TIME: // 16, CHECKALIVE_TIME event, every 2 minutes.
{
//System.out.println("process, checkalive_time");
long now = CommonState.getTime();
for(int i=0; i<swarmSize; i++){
/* If it has been at least 2 minutes (plus 1 sec of tolerance) since
I last sent anything to it. */
if(alive(cache[i].node) && (cache[i].lastSent < (now-121000))){
Object msg = new IntMsg(KEEP_ALIVE,node,0);
latency = ((Transport)node.getProtocol(tid)).getLatency(node, cache[i].node);
EDSimulator.add(latency,msg,cache[i].node,pid);
cache[i].justSent();
}
/* If it has been at least 2 minutes (plus 1 sec of tolerance) since I last
received anything from it, though I sent a keepalive 2 minutes ago */
else{
if(cache[i].lastSeen < (now-121000) && cache[i].node != null && cache[i].lastSent < (now-121000)){
System.out.println("process, checkalive_time, rm neigh " + cache[i].node.getID());
if(cache[i].node.getIndex() != -){
System.out.println("This should never happen: I remove a node that is not effectively died");
}
removeNeighbor(cache[i].node);
processNeighborListSize(node,pid);
}
}
}
ev = new SimpleEvent(CHECKALIVE_TIME);
EDSimulator.add(,ev,node,pid);
}; break;

case TRACKERALIVE_TIME:
{
	//System.out.println("process, trackeralive_time");
	if(alive(tracker)){
		ev = new SimpleEvent(TRACKERALIVE_TIME);
		EDSimulator.add(1800000,ev,node,pid);
	}
	else
		tracker = null;
}; break;

case DOWNLOAD_COMPLETED:
{
	nPiecesUp--;
}; break;
}
}

/**
 * Given a piece index and a block index, encodes them into a unique integer value.
 * @param piece the index of the piece to encode.
 * @param block the index of the block to encode.
 * @return the encoding of the piece and block indexes.
 */
private int encode(int piece, int block){ // e.g. piece ID 1234, block 2 -> 123402
	return (piece*100)+block;
}
/**
 * Returns either the piece or the block index contained in <tt>value</tt>, depending
 * on <tt>part</tt>: 0 means the piece index, 1 the block index.
 * @param value the ID of the block to decode.
 * @param part the information to extract from <tt>value</tt>. 0 means the piece index, 1 the block index.
 * @return the piece or the block index, depending on the value of <tt>part</tt>
 */
private int decode(int value, int part){ // 0 returns the piece ID, 1 the block ID
	if (value==-1) // Not a true value to decode
		return -1;
	if(part == 0) // I'm interested in the piece
		return value/100;
	else // I'm interested in the block
		return value%100;
}

/**
 * Used by {@link NodeInitializer#choosePieces(int, BitTorrent) NodeInitializer} to set
 * the number of pieces already completed at startup, in accordance with
 * the distribution in the configuration file.
 * @param number the number of pieces completed
 */
public void setCompleted(int number){
	this.nPieceCompleted = number;
}

/**
* Sets the status (the set of blocks) of the file for the current node.
* Note that a piece is considered <i>completed</i> if the number
* of downloaded blocks is 16.
* @param index The index of the piece
* @param value Number of blocks downloaded for the piece index.
*/
public void setStatus(int index, int value){
status[index]=value;
} /**
* Sets the status of the local node.
* @param status The status of the node: 1 means seeder, 0 leecher
*/
public void setPeerStatus(int status){
this.peerStatus = status;
} /**
* Gets the status of the local node.
* @return The status of the local node: 1 means seeder, 0 leecher
*/
public int getPeerStatus(){
return peerStatus;
} /**
* Gets the number of blocks for a given piece owned by the local node.
* @param index The index of the piece
* @return Number of blocks downloaded for the piece index
*/
public int getStatus(int index){
return status[index];
} /**
 * Sets the maximum bandwidth for the local node.
* @param value The value of bandwidth in Kbps
*/
public void setBandwidth(int value){
maxBandwidth = value;
} /**
* Checks if a node is still alive in the simulated network.
* @param node The node to check
* @return true if the node <tt>node</tt> is up, false otherwise
* @see peersim.core.GeneralNode#isUp
*/
public boolean alive(Node node){
if(node == null)
return false;
else
return node.isUp();
} /**
* Adds a neighbor to the cache of the local node.
* The new neighbor is put in the first null position.
* @param neighbor The neighbor node to add
 * @return <tt>false</tt> if the neighbor is already present in the cache (this can happen when the peer requests a
 * new peer set from the tracker and this neighbor is still in it), or if no place is available.
 * Otherwise, returns true if the node is correctly added to the cache.
 */
public boolean addNeighbor(Node neighbor){
	if(search(neighbor.getID()) != null){ // if it already exists
		//System.err.println("Node "+neighbor.getID() + " not added, already exists.");
		return false;
	}
	if(this.tracker == null){ // I'm in the tracker's BitTorrent protocol
		for(int i=0; i< nMaxNodes+maxGrowth; i++){
			if(cache[i].node == null){
				cache[i].node = neighbor;
				cache[i].status = 0; // choked
				cache[i].interested = -1; // not interested
				this.nNodes++;
				//System.err.println("i: " + i +" nMaxNodes: " + nMaxNodes);
				return true;
			}
		}
	}
	else{
		if((nNodes+nBitfieldSent) < swarmSize){
			//System.out.println("I'm the node " + this.thisNodeID + ", trying to add node "+neighbor.getID());
			for(int i=0; i<swarmSize; i++){
				if(cache[i].node == null){
					cache[i].node = neighbor;
					cache[i].status = 0; // choked
					cache[i].interested = -1; // not interested
					byPeer[nNodes].peer = i;
					byPeer[nNodes].ID = neighbor.getID();
					sortByPeer();
					this.nNodes++;
					//System.out.println(neighbor.getID()+" added!");
					return true;
				}
			}
			System.out.println("Node not added, no places available");
		}
	}
	return false;
}

/**
* Removes a neighbor from the cache of the local node.
* @param neighbor The node to remove
* @return true if the node is correctly removed, false otherwise.
*/
public boolean removeNeighbor(Node neighbor) {
	if (neighbor == null)
		return true;
	// this is the tracker's bittorrent protocol
	if (this.tracker == null) {
		for (int i=0; i< (nMaxNodes+maxGrowth); i++) { // check the feasibility of the removal
			if ( (cache[i] != null) && (cache[i].node != null) &&
				(cache[i].node.getID() == neighbor.getID()) ) {
				cache[i].node = null;
				this.nNodes--;
				return true;
			}
		}
		return false;
	}
	// this is the bittorrent protocol of a peer
	else {
		Element e = search(neighbor.getID());
		if (e != null) {
			for (int i=0; i<nPieces; i++) {
				rarestPieceSet[i] -= swarm[e.peer][i];
				swarm[e.peer][i] = 0;
			}
			cache[e.peer].node = null;
			cache[e.peer].status = 0;
			cache[e.peer].interested = -1;
			unchokedBy[e.peer] = false;
			this.nNodes--;
			e.peer = -1;
			e.ID = Integer.MAX_VALUE;
			e.valueUP = 0;
			e.valueDOWN = 0;
			e.head20 = 0;
			e.head60 = 0;
			sortByPeer();
			return true;
		}
	}
	return false;
}

/**
* Adds a request to the pendingRequest queue.
* @param block The requested block
* @return true if the request has been successfully added to the queue, false otherwise
*/
private boolean addRequest(int block){
	int i=4;
	while(i>=0 && pendingRequest[i]!=-1){
		i--;
	}
	if(i>=0){
		pendingRequest[i] = block;
		return true;
	}
	else { // It should never happen
		//System.err.println("pendingRequest queue full");
		return false;
	}
}

/**
* Removes the block with the given <tt>id</tt> from the {@link #pendingRequest} queue
* and sorts the queue leaving the empty cell at the left.
* @param id the id of the requested block
*/
private void removeRequest(int id){
	int i = 4;
	for(; i>=0; i--){
		if(pendingRequest[i]==id)
			break;
	}
	for(; i>=0; i--){
		if(i==0)
			pendingRequest[i] = -1;
		else
			pendingRequest[i] = pendingRequest[i-1];
	}
}

/**
 * Requests new blocks from the sender of the just-received piece until the
 * {@link #pendingRequest} queue is full.
 * It calls {@link #getNewBlock(Node, int)} to implement the <i>strict priority</i> strategy.
 * @param node the local node
 * @param pid the BitTorrent protocol id
 * @param sender the sender of the just-received piece.
 */
private void requestNextBlocks(Node node, int pid, int sender){
	int block = getNewBlock(node, pid);
	while(block != -1){ // until there are no more blocks to request
		if(unchokedBy[sender]==true && alive(cache[sender].node) && addRequest(block)){
			Object ev = new IntMsg(REQUEST, node, block);
			long latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[sender].node);
			EDSimulator.add(latency,ev,cache[sender].node,pid);
			cache[sender].justSent();
		}
		else{ // I cannot send the request
			if(!alive(cache[sender].node) && cache[sender].node!=null){
				System.out.println("piece2 rm neigh "+ cache[sender].node.getID() );
				removeNeighbor(cache[sender].node);
				processNeighborListSize(node,pid);
			}
			return;
		}
		block = getNewBlock(node, pid);
	}
}

/**
* It returns the id of the next block to request. Sends <tt>INTERESTED</tt> if the new
* block belongs to a new piece.
* It uses {@link #getBlock()} to get the next block of a piece and calls {@link #getPiece()}
* when all the blocks for the {@link #currentPiece} have been requested.
* @param node the local node
* @param pid the BitTorrent protocol id
 * @return -2 if no more places are available in the <tt>pendingRequest</tt> queue;<br/>
 * -1 if there are no more blocks or pieces to request;<br/>
 * the value of the next block to request otherwise.
 */
private int getNewBlock(Node node, int pid){ // returns the next block to request, moving to a new piece when needed
	int block = getBlock();
	if(block < 0){ // No more blocks to request for the current piece
		if(block == -2) // Pending request queue full
			return -2;
		int newPiece = getPiece();
		if(newPiece == -1){ // no more pieces to request
			return -1;
		}
		lastInterested = newPiece;
		Object ev = new IntMsg(INTERESTED, node, lastInterested);
		for(int j=0; j<swarmSize; j++){ // send the INTERESTED message to those nodes which have that piece
			if(alive(cache[j].node) && swarm[j][newPiece]==1){
				long latency = ((Transport)node.getProtocol(tid)).getLatency(node,cache[j].node);
				EDSimulator.add(latency,ev,cache[j].node,pid);
				cache[j].justSent();
			}
			if(!alive(cache[j].node)){
				//System.out.println("piece1 rm neigh "+ cache[j].node.getID() );
				removeNeighbor(cache[j].node);
				processNeighborListSize(node,pid);
			}
		}
		block = getBlock();
		return block;
	}
	else{
		// block value refers to a real block
		return block;
	}
}

/**
* Returns the next block to request for the {@link #currentPiece}.
* @return an index of a block of the <tt>currentPiece</tt> if there are still
* available places in the {@link #pendingRequest} queue;<br/>
* -2 if the <tt>pendingRequest</tt> queue is full;<br/>
* -1 if no more blocks to request for the current piece.
*/
private int getBlock(){ // -2: pending request queue full; -1: no more blocks to request for the current piece
	int i=4;
	while(i>=0 && pendingRequest[i]!=-1){ // i is the first empty position from the head
		i--;
	}
	if(i==-1){ // No places in the pendingRequest available
		//System.out.println("Pending request queue full!");
		return -2;
	}
	int j;
	// The queue is not empty & the last requested block belongs to the lastInterested piece
	if(i!=4 && decode(pendingRequest[i+1],0)==lastInterested)
		j=decode(pendingRequest[i+1],1)+1; // the block following the last requested
	else // I don't know which is the next block, so I search for it.
		j=0;
	/* I move to the next block while the current one has already been received.
	 * If at position j in pieceStatus there is a block that belongs to the
	 * lastInterested piece, block j has already been received;
	 * otherwise I can request it.
	 */
	while(j<16 && decode(pieceStatus[j],0)==lastInterested){
		j++;
	}
	if(j==16) // No more blocks to request for the lastInterested piece
		return -1;
	return encode(lastInterested,j);
}

/**
 * Returns the next correct piece to download, chosen with the
 * <i>random first</i> and <i>rarest first</i> policies: random first for the
 * first 4 pieces of the file, rarest first afterwards.
 * @see "Documentation about the BitTorrent module"
 * @return the next piece to download; -1 if the whole file has already been requested.
 */
private int getPiece(){ // picks the next piece to download (random first, then rarest first)
	int piece = -1;
	if(nPieceCompleted < 4){ // Uses random first piece: get a complete piece to share as soon as possible
		piece = CommonState.r.nextInt(nPieces);
		while(status[piece]==16 || piece == currentPiece) // until a not-yet-owned piece is drawn
			piece = CommonState.r.nextInt(nPieces);
		return piece;
	}
	else{ // Uses rarest piece first: the node already has complete pieces to share
		int j=0;
		for(; j<nPieces; j++){ // I find the first not owned piece
			if(status[j]==0){
				piece = j;
				if(piece != lastInterested) // theoretically this works because there
					// should be only one interested piece
					// not yet downloaded
					break;
			}
		}
		if(piece==-1){ // Never entered the previous 'if' statement: an INTERESTED
			// message has already been sent for every piece
			return -1;
		}
		int rarestPieces[] = new int[nPieces-j]; // the pieces with the lowest number of occurrences
		rarestPieces[0] = j;
		int nValues = 1; // number of pieces least distributed in the network
		for(int i=j+1; i<nPieces; i++){ // Finds the rarest piece not owned
			if(rarestPieceSet[i] < rarestPieceSet[rarestPieces[0]] && status[i]==0){ // if strictly less than the current one
				rarestPieces[0] = i;
				nValues = 1;
			}
			else if(rarestPieceSet[i]==rarestPieceSet[rarestPieces[0]] && status[i]==0){ // if equal
				rarestPieces[nValues] = i;
				nValues++;
			}
		}
		piece = CommonState.r.nextInt(nValues); // one of the least owned pieces
		return rarestPieces[piece];
	}
}

/**
* Returns the file's size as number of pieces of 256KB.
* @return number of pieces that compose the file.
*/
public int getNPieces(){ // returns the file size, as a number of pieces
	return nPieces;
}
/**
* Clone method of the class. Returns a deep copy of the BitTorrent class. Used
* by the simulation to initialize the {@link peersim.core.Network}
* @return the deep copy of the BitTorrent class.
*/
public Object clone(){
	Object prot = null;
	try{
		prot = (BitTorrent)super.clone();
	}
	catch(CloneNotSupportedException e){};
	((BitTorrent)prot).cache = new Neighbor[swarmSize];
	for(int i=0; i<swarmSize; i++){
		((BitTorrent)prot).cache[i] = new Neighbor();
	}
	((BitTorrent)prot).byPeer = new Element[swarmSize];
	for(int i=0; i<swarmSize; i++){
		((BitTorrent)prot).byPeer[i] = new Element();
	}
	((BitTorrent)prot).unchokedBy = new boolean[swarmSize];
	((BitTorrent)prot).byBandwidth = new Element[swarmSize];
	((BitTorrent)prot).status = new int[nPieces];
	((BitTorrent)prot).pieceStatus = new int[16];
	for(int i=0; i<16; i++)
		((BitTorrent)prot).pieceStatus[i] = -1;
	((BitTorrent)prot).pendingRequest = new int[5];
	for(int i=0; i<5; i++)
		((BitTorrent)prot).pendingRequest[i] = -1;
	((BitTorrent)prot).rarestPieceSet = new int[nPieces];
	for(int i=0; i<nPieces; i++)
		((BitTorrent)prot).rarestPieceSet[i] = 0;
	((BitTorrent)prot).swarm = new int[swarmSize][nPieces];
	((BitTorrent)prot).requestToServe = new Queue(20);
	((BitTorrent)prot).incomingPieces = new Queue(100);
	return prot;
}

/**
* Sorts {@link #byPeer} array by peer's ID. It implements the <i>InsertionSort</i>
* algorithm.
*/
public void sortByPeer(){ // sorts byPeer by node ID with insertion sort
	int i;
	for(int j=1; j<swarmSize; j++){ // j is the dividing line between sorted and unsorted
		Element key = new Element();
		byPeer[j].copyTo(key); // remove the marked item
		i = j-1; // start shifting at j-1
		while(i>=0 && (byPeer[i].ID > key.ID)){ // until a smaller one is found,
			byPeer[i].copyTo(byPeer[i+1]); // shift the item right,
			i--; // go left one position
		}
		key.copyTo(byPeer[i+1]); // insert the marked item
	}
}

/**
* Sorts the array {@link #byBandwidth} using <i>QuickSort</i> algorithm.
* <tt>null</tt> elements and seeders are moved to the end of the array.
*/
public void sortByBandwidth() { // sorts peers by transfer rate with quicksort
	quicksort(0, swarmSize-1);
}

/**
* Used by {@link #sortByBandwidth()}. It's the implementation of the
* <i>QuickSort</i> algorithm.
* @param left the leftmost index of the array to sort.
* @param right the rightmost index of the array to sort.
*/
private void quicksort(int left, int right) {
	if (right <= left) return;
	int i = partition(left, right);
	quicksort(left, i-1);
	quicksort(i+1, right);
}

/**
* Used by {@link #quicksort(int, int)}, partitions the subarray to sort returning
* the splitting point as stated by the <i>QuickSort</i> algorithm.
* @see "The <i>QuickSort</i> algorithm".
*/
private int partition(int left, int right) { // partition step of quicksort
	int i = left - 1;
	int j = right;
	while (true) {
		while (greater(byBandwidth[++i], byBandwidth[right])) // find item on left to swap
			; // byBandwidth[right] acts as sentinel
		while (greater(byBandwidth[right], byBandwidth[--j])) { // find item on right to swap
			if (j == left) break; // don't go out-of-bounds
		}
		if (i >= j) break; // check if pointers cross
		swap(i, j); // swap two elements into place
	}
	swap(i, right); // swap with partition element
	return i;
}

/**
 * Answers the question "is x > y?". Compares the {@link Element}s given as
 * parameters. <tt>Element x</tt> is greater than <tt>y</tt> if it isn't <tt>null</tt>
 * and, in the last 20 seconds, the local node has downloaded from it ("uploaded to it" if the
 * local node is a seeder) more blocks than from (to) <tt>y</tt>.
 * @param x the first <tt>Element</tt> to compare.
 * @param y the second <tt>Element</tt> to compare
 * @return <tt>true</tt> if x > y;<br/>
 * <tt>false</tt> otherwise.
 */
private boolean greater(Element x, Element y) { // comparator used by the sorting routines
	/*
	 * Null elements and seeders are shifted to the end of the array
	 */
	if (x==null) return false;
	if (y==null) return true;
	if (x.isSeeder) return false;
	if (y.isSeeder) return true;
	if (peerStatus==0) { // the local node is a leecher: compare download rates
		return (x.valueDOWN - x.head20) > (y.valueDOWN - y.head20);
	}
	else { // peerStatus==1, the local node is a seeder: compare upload rates
		return (x.valueUP - x.head20) > (y.valueUP - y.head20);
	}
}

/**
* Swaps {@link Element} <tt>i</tt> with <tt>j</tt> in the {@link #byBandwidth}.<br/>
* Used by {@link #partition(int, int)}
* @param i index of the first element to swap
* @param j index of the second element to swap
*/
private void swap(int i, int j) { // swaps two elements of byBandwidth; used by partition
	Element swap = byBandwidth[i];
	byBandwidth[i] = byBandwidth[j];
	byBandwidth[j] = swap;
}

/** Searches the node with the given ID. It does a dichotomic
* search.
* @param ID ID of the node to search.
* @return the {@link Element} in {@link #byPeer} which represents the node with the
* given ID.
*/
public Element search(long ID){ // binary search for the node with the given ID
	int low = 0;
	int high = swarmSize-1;
	int p = low+((high-low)/2); // initial probe position
	while (low <= high) {
		if (byPeer[p] == null || byPeer[p].ID > ID)
			high = p - 1;
		else {
			if (byPeer[p].ID < ID) // wasteful second comparison forced by syntax limitations
				low = p + 1;
			else
				return byPeer[p];
		}
		p = low+((high-low)/2); // next probe position
	}
	return null;
}
}

/**
 * This class stores the main information about a neighbor used in
 * computing the downloading/uploading rates. It is the type of the items in
 * {@link peersim.bittorrent.BitTorrent#byPeer} and {@link peersim.bittorrent.BitTorrent#byBandwidth}.
 */
class Element{ // stores a neighbor's upload/download statistics
	/**
	 * ID of the represented node.
	 */
	public long ID = Integer.MAX_VALUE; // index used to look up the node
	/**
	 * Index position of the node in the {@link peersim.bittorrent.BitTorrent#cache} array.
	 */
	public int peer = -1;
	/**
	 * Number of blocks uploaded to anyone since the beginning.
	 */
	public int valueUP = 0;
	/**
	 * Number of blocks downloaded from anyone since the beginning.
	 */
	public int valueDOWN = 0;
	/**
	 * Value of either {@link #valueUP} or {@link #valueDOWN} (depending on
	 * {@link peersim.bittorrent.BitTorrent#peerStatus}) as it was 20 seconds ago;
	 * the difference with the current counter gives the traffic of the last 20 seconds
	 * (likewise for head60 below, over 60 seconds).
	 */
	public int head20 = 0;
	/**
	 * Value of either {@link #valueUP} or {@link #valueDOWN} (depending on
	 * {@link peersim.bittorrent.BitTorrent#peerStatus}) as it was 60 seconds ago.
	 */
	public int head60 = 0;
	/**
	 * <tt>true</tt> if the node is a seeder, <tt>false</tt> otherwise.
	 */
	public boolean isSeeder = false;
/**
* Makes a deep copy of the Element to <tt>destination</tt>
* @param destination Element instance where to make the copy
*/
public void copyTo(Element destination){
destination.ID = this.ID;
destination.peer = this.peer;
destination.valueUP = this.valueUP;
destination.valueDOWN = this.valueDOWN;
destination.head20 = this.head20;
destination.head60 = this.head60;
}
}

/**
 * This class stores information about a neighbor's status. It is
 * the type of the items in {@link peersim.bittorrent.BitTorrent#cache}.
 */
class Neighbor{ // a neighbor node with its choke/interest state
	/**
	 * Reference to the node in the {@link peersim.core.Network}.
	 */
	public Node node = null;
	/**
	 * -1 means not interested.<br/>
	 * Any other value is the last piece number the node is interested in.
	 */
	public int interested;
	/**
	 * 0 means CHOKED<br/>
	 * 1 means UNCHOKED<br/>
	 * 2 means SNUBBED_BY. If this value is set and the node is to be unchoked,
	 * value 2 has priority.
	 */
	public int status; // records the choked, unchoked and snubbed states
	/**
	 * Last time a message from this node was received.
	 */
	public long lastSeen = 0;
	/**
	 * Last time a message was sent to this node.
	 */
	public long lastSent = 0;

	/**
	 * Sets the last time the neighbor was seen.
	 */
	public void isAlive(){ // updates the time a message was last received from this node
		long now = CommonState.getTime();
		this.lastSeen = now;
	}

	/**
	 * Sets the last time the local peer sent something to the neighbor.
	 */
	public void justSent(){ // updates the time a message was last sent to this node
		long now = CommonState.getTime();
		this.lastSent = now;
	}
}

/**
 * Class type of the items of the queues {@link peersim.bittorrent.BitTorrent#incomingPieces}
 * and {@link peersim.bittorrent.BitTorrent#requestToServe}.
 */
class Queue{ // FIFO request queue: enqueue, dequeue, emptiness test, search and removal
	int maxSize;
	int head = 0;
	int tail = 0;
	int dim = 0;
	Request queue[];

	/**
	 * Public constructor. Creates a queue of size <tt>size</tt>.
	 */
	public Queue(int size){
		maxSize = size;
		queue = new Request[size];
		for(int i=0; i< size; i++)
			queue[i] = new Request();
	}

	/**
* Enqueues the request of the block <tt>id</tt> and its <tt>sender</tt>
* @param id the id of the block in the request
* @param sender a reference to the sender of the request
* @return <tt>true</tt> if the request has been correctly added, <tt>false</tt>
* otherwise.
*/
public boolean enqueue(int id, Node sender){
if(dim < maxSize){
queue[tail%maxSize].id = id;
queue[tail%maxSize].sender = sender;
tail++;
dim++;
return true;
}
else return false;
} /**
* Returns the {@link Request} in the head of the queue.
* @return the element in the head.<br/>
* <tt>null</tt> if the queue is empty.
*/
public Request dequeue(){
	Request value;
	if(dim > 0){
		value = queue[head%maxSize];
		head++;
		dim--;
		return value;
	}
	else return null; // empty queue
}

/**
 * Returns the status of the queue.
 * @return <tt>true</tt> if the queue is empty, <tt>false</tt>
 * otherwise.
 */
public boolean empty(){
	return (dim == 0);
}

/**
* Returns <tt>true</tt> if block given as parameter is in.
* @param value the id of the block to search.
* @return <tt>true</tt> if the block <tt>value</tt> is in the queue, <tt>false</tt>
* otherwise.
*/
public boolean contains(int value){
if(empty())
return false;
for(int i=head; i<head+dim; i++){
if(queue[i%maxSize].id == value)
return true;
}
return false;
} /**
* Removes a request from the queue.
* @param sender the sender of the request.
* @param value the id of the block requested.
* @return <tt>true</tt> if the request has been correctly removed, <tt>false</tt>
* otherwise.
*/
public boolean remove(Node sender, int value){
	if(empty())
		return false;
	for(int i=head; i<head+dim; i++){
		if(queue[i%maxSize].id == value && queue[i%maxSize].sender == sender){
			for(int j=i; j>head; j--){ // shifts the elements for the removal
				queue[j%maxSize] = queue[(j-1)%maxSize];
			}
			head++;
			dim--;
			return true;
		}
	}
	return false;
}
}

/**
 * This class represents an enqueued request for a block.
 */
class Request{ // a request: the ID of the block and the node that sent the request
/**
* The id of the block.
*/
public int id;
/**
* The sender of the request.
*/
public Node sender;
}
bittorrent.java
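Before moving on, the encode/decode pair in the listing can be checked in isolation. A minimal standalone sketch (the class name PieceCodec is mine, not part of the module), assuming the module's scheme of piece*100 + block, which works because each piece has only 16 blocks:

```java
// Standalone demo of the module's piece/block encoding scheme.
class PieceCodec {
    // Packs a piece index and a block index into one int: piece*100 + block.
    static int encode(int piece, int block) {
        return piece * 100 + block;
    }

    // part == 0 extracts the piece index, part == 1 the block index.
    static int decode(int value, int part) {
        if (value == -1) return -1;            // not a real encoded value
        return (part == 0) ? value / 100 : value % 100;
    }

    public static void main(String[] args) {
        int id = encode(1234, 2);
        System.out.println(id);                // 123402
        System.out.println(decode(id, 0));     // 1234
        System.out.println(decode(id, 1));     // 2
    }
}
```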
BitTorrent involves a number of small algorithms, such as piece selection and deciding choke/unchoke states. They are summarized below.
1. Piece Selection
Choosing a good order in which to download pieces is very important for performance. A poor selection algorithm can end up with every piece only partially downloaded, or, at the other extreme, with no complete piece available to upload to other peers.
1) Strict Priority
The first selection policy is strict priority: once a sub-piece (block) of some piece has been requested, the remaining sub-pieces of that piece are requested before those of any other piece, so that a complete piece is assembled as quickly as possible.
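A minimal sketch of strict priority, assuming 16 blocks per piece as in the module above (the class and parameter names are mine, not from the source):

```java
// Illustrative sketch of strict priority: finish the piece currently in
// progress before asking for blocks of any other piece.
class StrictPriority {
    static final int BLOCKS_PER_PIECE = 16;

    // received[b]/requested[b] tell whether block b of the current piece
    // has already arrived / is already in flight. Returns the first block
    // still to request, or -1 when the current piece needs nothing more
    // (i.e. a new piece may be chosen).
    static int nextBlock(boolean[] received, boolean[] requested) {
        for (int b = 0; b < BLOCKS_PER_PIECE; b++)
            if (!received[b] && !requested[b])
                return b;
        return -1;
    }

    public static void main(String[] args) {
        boolean[] received = new boolean[BLOCKS_PER_PIECE];
        boolean[] requested = new boolean[BLOCKS_PER_PIECE];
        received[0] = received[1] = true; // blocks 0 and 1 already here
        requested[2] = true;              // block 2 in flight
        System.out.println(nextBlock(received, requested)); // 3
    }
}
```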
2) Rarest First
When choosing the next piece to download, a downloader normally picks the piece that the fewest of its peers have, the so-called "rarest first" policy. This ensures that every downloader holds the pieces its peers most want, so uploading can start whenever it is needed, and it ensures that the most common pieces are left for last. It reduces the risk that a peer currently uploading later has no piece anyone else is interested in. In other words, every peer preferentially downloads the pieces that are rarest in the whole system and postpones those that are already widespread, driving the system toward a better state. Without this policy, everyone would download the most common pieces first, making them even more common while the rare pieces stay rare; eventually some peers would hold nothing that interests anyone else, participation would shrink, and the performance of the whole system would degrade.
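A sketch of rarest-first selection, mirroring the idea of getPiece() above (the class and parameter names are mine): availability[p] counts how many neighbors own piece p, and among the pieces the local node is missing, one of the least available is drawn at random.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative rarest-first piece selection.
class RarestFirst {
    static int pick(int[] availability, boolean[] owned, Random r) {
        int best = Integer.MAX_VALUE;
        List<Integer> rarest = new ArrayList<>();
        for (int p = 0; p < availability.length; p++) {
            if (owned[p]) continue;           // never re-download an owned piece
            if (availability[p] < best) {     // strictly rarer: restart the tie list
                best = availability[p];
                rarest.clear();
            }
            if (availability[p] == best)      // ties collect together
                rarest.add(p);
        }
        return rarest.isEmpty() ? -1 : rarest.get(r.nextInt(rarest.size()));
    }

    public static void main(String[] args) {
        int[] availability = {5, 1, 3, 1};
        boolean[] owned = {false, false, false, true};
        // pieces 1 and 3 are rarest, but 3 is owned, so piece 1 is the only candidate
        System.out.println(pick(availability, owned, new Random())); // 1
    }
}
```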
BitTorrent takes the economics of the system seriously: everything is designed around the performance of the system as a whole, and the more participants there are, the better it works.
Information theory tells us that no downloader can finish until the seed has uploaded every piece of the file at least once. In a deployment with a single seed whose upload capacity is lower than that of most of its downloaders, performance is better when different downloaders fetch different pieces from the seed, since duplicate downloads waste the seed's opportunity to inject new data. "Rarest first" makes downloaders fetch from the seed only pieces that are new to the system (pieces no other peer has yet), because a downloader can see that its peers already hold the pieces the seed uploaded earlier.
In some deployments the original seed eventually goes offline, leaving the remaining downloaders to upload to each other. This carries an obvious risk: some piece may end up held by no remaining downloader. "Rarest first" handles this case well too: by replicating the rarest pieces as quickly as possible, it reduces the risk created by peers that stop uploading.
3) Random First Piece
An exception to "rarest first" is the very start of a download. A new downloader has no piece to upload yet, so it needs a complete piece as quickly as possible. The rarest pieces are typically held by only a single peer, so they would download more slowly than pieces held by several peers. Therefore the first piece is chosen at random, and only once the first piece completes does the client switch to the "rarest first" strategy.
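The switch between the two policies can be sketched as follows, mirroring getPiece() above, where random-first applies while fewer than 4 pieces are complete (the class name and the simplified rarest-first branch, which just takes the first least-available piece, are mine):

```java
import java.util.Random;

// Illustrative policy switch: random first piece, then rarest first.
class FirstPiecePolicy {
    static int pick(int nCompleted, int[] availability, boolean[] owned, Random r) {
        int n = owned.length;
        if (nCompleted < 4) {                    // random first piece
            int p = r.nextInt(n);
            while (owned[p])                     // assumes at least one missing piece
                p = r.nextInt(n);
            return p;
        }
        int best = Integer.MAX_VALUE, bestIdx = -1;  // rarest first (simplified)
        for (int p = 0; p < n; p++)
            if (!owned[p] && availability[p] < best) {
                best = availability[p];
                bestIdx = p;
            }
        return bestIdx;
    }

    public static void main(String[] args) {
        boolean[] owned = {true, true, false};
        // only piece 2 is missing, so the random branch must return it
        System.out.println(pick(0, new int[]{9, 9, 9}, owned, new Random())); // 2
    }
}
```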
4) Endgame Mode
Sometimes a piece gets requested from a very slow peer. In the middle of a download this is not a problem, but it can delay the finish. To prevent that, in the final phase a peer sends requests for its remaining sub-pieces to all of its peers; as each sub-piece arrives, it sends cancel messages for it to the other peers so their bandwidth is not wasted. In practice little bandwidth is lost this way, and the end of the file always downloads very quickly.
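A tiny sketch of the cancel step of endgame mode (this behavior belongs to real BitTorrent clients, not to the PeerSim module above; the class and names are mine): once a block arrives from one peer, every other peer that was asked for it gets a CANCEL.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative endgame bookkeeping: who should receive a CANCEL
// once a block has arrived.
class Endgame {
    // Returns the peers that should get a CANCEL for the block:
    // everyone that was asked, except the peer that delivered it.
    static List<String> cancelsFor(List<String> asked, String winner) {
        List<String> cancels = new ArrayList<>(asked);
        cancels.remove(winner);
        return cancels;
    }

    public static void main(String[] args) {
        List<String> asked = List.of("peerA", "peerB", "peerC");
        System.out.println(cancelsFor(asked, "peerB")); // [peerA, peerC]
    }
}
```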
2. Choking Algorithms
BitTorrent does not allocate resources centrally. Each peer is responsible for maximizing its own download rate: it downloads from whichever peers it can connect to, and reciprocates uploads in proportion to the download rate they provide (tit-for-tat). Cooperators get upload service; non-cooperators get choked. Choking is thus a temporary refusal to upload: the upload stops, but downloading continues, and the connection does not need to be rebuilt when the choke ends.
The choking algorithm is not part of the BitTorrent wire protocol (the protocol peers speak to each other), but it is necessary for good performance. A good choking algorithm should use all available resources, provide consistently reliable download rates for every downloader, and appropriately punish peers that only download and never upload.
1) Pareto Efficiency
In economics, an allocation is called Pareto optimal if there exists no alternative allocation that makes at least one participant better off without making anyone worse off: once the constraints are fixed and nobody can improve their position without harming someone else, the Pareto optimum has been reached. Colloquially, it is the state where you cannot gain except at someone else's expense. In computing, seeking Pareto efficiency is a form of local optimization. BitTorrent's choking algorithm uses a tit-for-tat variant to try to reach Pareto optimality (the original passage is hard to translate; I have simplified it). Peers reciprocate uploads to the peers that upload to them, with the goal of keeping several connections transferring in both directions at any moment.
2) BitTorrent's Choking Algorithm
Technically, each BT peer always keeps a fixed number of other peers unchoked (typically 4), so the question becomes which peers to unchoke. This approach lets TCP's congestion control reliably saturate the upload capacity (that is, it pushes the system's total upload toward its maximum).
Which peers get unchoked is decided strictly by current download rate. Surprisingly, computing the current download rate is a hard problem. The current implementation is essentially a rolling 20-second average. The earlier algorithm totaled transfers over the whole life of a connection, which behaved badly, because bandwidth changes quickly as resources become available or unavailable.
To avoid wasting resources by rapidly choking and unchoking peers, BT recomputes who should be choked every 10 seconds and then keeps that state until the next 10-second period. Ten seconds is long enough for TCP to ramp a transfer up to full speed.
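One choke round can be sketched as follows, mirroring the idea behind sortByBandwidth() and the CHOKE_TIME handler above (the class and names are mine; this module keeps 3 regular unchoke slots plus the optimistic one):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative choke round: every 10 s, rank peers by the blocks they
// delivered in the last 20 s and keep the fastest `slots` of them unchoked.
class ChokeRound {
    static List<String> unchoke(Map<String, Integer> blocksLast20s, int slots) {
        List<Map.Entry<String, Integer>> peers = new ArrayList<>(blocksLast20s.entrySet());
        peers.sort((a, b) -> b.getValue() - a.getValue()); // fastest first
        List<String> unchoked = new ArrayList<>();
        for (int i = 0; i < Math.min(slots, peers.size()); i++)
            unchoked.add(peers.get(i).getKey());
        return unchoked;
    }

    public static void main(String[] args) {
        Map<String, Integer> rates = Map.of("a", 5, "b", 9, "c", 1, "d", 7);
        System.out.println(unchoke(rates, 3)); // [b, d, a]
    }
}
```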
3) Optimistic Unchoking
If peers only uploaded to whoever currently gives them the best download rate, there would be no way to discover whether some idle connection might be better than the ones in use. To solve this, at all times each peer keeps one connection, the "optimistic unchoke", unchoked regardless of its download rate. Which connection plays this role is recomputed every 30 seconds, long enough for the upload to ramp up to full speed and for the reciprocated download to follow. This tit-for-tat-like idea works remarkably well together with the iterated prisoner's dilemma.
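The selection itself is just a retry loop over a random draw, mirroring the OPTUNCHK_TIME handler above (the class and names are mine; it assumes at least one peer is currently choked, as the module does):

```java
import java.util.Random;

// Illustrative optimistic unchoke: draw peers at random until a
// currently-choked one comes up, and unchoke it regardless of its rate.
class OptimisticUnchoke {
    static int pickLucky(boolean[] unchoked, Random r) {
        int lucky = r.nextInt(unchoked.length);
        while (unchoked[lucky])            // redraw while the peer is already unchoked
            lucky = r.nextInt(unchoked.length);
        return lucky;
    }

    public static void main(String[] args) {
        boolean[] unchoked = {true, true, false}; // only peer 2 is choked
        System.out.println(pickLucky(unchoked, new Random())); // 2
    }
}
```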
4) Anti-snubbing
Occasionally a peer finds itself choked by all of its peers; it is then stuck with a low download rate until an optimistic unchoke finds it better partners. To mitigate this, when no piece has arrived from some peer for a while, the local peer considers itself "snubbed" by it and stops uploading to it, except through optimistic unchoking. When this happens frequently, it leads to more than one concurrent optimistic unchoke.
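The detection itself is exactly the counter comparison from the ANTISNUB_TIME handler above, isolated here (class and names are mine): a peer that sent blocks before but none in the last minute gets status 2.

```java
// Illustrative anti-snubbing check, mirroring the ANTISNUB_TIME handler.
class AntiSnub {
    static final int CHOKED = 0, UNCHOKED = 1, SNUBBED = 2;

    // valueDown: total blocks ever received from the peer;
    // head60: the same counter as it was 60 seconds ago.
    static int check(int valueDown, int head60, int status) {
        if (valueDown > 0 && valueDown - head60 == 0)
            return SNUBBED; // nothing arrived in the last minute
        return status;
    }

    public static void main(String[] args) {
        System.out.println(check(10, 10, UNCHOKED)); // 2 (snubbed)
        System.out.println(check(10, 8, UNCHOKED));  // 1 (still fine)
    }
}
```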
5) Upload Only
Once a peer finishes downloading, it can no longer use its download rates (now all zero) to decide which peers to upload to. The current policy is to prefer the peers that obtain the best upload rates from it, the rationale being that this makes the fullest use of the upload bandwidth.
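This is what the seeder branch of greater() above implements: once the file is complete, peers are ranked by the blocks uploaded to them in the last 20 seconds (valueUP - head20) instead of the download rate. Isolated as a sketch (class and names are mine):

```java
// Illustrative seeder-side comparison, mirroring the seeder branch of greater().
class SeederOrder {
    // xUp/yUp: total blocks uploaded to peers x and y;
    // xHead20/yHead20: the same counters as they were 20 seconds ago.
    static boolean greater(int xUp, int xHead20, int yUp, int yHead20) {
        return (xUp - xHead20) > (yUp - yHead20);
    }

    public static void main(String[] args) {
        // 20 blocks uploaded to x vs 5 to y in the last 20 s
        System.out.println(greater(30, 10, 25, 20)); // true
    }
}
```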