Selective Acknowledgment Option: A Brief Analysis (1)

When capturing packets, you will notice that the TCP three-way handshake usually carries a few options: MSS, window scale (WS), and SACK-permitted. This post focuses mainly on the SACK option.
When a segment is lost, a sender without SACK has two naive retransmission strategies:
1. Retransmit only the packet that timed out. This works well when all of the later packets arrive correctly, but in a bad case where many packets are lost, the sender has to wait for one timeout after another, which wastes a lot of time.
2. Retransmit this segment and every packet sent after it. In the worst case this looks efficient, but if only one packet was lost, retransmitting everything the receiver has already accepted wastes a great deal of bandwidth.
So the sender needs to know exactly which segments were lost!
RFC 2018 / RFC 2883: the Selective Acknowledgement (SACK) Option for TCP. RFC 2018 specifies the use of the SACK option for acknowledging out-of-sequence data not covered by TCP's cumulative acknowledgement field.
The SACK option as defined in RFC 2018 is as follows:

                  +--------+--------+
                  | Kind=5 | Length |
+--------+--------+--------+--------+
|      Left Edge of 1st Block       |
+--------+--------+--------+--------+
|      Right Edge of 1st Block      |
+--------+--------+--------+--------+
|                                   |
/            . . .                  /
|                                   |
+--------+--------+--------+--------+
|      Left Edge of nth Block       |
+--------+--------+--------+--------+
|      Right Edge of nth Block      |
+--------+--------+--------+--------+
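To make the byte layout concrete, here is a minimal sketch in Python (not taken from the RFC or any real stack; the helper names encode_sack and decode_sack are invented for illustration) that packs and unpacks an option in this format:

import struct

SACK_KIND = 5  # option kind for SACK, per RFC 2018

def encode_sack(blocks):
    """Pack a list of (left_edge, right_edge) pairs into a SACK option.

    Each edge is a 32-bit sequence number, so the option length is
    2 bytes of header plus 8 bytes per block (at most 4 blocks fit in
    the 40-byte TCP option space, 3 when timestamps are also in use).
    """
    length = 2 + 8 * len(blocks)
    option = struct.pack("!BB", SACK_KIND, length)
    for left, right in blocks:
        option += struct.pack("!II", left, right)
    return option

def decode_sack(option):
    """Unpack a SACK option back into (left_edge, right_edge) pairs."""
    kind, length = struct.unpack_from("!BB", option, 0)
    assert kind == SACK_KIND
    blocks = []
    for off in range(2, length, 8):
        blocks.append(struct.unpack_from("!II", option, off))
    return blocks

# Example: a receiver reporting a D-SACK block 3000-3500 followed by
# the out-of-order block 4500-5000 (as in Example 2 below).
opt = encode_sack([(3000, 3500), (4500, 5000)])
print(decode_sack(opt))   # [(3000, 3500), (4500, 5000)]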
Example 2: Reporting an out-of-order segment and a duplicate segment.

Following a lost data packet, the receiver receives an out-of-order
data segment, which triggers the SACK option as specified in RFC 2018.
Because of several lost ACK packets, the sender then retransmits a
data packet. The receiver receives the duplicate packet, and reports
it in the first, D-SACK block:

    Transmitted    Received     ACK Sent
    Segment        Segment      (Including SACK Blocks)

    3000-3499      3000-3499    3500 (ACK dropped)
    3500-3999      3500-3999    4000 (ACK dropped)
    4000-4499      (data packet dropped)
    4500-4999      4500-4999    4000, SACK=4500-5000 (ACK dropped)
    3000-3499      3000-3499    4000, SACK=3000-3500, 4500-5000
Example 3: Reporting a duplicate of an out-of-order segment.

Because of a lost data packet, the receiver receives two out-of-order
segments. The receiver next receives a duplicate segment for one of
these out-of-order segments:

    Transmitted    Received     ACK Sent
    Segment        Segment      (Including SACK Blocks)

    3500-3999      3500-3999    4000
    4000-4499      (data packet dropped)
    4500-4999      4500-4999    4000, SACK=4500-5000
    5000-5499      5000-5499    4000, SACK=4500-5500
                   (duplicated packet)
                   5000-5499    4000, SACK=5000-5500, 4500-5500
---------
Example 5: Two non-contiguous duplicate subsegments covered by the
cumulative acknowledgement.

After the sender increases its packet size from 500 bytes to 1500
bytes, the receiver receives a packet containing two non-contiguous
duplicate 500-byte subsegments which are below the cumulative
acknowledgement field. The receiver reports the first such duplicate
segment in a single D-SACK block:

    Transmitted    Received     ACK Sent
    Segment        Segment      (Including SACK Blocks)

    500-999        500-999      1000
    1000-1499      (delayed)
    1500-1999      (data packet dropped)
    2000-2499      (delayed)
    2500-2999      (data packet dropped)
    3000-3499      3000-3499    1000, SACK=3000-3500
    1000-2499      1000-1499    1500, SACK=3000-3500
                   2000-2499    1500, SACK=2000-2500, 3000-3500
                   1000-2499    2500, SACK=1000-1500, 3000-3500
---------
Retransmit Timeout Due to ACK Loss

If an entire window of ACKs is lost, a timeout will result. An
example of this is given below:

    Transmitted    Received     ACK Sent
    Segment        Segment      (Including SACK Blocks)

    500-999        500-999      1000 (ACK dropped)
    1000-1499      1000-1499    1500 (ACK dropped)
    1500-1999      1500-1999    2000 (ACK dropped)
    2000-2499      2000-2499    2500 (ACK dropped)
    (timeout)
    500-999        500-999      2500, SACK=500-1000
--------
In this case, all of the ACKs are dropped, resulting in a timeout. The condition can be identified because the first ACK received following the timeout carries a D-SACK block indicating that duplicate data was received. Without D-SACK, the sender in this case would be unable to decide that no data packets had been dropped.
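As a rough illustration of that check (my own sketch; acks_were_lost is a hypothetical helper, not a real API): if the first ACK after the timeout acknowledges past the retransmitted range and its first SACK block reports exactly that range as a duplicate, the sender can conclude that the data had already arrived and only the ACKs were lost.

def acks_were_lost(retransmitted, ack, sack_blocks):
    """After an RTO, decide whether the original data had actually arrived.

    If the first ACK following the timeout covers the retransmitted range
    and carries a D-SACK block for exactly that range, the original copy
    was already received -- only the ACKs were lost, not the data.
    (Sequence-number wraparound is ignored in this sketch.)
    """
    if not sack_blocks:
        return False
    left, right = sack_blocks[0]
    start, end = retransmitted
    return right <= ack and (left, right) == (start, end)

# The timeout example above: 500-999 was retransmitted, and the ACK
# "2500, SACK=500-1000" reports it as a duplicate.
print(acks_were_lost((500, 1000), 2500, [(500, 1000)]))  # True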
False Retransmit Due to Reordering

If packets are reordered in the network such that a segment arrives
more than 3 packets out of order, TCP's Fast Retransmit algorithm
will retransmit the out-of-order packet. An example of this is shown
below:

    Transmitted    Received     ACK Sent
    Segment        Segment      (Including SACK Blocks)

    500-999        500-999      1000
    1000-1499      (delayed)
    1500-1999      1500-1999    1000, SACK=1500-2000
    2000-2499      2000-2499    1000, SACK=1500-2500
    2500-2999      2500-2999    1000, SACK=1500-3000
    1000-1499      1000-1499    3000
                   1000-1499    3000, SACK=1000-1500
---------
In this case, an ACK containing a SACK block that is lower than its ACK field and identical to a previously retransmitted segment indicates a significant reordering followed by a false (unnecessary) retransmission. With the use of D-SACK as illustrated above, the sender knows that either the first transmission of segment 1000-1499 was delayed in the network, or the first transmission was dropped and the second transmission was duplicated. Given that no other segments have been duplicated in the network, this second possibility can be considered unlikely. Without D-SACK, the sender would only know that either the first transmission of segment 1000-1499 was delayed in the network, or that one of the data segments or the final ACK was duplicated in the network. Thus, D-SACK allows the sender to infer more reliably that the first transmission of segment 1000-1499 was not dropped.
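The sender-side inference described above can be sketched as follows (my own illustration, not kernel code; sequence-number wraparound is ignored): the first SACK block is treated as a D-SACK either when it lies at or below the cumulative ACK, or when it is fully contained in the second block.

def is_dsack(ack, blocks):
    """Return True if the first SACK block is a D-SACK (duplicate report).

    Per RFC 2883, the first block is a D-SACK when it lies at or below
    the cumulative ACK, or when it is contained in the second block.
    """
    if not blocks:
        return False
    left, right = blocks[0]
    if right <= ack:
        return True
    if len(blocks) >= 2:
        left2, right2 = blocks[1]
        if left2 <= left and right <= right2:
            return True
    return False

# Reordering example above: ACK=3000 with SACK=1000-1500 is a D-SACK,
# so the retransmission of 1000-1499 was unnecessary.
print(is_dsack(3000, [(1000, 1500)]))                 # True
# Example 3 above: ACK=4000, SACK=5000-5500 covered by 4500-5500.
print(is_dsack(4000, [(5000, 5500), (4500, 5500)]))   # True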
If the receiver chooses to send an ACK carrying SACK, it needs to follow these rules (a sketch follows the list):
1. The first block must indicate which segment triggered the SACK option; I take this to mean the out-of-order segment whose arrival caused the SACK to be sent.
2. Fill as many SACK blocks as the option space allows.
3. The SACK blocks should report the most recently received non-contiguous data blocks.
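A rough sketch of how a receiver could order its SACK blocks under these rules (the helper build_sack_blocks is invented for illustration; a real stack tracks this inside its out-of-order queue):

def build_sack_blocks(triggering_block, other_blocks, max_blocks=3):
    """Order SACK blocks per the rules above.

    triggering_block: (left, right) block containing the segment that
        triggered this ACK -- it must go first.
    other_blocks: remaining non-contiguous blocks, most recently
        reported first.
    max_blocks: usually 3 when the timestamp option is also in use,
        4 otherwise.
    """
    blocks = [triggering_block]
    for blk in other_blocks:
        if len(blocks) == max_blocks:
            break
        if blk != triggering_block:
            blocks.append(blk)
    return blocks

# The newly arrived out-of-order segment 4500-4999 goes first,
# older isolated blocks fill the remaining slots.
print(build_sack_blocks((4500, 5000), [(6000, 6500), (7000, 7500)]))
# [(4500, 5000), (6000, 6500), (7000, 7500)]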
The sender's behavior (a sketch follows the list):
1. Data stays in the retransmission queue, within the sliding window, until it has been acknowledged.
2. Each segment carries a SACKed flag; segments already flagged are skipped when retransmitting.
3. If the SACK information is lost and a retransmission timeout occurs, the SACKed flags of all segments are reset.
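A minimal sketch of this sender-side bookkeeping (my own simplification; real TCP stacks keep a far more detailed per-segment scoreboard): each unacknowledged segment carries a SACKed flag, flagged segments are skipped when retransmitting, and the flags are cleared after a retransmission timeout.

class Segment:
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.sacked = False          # set when covered by a SACK block

class RetransmitQueue:
    """Toy scoreboard for unacknowledged segments."""

    def __init__(self, segments):
        self.segments = segments

    def apply_sack(self, blocks):
        # Mark every queued segment fully covered by a SACK block.
        for seg in self.segments:
            for left, right in blocks:
                if left <= seg.start and seg.end <= right:
                    seg.sacked = True

    def segments_to_retransmit(self):
        # Skip segments already reported received via SACK.
        return [s for s in self.segments if not s.sacked]

    def on_retransmit_timeout(self):
        # After an RTO the SACK information is no longer trusted:
        # clear all flags and retransmit from the start.
        for seg in self.segments:
            seg.sacked = False

q = RetransmitQueue([Segment(1000, 1500), Segment(1500, 2000), Segment(2000, 2500)])
q.apply_sack([(1500, 2500)])
print([(s.start, s.end) for s in q.segments_to_retransmit()])  # [(1000, 1500)]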
Use of the SACK option for reporting a duplicate segment
This section specifies the use of SACK blocks when the SACK option is
used in reporting a duplicate segment. When D-SACK is used, the
first block of the SACK option should be a D-SACK block specifying
the sequence numbers for the duplicate segment that triggers the
acknowledgement. If the duplicate segment is part of a larger block
of non-contiguous data in the receiver's data queue, then the
following SACK block should be used to specify this larger block.
Additional SACK blocks can be used to specify additional non-contiguous
blocks of data, as specified in RFC 2018.

The guidelines for reporting duplicate segments are summarized below:

(1) A D-SACK block is only used to report a duplicate contiguous
    sequence of data received by the receiver in the most recent packet.

(2) Each duplicate contiguous sequence of data received is reported
    in at most one D-SACK block. (I.e., the receiver sends two identical
    D-SACK blocks in subsequent packets only if the receiver receives two
    duplicate segments.)

(3) The left edge of the D-SACK block specifies the first sequence
    number of the duplicate contiguous sequence, and the right edge of
    the D-SACK block specifies the sequence number immediately following
    the last sequence in the duplicate contiguous sequence.

(4) If the D-SACK block reports a duplicate contiguous sequence from
    a (possibly larger) block of data in the receiver's data queue above
    the cumulative acknowledgement, then the second SACK block in that
    SACK option should specify that (possibly larger) block of data.

(5) Following the SACK blocks described above for reporting duplicate
    segments, additional SACK blocks can be used for reporting additional
    blocks of data, as specified in RFC 2018.
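A sketch of how a receiver could assemble the blocks according to these guidelines (the helper build_dsack_option is hypothetical): the D-SACK block for the duplicate goes first; if the duplicate lies inside a larger out-of-order block above the cumulative ACK, that block goes second; any remaining blocks follow as in RFC 2018.

def build_dsack_option(duplicate, enclosing_block, other_blocks, max_blocks=3):
    """Assemble SACK blocks for an ACK that reports a duplicate segment.

    duplicate:        (left, right) of the duplicate just received (D-SACK).
    enclosing_block:  larger out-of-order block containing it, above the
                      cumulative ACK, or None if the duplicate lies below
                      the cumulative ACK.
    other_blocks:     further non-contiguous blocks, most recent first.
    """
    blocks = [duplicate]                       # guideline (1): D-SACK first
    if enclosing_block and enclosing_block != duplicate:
        blocks.append(enclosing_block)         # guideline (4): enclosing block second
    for blk in other_blocks:                   # guideline (5): other blocks after
        if len(blocks) == max_blocks:
            break
        blocks.append(blk)
    return blocks

# Example 3 above: duplicate 5000-5499 inside out-of-order block 4500-5500.
print(build_dsack_option((5000, 5500), (4500, 5500), []))
# [(5000, 5500), (4500, 5500)]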
In short, a D-SACK block is just the receiver reporting one duplicate contiguous sequence; each duplicate contiguous sequence appears in at most one block; and the second block describes the (possibly larger) block of not-yet-acknowledged data that contains the duplicate.
Overall, D-SACK brings quite a few benefits: it lets the sender figure out whether an ACK was lost or the data packet itself was lost.
Note: the interaction between D-SACK and PAWS is left aside for now.
To emphasize again: in classic TCP, the receiver signals a lost packet by sending duplicate ACKs (three by default) for the same sequence number. The sender then enters the fast retransmit state and resends that one packet; when the receiver gets the retransmission, it sends an ACK for the new highest in-order byte, telling the sender which packet to send next. Notice that duplicate ACKs can only trigger the retransmission of one packet at a time; if several packets are lost, the connection can easily hit a timeout while waiting for these one-by-one retransmissions, leaving the bandwidth underutilized.
With SACK enabled, each SACK block records a contiguous range that has already been received; the gaps between the blocks are the data that has not yet arrived (possibly lost, possibly just reordered). From the SACK blocks the sender can identify several probably-lost packets and retransmit them all at once rather than one by one, avoiding the timeouts caused by long waits.
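A rough sketch of that inference (my own illustration, using a hypothetical missing_ranges helper): the gaps between the cumulative ACK and the SACK blocks are the candidate lost segments, and the sender can retransmit all of them in one round instead of waiting for a timeout per hole.

def missing_ranges(cum_ack, sack_blocks):
    """Return the sequence ranges between the cumulative ACK and the
    SACK blocks, i.e. data the receiver has apparently not received."""
    holes = []
    next_expected = cum_ack
    for left, right in sorted(sack_blocks):
        if left > next_expected:
            holes.append((next_expected, left))
        next_expected = max(next_expected, right)
    return holes

# ACK=1000 with SACK blocks 1500-2000 and 2500-3000 implies that
# 1000-1500 and 2000-2500 were lost and can both be retransmitted now.
print(missing_ranges(1000, [(1500, 2000), (2500, 3000)]))
# [(1000, 1500), (2000, 2500)]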
Note that enabling SACK also has a downside: packet loss usually means the network is already congested, and retransmitting several packets at once may make the congestion worse.
References:
RFC 2018, TCP Selective Acknowledgment Options
RFC 2883, An Extension to the Selective Acknowledgement (SACK) Option for TCP