COMPUTER ORGANIZATION AND ARCHITECTURE DESIGNING FOR PERFORMANCE NINTH EDITION

Hardware-based solutions are generally referred to as cache coherence protocols.
These solutions provide dynamic recognition at run time of potential inconsistency
conditions. Because the problem is only dealt with when it actually arises, there
is more effective use of caches, leading to improved performance over a software
approach. In addition, these approaches are transparent to the programmer and the
compiler, reducing the software development burden.
Hardware schemes differ in a number of particulars, including where the state information about data lines is held, how that information is organized, where coherence is enforced, and the enforcement mechanisms. In general, hardware schemes can be divided into two categories: directory protocols and snoopy protocols.

DIRECTORY PROTOCOLS Directory protocols collect and maintain information
about where copies of lines reside. Typically, there is a centralized controller that is
part of the main memory controller, and a directory that is stored in main memory.
The directory contains global state information about the contents of the various
local caches. When an individual cache controller makes a request, the centralized
controller checks and issues necessary commands for data transfer between
memory and caches or between caches. It is also responsible for keeping the state
information up to date; therefore, every local action that can affect the global state
of a line must be reported to the central controller.
Typically, the controller maintains information about which processors have
a copy of which lines. Before a processor can write to a local copy of a line, it
must request exclusive access to the line from the controller. Before granting this
exclusive access, the controller sends a message to all processors with a cached
copy of this line, forcing each processor to invalidate its copy. After receiving
acknowledgments back from each such processor, the controller grants exclusive
access to the requesting processor. When a processor tries to read a line that has been granted exclusively to another processor, it sends a miss notification to the controller. The controller then issues a command to the processor holding that line, requiring it to write the line back to main memory. The line may then be shared for reading by the original processor and the requesting processor.
Directory schemes suffer from the drawbacks of a central bottleneck and the
overhead of communication between the various cache controllers and the central
controller. However, they are effective in large-scale systems that involve multiple
buses or some other complex interconnection scheme.
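To make the request/invalidate/write-back sequence concrete, the following C sketch models a centralized directory controller for a handful of caches and lines. It is a toy model whose names (DirEntry, request_exclusive, request_read) are assumptions invented for this illustration, not the design of any real memory controller: the directory records which caches hold each line and which, if any, holds it exclusively; a write request invalidates all other copies, and a read miss on an exclusively held line forces a write back.

/* Toy directory-based coherence controller; layout and names are illustrative. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_CACHES 4
#define NUM_LINES  8

typedef struct {
    bool present[NUM_CACHES];   /* which caches hold a copy of this line */
    int  exclusive_owner;       /* cache id with exclusive access, or -1 */
} DirEntry;

static DirEntry directory[NUM_LINES];

/* A cache asks the central controller for exclusive (write) access. */
static void request_exclusive(int cache_id, int line)
{
    DirEntry *e = &directory[line];

    /* Invalidate every other cached copy before granting exclusivity. */
    for (int c = 0; c < NUM_CACHES; c++) {
        if (c != cache_id && e->present[c]) {
            printf("controller: invalidate line %d in cache %d\n", line, c);
            e->present[c] = false;   /* models the acknowledged invalidation */
        }
    }
    e->present[cache_id] = true;
    e->exclusive_owner = cache_id;
    printf("controller: cache %d granted exclusive access to line %d\n",
           cache_id, line);
}

/* A cache misses on a read; if another cache holds the line exclusively,
 * the controller forces a write back so the line can be shared again. */
static void request_read(int cache_id, int line)
{
    DirEntry *e = &directory[line];

    if (e->exclusive_owner != -1 && e->exclusive_owner != cache_id) {
        printf("controller: cache %d must write line %d back to memory\n",
               e->exclusive_owner, line);
        e->exclusive_owner = -1;     /* line returns to a shared, clean state */
    }
    e->present[cache_id] = true;
    printf("controller: cache %d granted shared copy of line %d\n",
           cache_id, line);
}

int main(void)
{
    for (int l = 0; l < NUM_LINES; l++)
        directory[l].exclusive_owner = -1;

    request_read(0, 3);        /* cache 0 reads line 3                       */
    request_read(1, 3);        /* cache 1 shares it                          */
    request_exclusive(1, 3);   /* cache 1 writes: cache 0's copy invalidated */
    request_read(2, 3);        /* cache 2 reads: cache 1 writes back         */
    return 0;
}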

SNOOPY PROTOCOLS Snoopy protocols distribute the responsibility for
maintaining cache coherence among all of the cache controllers in a multiprocessor.
A cache must recognize when a line that it holds is shared with other caches. When an update action is performed on a shared cache line, it must be announced to all other caches by a broadcast mechanism. Each cache controller is able to “snoop” on the network to observe these broadcast notifications and react accordingly.
Snoopy protocols are ideally suited to a bus-based multiprocessor, because
the shared bus provides a simple means for broadcasting and snooping. However,
because one of the objectives of the use of local caches is to avoid bus accesses, care
must be taken that the increased bus traffic required for broadcasting and snooping
does not cancel out the gains from the use of local caches.
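The broadcast-and-snoop structure can be sketched in a few lines of C. The names here (bus_broadcast, snoop, BusOp) are assumptions made for this illustration only: every transaction placed on the shared bus is observed by every cache controller other than the one that issued it, and each controller decides locally whether the transaction concerns a line it holds.

/* Minimal sketch of snooping on a shared bus; structure is illustrative. */
#include <stdio.h>

#define NUM_CACHES 4

typedef enum { BUS_READ, BUS_WRITE } BusOp;

/* Each controller's snoop handler observes a transaction issued by another cache. */
static void snoop(int my_id, int origin, BusOp op, int line)
{
    printf("cache %d snoops: cache %d issued a %s on line %d\n",
           my_id, origin, op == BUS_READ ? "read" : "write", line);
    /* A real controller would look the line up in its own tags here and,
     * depending on the protocol, invalidate its copy, update it, or supply
     * the data itself. */
}

/* Broadcasting on the shared bus: every other controller sees the transaction. */
static void bus_broadcast(int origin, BusOp op, int line)
{
    for (int c = 0; c < NUM_CACHES; c++)
        if (c != origin)
            snoop(c, origin, op, line);
}

int main(void)
{
    bus_broadcast(1, BUS_WRITE, 7);   /* cache 1 announces a write to line 7 */
    return 0;
}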
Two basic approaches to the snoopy protocol have been explored: write invalidate and write update (or write broadcast). With a write-invalidate protocol, there can be multiple readers but only one writer at a time. Initially, a line may be shared among several caches for reading purposes. When one of the caches wants to perform a write to the line, it first issues a notice that invalidates that line in the other caches, making the line exclusive to the writing cache. Once the line is exclusive, the owning processor can make cheap local writes until some other processor requires the same line.
With a write-update protocol, there can be multiple writers as well as multiple
readers. When a processor wishes to update a shared line, the word to be updated is
distributed to all others, and caches containing that line can update it.
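The difference between the two policies shows up clearly in a toy C model with one word per line. The function names write_invalidate and write_update are invented for this sketch; it models the policies, not any real controller. After cache 0 writes a line shared by three caches, write-invalidate leaves the other copies invalid, while write-update leaves them valid and holding the new word.

/* Toy contrast of the two snoopy write policies; one word per line. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_CACHES 3

typedef struct {
    bool valid;
    int  value;     /* the single word this toy line holds */
} Line;

static void write_invalidate(Line copies[], int writer, int new_value)
{
    for (int c = 0; c < NUM_CACHES; c++)
        if (c != writer)
            copies[c].valid = false;      /* other holders must re-fetch later */
    copies[writer].value = new_value;     /* writer now holds the line exclusively */
}

static void write_update(Line copies[], int writer, int new_value)
{
    (void)writer;                         /* every valid copy gets the new word */
    for (int c = 0; c < NUM_CACHES; c++)
        if (copies[c].valid)
            copies[c].value = new_value;
}

static void show(const char *label, Line copies[])
{
    printf("%s:", label);
    for (int c = 0; c < NUM_CACHES; c++)
        printf(" cache%d=%s/%d", c, copies[c].valid ? "V" : "I", copies[c].value);
    printf("\n");
}

int main(void)
{
    Line a[NUM_CACHES] = {{true, 10}, {true, 10}, {true, 10}};
    Line b[NUM_CACHES] = {{true, 10}, {true, 10}, {true, 10}};

    write_invalidate(a, 0, 42);   /* cache 0 writes under write-invalidate */
    write_update(b, 0, 42);       /* cache 0 writes under write-update     */

    show("invalidate", a);        /* cache0=V/42 cache1=I/10 cache2=I/10   */
    show("update    ", b);        /* cache0=V/42 cache1=V/42 cache2=V/42   */
    return 0;
}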
Neither of these two approaches is superior to the other under all circumstances. Performance depends on the number of local caches and the pattern of memory reads and writes. Some systems implement adaptive protocols that employ both write-invalidate and write-update mechanisms.
The write-invalidate approach is the most widely used in commercial multiprocessor systems, such as the Pentium 4 and PowerPC. It marks the state of every cache line (using two extra bits in the cache tag) as modified, exclusive, shared, or invalid, and the protocol is therefore called MESI. In the remainder of this section, we look at its use among local caches across a multiprocessor. For simplicity, we do not examine the mechanisms involved in coordinating the level 1 and level 2 caches within each processor at the same time as coordinating across the distributed multiprocessor; doing so would not add any new principles but would greatly complicate the discussion.
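As a preview of the state machine, the following C sketch lists the four MESI states and one local-write transition. The particular two-bit encoding and the helper on_local_write are illustrative assumptions for this sketch, not the encoding or logic of any specific processor.

/* Illustrative sketch of the four MESI states and a local-write transition. */
#include <stdio.h>
#include <stdbool.h>

typedef enum {
    INVALID   = 0x0,   /* 00: line contains no valid data                    */
    SHARED    = 0x1,   /* 01: valid and clean; other caches may hold copies  */
    EXCLUSIVE = 0x2,   /* 10: valid and clean; no other cache holds a copy   */
    MODIFIED  = 0x3    /* 11: valid but dirty; this is the only copy         */
} MesiState;

/* New local state when the processor writes the line, and whether the write
 * must be announced so other caches invalidate their copies. */
static MesiState on_local_write(MesiState s, bool *must_broadcast_invalidate)
{
    *must_broadcast_invalidate = false;
    switch (s) {
    case SHARED:
        *must_broadcast_invalidate = true;   /* other copies may exist */
        return MODIFIED;
    case EXCLUSIVE:
        return MODIFIED;                     /* sole copy: silent transition */
    case MODIFIED:
        return MODIFIED;                     /* further writes stay local    */
    case INVALID:
    default:
        /* Write miss: the line must first be fetched with intent to modify,
         * which also invalidates any other copies. */
        *must_broadcast_invalidate = true;
        return MODIFIED;
    }
}

int main(void)
{
    bool bcast;
    MesiState s = SHARED;
    s = on_local_write(s, &bcast);
    printf("SHARED -> %s, broadcast invalidate: %s\n",
           s == MODIFIED ? "MODIFIED" : "?", bcast ? "yes" : "no");
    return 0;
}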
