http://getakka.net/docs/concepts/terminology

Terminology and Concepts

In this chapter we attempt to establish a common terminology to define a solid ground for communicating about concurrent, distributed systems which Akka.NET targets. Please note that, for many of these terms, there is no single agreed definition. We simply seek to give working definitions that will be used in the scope of the Akka.NET documentation.

Concurrency vs. Parallelism

Concurrency and parallelism are related concepts, but there are small differences. Concurrency means that two or more tasks are making progress even though they might not be executing simultaneously. This can, for example, be realized with time slicing, where parts of tasks are executed sequentially and mixed with parts of other tasks. Parallelism, on the other hand, arises when the execution can be truly simultaneous.

[Figure: Concurrency]

[Figure: Parallelism]
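
To make the distinction concrete, here is a minimal C# sketch (not part of the original documentation; the Step name is invented for the example). The first part interleaves two tasks, which is enough for concurrency even on a single core; the second part uses Parallel.Invoke, whose bodies may run truly simultaneously on separate cores.

    using System;
    using System.Threading.Tasks;

    class ConcurrencyVsParallelism
    {
        static async Task Main()
        {
            // Concurrency: two tasks make progress by interleaving their steps.
            async Task Step(string name)
            {
                for (int i = 0; i < 3; i++)
                {
                    Console.WriteLine($"{name} step {i}");
                    await Task.Delay(10);   // yields, letting the other task run in between
                }
            }
            await Task.WhenAll(Step("A"), Step("B"));

            // Parallelism: the two bodies may execute truly simultaneously on separate cores.
            Parallel.Invoke(
                () => Console.WriteLine("work on one core"),
                () => Console.WriteLine("work on (possibly) another core"));
        }
    }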

Asynchronous vs. Synchronous

A method call is considered synchronous if the caller cannot make progress until the method returns a value or throws an exception. On the other hand, an asynchronous call allows the caller to progress after a finite number of steps, and the completion of the method may be signalled via some additional mechanism (it might be a registered callback, a Future, or a message).

A synchronous API may use blocking to implement synchrony, but this is not a necessity. A very CPU-intensive task might give behavior similar to blocking. In general, it is preferred to use asynchronous APIs, as they guarantee that the system is able to progress. Actors are asynchronous by nature: an actor can progress after a message send without waiting for the actual delivery to happen.
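
To make this concrete, here is a hedged C# sketch (the Printer, LoadSync and LoadAsync names are invented for the example, and the Akka.NET package is assumed to be referenced). The synchronous call blocks the caller until it returns; the asynchronous call returns a Task whose completion is signalled later; an actor message send via Tell lets the caller continue immediately.

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Akka.Actor;

    public class Printer : ReceiveActor
    {
        public Printer() => Receive<string>(msg => Console.WriteLine($"printed: {msg}"));
    }

    public static class Program
    {
        static int LoadSync() { Thread.Sleep(1000); return 42; }                   // caller waits here
        static async Task<int> LoadAsync() { await Task.Delay(1000); return 42; }  // caller can continue

        public static async Task Main()
        {
            int a = LoadSync();              // synchronous: no progress until the value is returned
            Task<int> pending = LoadAsync(); // asynchronous: completion is signalled via the Task
            // ... the caller can do other work here ...
            int b = await pending;

            var system = ActorSystem.Create("demo");
            var printer = system.ActorOf<Printer>("printer");
            printer.Tell("hello");           // message send: returns immediately; delivery and
                                             // processing happen asynchronously in the actor

            Console.WriteLine($"{a} {b}");
            await Task.Delay(100);           // sketch only: give the actor a moment before shutdown
            await system.Terminate();
        }
    }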

Non-blocking vs. Blocking

We talk about blocking if the delay of one thread can indefinitely delay some of the other threads. A good example is a resource which can be used exclusively by one thread using mutual exclusion. If a thread holds on to the resource indefinitely (for example accidentally running an infinite loop), other threads waiting on the resource cannot progress. In contrast, non-blocking means that no thread is able to indefinitely delay others.

Non-blocking operations are preferred to blocking ones, as the overall progress of the system is not trivially guaranteed when it contains blocking operations.
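
As an illustration (a minimal sketch with invented names, not taken from any library), the blocking variant below can stall every caller if the thread inside the lock is suspended or stuck, while the non-blocking variant relies on a single atomic operation that no thread can hold on to:

    using System.Threading;

    class BlockingVsNonBlocking
    {
        static readonly object Gate = new object();
        static int _counter;

        // Blocking: while one thread holds the lock (perhaps forever, by mistake),
        // every other caller of this method is stalled.
        static void IncrementBlocking()
        {
            lock (Gate) { _counter++; }
        }

        // Non-blocking: a single atomic instruction; no thread can indefinitely
        // delay the others by being suspended inside the operation.
        static void IncrementNonBlocking() => Interlocked.Increment(ref _counter);
    }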

Deadlock vs. Starvation vs. Live-lock

Deadlock arises when several participants are waiting on each other to reach a specific state to be able to progress. As none of them can progress without some other participant reaching a certain state (a "Catch-22" problem), all affected subsystems stall. Deadlock is closely related to blocking, as it requires that a participant thread be able to delay the progression of other threads indefinitely.
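
The classic illustration is two threads taking two locks in opposite order; a minimal C# sketch (the names A and B and the sleeps are only there to make the unfortunate interleaving likely):

    using System.Threading;

    class DeadlockSketch
    {
        static readonly object A = new object();
        static readonly object B = new object();

        static void Main()
        {
            var t1 = new Thread(() => { lock (A) { Thread.Sleep(100); lock (B) { } } });
            var t2 = new Thread(() => { lock (B) { Thread.Sleep(100); lock (A) { } } });
            t1.Start(); t2.Start();
            // t1 holds A and waits for B; t2 holds B and waits for A:
            // neither can ever progress -- a deadlock.
            t1.Join(); t2.Join();   // never returns
        }
    }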

In the case of deadlock, no participants can make progress; in contrast, starvation happens when some participants can make progress but one or more others cannot. A typical scenario is a naive scheduling algorithm that always selects high-priority tasks over low-priority ones. If the number of incoming high-priority tasks is constantly high enough, no low-priority task will ever be finished.
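
A minimal sketch of such a naive scheduler (the queues and task names are invented for the example); because a high-priority task arrives on every tick, the low-priority tasks are never selected:

    using System;
    using System.Collections.Generic;

    class StarvationSketch
    {
        static void Main()
        {
            var high = new Queue<string>();
            var low = new Queue<string>(new[] { "low-1", "low-2" });

            for (int tick = 0; tick < 10; tick++)
            {
                high.Enqueue($"high-{tick}");          // high-priority work keeps arriving
                // Naive policy: always prefer a high-priority task if one exists.
                var next = high.Count > 0 ? high.Dequeue() : low.Dequeue();
                Console.WriteLine($"running {next}");  // low-1 and low-2 are starved
            }
        }
    }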

Livelock is similar to deadlock in that none of the participants make progress. The difference, though, is that instead of being frozen in a state of waiting for others to progress, the participants continuously change their state. An example scenario is when two participants have two identical instances of a resource available. They each try to acquire an instance, but they also check whether the other needs the resource, too. If the resource is requested by the other participant, they try to acquire the other instance instead. In the unfortunate case the two participants may "bounce" between the two instances, never acquiring either, but always yielding to the other.
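
A common way to illustrate this is the "overly polite diners" scenario, sketched below in C# (all names are invented for the example). Both threads keep running and changing state by passing the shared resource back and forth, yet neither ever finishes:

    using System;
    using System.Threading;

    class LivelockSketch
    {
        class Spoon { public volatile Diner Owner; }

        class Diner
        {
            public readonly string Name;
            public volatile bool Hungry = true;
            public Diner(string name) => Name = name;

            public void EatWith(Diner other, Spoon spoon)
            {
                while (Hungry)
                {
                    if (spoon.Owner != this) { Thread.Sleep(1); continue; }
                    if (other.Hungry)
                    {
                        spoon.Owner = other;   // politely yield -- this is where the livelock arises
                        continue;
                    }
                    Console.WriteLine($"{Name} eats");
                    Hungry = false;
                }
            }
        }

        static void Main()
        {
            var alice = new Diner("Alice");
            var bob = new Diner("Bob");
            var spoon = new Spoon { Owner = alice };
            new Thread(() => alice.EatWith(bob, spoon)).Start();
            new Thread(() => bob.EatWith(alice, spoon)).Start();
            // Both threads stay busy passing the spoon, but no one eats: a livelock.
        }
    }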

Race Condition

We call it a race condition when an assumption about the ordering of a set of events might be violated by external non-deterministic effects. Race conditions often arise when multiple threads have shared mutable state and the operations of the threads on that state might be interleaved, causing unexpected behavior. While this is a common case, shared state is not necessary to have a race condition. One example is a client sending unordered packets (e.g. UDP datagrams) P1 and P2 to a server. As the packets might travel via different network routes, it is possible that the server receives P2 first and P1 afterwards. If the messages contain no information about their sending order, it is impossible for the server to determine that they were sent in a different order. Depending on the meaning of the packets, this can cause a race condition.
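
The shared-state variant can be demonstrated with an unsynchronized counter; a minimal C# sketch (names invented for the example) in which the interleaving of two read-modify-write sequences loses updates:

    using System;
    using System.Threading.Tasks;

    class RaceConditionSketch
    {
        static int _counter;

        static void Main()
        {
            // _counter++ is a read-modify-write; interleaved increments from the two
            // tasks can overwrite each other, so the final value is usually < 2,000,000.
            Task Increment() => Task.Run(() =>
            {
                for (int i = 0; i < 1_000_000; i++) _counter++;
            });

            Task.WaitAll(Increment(), Increment());
            Console.WriteLine(_counter);   // non-deterministic result: a race condition
        }
    }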

Note

The only guarantee that Akka.NET provides about messages sent between a given pair of actors is that their order is always preserved. See Message Delivery Reliability.

Non-blocking Guarantees (Progress Conditions)

As discussed in the previous sections, blocking is undesirable for several reasons, including the danger of deadlocks and reduced throughput in the system. In the following sections we discuss various non-blocking properties of different strength.

Wait-freedom

A method is wait-free if every call is guaranteed to finish in a finite number of steps. If a method is bounded wait-free then the number of steps has a finite upper bound.

From this definition it follows that wait-free methods are never blocking, therefore deadlock cannot happen. Additionally, as each participant can progress after a finite number of steps (when the call finishes), wait-free methods are free of starvation.
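
As a hedged example, an atomic increment is commonly cited as wait-free: every call completes in a bounded number of steps regardless of what other threads are doing (on typical hardware it maps to a single atomic fetch-and-add; the RecordHit name is invented for the sketch).

    using System.Threading;

    class WaitFreeSketch
    {
        static int _hits;

        // No call can be delayed indefinitely by other threads: the operation
        // always finishes in a bounded number of steps.
        static int RecordHit() => Interlocked.Increment(ref _hits);
    }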

Lock-freedom

Lock-freedom is a weaker property than wait-freedom. In the case of lock-free calls, infinitely often some method finishes in a finite number of steps. This definition implies that no deadlock is possible for lock-free calls. On the other hand, the guarantee that some call finishes in a finite number of steps is not enough to guarantee that all of them eventually finish. In other words, lock-freedom is not enough to guarantee the lack of starvation.
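
A typical lock-free pattern is a compare-and-swap retry loop. In the sketch below (names invented for the example) at least one thread's CompareExchange succeeds in every round, so the system as a whole keeps progressing, but an unlucky thread may retry indefinitely, which is exactly why lock-freedom does not rule out starvation:

    using System.Threading;

    class LockFreeSketch
    {
        static int _max;

        // Lock-free update of a running maximum.
        static void RecordSample(int value)
        {
            while (true)
            {
                int current = Volatile.Read(ref _max);
                if (value <= current) return;                 // nothing to do
                if (Interlocked.CompareExchange(ref _max, value, current) == current)
                    return;                                   // our update won
                // Another thread changed _max in the meantime: retry.
            }
        }
    }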

Obstruction-freedom

Obstruction-freedom is the weakest non-blocking guarantee discussed here. A method is called obstruction-free if there is a point in time after which it executes in isolation (other threads make no steps, e.g. they become suspended) and then finishes in a bounded number of steps. All lock-free objects are obstruction-free, but the opposite is generally not true.

Optimistic concurrency control (OCC) methods are usually obstruction-free. The OCC approach is that every participant tries to execute its operation on the shared object, but if a participant detects conflicts from others, it rolls back the modifications and tries again according to some schedule. If there is a point in time where one of the participants is the only one trying, the operation will succeed.
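
A minimal sketch of that optimistic read-copy-validate cycle in C# (the State type and Deposit method are invented for the example): the writer works on a private copy and commits it only if nobody else changed the shared reference in the meantime; if at some point the caller runs without interference, the commit succeeds and the operation finishes.

    using System.Threading;

    class OccSketch
    {
        // Immutable state that is replaced atomically; writers work optimistically on a copy.
        sealed class State
        {
            public readonly int Balance;
            public State(int balance) => Balance = balance;
        }

        static State _state = new State(0);

        static void Deposit(int amount)
        {
            while (true)
            {
                State seen = _state;                                 // optimistic read
                State proposed = new State(seen.Balance + amount);   // work on a copy
                // Commit only if the state was not changed by someone else in the meantime;
                // otherwise discard the copy (roll back) and try again.
                if (Interlocked.CompareExchange(ref _state, proposed, seen) == seen)
                    return;
            }
        }
    }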

Recommended literature

  • The Art of Multiprocessor Programming, M. Herlihy and N. Shavit, 2008. ISBN 978-0123705914
  • Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606
