Concurrency vs. Parallelism
http://getakka.net/docs/concepts/terminology
Terminology and Concepts
In this chapter we attempt to establish a common terminology to define a solid ground for communicating about concurrent, distributed systems which Akka.NET targets. Please note that, for many of these terms, there is no single agreed definition. We simply seek to give working definitions that will be used in the scope of the Akka.NET documentation.
Concurrency vs. Parallelism
Concurrency and parallelism are related concepts, but there are small differences. Concurrency means that two or more tasks are making progress even though they might not be executing simultaneously. This can, for example, be realized with time slicing, where parts of tasks are executed sequentially and mixed with parts of other tasks. Parallelism, on the other hand, arises when the execution can be truly simultaneous.
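To make the distinction concrete, here is a minimal C# sketch (not part of the original text; the class and method names are made up): the first method interleaves two tasks at await points, so both make progress even if only one of them runs at any given instant, while the second truly runs work simultaneously on multiple cores via Parallel.For.

```csharp
using System;
using System.Threading.Tasks;

class ConcurrencyVsParallelism
{
    // Concurrency: two tasks make progress by interleaving at await points,
    // even if only one of them executes at any given instant.
    static async Task RunConcurrentlyAsync()
    {
        Task a = StepAsync("A");
        Task b = StepAsync("B");
        await Task.WhenAll(a, b);
    }

    static async Task StepAsync(string name)
    {
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine($"{name}: step {i}");
            await Task.Delay(10); // yields, letting the other task run
        }
    }

    // Parallelism: the iterations genuinely execute at the same time on
    // multiple cores (given enough cores are available).
    static void RunInParallel()
    {
        Parallel.For(0, 4, i =>
            Console.WriteLine($"worker {i} on thread {Environment.CurrentManagedThreadId}"));
    }

    static async Task Main()
    {
        await RunConcurrentlyAsync();
        RunInParallel();
    }
}
```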
Asynchronous vs. Synchronous
A method call is considered synchronous if the caller cannot make progress until the method returns a value or throws an exception. On the other hand, an asynchronous call allows the caller to progress after a finite number of steps, and the completion of the method may be signalled via some additional mechanism (it might be a registered callback, a Future, or a message).
A synchronous API may use blocking to implement synchrony, but this is not a necessity. A very CPU-intensive task might exhibit behavior similar to blocking. In general, it is preferred to use asynchronous APIs, as they guarantee that the system is able to progress. Actors are asynchronous by nature: an actor can progress after sending a message without waiting for the actual delivery to happen.
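A minimal sketch of the difference, assuming a plain .NET console program (the FetchSync/FetchAsync names and the example URL are hypothetical): the synchronous variant blocks the caller until the result is available, while the asynchronous variant returns a Task immediately and signals completion through it.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SyncVsAsync
{
    static readonly HttpClient Client = new HttpClient();

    // Synchronous: the caller cannot proceed until the result is available.
    static string FetchSync(string url) =>
        Client.GetStringAsync(url).GetAwaiter().GetResult(); // blocks the calling thread

    // Asynchronous: the caller gets a Task back immediately and can continue;
    // completion is signalled through the Task (a future-like mechanism).
    static Task<string> FetchAsync(string url) =>
        Client.GetStringAsync(url);

    static async Task Main()
    {
        // string body = FetchSync("https://example.com"); // would block here
        Task<string> pending = FetchAsync("https://example.com");
        Console.WriteLine("caller keeps making progress...");
        string body = await pending; // observe completion later
        Console.WriteLine($"received {body.Length} characters");
    }
}
```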
Non-blocking vs. Blocking
We talk about blocking if the delay of one thread can indefinitely delay some of the other threads. A good example is a resource which can be used exclusively by one thread using mutual exclusion. If a thread holds on to the resource indefinitely (for example, by accidentally running an infinite loop), other threads waiting on the resource cannot progress. In contrast, non-blocking means that no thread is able to indefinitely delay others.
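The scenario above can be sketched as follows (a hypothetical illustration; the Gate name and the 2-second timeout are made up). One thread holds a lock forever, so a second thread waiting on the same lock cannot make progress on its own; the timeout only serves to make the stall observable instead of hanging the demo.

```csharp
using System;
using System.Threading;

class BlockingExample
{
    static readonly object Gate = new object();

    static void Main()
    {
        // This thread takes the lock and (accidentally) never releases it.
        new Thread(() =>
        {
            lock (Gate)
            {
                while (true) { Thread.Sleep(1000); } // simulated infinite loop
            }
        }) { IsBackground = true }.Start();

        Thread.Sleep(100); // let the first thread acquire the lock

        // A plain `lock (Gate)` here would block indefinitely; TryEnter with a
        // timeout makes the lack of progress visible.
        bool acquired = Monitor.TryEnter(Gate, TimeSpan.FromSeconds(2));
        Console.WriteLine(acquired
            ? "acquired (unexpected)"
            : "still blocked after 2s: progress depends entirely on the other thread");
    }
}
```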
Non-blocking operations are preferred to blocking ones, as the overall progress of the system is not trivially guaranteed when it contains blocking operations.
Deadlock vs. Starvation vs. Live-lock
Deadlock arises when several participants are waiting on each other to reach a specific state to be able to progress. As none of them can progress without some other participant reaching a certain state (a "Catch-22" problem), all affected subsystems stall. Deadlock is closely related to blocking, as it requires that a participant thread be able to delay the progression of other threads indefinitely.
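A classic way to produce such a deadlock is acquiring two locks in opposite order; a minimal, intentionally broken C# sketch (the lock names, sleeps, and timeout are made up for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class DeadlockExample
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Main()
    {
        // Each task ends up waiting for the lock the other one already holds,
        // so neither can ever progress.
        var t1 = Task.Run(() =>
        {
            lock (LockA)
            {
                Thread.Sleep(100); // make the unlucky interleaving likely
                lock (LockB) { Console.WriteLine("t1 done"); }
            }
        });
        var t2 = Task.Run(() =>
        {
            lock (LockB)
            {
                Thread.Sleep(100);
                lock (LockA) { Console.WriteLine("t2 done"); }
            }
        });

        bool finished = Task.WaitAll(new[] { t1, t2 }, TimeSpan.FromSeconds(2));
        Console.WriteLine(finished ? "finished" : "deadlocked: both tasks wait on each other");
    }
}
```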
In the case of deadlock, no participants can make progress; in contrast, starvation happens when some participants can make progress but one or more of them cannot. A typical scenario is a naive scheduling algorithm that always selects high-priority tasks over low-priority ones. If the rate of incoming high-priority tasks is constantly high enough, no low-priority task will ever be finished.
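The naive scheduler described above might look roughly like this sketch (the queue names and task strings are hypothetical): low-priority work is only considered when the high-priority queue is empty, so a steady stream of high-priority tasks starves it.

```csharp
using System;
using System.Collections.Concurrent;

class StarvationExample
{
    static readonly ConcurrentQueue<string> High = new ConcurrentQueue<string>();
    static readonly ConcurrentQueue<string> Low  = new ConcurrentQueue<string>();

    // Naive scheduler: low-priority work runs only when the high-priority
    // queue is empty. If high-priority tasks keep arriving, "report" starves.
    static void RunOnce()
    {
        if (High.TryDequeue(out var task) || Low.TryDequeue(out task))
            Console.WriteLine($"running {task}");
    }

    static void Main()
    {
        Low.Enqueue("report"); // low-priority task, never selected below
        for (int i = 0; i < 5; i++)
        {
            High.Enqueue($"request-{i}"); // steady stream of high-priority work
            RunOnce();
        }
    }
}
```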
Livelock is similar to deadlock in that none of the participants make progress. The difference, though, is that instead of being frozen in a state of waiting for others to progress, the participants continuously change their state. An example scenario is when two participants have two identical resources available. Each tries to acquire a resource, but also checks whether the other needs it, too. If the resource is requested by the other participant, it tries to acquire the other instance of the resource instead. In the unfortunate case, the two participants may "bounce" between the two resources, never acquiring either, but always yielding to the other.
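A rough sketch of that "overly polite" behavior (the names are hypothetical, and since real livelocks depend on timing, a given run may or may not escape it): each worker grabs its resource, but releases it and retries whenever it sees the partner holding the other one, so both keep changing state without finishing.

```csharp
using System;
using System.Threading;

class LivelockExample
{
    class Resource { public volatile int Owner; } // 0 = free

    static void Worker(int id, Resource mine, Resource theirs)
    {
        while (true)
        {
            mine.Owner = id;               // take "my" resource
            if (theirs.Owner != 0)         // does the partner hold the other one?
            {
                mine.Owner = 0;            // be polite: release it and retry
                Thread.Sleep(1);           // both workers may retry in lockstep
                continue;
            }
            // (claiming the second resource is omitted for brevity)
            Console.WriteLine($"worker {id} sees both resources free");
            return;
        }
    }

    static void Main()
    {
        var r1 = new Resource();
        var r2 = new Resource();
        new Thread(() => Worker(1, r1, r2)) { IsBackground = true }.Start();
        new Thread(() => Worker(2, r2, r1)) { IsBackground = true }.Start();
        Thread.Sleep(2000);
        Console.WriteLine("workers keep changing state, possibly without ever finishing");
    }
}
```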
Race Condition
We talk about a race condition when an assumption about the ordering of a set of events might be violated by external non-deterministic effects. Race conditions often arise when multiple threads share mutable state, and the operations of the threads on that state may be interleaved, causing unexpected behavior. While this is a common case, shared state is not necessary for race conditions to occur. One example is a client sending unordered packets (e.g. UDP datagrams) P1 and P2 to a server. As the packets might travel via different network routes, it is possible that the server receives P2 first and P1 afterwards. If the messages contain no information about their sending order, it is impossible for the server to determine that they were sent in a different order. Depending on the meaning of the packets, this can cause race conditions.
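The shared-mutable-state case can be demonstrated with a non-atomic increment; a small sketch (the counter field and loop bounds are arbitrary): each task assumes "read, add one, write back" happens without interference, but the interleaving of the two threads can violate that assumption, so the final count is usually lower than expected.

```csharp
using System;
using System.Threading.Tasks;

class RaceConditionExample
{
    static int _counter;

    static async Task Main()
    {
        Task Increment() => Task.Run(() =>
        {
            for (int i = 0; i < 100_000; i++)
                _counter++; // not atomic: load, add, store can interleave
        });

        await Task.WhenAll(Increment(), Increment());
        Console.WriteLine($"expected 200000, got {_counter}"); // typically less
    }
}
```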
Note
The only guarantee that Akka.NET provides about messages sent between a given pair of actors is that their order is always preserved. See Message Delivery Reliability.
Non-blocking Guarantees (Progress Conditions)
As discussed in the previous sections, blocking is undesirable for several reasons, including the dangers of deadlocks and reduced throughput in the system. In the following sections we discuss various non-blocking properties of different strength.
Wait-freedom
A method is wait-free if every call is guaranteed to finish in a finite number of steps. If a method is bounded wait-free then the number of steps has a finite upper bound.
From this definition it follows that wait-free methods are never blocking, therefore deadlock cannot happen. Additionally, as each participant can progress after a finite number of steps (when the call finishes), wait-free methods are free of starvation.
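As a hedged illustration (not from the original text): on hardware where Interlocked.Increment maps to an atomic fetch-and-add instruction, each call completes in a bounded number of steps regardless of what other threads do, which matches the bounded wait-free property described above. The small counter below is only a sketch under that assumption.

```csharp
using System.Threading;

class WaitFreeCounter
{
    private int _value;

    // With an atomic fetch-and-add instruction underneath, this completes in a
    // bounded number of steps no matter how many other threads are incrementing:
    // no thread ever has to wait for, or retry because of, another.
    public int Increment() => Interlocked.Increment(ref _value);

    public int Read() => Volatile.Read(ref _value);
}
```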
Lock-freedom
Lock-freedom is a weaker property than wait-freedom. In the case of lock-free calls, infinitely often some method finishes in a finite number of steps. This definition implies that no deadlock is possible for lock-free calls. On the other hand, the guarantee that some call finishes in a finite number of steps is not enough to guarantee that all of them eventually finish. In other words, lock-freedom is not enough to guarantee the lack of starvation.
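A common lock-free pattern is a compare-and-swap retry loop; the stack sketch below (a hypothetical type, not an Akka.NET API) shows why lock-freedom does not rule out starvation: whenever several threads race, at least one CompareExchange succeeds, so the system as a whole progresses, but a particular thread can keep losing the race and retrying.

```csharp
using System.Threading;

class LockFreeStack<T>
{
    private sealed class Node
    {
        public readonly T Value;
        public Node Next;
        public Node(T value) => Value = value;
    }

    private Node _head;

    // Lock-free push: some thread's CompareExchange always succeeds, so the
    // system makes progress; an individual thread may retry indefinitely.
    public void Push(T value)
    {
        var node = new Node(value);
        while (true)
        {
            Node oldHead = Volatile.Read(ref _head);
            node.Next = oldHead;
            if (Interlocked.CompareExchange(ref _head, node, oldHead) == oldHead)
                return;
            // another thread changed the head first: retry
        }
    }
}
```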
Obstruction-freedom
Obstruction-freedom is the weakest non-blocking guarantee discussed here. A method is called obstruction-free if, from any point in time after which it executes in isolation (i.e. other threads make no steps, for example because they are suspended), it finishes in a bounded number of steps. All lock-free objects are obstruction-free, but the converse is generally not true.
Optimistic concurrency control (OCC) methods are usually obstruction-free. The OCC approach is that every participant tries to execute its operation on the shared object, but if a participant detects conflicts from others, it rolls back its modifications and tries again according to some schedule. If there is a point in time where one of the participants is the only one trying, the operation will succeed.
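A minimal sketch of that retry-on-conflict idea (the OptimisticCell type is made up for illustration): the update is computed speculatively without holding a lock, committed only if the value is unchanged, and otherwise discarded (the "rollback") and retried. If at some point only one participant is still trying, its commit can no longer be invalidated, which is exactly the obstruction-freedom guarantee.

```csharp
using System;
using System.Threading;

class OptimisticCell
{
    private int _value;

    // Optimistic update: read the current value, compute a new one without
    // holding any lock, then commit only if nobody changed the cell meanwhile.
    public int Update(Func<int, int> transform)
    {
        while (true)
        {
            int observed = Volatile.Read(ref _value);
            int proposed = transform(observed);  // speculative work, no lock held
            if (Interlocked.CompareExchange(ref _value, proposed, observed) == observed)
                return proposed;                 // commit succeeded
            // conflict detected: discard the speculative work and retry
        }
    }
}
```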
Recommended literature
- The Art of Multiprocessor Programming, M. Herlihy and N. Shavit, 2008. ISBN 978-0123705914
- Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606