Concurrency — headers <future> and <thread>

The high-level interface

async() and future<>

future<int> result1;     // int is the return type of func1
result1 = async(func1);  // start func1; it may be deferred until get() or wait() is called

future<int> result1(async(func1));

result1.get();           // obtain the return value

The callable passed to async() can be a function, a member function, a function object, or a lambda.

async([]{})

// never deferred
future<int> result1 = async(launch::async, func1);

// force deferral until f.get() or f.wait() is called
future<int> f(async(launch::deferred, func1));

auto f1 = async(launch::deferred, task1);
auto f2 = async(launch::deferred, task2);

auto val = b ? f1.get() : f2.get();   // only the task whose result is needed ever runs

auto f1 = async(task1);

try
{
    f1.get();   // rethrows any exception that escaped task1
}
catch (const exception& e)
{
    cerr << "EXCEPTION: " << e.what() << endl;
}

get() may be called on a future<> only once; afterwards the future is in an invalid state, which can be checked with valid().
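A minimal standalone sketch of this behavior:

#include <future>
#include <iostream>
using namespace std;

int main()
{
    future<int> f = async([]{ return 42; });

    cout << boolalpha;
    cout << f.valid() << endl;   // true: the future still refers to a shared state
    cout << f.get() << endl;     // 42
    cout << f.valid() << endl;   // false: get() released the shared state
}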

Calling wait() on a future forces the background operation the future stands for to start and waits until it finishes:
future<...> f(async(func));
...
f.wait();

f.wait_for(seconds(10));
f.wait_until(system_clock::now() + chrono::minutes(1));

Both wait_for() and wait_until() return one of three values:
1. future_status::deferred
   if async() deferred the operation and neither wait() nor get() has been called yet
2. future_status::timeout
   if the operation was started asynchronously, has not finished yet, and the wait timed out
3. future_status::ready
   if the operation has completed

future<...> f(async(task));
In a single-threaded environment the operation may be deferred and never started,
so check for that first:

if (f.wait_for(chrono::seconds(0)) != future_status::deferred) {
    while (f.wait_for(chrono::seconds(0)) != future_status::ready) {
        ...
        this_thread::yield();
        // or this_thread::sleep_for(milliseconds(100));
    }
}
...
auto r = f.get();

Example:

#include <chrono>
#include <iostream>
#include <future>
#include <thread>
#include <random>
#include <exception>

using namespace std;
using namespace std::chrono;

int doSomething(char c)
{
    default_random_engine dre(c);
    uniform_int_distribution<int> id(10, 1000);

    for (int i = 0; i < 10; ++i)
    {
        this_thread::sleep_for(milliseconds(id(dre)));
        cout.put(c).flush();
    }

    return c;
}

int main()
{
    auto f1 = async([]{ doSomething('.'); });
    auto f2 = async([]{ doSomething('+'); });

    if (f1.wait_for(seconds(0)) != future_status::deferred ||
        f2.wait_for(seconds(0)) != future_status::deferred)
    {
        while (f1.wait_for(seconds(0)) != future_status::ready &&   // leave the loop as soon as one task is done
               f2.wait_for(seconds(0)) != future_status::ready)
        {
            this_thread::yield();   // hint to reschedule so another thread can run
        }
    }
    cout.put('\n').flush();

    try {
        f1.get();
        f2.get();
    }
    catch (const exception& e) {
        cout << "\nException: " << e.what() << endl;
    }
    cout << "\ndone" << endl;
}

Passing arguments

The example above used a lambda that calls the background function:
auto f1 = async([]{ doSomething('.'); });

You can also pass arguments that already exist before the async() statement:
char c = '@';
auto f1 = async([=]{ doSomething(c); });

[=] means the lambda receives a copy of c.

char c = '@';
// pass a copy of c
auto f1 = async([=]{ doSomething(c); });
auto f1 = async(doSomething, c);

// pass c by reference
auto f1 = async([&]{ doSomething(c); });
auto f1 = async(doSomething, ref(c));

When using async(), you should pass all objects the target function needs by value,
so that async() operates only on local copies.
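A hedged sketch of why copies are safer — makeTask() and its local variable are invented for illustration; if a deferred task runs after the referenced local has been destroyed, the reference dangles:

future<void> makeTask()
{
    char c = '@';

    // dangerous: captures c by reference; a deferred task may run via get()/wait()
    // long after makeTask() has returned and c has been destroyed
    //return async(launch::deferred, [&]{ doSomething(c); });

    // safe: the task owns its own copy of c
    return async(launch::deferred, [=]{ doSomething(c); });
}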

Calling a member function
class X
{
public:
    void mem(int num) { cout << "memfunc: " << num << endl; }
};

int main()
{
    X x;
    auto f = async(&X::mem, x, 42);   // x.mem(42);
    f.get();
}
Pass async() a pointer to a member function; the argument that follows must be a reference or pointer to an object (or, as here, the object itself).
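Passing x as in the example copies it, so mem() operates on that copy. A short sketch of the variants (when passing a pointer or ref(), x must outlive the call):

X x;
auto f1 = async(&X::mem, x, 42);       // call mem() on a copy of x
auto f2 = async(&X::mem, &x, 42);      // call mem() on x itself via pointer
auto f3 = async(&X::mem, ref(x), 42);  // call mem() on x itself via reference wrapper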

Retrieving the result more than once

shared_future allows get() to be called multiple times.

int queryNum()
{
    cout << "read number: ";
    int num;
    cin >> num;

    if (!cin)
    {
        throw runtime_error("no number read");
    }
    return num;
}

void doSomething1(char c, shared_future<int> f)
{
    try {
        int num = f.get();

        for (int i = 0; i < num; ++i)
        {
            this_thread::sleep_for(chrono::milliseconds(100));
            cout.put(c).flush();
        }
    }
    catch (const exception& e)
    {
        cerr << "EXCEPTION in thread " << this_thread::get_id() << ": " << e.what() << endl;
    }
}

int main()
{
    try {
        shared_future<int> f = async(queryNum);

        auto f1 = async(launch::async, doSomething1, '.', f);
        auto f2 = async(launch::async, doSomething1, '+', f);
        auto f3 = async(launch::async, doSomething1, '*', f);

        f1.get();
        f2.get();
        f3.get();
    }
    catch (const exception& e) {
        cout << "\nEXCEPTION: " << e.what() << endl;
    }
    cout << "\ndone" << endl;
}

Make sure f lives at least as long as the threads started with it.

The low-level interface

thread and promise

std::thread

void doSomething();

thread t(doSomething);   // start the thread
...
t.join();                // wait for it to finish

t.detach();              // detach it

cout << thread::hardware_concurrency() << endl;   // number of hardware threads

Compared with async():
1. There is no launch policy: thread tries to start the target function in a new thread
   and throws system_error with error code resource_unavailable_try_again if it cannot.
2. There is no interface for the outcome of the thread; all you get is a unique thread ID.
3. If an exception occurs and is not caught inside the thread, the program immediately aborts via terminate().
4. You must say whether you want to wait for the thread to finish (join())
   or detach it from its parent so that it runs in the background beyond any control (detach()).
   If you do neither before the lifetime of the thread object ends, or if the object becomes
   the target of a move assignment, the program aborts via terminate() (a RAII join guard,
   sketched below, avoids this).
5. If you let threads run in the background and main() ends, all of them are brutally terminated.
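Because of item 4, a joinable thread whose destructor runs terminates the program; a common remedy is a small RAII guard that joins in its destructor. A minimal sketch (JoinGuard is not a standard class):

class JoinGuard
{
public:
    explicit JoinGuard(thread t) : t_(move(t)) { }
    ~JoinGuard() { if (t_.joinable()) t_.join(); }   // joins even if an exception unwinds the stack
    JoinGuard(const JoinGuard&) = delete;
    JoinGuard& operator=(const JoinGuard&) = delete;
private:
    thread t_;
};

// usage
{
    JoinGuard g{ thread([]{ doSomething('.'); }) };
    // ... work that might throw; the thread is joined when g is destroyed
}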

Detached threads

General rule:
a detached thread should preferably access only local copies of the data it needs.
...
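A minimal sketch of that rule (startBackgroundSave() is invented): give the detached thread copies of everything it needs, never references to objects its parent might destroy first.

void startBackgroundSave(const string& filename)
{
    // capture by value: the detached thread works on its own copy of filename,
    // which stays valid even after startBackgroundSave() returns
    thread t([filename]{
        this_thread::sleep_for(chrono::seconds(1));
        cout << "saved " << filename << endl;   // still risky if main() has already ended
    });
    t.detach();
}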

Thread ID

this_thread::get_id()

thread t(doSomething2, 5, '.');
t.get_id()

thread::id();   // default-constructed: a unique ID that represents "no thread"
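A sketch of using the default-constructed "no thread" ID as a sentinel — masterThreadID is invented here; until some thread's ID is assigned to it, it compares equal to no running thread:

thread::id masterThreadID;   // "no thread" until a thread is designated

void doSomething4()
{
    if (this_thread::get_id() == masterThreadID) {
        // only the designated master thread ends up here
    }
    // ...
}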

Promise

void doSomething3(promise<string>& p)
{
    try {
        cout << "read char ('x' for exception): ";
        char c = cin.get();
        if (c == 'x') {
            throw runtime_error(string("char ") + c + " read");
        }
        string s = string("char ") + c + " processed";
        p.set_value(move(s));                    // store a value
        //p.set_value_at_thread_exit(move(s));   // store it when the thread ends
    }
    catch (...) {
        p.set_exception(current_exception());    // store an exception
        //p.set_exception_at_thread_exit(current_exception());
    }
}

int main()
{
    try {
        promise<string> p;
        thread t(doSomething3, ref(p));     // pass the promise by reference
        t.detach();

        future<string> f(p.get_future());   // the future that belongs to the promise

        cout << "result: " << f.get() << endl;   // get() blocks until p.set_value() or p.set_exception()
    }
    catch (const exception& e) {
        cerr << "EXCEPTION: " << e.what() << endl;
    }
    catch (...) {
        cerr << "EXCEPTION " << endl;
    }
}

this_thread

this_thread::get_id()
this_thread::sleep_for(duration)
this_thread::sleep_until(timepoint)
this_thread::yield()   // hint to give up control so that another thread can be scheduled
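A short sketch combining these calls:

cout << "thread " << this_thread::get_id() << " started" << endl;
this_thread::sleep_for(chrono::milliseconds(100));                            // relative delay
this_thread::sleep_until(chrono::steady_clock::now() + chrono::seconds(1));   // absolute deadline
this_thread::yield();   // let the scheduler run another thread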

Beware of concurrency

When multiple threads work on the same data concurrently without synchronization, the only safe situation is that all of them only read the data.

Unless stated otherwise, the functions of the C++ standard library do not support a read or a write running concurrently with another write to the same data.

Mutexes and locks

mutex

mutex valMutex;

valMutex.lock();
++val;
valMutex.unlock();

lock_guard:

{
    // locks in the constructor, automatically unlocks in the destructor
    // (the extra braces release the lock as early as possible)
    lock_guard<mutex> lg(valMutex);
    ++val;
}

// recursive_mutex allows the same thread to lock several times;
// the lock is released with the last matching unlock
recursive_mutex dbMutex;
lock_guard<recursive_mutex> lg(dbMutex);
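A sketch of where recursive_mutex is needed — the Database class is invented: a locking member function calls another locking member function of the same object.

class Database
{
private:
    recursive_mutex dbMutex;
public:
    void insert()
    {
        lock_guard<recursive_mutex> lg(dbMutex);
        // ... modify the data ...
    }
    void createAndInsert()
    {
        lock_guard<recursive_mutex> lg(dbMutex);   // first lock
        insert();                                  // locks dbMutex again: fine with recursive_mutex,
                                                   // deadlock/undefined behavior with a plain mutex
    }
};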

Trying to acquire a lock

mutex m;
// try_lock() attempts to acquire the lock and returns true on success;
// it may fail spuriously, and it also returns false if someone else holds the lock.
// To still use a lock_guard afterwards, pass adopt_lock as an extra argument.
while (m.try_lock() == false)
{
    //doSomeOtherStuff();
}

lock_guard<mutex> lg1(m, adopt_lock);

timed_mutex m1;   // there is also recursive_timed_mutex
//if (m1.try_lock_until(system_clock::now() + chrono::seconds(1)))
if (m1.try_lock_for(chrono::seconds(1))) {
    lock_guard<timed_mutex> lg2(m1, adopt_lock);
}
else
{
    //couldNotGetTheLock();
}

Handling several locks at once easily leads to deadlock;
the global function lock() solves this.

mutex m1;
mutex m2;
{
    // lock() blocks until all mutexes are locked, or throws an exception
    lock(m1, m2);
    // after locking succeeded, use lock_guard with adopt_lock as the second argument
    // so that the mutexes are unlocked when the scope is left
    lock_guard<mutex> lockM1(m1, adopt_lock);
    lock_guard<mutex> lockM2(m2, adopt_lock);
    //...
}   // automatic unlock

// Alternatively use try_lock(): it returns -1 if all locks were acquired;
// otherwise it returns the index of the first lock that failed and unlocks those that succeeded.
int idx = try_lock(m1, m2);
if (idx < 0)
{
    lock_guard<mutex> lockM1(m1, adopt_lock);
    lock_guard<mutex> lockM2(m2, adopt_lock);
    //...
}   // automatic unlock
else
{
    cerr << "could not lock mutex m" << idx + 1 << endl;
}

// After lock() or try_lock(), hand each mutex over to a lock_guard with adopt_lock
// so that it is unlocked automatically.
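An alternative sketch using only standard C++11 facilities: construct unique_locks with defer_lock and let lock() acquire both; unlocking then happens automatically without adopt_lock.

mutex ma;
mutex mb;
{
    unique_lock<mutex> la(ma, defer_lock);   // associate with the mutex but do not lock yet
    unique_lock<mutex> lb(mb, defer_lock);
    lock(la, lb);                            // locks both without risking deadlock
    //...
}   // la and lb unlock automatically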

Calling something only once

once_flag oc;
call_once(oc, initialize);   // guarantees initialize() is called only once

once_flag oc1;
call_once(oc1, []{ staticData = initializeStaticData(); });

class X
{
private:
    mutable once_flag initDataFlag;
    void initData() const;
public:
    int GetData() const {
        call_once(initDataFlag, &X::initData, this);
        //...
    }
};
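A usage sketch (assuming GetData() is completed to return the lazily initialized data): however many threads call GetData() concurrently, initData() runs exactly once.

X x;
auto f1 = async(launch::async, [&x]{ x.GetData(); });
auto f2 = async(launch::async, [&x]{ x.GetData(); });
f1.get();
f2.get();   // initData() was executed once, by whichever call got there first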

Condition variables — <condition_variable>

A future can pass data from one thread to another only once, and its main purpose is to handle a thread's return value or exception.

A condition variable is used to synchronize the logical data-flow dependencies between threads.

Using a ready flag (a bool) to make one thread wait for another is a crude approach:
while (!readyFlag) {
    //...
    this_thread::yield();
}
It wastes precious CPU time repeatedly checking the flag,
and it is hard to choose a suitable sleep period:
if the interval between checks is too short, CPU time is still wasted on the checks; if it is too long, reactions are delayed.

A better approach is a condition variable, with which one thread can wake up one or more other waiting threads.

Include <mutex> and <condition_variable>.

mutex readyMutex;
condition_variable readyCondVar;

// the thread (or one of the threads) for which the condition finally holds must call
readyCondVar.notify_one();
// or
readyCondVar.notify_all();
// the thread(s) waiting for the condition must call
unique_lock<mutex> l(readyMutex);
readyCondVar.wait(l);

However, spurious wakeups may occur: wait() can return even though the condition variable has not been notified.
Spurious wakeups cannot be predicted; they are effectively random.
So after a wakeup you still need code that checks whether the condition actually holds,
e.g. whether the data is really ready — something like a ready flag is still needed.

#include <condition_variable>
#include <mutex>
#include <future>
#include <iostream>
using namespace std;

bool readyFlag;   // indicates that the condition really holds
mutex readyMutex;
condition_variable readyCondVar;

void thread1()
{
    cout << "<return>" << endl;
    cin.get();
    {
        lock_guard<mutex> lg(readyMutex);
        readyFlag = true;
    }
    readyCondVar.notify_one();
}

void thread2()
{
    {
        // unique_lock is required here (lock_guard will not do),
        // because wait() internally unlocks and re-locks the mutex
        unique_lock<mutex> ul(readyMutex);
        // the lambda passed as second argument checks whether the condition really holds;
        // wait() returns only when it yields true
        readyCondVar.wait(ul, []{ return readyFlag; });
        // equivalent to
        //while (!readyFlag) {
        //    readyCondVar.wait(ul);
        //}
    }
    cout << "done" << endl;
}

int main()
{
    auto f1 = async(launch::async, thread1);
    auto f2 = async(launch::async, thread2);

    f1.wait();
    f2.wait();
}

Example

std::queue<int> queue1;   // used concurrently; protected by a mutex and a condition variable (requires <queue>)
mutex queueMutex;
condition_variable queueCondVar;

void provider(int val)
{
    for (int i = 0; i < 6; ++i)
    {
        {
            lock_guard<mutex> lg(queueMutex);
            queue1.push(val + i);
        }
        queueCondVar.notify_one();
        this_thread::sleep_for(chrono::milliseconds(val));
    }
}

void consumer(int num)
{
    while (true)
    {
        int val;
        {
            unique_lock<mutex> ul(queueMutex);
            queueCondVar.wait(ul, []{ return !queue1.empty(); });
            val = queue1.front();
            queue1.pop();
            cout << "consumer" << num << ": " << val << endl;

            // wait_for() waits for a duration, wait_until() for a time point:
            //if (queueCondVar.wait_for(ul, chrono::seconds(1), []{ return !queue1.empty(); }))
            //if (queueCondVar.wait_for(ul, chrono::seconds(1)) == cv_status::no_timeout)   // this variant does not re-check the condition
            //{
            //    val = queue1.front();
            //    queue1.pop();
            //    cout << "consumer" << num << ": " << val << endl;
            //}
        }
    }
}

int main()
{
    auto p1 = async(launch::async, provider, 100);
    auto p2 = async(launch::async, provider, 300);
    auto p3 = async(launch::async, provider, 500);

    auto c1 = async(launch::async, consumer, 1);
    auto c2 = async(launch::async, consumer, 2);

    // the consumers never finish, so instead of waiting on them
    // the program simply ends after a key press
    //c1.wait();
    getchar();
    exit(0);
}
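The consumers above never terminate, hence the getchar()/exit(0). A hedged sketch of a cleaner shutdown based on the commented wait_for() variant: a done flag (invented here, protected by the same mutex) lets a consumer drain the queue and then return.

bool done = false;   // set to true under queueMutex once all providers have finished

void consumer2(int num)
{
    while (true)
    {
        unique_lock<mutex> ul(queueMutex);
        // wake up on a notification or every 100 milliseconds to re-check the state
        queueCondVar.wait_for(ul, chrono::milliseconds(100),
                              []{ return !queue1.empty() || done; });
        if (!queue1.empty())
        {
            int val = queue1.front();
            queue1.pop();
            ul.unlock();
            cout << "consumer" << num << ": " << val << endl;
        }
        else if (done)
        {
            return;   // queue drained and no more data will arrive
        }
    }
}

main() would then set done under queueMutex once the providers have finished, call queueCondVar.notify_all(), and wait on the consumer futures instead of calling exit(0).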

atomic

Even for fundamental data types, reads and writes are not atomic (indivisible); readyFlag might be read while a write to it is only half done.
In addition, the generated code may reorder operations.

A mutex solves both problems, but considering the resources it needs and the potential exclusive blocking,
a mutex can be a comparatively expensive operation, so it may be worth replacing mutex and lock with atomics.

atomic<bool> readyFlag(false);

void thread1()
{
    //...
    readyFlag.store(true);      // assign a new value
}

void thread2()
{
    while (!readyFlag.load())   // read the current value
    {
        this_thread::sleep_for(chrono::milliseconds(100));
    }
    //...
}

long data;   // plain variable; the (sequentially consistent) store/load of readyFlag orders access to it

void provider()
{
    while (true)
    {
        cout << "<return>" << endl;
        char arr[100];
        cin.getline(arr, 100);

        data = 42;
        readyFlag.store(true);
    }
}

void consumer()
{
    while (true)
    {
        while (!readyFlag.load()) {
            cout.put('.').flush();
            this_thread::sleep_for(chrono::milliseconds(50));
        }
        cout << "\nvalue : " << data << endl;
        readyFlag = false;
    }
}

int main()
{
    auto p = async(launch::async, provider);
    auto c = async(launch::async, consumer);

    c.wait();
}

atomic<int> ai(0);
int x = ai;
ai = 10;
ai++;
ai -= 17;

atomic<int> a;
atomic_init(&a, 0);   // an atomic that was not initialized can be initialized with this function
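Beyond load(), store(), and the overloaded operators, atomics provide read-modify-write operations; a short sketch of two common ones:

atomic<int> counter(0);

int old = counter.fetch_add(5);          // atomically counter += 5, returns the previous value (0)

int expected = 5;
// store 99 only if counter still holds 5; otherwise expected is updated to the current value
bool exchanged = counter.compare_exchange_strong(expected, 99);

cout << counter.is_lock_free() << endl;  // whether the implementation needs an internal lock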

#include "stdafx.h"

//#include <memory>
#include <chrono>
#include <ctime>
#include <string>
#include <iostream>
#include <future>
#include <thread>
#include <random>
#include <exception>
#include <vector> using namespace std;
using namespace std::chrono; int doSomething(/*const char&*/char c)
{
default_random_engine dre(c);
uniform_int_distribution<int> id(10, 1000); for (int i = 0; i < 10; ++i)
{
this_thread::sleep_for(milliseconds(id(dre)));
cout.put(c).flush();
} return c;
} int func1()
{
return doSomething('.');
} int func2()
{
return doSomething('+');
}
#include <list>
void task1()
{
list<int> v;
while (true)
{
for (int i = 0; i < 1000000; ++i)
{
v.push_back(i);
}
cout.put('.').flush();
}
} #if 0
int quickComputation(); //快速直接 quick and dirty
int accurateComputation(); //精确但是慢 future<int> f; int bestResultInTime()
{
auto tp = chrono::system_clock::now() + chrono::minutes(1); f = async(launch::async, accurateComputation);
int guess = quickComputation(); future_status s = f.wait_until(tp); //等待 if (s == future_status::ready) //完成
{
return f.get();
}
else
{
return guess;
}
}
std::thread — starting, detaching, and joining (the doSomething2() referenced earlier):

void doSomething2(int num, char c)
{
    try {
        default_random_engine dre(42 * c);
        uniform_int_distribution<int> id(10, 1000);
        for (int i = 0; i < num; ++i)
        {
            this_thread::sleep_for(milliseconds(id(dre)));
            cout.put(c).flush();
        }
    }
    catch (...) {
        cerr << "THREAD-EXCEPTION (thread " << this_thread::get_id() << ")" << endl;
    }
}

int main()
{
    try {
        thread t1(doSomething2, 5, '.');
        cout << "- start fg thread " << t1.get_id() << endl;

        for (int i = 0; i < 5; ++i)
        {
            thread t(doSomething2, 10, 'a' + i);
            cout << "- detach start bg thread " << t.get_id() << endl;
            t.detach();
        }
        cin.get();

        cout << "- join fg thread " << t1.get_id() << endl;
        t1.join();
    }
    catch (const exception& e) {
        cerr << "EXCEPTION: " << e.what() << endl;
    }
}
Protecting concurrent output with a mutex (print() serializes whole lines):

mutex printMutex;

void print(const string& s)
{
    lock_guard<mutex> l(printMutex);
    for (char c : s)
    {
        cout.put(c);
    }
    cout << endl;
}

// definitions used by the call_once() snippets above:
int initialize()
{
    return 0;
}

vector<string> initializeStaticData()
{
    vector<string> staticData;
    return staticData;
}

vector<string> staticData;

int main()
{
    auto f1 = async(launch::async, print, "Hello from a first thread");
    auto f2 = async(launch::async, print, "Hello from a second thread");
    print("Hello from the main thread");

    try {   // make sure the f1 and f2 threads finish before the mutex is destroyed
        f1.wait();
        f2.wait();
    }
    catch (...)
    {
    }

    // Locking a plain (non-recursive) mutex recursively causes a deadlock;
    // the second lock may throw system_error with error code resource_deadlock_would_occur:
    //lock_guard<mutex> l(printMutex);
    //print("Hello from the main thread");
}