Some Examples of Parallel Computing in Fortran
The following examples are taken from the LLNL OpenMP tutorial exercise page: https://computing.llnl.gov/tutorials/openMP/exercise.html
1. Printing the thread ID (Hello World)
C******************************************************************************
C FILE: omp_hello.f
C DESCRIPTION:
C OpenMP Example - Hello World - Fortran Version
C In this simple example, the master thread forks a parallel region.
C All threads in the team obtain their unique thread number and print it.
C The master thread only prints the total number of threads. Two OpenMP
C library routines are used to obtain the number of threads and each
C thread's number.
C AUTHOR: Blaise Barney 5/99
C LAST REVISED:
C******************************************************************************

      PROGRAM HELLO

      INTEGER NTHREADS, TID, OMP_GET_NUM_THREADS, OMP_GET_THREAD_NUM

C     Fork a team of threads giving them their own copies of variables
!$OMP PARALLEL PRIVATE(NTHREADS, TID)

C     Obtain thread number
      TID = OMP_GET_THREAD_NUM()
      PRINT *, 'Hello World from thread = ', TID

C     Only master thread does this
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads = ', NTHREADS
      END IF

C     All threads join master thread and disband
!$OMP END PARALLEL

      END
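The listing above is in fixed-form Fortran 77 style, with the OpenMP runtime routines declared as INTEGER functions. As an aside (my own sketch, not part of the LLNL tutorial), the same example in free-form Fortran 90 style uses the OMP_LIB module instead of explicit declarations; with gfortran it would typically be built with something like gfortran -fopenmp omp_hello.f90:

! hello_f90.f90 - free-form sketch of the same Hello World example.
! Assumes an OpenMP-capable compiler that provides the OMP_LIB module.
PROGRAM HELLO_F90
  USE OMP_LIB
  IMPLICIT NONE
  INTEGER :: NTHREADS, TID

!$OMP PARALLEL PRIVATE(NTHREADS, TID)
  ! Every thread reports its own number
  TID = OMP_GET_THREAD_NUM()
  PRINT *, 'Hello World from thread = ', TID
  ! Only the master thread reports the team size
  IF (TID == 0) THEN
    NTHREADS = OMP_GET_NUM_THREADS()
    PRINT *, 'Number of threads = ', NTHREADS
  END IF
!$OMP END PARALLEL
END PROGRAM HELLO_F90

Running with, for example, OMP_NUM_THREADS=4 should print four greetings and one thread count.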
2. Loop work-sharing
C******************************************************************************
C FILE: omp_workshare1.f
C DESCRIPTION:
C OpenMP Example - Loop Work-sharing - Fortran Version
C In this example, the iterations of a loop are scheduled dynamically
C across the team of threads. A thread will perform CHUNK iterations
C at a time before being scheduled for the next CHUNK of work.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM WORKSHARE1

      INTEGER NTHREADS, TID, OMP_GET_NUM_THREADS,
     +  OMP_GET_THREAD_NUM, N, CHUNKSIZE, CHUNK, I
      PARAMETER (N=100)
      PARAMETER (CHUNKSIZE=10)
      REAL A(N), B(N), C(N)

!     Some initializations
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      CHUNK = CHUNKSIZE

!$OMP PARALLEL SHARED(A,B,C,NTHREADS,CHUNK) PRIVATE(I,TID)

      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads =', NTHREADS
      END IF
      PRINT *, 'Thread',TID,' starting...'

!$OMP DO SCHEDULE(DYNAMIC,CHUNK)
      DO I = 1, N
        C(I) = A(I) + B(I)
        WRITE(*,100) TID,I,C(I)
 100    FORMAT(' Thread',I2,': C(',I3,')=',F8.2)
      ENDDO
!$OMP END DO NOWAIT

      PRINT *, 'Thread',TID,' done.'

!$OMP END PARALLEL

      END
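When the parallel region contains essentially nothing but the loop, the PARALLEL and DO directives can be combined into one construct. A minimal sketch of the same vector add using the combined form (my own variant, not from the tutorial; the per-iteration WRITE and the thread reports are dropped for brevity):

      PROGRAM WORKSHARE1B
      INTEGER N, CHUNK, I
      PARAMETER (N=100)
      REAL A(N), B(N), C(N)

C     Same initializations as above
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      CHUNK = 10

C     Combined parallel loop with the same dynamic schedule
!$OMP PARALLEL DO SHARED(A,B,C,CHUNK) PRIVATE(I)
!$OMP& SCHEDULE(DYNAMIC,CHUNK)
      DO I = 1, N
        C(I) = A(I) + B(I)
      ENDDO
!$OMP END PARALLEL DO

      PRINT *, 'C(N) = ', C(N)
      END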
3. Sections work-sharing
C******************************************************************************
C FILE: omp_workshare2.f
C DESCRIPTION:
C OpenMP Example - Sections Work-sharing - Fortran Version
C In this example, the OpenMP SECTION directive is used to assign
C different array operations to each thread that executes a SECTION.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM WORKSHARE2

      INTEGER N, I, NTHREADS, TID, OMP_GET_NUM_THREADS,
     +  OMP_GET_THREAD_NUM
      PARAMETER (N=50)
      REAL A(N), B(N), C(N), D(N)

!     Some initializations
      DO I = 1, N
        A(I) = I * 1.5
        B(I) = I + 22.35
        C(N) = 0.0
        D(N) = 0.0
      ENDDO

!$OMP PARALLEL SHARED(A,B,C,D,NTHREADS), PRIVATE(I,TID)
      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads =', NTHREADS
      END IF
      PRINT *, 'Thread',TID,' starting...'

!$OMP SECTIONS

!$OMP SECTION
      PRINT *, 'Thread',TID,' doing section 1'
      DO I = 1, N
        C(I) = A(I) + B(I)
        WRITE(*,100) TID,I,C(I)
 100    FORMAT(' Thread',I2,': C(',I2,')=',F8.2)
      ENDDO

!$OMP SECTION
      PRINT *, 'Thread',TID,' doing section 2'
      DO I = 1, N
        D(I) = A(I) * B(I)
        WRITE(*,100) TID,I,D(I)
      ENDDO

!$OMP END SECTIONS NOWAIT

      PRINT *, 'Thread',TID,' done.'

!$OMP END PARALLEL

      END
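Because there are only two SECTION blocks, at most two threads get work; the remaining threads fall through to END SECTIONS (and, with NOWAIT, do not even wait there). A compact sketch of the same idea using the combined PARALLEL SECTIONS form (my own, not from the tutorial):

      PROGRAM WORKSHARE2B
      INTEGER N, I
      PARAMETER (N=50)
      REAL A(N), B(N), C(N), D(N)

      DO I = 1, N
        A(I) = I * 1.5
        B(I) = I + 22.35
      ENDDO

C     One section computes the sums, the other the products
!$OMP PARALLEL SECTIONS SHARED(A,B,C,D) PRIVATE(I)
!$OMP SECTION
      DO I = 1, N
        C(I) = A(I) + B(I)
      ENDDO
!$OMP SECTION
      DO I = 1, N
        D(I) = A(I) * B(I)
      ENDDO
!$OMP END PARALLEL SECTIONS

      PRINT *, 'C(N) = ', C(N), '  D(N) = ', D(N)
      END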
4. Combined parallel loop reduction
C******************************************************************************
C FILE: omp_reduction.f
C DESCRIPTION:
C OpenMP Example - Combined Parallel Loop Reduction - Fortran Version
C This example demonstrates a sum reduction within a combined parallel loop
C construct. Notice that default data element scoping is assumed - there
C are no clauses specifying shared or private variables. OpenMP will
C automatically make loop index variables private within team threads, and
C global variables shared.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM REDUCTION

      INTEGER I, N
      REAL A(100), B(100), SUM

!     Some initializations
      N = 100
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      SUM = 0.0

!$OMP PARALLEL DO REDUCTION(+:SUM)
      DO I = 1, N
        SUM = SUM + (A(I) * B(I))
      ENDDO

      PRINT *, '   Sum = ', SUM
      END
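REDUCTION(+:SUM) gives each thread a private copy of SUM initialized to zero and adds the private copies back into the shared SUM when the loop finishes. A hedged sketch of roughly what that is equivalent to, written out by hand (my own illustration, not from the tutorial):

      PROGRAM REDUCTION2
      INTEGER I, N
      REAL A(100), B(100), SUM, LOCALSUM

      N = 100
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      SUM = 0.0

!$OMP PARALLEL SHARED(A,B,SUM,N) PRIVATE(I,LOCALSUM)
C     Each thread accumulates into its own private partial sum
      LOCALSUM = 0.0
!$OMP DO
      DO I = 1, N
        LOCALSUM = LOCALSUM + (A(I) * B(I))
      ENDDO
!$OMP END DO
C     Combine the partial sums one thread at a time
!$OMP CRITICAL
      SUM = SUM + LOCALSUM
!$OMP END CRITICAL
!$OMP END PARALLEL

      PRINT *, '   Sum = ', SUM
      END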
5. Orphaned parallel loop reduction
C******************************************************************************
C FILE: omp_orphan.f
C DESCRIPTION:
C OpenMP Example - Parallel region with an orphaned directive - Fortran
C Version
C This example demonstrates a dot product being performed by an orphaned
C loop reduction construct. Scoping of the reduction variable is critical.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM ORPHAN
      COMMON /DOTDATA/ A, B, SUM
      INTEGER I, VECLEN
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN), SUM

      DO I=1, VECLEN
        A(I) = 1.0 * I
        B(I) = A(I)
      ENDDO
      SUM = 0.0

!$OMP PARALLEL
      CALL DOTPROD
!$OMP END PARALLEL

      WRITE(*,*) "Sum = ", SUM
      END


      SUBROUTINE DOTPROD
      COMMON /DOTDATA/ A, B, SUM
      INTEGER I, TID, OMP_GET_THREAD_NUM, VECLEN
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN), SUM

      TID = OMP_GET_THREAD_NUM()

!$OMP DO REDUCTION(+:SUM)
      DO I=1, VECLEN
        SUM = SUM + (A(I)*B(I))
        PRINT *, '  TID= ',TID,'I= ',I
      ENDDO

      RETURN
      END
6. Matrix multiply
C******************************************************************************
C FILE: omp_mm.f
C DESCRIPTION:
C OpenMp Example - Matrix Multiply - Fortran Version
C Demonstrates a matrix multiply using OpenMP. Threads share row iterations
C according to a predefined chunk size.
C AUTHOR: Blaise Barney
C LAST REVISED: Blaise Barney
C******************************************************************************

      PROGRAM MATMULT

      INTEGER NRA, NCA, NCB, TID, NTHREADS, I, J, K, CHUNK,
     +        OMP_GET_NUM_THREADS, OMP_GET_THREAD_NUM
C     number of rows in matrix A
      PARAMETER (NRA=62)
C     number of columns in matrix A
      PARAMETER (NCA=15)
C     number of columns in matrix B
      PARAMETER (NCB=7)

      REAL*8 A(NRA,NCA), B(NCA,NCB), C(NRA,NCB)

C     Set loop iteration chunk size
      CHUNK = 10

C     Spawn a parallel region explicitly scoping all variables
!$OMP PARALLEL SHARED(A,B,C,NTHREADS,CHUNK) PRIVATE(TID,I,J,K)
      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Starting matrix multiple example with', NTHREADS,
     +          'threads'
        PRINT *, 'Initializing matrices'
      END IF

C     Initialize matrices
!$OMP DO SCHEDULE(STATIC, CHUNK)
      DO 30 I=1, NRA
        DO 30 J=1, NCA
          A(I,J) = (I-1)+(J-1)
  30  CONTINUE
!$OMP DO SCHEDULE(STATIC, CHUNK)
      DO 40 I=1, NCA
        DO 40 J=1, NCB
          B(I,J) = (I-1)*(J-1)
  40  CONTINUE
!$OMP DO SCHEDULE(STATIC, CHUNK)
      DO 50 I=1, NRA
        DO 50 J=1, NCB
          C(I,J) = 0
  50  CONTINUE

C     Do matrix multiply sharing iterations on outer loop
C     Display who does which iterations for demonstration purposes
      PRINT *, 'Thread', TID, 'starting matrix multiply...'
!$OMP DO SCHEDULE(STATIC, CHUNK)
      DO 60 I=1, NRA
        PRINT *, 'Thread', TID, 'did row', I
        DO 60 J=1, NCB
          DO 60 K=1, NCA
            C(I,J) = C(I,J) + A(I,K) * B(K,J)
  60  CONTINUE

C     End of parallel region
!$OMP END PARALLEL

C     Print results
      PRINT *, '******************************************************'
      PRINT *, 'Result Matrix:'
      DO 90 I=1, NRA
        DO 80 J=1, NCB
          WRITE(*,70) C(I,J)
  70      FORMAT(2x,f8.2,$)
  80    CONTINUE
        PRINT *, ' '
  90  CONTINUE
      PRINT *, '******************************************************'
      PRINT *, 'Done.'

      END
7. Get and print environment information
C******************************************************************************
C FILE: omp_getEnvInfo.f
C DESCRIPTION:
C OpenMP Example - Get Environment Information - Fortran Version
C The master thread queries and prints selected environment information.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM GETINFO

      INTEGER NTHREADS, TID, OMP_GET_NUM_THREADS,
     + OMP_GET_THREAD_NUM, OMP_GET_NUM_PROCS, OMP_GET_MAX_THREADS,
     + OMP_IN_PARALLEL, OMP_GET_DYNAMIC, OMP_GET_NESTED,
     + PROCS, MAXT
C These are for AIX compilations
C     INTEGER INPAR, DYNAMIC, NESTED
C These are for non-AIX compilations
      LOGICAL INPAR, DYNAMIC, NESTED

C     Start parallel region
!$OMP PARALLEL PRIVATE(NTHREADS, TID)

C     Obtain thread number
      TID = OMP_GET_THREAD_NUM()

C     Only master thread does this
      IF (TID .EQ. 0) THEN

        PRINT *, 'Thread',tid,'getting environment information'

C     Get environment information
        PROCS = OMP_GET_NUM_PROCS()
        NTHREADS = OMP_GET_NUM_THREADS()
        MAXT = OMP_GET_MAX_THREADS()
        INPAR = OMP_IN_PARALLEL()
        DYNAMIC = OMP_GET_DYNAMIC()
        NESTED = OMP_GET_NESTED()

C     Print environment information
        PRINT *, 'Number of processors = ', PROCS
        PRINT *, 'Number of threads = ', NTHREADS
        PRINT *, 'Max threads = ', MAXT
        PRINT *, 'In parallel? = ', INPAR
        PRINT *, 'Dynamic threads enabled? = ', DYNAMIC
        PRINT *, 'Nested parallelism supported? = ', NESTED

      END IF

C     Done
!$OMP END PARALLEL

      END
8. Programs with bugs
(1) omp_bug1.f
C******************************************************************************
C FILE: omp_bug1.f
C DESCRIPTION:
C This example attempts to show use of the PARALLEL DO construct. However
C it will generate errors at compile time. Try to determine what is causing
C the error. See omp_bug1fix.f for a corrected version.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM WORKSHARE3

      INTEGER TID, OMP_GET_THREAD_NUM, N, I, CHUNKSIZE, CHUNK
      PARAMETER (N=50)
      PARAMETER (CHUNKSIZE=5)
      REAL A(N), B(N), C(N)

!     Some initializations
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      CHUNK = CHUNKSIZE

!$OMP PARALLEL DO SHARED(A,B,C,CHUNK)
!$OMP& PRIVATE(I,TID)
!$OMP& SCHEDULE(STATIC,CHUNK)

      TID = OMP_GET_THREAD_NUM()

      DO I = 1, N
        C(I) = A(I) + B(I)
        PRINT *,'TID= ',TID,'I= ',I,'C(I)= ',C(I)
      ENDDO

!$OMP END PARALLEL DO

      END
(2) omp_bug1fix.f
C******************************************************************************
C FILE: omp_bug1fix.f
C DESCRIPTION:
C This is a corrected version of the omp_bug1.f example. Corrections
C include removing all statements between the PARALLEL DO construct and
C the actual DO loop, and introducing logic to preserve the ability to
C query a thread's id and print it from inside the DO loop.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM WORKSHARE4

      INTEGER TID, OMP_GET_THREAD_NUM, N, I, CHUNKSIZE, CHUNK
      PARAMETER (N=50)
      PARAMETER (CHUNKSIZE=5)
      REAL A(N), B(N), C(N)
      CHARACTER FIRST_TIME

!     Some initializations
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO
      CHUNK = CHUNKSIZE
      FIRST_TIME = 'Y'

!$OMP PARALLEL DO SHARED(A,B,C,CHUNK)
!$OMP& PRIVATE(I,TID)
!$OMP& SCHEDULE(STATIC,CHUNK)
!$OMP& FIRSTPRIVATE(FIRST_TIME)

      DO I = 1, N
        IF (FIRST_TIME .EQ. 'Y') THEN
          TID = OMP_GET_THREAD_NUM()
          FIRST_TIME = 'N'
        ENDIF
        C(I) = A(I) + B(I)
        PRINT *,'TID= ',TID,'I= ',I,'C(I)= ',C(I)
      ENDDO

!$OMP END PARALLEL DO

      END
(3) omp_bug2.f
C******************************************************************************
C FILE: omp_bug2.f
C DESCRIPTION:
C Another OpenMP program with a bug
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM BUG2

      INTEGER NTHREADS, I, TID, OMP_GET_NUM_THREADS,
     +  OMP_GET_THREAD_NUM
      REAL*8 TOTAL

C     Spawn parallel region
!$OMP PARALLEL

C     Obtain thread number
      TID = OMP_GET_THREAD_NUM()

C     Only master thread does this
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads = ', NTHREADS
      END IF
      PRINT *, 'Thread ',TID,'is starting...'

!$OMP BARRIER

C     Do some work
      TOTAL = 0.0
!$OMP DO SCHEDULE(DYNAMIC,10)
      DO I=1, 1000000
        TOTAL = TOTAL + I * 1.0
      END DO

      WRITE(*,100) TID,TOTAL
 100  FORMAT('Thread',I2,' is done! Total= ',E12.4)

!$OMP END PARALLEL

      END
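The bug here is that TID and TOTAL are shared by default, so every thread races on the same TOTAL (and on TID). One possible correction, sketched by me rather than taken from the tutorial, declares both PRIVATE; each thread then reports only the partial total of the iterations it was assigned by the worksharing DO:

      PROGRAM BUG2FIXED
      INTEGER NTHREADS, I, TID, OMP_GET_NUM_THREADS,
     +  OMP_GET_THREAD_NUM
      REAL*8 TOTAL

C     TID and TOTAL are now private, so there is no data race
!$OMP PARALLEL PRIVATE(TID, TOTAL)
      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads = ', NTHREADS
      END IF
!$OMP BARRIER

C     Each thread sums only its own share of the iterations
      TOTAL = 0.0
!$OMP DO SCHEDULE(DYNAMIC,10)
      DO I=1, 1000000
        TOTAL = TOTAL + I * 1.0
      END DO

      WRITE(*,100) TID,TOTAL
 100  FORMAT('Thread',I2,' is done! Partial total= ',E12.4)
!$OMP END PARALLEL
      END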
(4) omp_bug3.f
C******************************************************************************
C FILE: omp_bug3.f
C DESCRIPTION:
C Run time bug
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM BUG3

      INTEGER N, I, NTHREADS, TID, SECTION, OMP_GET_NUM_THREADS,
     +        OMP_GET_THREAD_NUM
      PARAMETER (N=50)
      REAL A(N), B(N), C(N)

C     Some initializations
      DO I = 1, N
        A(I) = I * 1.0
        B(I) = A(I)
      ENDDO

!$OMP PARALLEL PRIVATE(C,I,TID,SECTION)

      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads = ', NTHREADS
      END IF

C     Use barriers for clean output
!$OMP BARRIER
      PRINT *, 'Thread ',TID,' starting...'
!$OMP BARRIER

!$OMP SECTIONS

!$OMP SECTION
      SECTION = 1
      DO I = 1, N
        C(I) = A(I) * B(I)
      ENDDO
      CALL PRINT_RESULTS(C, TID, SECTION)

!$OMP SECTION
      SECTION = 2
      DO I = 1, N
        C(I) = A(I) + B(I)
      ENDDO
      CALL PRINT_RESULTS(C, TID, SECTION)

!$OMP END SECTIONS

C     Use barrier for clean output
!$OMP BARRIER
      PRINT *, 'Thread',tid,' exiting...'

!$OMP END PARALLEL

      END


      SUBROUTINE PRINT_RESULTS(C, TID, SECTION)

      INTEGER TID, SECTION, N, I, J
      PARAMETER (N=50)
      REAL C(N)

      J = 1
C     Use critical for clean output
!$OMP CRITICAL
      PRINT *, ' '
      PRINT *, 'Thread',TID,' did section',SECTION
      DO I=1, N
        WRITE(*,100) C(I)
 100    FORMAT(E12.6,$)
        J = J + 1
        IF (J .EQ. 6) THEN
          PRINT *, ' '
          J = 1
        END IF
      END DO
      PRINT *, ' '
!$OMP END CRITICAL

!$OMP BARRIER
      PRINT *,'Thread',TID,' done and synchronized'

      END SUBROUTINE PRINT_RESULTS
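The runtime failure comes from the !$OMP BARRIER inside PRINT_RESULTS: the subroutine is called from within a SECTIONS construct, so only the threads executing a section can ever reach that barrier while the rest of the team never does, and a barrier is not permitted inside a worksharing region in any case. A possible repair (my own sketch, not taken from the tutorial) simply drops the orphaned barrier and leaves synchronization to the barriers in the main program:

      SUBROUTINE PRINT_RESULTS(C, TID, SECTION)

      INTEGER TID, SECTION, N, I, J
      PARAMETER (N=50)
      REAL C(N)

      J = 1
C     CRITICAL is kept so the two sections do not interleave their output
!$OMP CRITICAL
      PRINT *, ' '
      PRINT *, 'Thread',TID,' did section',SECTION
      DO I=1, N
        WRITE(*,100) C(I)
 100    FORMAT(E12.6,$)
        J = J + 1
        IF (J .EQ. 6) THEN
          PRINT *, ' '
          J = 1
        END IF
      END DO
      PRINT *, ' '
!$OMP END CRITICAL

C     No BARRIER here: synchronization happens back in the main program
      END SUBROUTINE PRINT_RESULTS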
(5) omp_bug4.f
C******************************************************************************
C FILE: omp_bug4.f
C DESCRIPTION:
C This very simple program causes a segmentation fault.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM BUG4

      INTEGER N, NTHREADS, TID, I, J, OMP_GET_NUM_THREADS,
     +  OMP_GET_THREAD_NUM
      PARAMETER(N=1048)
      REAL*8 A(N,N)

C     Fork a team of threads with explicit variable scoping
!$OMP PARALLEL SHARED(NTHREADS) PRIVATE(I,J,TID,A)

C     Obtain/print thread info
      TID = OMP_GET_THREAD_NUM()
      IF (TID .EQ. 0) THEN
        NTHREADS = OMP_GET_NUM_THREADS()
        PRINT *, 'Number of threads = ', NTHREADS
      END IF
      PRINT *, 'Thread',TID,' starting...'

C     Each thread works on its own private copy of the array
      DO I=1,N
        DO J=1,N
          A(J,I) = TID + I + J
        END DO
      END DO

C     For confirmation
      PRINT *, 'Thread',TID,'done. Last element=',A(N,N)

C     All threads join master thread and disband
!$OMP END PARALLEL

      END
(6) omp_bug4fix (csh script)
#!/bin/csh
#******************************************************************************
# FILE: omp_bug4fix
# DESCRIPTION:
#   This script is used to set the thread stack size limit to accommodate
#   the omp_bug4 example. The example code requires ~16MB per thread. For
#   safety, this script sets the stack limit to 20MB. Note that the way
#   to do this differs between architectures.
# AUTHOR: Blaise Barney
# LAST REVISED:
#*****************************************************************************/

# This is for all systems
limit stacksize unlimited

# This is for IBM AIX systems
setenv XLSMPOPTS "stack=20000000"

# This is for Linux systems
setenv KMP_STACKSIZE 20000000

# This is for HP/Compaq Tru64 systems
setenv MP_STACK_SIZE 20000000

# Now call the executable - change the name to match yours
omp_bug4
(7) omp_bug5.f
C******************************************************************************
C FILE: omp_bug5.f
C DESCRIPTION:
C Using SECTIONS, two threads initialize their own array and then add
C it to the other's array, however a deadlock occurs.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM BUG5

      INTEGER*8 LOCKA, LOCKB
      INTEGER NTHREADS, TID, I,
     +        OMP_GET_NUM_THREADS, OMP_GET_THREAD_NUM
      PARAMETER (N=1000000)
      REAL A(N), B(N), PI, DELTA
      PARAMETER (PI=3.1415926535)
      PARAMETER (DELTA=.01415926535)

C     Initialize the locks
      CALL OMP_INIT_LOCK(LOCKA)
      CALL OMP_INIT_LOCK(LOCKB)

C     Fork a team of threads giving them their own copies of variables
!$OMP PARALLEL SHARED(A, B, NTHREADS, LOCKA, LOCKB) PRIVATE(TID)

C     Obtain thread number and number of threads
      TID = OMP_GET_THREAD_NUM()
!$OMP MASTER
      NTHREADS = OMP_GET_NUM_THREADS()
      PRINT *, 'Number of threads = ', NTHREADS
!$OMP END MASTER
      PRINT *, 'Thread', TID, 'starting...'
!$OMP BARRIER

!$OMP SECTIONS

!$OMP SECTION
      PRINT *, 'Thread',TID,' initializing A()'
      CALL OMP_SET_LOCK(LOCKA)
      DO I = 1, N
        A(I) = I * DELTA
      ENDDO
      CALL OMP_SET_LOCK(LOCKB)
      PRINT *, 'Thread',TID,' adding A() to B()'
      DO I = 1, N
        B(I) = B(I) + A(I)
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKB)
      CALL OMP_UNSET_LOCK(LOCKA)

!$OMP SECTION
      PRINT *, 'Thread',TID,' initializing B()'
      CALL OMP_SET_LOCK(LOCKB)
      DO I = 1, N
        B(I) = I * PI
      ENDDO
      CALL OMP_SET_LOCK(LOCKA)
      PRINT *, 'Thread',TID,' adding B() to A()'
      DO I = 1, N
        A(I) = A(I) + B(I)
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKA)
      CALL OMP_UNSET_LOCK(LOCKB)

!$OMP END SECTIONS NOWAIT

      PRINT *, 'Thread',TID,' done.'

!$OMP END PARALLEL

      END
(8) omp_bug5fix.f
C******************************************************************************
C FILE: omp_bug5fix.f
C DESCRIPTION:
C The problem in omp_bug5.f is that the first thread acquires locka and then
C tries to get lockb before releasing locka. Meanwhile, the second thread
C has acquired lockb and then tries to get locka before releasing lockb.
C This solution overcomes the deadlock by using locks correctly.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM BUG5

      INTEGER*8 LOCKA, LOCKB
      INTEGER NTHREADS, TID, I,
     +        OMP_GET_NUM_THREADS, OMP_GET_THREAD_NUM
      PARAMETER (N=1000000)
      REAL A(N), B(N), PI, DELTA
      PARAMETER (PI=3.1415926535)
      PARAMETER (DELTA=.01415926535)

C     Initialize the locks
      CALL OMP_INIT_LOCK(LOCKA)
      CALL OMP_INIT_LOCK(LOCKB)

C     Fork a team of threads giving them their own copies of variables
!$OMP PARALLEL SHARED(A, B, NTHREADS, LOCKA, LOCKB) PRIVATE(TID)

C     Obtain thread number and number of threads
      TID = OMP_GET_THREAD_NUM()
!$OMP MASTER
      NTHREADS = OMP_GET_NUM_THREADS()
      PRINT *, 'Number of threads = ', NTHREADS
!$OMP END MASTER
      PRINT *, 'Thread', TID, 'starting...'
!$OMP BARRIER

!$OMP SECTIONS

!$OMP SECTION
      PRINT *, 'Thread',TID,' initializing A()'
      CALL OMP_SET_LOCK(LOCKA)
      DO I = 1, N
        A(I) = I * DELTA
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKA)
      CALL OMP_SET_LOCK(LOCKB)
      PRINT *, 'Thread',TID,' adding A() to B()'
      DO I = 1, N
        B(I) = B(I) + A(I)
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKB)

!$OMP SECTION
      PRINT *, 'Thread',TID,' initializing B()'
      CALL OMP_SET_LOCK(LOCKB)
      DO I = 1, N
        B(I) = I * PI
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKB)
      CALL OMP_SET_LOCK(LOCKA)
      PRINT *, 'Thread',TID,' adding B() to A()'
      DO I = 1, N
        A(I) = A(I) + B(I)
      ENDDO
      CALL OMP_UNSET_LOCK(LOCKA)

!$OMP END SECTIONS NOWAIT

      PRINT *, 'Thread',TID,' done.'

!$OMP END PARALLEL

      END
(9) omp_bug6.f
C******************************************************************************
C FILE: omp_bug6.f
C DESCRIPTION:
C This program compiles and runs fine, but produces the wrong result.
C Compare to omp_orphan.f.
C AUTHOR: Blaise Barney
C LAST REVISED:
C******************************************************************************

      PROGRAM ORPHAN
      COMMON /DOTDATA/ A, B
      INTEGER I, VECLEN
      REAL*8 SUM
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN)

      DO I=1, VECLEN
        A(I) = 1.0 * I
        B(I) = A(I)
      ENDDO
      SUM = 0.0

!$OMP PARALLEL SHARED (SUM)
      CALL DOTPROD
!$OMP END PARALLEL

      WRITE(*,*) "Sum = ", SUM
      END


      SUBROUTINE DOTPROD
      COMMON /DOTDATA/ A, B
      INTEGER I, TID, OMP_GET_THREAD_NUM, VECLEN
c     REAL*8 SUM
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN)

      TID = OMP_GET_THREAD_NUM()

!$OMP DO REDUCTION(+:SUM)
      DO I=1, VECLEN
        SUM = SUM + (A(I)*B(I))
        PRINT *, '  TID= ',TID,'I= ',I
      ENDDO

      RETURN
      END
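The wrong result comes from the scoping of SUM: inside DOTPROD it is an implicitly typed local variable, not the SUM of the main program, so the orphaned REDUCTION never updates the value that gets printed. A reduction variable in an orphaned worksharing construct must be shared in the enclosing parallel region. One common correction (a sketch of my own, not copied from the tutorial) passes SUM to the subroutine as an argument:

      PROGRAM ORPHANFIXED
      COMMON /DOTDATA/ A, B
      INTEGER I, VECLEN
      REAL*8 SUM
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN)

      DO I=1, VECLEN
        A(I) = 1.0 * I
        B(I) = A(I)
      ENDDO
      SUM = 0.0

C     SUM is shared in the parallel region and associated with the
C     dummy argument inside DOTPROD, so the reduction reaches the caller
!$OMP PARALLEL SHARED(SUM)
      CALL DOTPROD(SUM)
!$OMP END PARALLEL

      WRITE(*,*) "Sum = ", SUM
      END


      SUBROUTINE DOTPROD(SUM)
      COMMON /DOTDATA/ A, B
      INTEGER I, TID, OMP_GET_THREAD_NUM, VECLEN
      REAL*8 SUM
      PARAMETER (VECLEN = 100)
      REAL*8 A(VECLEN), B(VECLEN)

      TID = OMP_GET_THREAD_NUM()

!$OMP DO REDUCTION(+:SUM)
      DO I=1, VECLEN
        SUM = SUM + (A(I)*B(I))
        PRINT *, '  TID= ',TID,'I= ',I
      ENDDO

      RETURN
      END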