
URL: https://en.wikipedia.org/wiki/SPMD

Single program, multiple data


From Wikipedia, the free encyclopedia
Computing technique used to achieve parallelism

In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism, in which multiple processors cooperate in executing a program in order to obtain results faster.

The term SPMD was introduced in 1983 and was used to denote two different computational models:

  1. by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra),[1][2][3] as a "fork-and-join" and data-parallel approach in which the parallel tasks ("single program") are split up and run simultaneously in lockstep on multiple SIMD processors with different inputs, and
  2. by Frederica Darema (IBM),[4][5][6] in which "all (processors) processes begin executing the same program ... but through synchronization directives ... self-schedule themselves to execute different instructions and act on different data", enabling MIMD parallelization of a given program; this approach is more general than data-parallel and more efficient than fork-and-join for parallel execution on general-purpose multiprocessors.

The (IBM) SPMD is the most common style of parallel programming and can be considered a subcategory of MIMD in that it refers to MIMD execution of a given ("single") program.[7] It is also a prerequisite for research concepts such as active messages and distributed shared memory.


SPMD vs SIMD

[Image: An example of "Single program, multiple data"]

In SPMD parallel execution, multiple autonomous processors simultaneously execute the same program at independent points on different data, rather than in the lockstep that SIMD or SIMT imposes. In SIMD, the same operation (instruction) is applied to multiple data items to manipulate data streams (vector processing, in which the data are organized as vectors, is one form of SIMD).

Unlike SIMD, SPMD does not require special support from the processor it runs on, whether CPU or GPU.

SPMD and SIMD are not mutually exclusive: an SPMD program can itself use SIMD, vector, or GPU sub-processing. Many CPUs include multiple SIMD-capable cores, each of which can participate in SPMD; the same applies to many GPUs containing several SIMD "streams". SPMD has been used for parallel programming of both message-passing and shared-memory machine architectures.

Operation


Distributed memory


On distributed memory computer architectures, SPMD implementations usually employ message passing programming. A distributed memory computer consists of a collection of interconnected, independent computers, called nodes. For parallel execution, each node starts its own program and communicates with other nodes by calling send/receive routines. Other parallelization directives, such as barrier synchronization, may also be implemented by messages. The messages can be sent by a number of communication mechanisms, such as TCP/IP over Ethernet, or specialized high-speed interconnects such as InfiniBand or Omni-Path. Serial sections of the program can be implemented by identical computation of the serial section on all nodes rather than computing the result on one node and sending it to the others, if that improves performance by reducing communication overhead.

Nowadays, standard interfaces such as PVM and MPI isolate the programmer from the details of the message passing.

Distributed memory is the programming style used on parallel supercomputers, from homegrown Beowulf clusters to the largest clusters on the TeraGrid, as well as on present GPU-based supercomputers.

Shared memory


On a shared memory machine (a computer with several interconnected CPUs that access the same memory space), the sharing can be implemented in the context of either physically shared memory or logically shared (but physically distributed) memory; in addition to the shared memory, the CPUs in the computer system can also have local (or private) memory. In either context, synchronization can be provided by hardware primitives (such as compare-and-swap or fetch-and-add). On machines without such hardware support, locks can be used, and data can be "exchanged" across processors (or, more generally, processes or threads) by depositing the sharable data in a shared memory area. When the hardware does not support shared memory, packing the data as a "message" is often the most efficient way to program (logically) shared memory computers with a large number of processors, where the physical memory is local to each processor and accessing the memory of another processor takes longer. SPMD on a shared memory machine can be implemented by standard (heavyweight) processes or by (lightweight) threads.

Shared memory multiprocessing (both symmetric multiprocessing, SMP, and non-uniform memory access, NUMA) presents the programmer with a common memory space and the possibility to parallelize execution. With the (IBM) SPMD model, the cooperating processors (or processes) take different paths through the program, using parallel directives (parallelization and synchronization directives, which can utilize compare-and-swap and fetch-and-add operations on shared memory synchronization variables), and perform operations on data in the shared memory ("shared data"); the processors (or processes) can also access and operate on data in their local memory ("private data"). In contrast, with fork-and-join approaches, the program starts executing on one processor and the execution splits into a parallel region, which begins when parallel directives are encountered; in a parallel region, the processors execute a parallel task on different data. A typical example is the parallel DO loop, where different processors work on separate parts of the arrays involved in the loop. At the end of the loop, execution is synchronized (with soft- or hard-barriers[6]), and processors (processes) continue to the next available section of the program to execute. The (IBM) SPMD has been implemented in the current standard interface for shared memory multiprocessing, OpenMP, which uses multithreading, usually implemented as lightweight processes called threads.
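The parallel-loop pattern above can be sketched with standard Python threads (a hypothetical illustration, not OpenMP itself; worker, shared, and the explicit barrier stand in for compiler-generated parallelization and synchronization directives):

```python
import threading

N = 4
shared = list(range(16))        # "shared data": one array visible to all threads
results = [0] * N               # one slot per thread, standing in for "private data"
barrier = threading.Barrier(N)  # end-of-loop synchronization point

def worker(tid):
    """The single program every thread runs; tid selects its loop iterations."""
    lo = tid * len(shared) // N
    hi = (tid + 1) * len(shared) // N
    for i in range(lo, hi):     # this thread's part of the parallel DO loop
        shared[i] *= 2
    barrier.wait()              # wait until every thread finished its part
    results[tid] = sum(shared)  # next section may now safely read the whole array

team = [threading.Thread(target=worker, args=(t,)) for t in range(N)]
for t in team:
    t.start()
for t in team:
    t.join()
```

The barrier is what makes the second section correct: without it, a thread could sum the array before its peers finished doubling their slices.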

Combination of levels of parallelism


Current computers allow exploiting many parallel modes at the same time for maximum combined effect. A distributed memory program using MPI may run on a collection of nodes. Each node may be a shared memory computer and execute in parallel on multiple CPUs using OpenMP. Within each CPU, SIMD vector instructions (usually generated automatically by the compiler) and superscalar instruction execution (usually handled transparently by the CPU itself), such as pipelining and the use of multiple parallel functional units, are used for maximum single CPU speed.

Implementations


MPI is commonly used to implement SPMD. As mentioned earlier, it is suited for distributed memory systems (multiple machines) but also works in shared-memory scenarios (multiple cores).[9]

GPUs and other accelerators


Most graphics shaders are written in an SPMD programming model: the code describes an operation on a single element. The code is then turned into parallel code by the shader compiler using whatever parallelism facilities the hardware offers (multiple SIMD units in the case of most GPUs, multiple SIMT units in the case of Nvidia GPUs). CUDA likewise follows an SPMD/SIMT model.[10] When targeting SIMD hardware, control flow is typically mapped onto SIMD operations by predication, which uses a mask to restrict which portions of a vector register are changed.[11][12] GPUs generally do not have a unified address space; instead, there are several levels of memory available, only some of which are shared among shader programs.[13]
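How predication maps an SPMD branch onto SIMD hardware can be illustrated in plain Python (a conceptual sketch, not any GPU's actual instruction set; spmd_abs and simd_abs are hypothetical names): both sides of the branch are evaluated for every lane, and a mask selects which lanes keep which result.

```python
def spmd_abs(x):
    """Scalar SPMD source: what the programmer writes, per element."""
    return x if x >= 0 else -x

def simd_abs(vec):
    """What a compiler emits for a whole vector: branch-free and masked."""
    mask = [x >= 0 for x in vec]          # predicate "register": one bit per lane
    then_vals = vec                       # "then" side, computed for all lanes
    else_vals = [-x for x in vec]         # "else" side, computed for all lanes
    # Masked select: each lane keeps the result its predicate bit allows.
    return [t if m else e for m, t, e in zip(mask, then_vals, else_vals)]

v = [3, -1, 0, -7]
assert simd_abs(v) == [spmd_abs(x) for x in v]
```

The cost of this mapping is that divergent lanes still execute both sides of the branch, which is why heavily branching SPMD code vectorizes poorly.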

In the machine learning libraries JAX and PyTorch, SPMD is used to distribute (shard) the work over multiple devices, either automatically or manually.[14][15] Because all devices run what is functionally the same program, automatic work distribution becomes much easier and the need for cross-device communication is reduced.[16]

Clang offers an SPMD code-generation mode for its OpenMP offloading support in addition to the regular mode.[17][18] Clang's OpenCL part considers a target to be SPMD if the hardware is able to spawn multiple work-items on-the-fly.[19]

Single CPU core (SPMD on SIMD)


Intel's ispc (Implicit SPMD Program Compiler) is an open-source compiler for SPMD programs written in a dialect of C. It turns input programs, which are written in a seemingly single-threaded SPMD model, into efficient x86 (SSE2 to AVX-512), ARM (NEON), or Intel GPU SIMD code.[11] Most of ispc was written by Matt Pharr. According to him, ispc was designed to produce a compiler that generates good wide-vector code for Larrabee-like architectures; automatic vectorization had proved too fragile to use reliably, so a shader-like solution was sought.[10][12]

The NSIMD library offers an SPMD interface, similar in concept to ispc's. It targets scalar, x86 (SSE2 to AVX-512), ARM (NEON or SVE), and PowerPC VMX/VSX SIMD, as well as CUDA, ROCm, and oneAPI.[20]

SPMD-on-SIMD (similar to ispc) has been demonstrated academically on LLVM,[21][22] but no such implementation has been accepted into official LLVM as of March 2026.

History


The acronym SPMD for "Single-Program Multiple-Data" has been used to describe two different computational models for exploiting parallel computing, both of which are natural extensions of Flynn's taxonomy.[7] The two respective groups of researchers were unaware of each other's use of the term SPMD to independently describe different models of parallel programming.

The term SPMD was first proposed in 1983 by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra) in the context of the OPSILA parallel computer and of a fork-and-join, data-parallel computational model.[1] This computer consisted of a master (controller processor) and SIMD processors (or vector processor mode as proposed by Flynn). In Auguin's SPMD model, the same (parallel) task ("same program") is executed on different (SIMD) processors ("operating in lock-step mode"[1]), each acting on a part ("slice") of the data-vector. Specifically, their 1985 paper[2] and others[3][1] stated:

We consider the SPMD (Single Program, Multiple Data) operating mode. This mode allows simultaneous execution of the same task (one per processor) but prevents data exchange between processors. Data exchanges are only performed under SIMD mode by means of vector assignments. We assume synchronizations are summed-up to switchings [sic] between SIMD and SPMD operatings [sic] modes using global fork-join primitives.

Starting around the same timeframe (in late 1983 – early 1984), the term SPMD was proposed by Frederica Darema (then at IBM, and part of the RP3 group) to define a different SPMD computational model.[6][5][4] In the intervening years, this programming model has been applied to a wide range of general-purpose high-performance computers (including the RP3, the 512-processor IBM Research Parallel Processor Prototype) and has led to the current parallel computing standards. The (IBM) SPMD programming model assumes a multiplicity of processors which operate cooperatively, all executing the same program but able to take different paths through it based on parallelization directives embedded in the program:[6][5][4][23][24]

All processes participating in the parallel computation are created at the beginning of the execution and remain in existence until the end ... [the processors/processes] execute different instructions and act on different data ... the job [(work)] to be done by each process is allocated dynamically ... [i.e. the processes] self-schedule themselves to execute different instructions and act on different data [thus self-assign themselves to cooperate in execution of serial and parallel tasks (as well as replicate tasks) in the program.]

The notion of a process generalizes that of a processor, in the sense that multiple processes can execute on one processor (for example, to exploit larger degrees of parallelism for more efficiency and load-balancing). The (IBM) SPMD model was proposed by Darema as an approach different from, and more efficient than, the fork-and-join pursued by all others in the community at that time; it is also more general than the purely "data-parallel" computational model and can encompass fork-and-join (as a subcategory implementation). The original context of the (IBM) SPMD was the RP3 computer (the 512-processor IBM Research Parallel Processor Prototype), which supported general-purpose computing with both distributed and (logically) shared memory.[23] The (IBM) SPMD model was implemented by Darema and IBM colleagues in EPEX (Environment for Parallel Execution), one of the first prototype programming environments.[6][5][4][23][24][25] The effectiveness of the (IBM) SPMD was demonstrated for a wide class of applications,[23][4] and it was implemented in IBM FORTRAN in 1988,[26] the first vendor product for parallel programming, and in MPI (1991 and on), OpenMP (1997 and on), and other environments which have adopted and cite the (IBM) SPMD computational model.

By the late 1980s, there were many distributed computers with proprietary message passing libraries. The first SPMD standard was PVM. The current de facto standard is MPI.

The Cray parallel directives were a direct predecessor of OpenMP.

References

  1. ^ a b c d M. Auguin; F. Larbey (1983). "OPSILA: an advanced SIMD for numerical analysis and signal processing". Microprocessing and microprogramming: EUROMICRO ; Symposium. proceedings ; Microcomputers: developments in industry, business and education. 13-16 Sep 1983. Amsterdam: North-Holland. ISBN 0-444-86742-2.
  2. ^ a b M. Auguin; F. Larbey (1985). "A Multi-processor SIMD Machine: OPSILA". In K. Waldschmidt; B. Myhrhaug (eds.). Microcomputers, usage and design. Amsterdam: North-Holland. ISBN 0-444-87835-1.
  3. ^ a b Auguin, M.; Boeri, F.; Dalban, J.P; Vincent-Carrefour, A. (1987). "Experience Using a SIMD/SPMD Multiprocessor Architecture". Multiprocessing and Microprogramming. 21 (1–5): 171–178. doi:10.1016/0165-6074(87)90034-2.
  4. ^ a b c d e Darema, Frederica (2001). "The SPMD Model: Past, Present and Future". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. Vol. 2131. p. 1. doi:10.1007/3-540-45417-9_1. ISBN 978-3-540-42609-7.
  5. ^ a b c d F. Darema-Rogers; D. A. George; V. A. Norton; G. F. Pfister (23 January 1985). "A VM Parallel Environment". IBM Research Report RC 11225 (Report). IBM T. J. Watson Research Center.
  6. ^ a b c d e Darema, F.; George, D.A.; Norton, V.A.; Pfister, G.F. (1988). "A single-program-multiple-data computational model for EPEX/FORTRAN". Journal of Parallel Computing. 7: 11–24. doi:10.1016/0167-8191(88)90094-4.
  7. ^ a b Flynn, Michael (1972). "Some Computer Organizations and Their Effectiveness" (PDF). IEEE Transactions on Computers. C-21 (9): 948–960. doi:10.1109/TC.1972.5009071. S2CID 18573685.
  8. ^ Flynn, Michael J. (September 1972). "Some Computer Organizations and Their Effectiveness" (PDF). IEEE Transactions on Computers. C-21 (9): 948–960. doi:10.1109/TC.1972.5009071.
  9. ^ Strout, M. M.; Kreaseck, B.; Hovland, P. D. (2006). "Data-Flow Analysis for MPI Programs". 2006 International Conference on Parallel Processing (ICPP'06). pp. 175–184. doi:10.1109/ICPP.2006.32.
  10. ^ a b "The story of ispc: origins (part 1)". pharr.org. 2018.
  11. ^ a b "Intel® ISPC User's Guide § The ISPC Parallel Execution Model". ispc.github.io.
  12. ^ a b Pharr, Matt; Mark, William R. (May 2012). ispc: A SPMD compiler for high-performance CPU programming (PDF). 2012 Innovative Parallel Computing (InPar). pp. 1–13. doi:10.1109/InPar.2012.6339601.
  13. ^ "WebGPU Shading Language". www.w3.org.
  14. ^ "PyTorch/XLA SPMD User Guide — PyTorch/XLA master documentation". docs.pytorch.org.
  15. ^ "Introduction to parallel programming — JAX documentation". docs.jax.dev.
  16. ^ Xu, Yuanzhong; Lee, HyoukJoong; Chen, Dehao; Hechtman, Blake; Huang, Yanping; Joshi, Rahul; Krikun, Maxim; Lepikhin, Dmitry; Ly, Andy; Maggioni, Marcello; Pang, Ruoming; Shazeer, Noam; Wang, Shibo; Wang, Tao; Wu, Yonghui; Chen, Zhifeng (2021-12-23), GSPMD: General and Scalable Parallelization for ML Computation Graphs, arXiv, doi:10.48550/arXiv.2105.04663
  17. ^ "OpenMP Support — Clang 14.0.0 documentation". releases.llvm.org.
  18. ^ "LLVM/OpenMP Runtimes". GitHub.
  19. ^ "Clang Compiler User's Manual". For "non-SPMD" targets which cannot spawn multiple work-items on the fly using hardware, which covers practically all non-GPU devices such as CPUs and DSPs
  20. ^ "NSIMD documentation". agenium-scale.github.io.
  21. ^ Kandiah, Vijay; Lustig, Daniel; Villa, Oreste; Nellans, David; Hardavellas, Nikos (17 February 2023). Parsimony: Enabling SIMD/Vector Programming in Standard Compiler Flows. CGO 2023: Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization. pp. 186–198. doi:10.1145/3579990.3580019.
  22. ^ Kruppe, Robin; Oppermann, Julian; Sommer, Lukas; Koch, Andreas (February 2019). Extending LLVM for Lightweight SPMD Vectorization: Using SIMD and Vector Instructions Easily from Any Language. 2019 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). pp. 278–279. doi:10.1109/CGO.2019.8661165.
  23. ^ a b c d Darema, Frederica (1988). "Applications environment for the IBM Research Parallel Processor Prototype (RP3)". Supercomputing. Lecture Notes in Computer Science. Vol. 297. pp. 80–95. doi:10.1007/3-540-18991-2_6. ISBN 978-3-540-18991-6.
  24. ^ a b Darema, Frederica (1986). "Parallel Applications Development for Shared Memory Systems". IBM Research Report RC12229 (Report). Yorktown Heights, NY: IBM T. J. Watson Research Center.
  25. ^ J. M. Stone; F. Darema-Rogers; V. A. Norton; G. F. Pfister (30 September 1985). "Introduction to the VM/EPEX Fortran Preprocessor". IBM Research Report RC11407 (Report). Yorktown Heights, NY: IBM T. J. Watson Research Center.
  26. ^ Toomey, L. J.; Plachy, E. C.; Scarborough, R. G.; Sahulka, R. J.; Shaw, J. F.; Shannon, A. W. (1988). "IBM Parallel FORTRAN". IBM Systems Journal. 27 (4): 416–435. doi:10.1147/sj.274.0416.
