WOLFRAM

Parallelize[expr]

evaluates expr using automatic parallelization.

Details and Options

  • Parallelize[expr] automatically distributes different parts of the evaluation of expr among different available kernels and processors.
  • Parallelize[expr] normally gives the same result as evaluating expr, except for side effects during the computation.
  • Parallelize has attribute HoldFirst, so that expr is not evaluated before parallelization.
  • The following options can be given:
  • Method               Automatic              granularity of parallelization
    DistributedContexts  $DistributedContexts   contexts used to distribute symbols to parallel computations
    ProgressReporting    $ProgressReporting     whether to report the progress of the computation
  • The Method option specifies the parallelization method to use. Possible settings include:
  • "CoarsestGrained"          break the computation into as many pieces as there are available kernels
    "FinestGrained"            break the computation into the smallest possible subunits
    "EvaluationsPerKernel"->e  break the computation into at most e pieces per kernel
    "ItemsPerEvaluation"->m    break the computation into evaluations of at most m subunits each
    Automatic                  compromise between overhead and load balancing
  • Method->"CoarsestGrained" is suitable for computations involving many subunits, all of which take the same amount of time. It minimizes overhead but does not provide any load balancing.
  • Method->"FinestGrained" is suitable for computations involving few subunits whose evaluations take different amounts of time. It leads to higher overhead but maximizes load balancing.
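
The two extremes can be requested explicitly. A minimal sketch, assuming subkernels have been launched and using a made-up function work with uneven runtimes:

```wolfram
work[n_] := (Pause[RandomReal[0.01]]; n^2)  (* hypothetical uneven workload *)

(* one batch per kernel: minimal overhead, no load balancing *)
Parallelize[Map[work, Range[100]], Method -> "CoarsestGrained"]

(* smallest possible subunits: maximal load balancing, higher overhead *)
Parallelize[Map[work, Range[100]], Method -> "FinestGrained"]
```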
  • The DistributedContexts option specifies which symbols appearing in expr have their definitions automatically distributed to all available kernels before the computation.
  • The default value is DistributedContexts:>$DistributedContexts with $DistributedContexts:=$Context, which distributes definitions of all symbols in the current context, but does not distribute definitions of symbols from packages.
  • The ProgressReporting option specifies whether to report the progress of the parallel computation.
  • The default value is ProgressReporting:>$ProgressReporting.
  • Parallelize[f[…]] parallelizes functions that operate on a list element by element: Apply, AssociationMap, Cases, Count, FreeQ, KeyMap, KeySelect, KeyValueMap, Map, MapApply, MapIndexed, MapThread, Comap, ComapApply, MemberQ, Pick, Scan, Select, and Through.
  • Parallelize[iter] parallelizes the iterators Array, Do, Product, Sum, Table.
  • Parallelize[list] evaluates the elements of list in parallel.
  • Parallelize[f[…]] can parallelize listable and associative functions and inner and outer products. »
  • Parallelize[cmd1;cmd2;…] wraps Parallelize around each cmdi and evaluates these in sequence. »
  • Parallelize[s=expr] is converted to s=Parallelize[expr].
  • Parallelize[expr] evaluates expr sequentially if expr is not one of the cases recognized by Parallelize.
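
The recognized cases above can be sketched together as follows (illustrative only; exact outputs depend on the session):

```wolfram
LaunchKernels[];  (* start subkernels if none are running *)

Parallelize[Map[PrimeQ, Range[8]]]    (* element-by-element function *)
Parallelize[Table[i^2, {i, 5}]]       (* supported iterator *)
Parallelize[{2 + 2, 3!, Prime[10]}]   (* plain list of expressions *)
Parallelize[s = Table[i!, {i, 4}]]    (* converted to s = Parallelize[Table[...]] *)
```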

Examples

Basic Examples  (4): Summary of the most common use cases

Map a function in parallel:
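
The original input is not reproduced here; a representative sketch:

```wolfram
Parallelize[Map[Factorial, Range[5]]]  (* {1, 2, 6, 24, 120} *)
```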

Generate a table in parallel:
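
A representative sketch of such an input:

```wolfram
Parallelize[Table[i^2, {i, 8}]]  (* {1, 4, 9, 16, 25, 36, 49, 64} *)
```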

Functions defined interactively can immediately be used in parallel:
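
For instance, a function defined in the current session is distributed to the subkernels automatically (sketch; f is a made-up name):

```wolfram
f[x_] := x^2 + 1
Parallelize[Map[f, Range[5]]]  (* {2, 5, 10, 17, 26} *)
```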

Longer computations display information about their progress and estimated time to completion:

Scope  (23): Survey of the scope of standard use cases

Listable Functions  (1)

All listable functions with one argument will automatically parallelize when applied to a list:
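
A sketch using Sqrt, which has the Listable attribute:

```wolfram
Parallelize[Sqrt[{1., 4., 9., 16.}]]  (* {1., 2., 3., 4.} *)
```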

Implicitly defined lists:

Structure-Preserving Functions  (8)

Many functional programming constructs that preserve list structure parallelize:

f@@@list is equivalent to MapApply[f,list]:

The result need not have the same length as the input:

Without a function, Parallelize simply evaluates the elements in parallel:

Reductions  (4)

Count the number of primes up to one million:
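
A sketch of this reduction (Count is among the parallelized functions):

```wolfram
Parallelize[Count[Range[10^6], _?PrimeQ]]  (* 78498 *)
```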

Check whether 93 occurs in a list of the first 100 primes:

Check whether a list is free of 5:

The argument does not have to be an explicit List:

Inner and Outer Products  (2)

Inner products automatically parallelize:

Outer products automatically parallelize:

Iterators  (3)

Evaluate a table in parallel, with or without an iterator variable:
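
Representative sketches of both forms:

```wolfram
Parallelize[Table[i!, {i, 6}]]       (* {1, 2, 6, 24, 120, 720} *)
Parallelize[Table[RandomReal[], 4]]  (* four pseudorandom reals; no iterator variable *)
```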

Generate an array in parallel:

Evaluate sums and products in parallel:
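
Representative sketches:

```wolfram
Parallelize[Sum[i^2, {i, 100}]]   (* 338350 *)
Parallelize[Product[i, {i, 10}]]  (* 3628800 *)
```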

The evaluation of the function happens in parallel:

The list of file names is expanded locally on the subkernels:

Associative Functions  (1)

Functions with the attribute Flat automatically parallelize:

Functions for Associations  (4)

Parallelize AssociationMap:
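
A sketch of such an input:

```wolfram
Parallelize[AssociationMap[Prime, Range[4]]]  (* <|1 -> 2, 2 -> 3, 3 -> 5, 4 -> 7|> *)
```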

Parallelize KeyMap:

Parallelize KeySelect:

Parallelize KeyValueMap:

Generalizations & Extensions  (4): Generalized and extended use cases

Listable functions of several arguments:

Only the right side of an assignment is parallelized:

Elements of a compound expression are parallelized one after the other:

Parallelize the generation of video frames:

Options  (13): Common values & functionality for each option

DistributedContexts  (5)

By default, definitions in the current context are distributed automatically:

Do not distribute any definitions of functions:

Distribute definitions for all symbols in all contexts appearing in a parallel computation:

Distribute only definitions in the given contexts:
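
A sketch of this option setting; "MyPkg`" is a hypothetical context name:

```wolfram
Parallelize[Map[MyPkg`f, Range[4]], DistributedContexts -> {"MyPkg`"}]
```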

Restore the value of the DistributedContexts option to its default:

Method  (6)

Break the computation into the smallest possible subunits:

Break the computation into as many pieces as there are available kernels:

Break the computation into at most 2 evaluations per kernel for the entire job:

Break the computation into evaluations of at most 5 elements each:

The default option setting balances evaluation size and number of evaluations:

Calculations with vastly differing runtimes should be parallelized as finely as possible:

A large number of simple calculations should be distributed into as few batches as possible:

ProgressReporting  (2)

Do not show a temporary progress report:

Use Method -> "FinestGrained" for the most accurate progress report:

Applications  (4): Sample problems that can be solved with this function

Search for Mersenne primes:
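
One way this search might be sketched is as a parallel Select over candidate exponents p with 2^p - 1 prime:

```wolfram
Parallelize[Select[Range[2, 500], PrimeQ[2^# - 1] &]]
(* {2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127} *)
```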

Watch the results appear as they are found:

Compute a whole table of visualizations:

Search a range in parallel for local minima:

Choose the best one:

Use a shared function to record timing results as they are generated:

Set up a dynamic bar chart with the timing results:

Run a series of calculations with vastly varying runtimes:

Properties & Relations  (7): Properties of the function, and connections to other functions

For data parallel functions, Parallelize is implemented in terms of ParallelCombine:

Parallel speedup can be measured with a calculation that takes a known amount of time:

Define a number of tasks with known runtimes:

The time for a sequential execution is the sum of the individual times:

Measure the speedup for parallel execution:

Finest-grained scheduling gives better load balancing and higher speedup:

Scheduling large tasks first gives even better results:

Form the arithmetic expression 1∘2∘3∘4∘5∘6∘7∘8∘9 for each ∘ chosen from +, -, *, /:

Each list of arithmetic operations gives a simple calculation:

Evaluating it is easy:

Find all sequences of arithmetic operations that give 0:

Display the corresponding expressions:

Functions defined interactively are automatically distributed to all kernels when needed:

Distribute definitions manually and disable automatic distribution:

For functions from a package, use ParallelNeeds rather than DistributeDefinitions:

Set up a random number generator that is suitable for parallel use and initialize each kernel:

Possible Issues  (8): Common pitfalls and unexpected behavior

Expressions that cannot be parallelized are evaluated normally:

Side effects cannot be used in the function mapped in parallel:

Use a shared variable to support side effects:
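
A sketch of this pattern using SetSharedVariable, so that assignments on subkernels are synchronized through the main kernel:

```wolfram
SetSharedVariable[total];
total = 0;
Parallelize[Scan[(total += #) &, Range[100]]];
total  (* 5050 *)
```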

If no subkernels are available, the result is computed on the master kernel:

If a function used is not distributed first, the result may still appear to be correct:

Only if the function is distributed is the result actually calculated on the available kernels:

Definitions of functions in the current context are distributed automatically:

Definitions from contexts other than the default context are not distributed automatically:

Use DistributeDefinitions to distribute such definitions:
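
A sketch of this pattern; MyContext`g is a hypothetical symbol outside the default context:

```wolfram
MyContext`g[x_] := x + 1;
DistributeDefinitions[MyContext`g];
Parallelize[Map[MyContext`g, Range[4]]]  (* {2, 3, 4, 5} *)
```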

Alternatively, set the DistributedContexts option to include all contexts:

Explicitly distribute the definition of a function:

Modify the definition:

The modified definition is automatically distributed:

Suppress the automatic distribution of definitions:

Symbols defined only on the subkernels are not distributed automatically:

The value of $DistributedContexts is not used in Parallelize:

Set the value of the DistributedContexts option of Parallelize:

Restore all settings to their default values:

Trivial operations may take longer when parallelized:

Neat Examples  (1): Surprising or curious use cases

Display nontrivial automata as they are found:

Wolfram Research (2008), Parallelize, Wolfram Language function, https://reference.wolfram.com/language/ref/Parallelize.html (updated 2021).