---
title: "ParallelSum"
language: "en"
type: "Symbol"
summary: "ParallelSum[expr, {i, imax}] evaluates in parallel the sum of expr for i from 1 to imax. ParallelSum[expr, {i, imin, imax}] starts with i = imin. ParallelSum[expr, {i, imin, imax, di}] uses steps di. ParallelSum[expr, {i, {i1, i2, ...}}] uses successive values i1, i2, .... ParallelSum[expr, {i, imin, imax}, {j, jmin, jmax}, ...] evaluates in parallel the multiple sum over i from imin to imax, j from jmin to jmax, ... of expr."
keywords: 
- parallel sum
- parallel add
- parallel plus
canonical_url: "https://reference.wolfram.com/language/ref/ParallelSum.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Data Parallelism"
    link: "https://reference.wolfram.com/language/guide/DataParallelism.en.md"
  - 
    title: "Parallel Computing"
    link: "https://reference.wolfram.com/language/guide/ParallelComputing.en.md"
related_functions: 
  - 
    title: "Sum"
    link: "https://reference.wolfram.com/language/ref/Sum.en.md"
  - 
    title: "Parallelize"
    link: "https://reference.wolfram.com/language/ref/Parallelize.en.md"
  - 
    title: "ParallelTable"
    link: "https://reference.wolfram.com/language/ref/ParallelTable.en.md"
  - 
    title: "ParallelCombine"
    link: "https://reference.wolfram.com/language/ref/ParallelCombine.en.md"
---
# ParallelSum
⚠ *Unsupported in Public Cloud*

ParallelSum[expr, {i, imax}] evaluates in parallel the sum $\sum_{i=1}^{i_{\max}} \textit{expr}$.

ParallelSum[expr, {i, imin, imax}] starts with i = imin.

ParallelSum[expr, {i, imin, imax, di}] uses steps di.

ParallelSum[expr, {i, {i1, i2, …}}] uses successive values i1, i2, ….

ParallelSum[expr, {i, imin, imax}, {j, jmin, jmax}, …] evaluates in parallel the multiple sum $\sum_{i=i_{\min}}^{i_{\max}} \sum_{j=j_{\min}}^{j_{\max}} \cdots \textit{expr}$.
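
For instance, the multiple-iterator form computes a nested sum. A minimal sketch (the summand $i\,j$ is just an illustration, and parallel kernels are assumed to be launched):

```wl
In[1]:= ParallelSum[i j, {i, 1, 3}, {j, 1, 3}]

Out[1]= 36
```

Here the result is $\left(\sum_{i=1}^{3} i\right)^2 = 6^2 = 36$.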

## Details and Options

* ``ParallelSum`` is a parallel version of ``Sum``, which automatically distributes partial summations among different kernels and processors.

* ``ParallelSum`` will give the same results as ``Sum``, except for side effects during the computation.

* ``Parallelize[Sum[expr, iter, …]]`` is equivalent to ``ParallelSum[expr, iter, …]``.

* If an instance of ``ParallelSum`` cannot be parallelized, it is evaluated using ``Sum``.

* The following options can be given:

| option | default value | description |
| -------------------- | --------------------- | ------------------------------------------------------------ |
| Method               | Automatic             | granularity of parallelization                               |
| DistributedContexts  | $DistributedContexts  | contexts used to distribute symbols to parallel computations |
| ProgressReporting    | $ProgressReporting    | whether to report the progress of the computation            |

* The ``Method`` option specifies the parallelization method to use. Possible settings include:

| setting | description |
| --------------------------- | ------------------------------------------------------------------------- |
| "CoarsestGrained"           | break the computation into as many pieces as there are available kernels  |
| "FinestGrained"             | break the computation into the smallest possible subunits                 |
| "EvaluationsPerKernel" -> e | break the computation into at most e pieces per kernel                    |
| "ItemsPerEvaluation" -> m   | break the computation into evaluations of at most m subunits each         |
| Automatic                   | compromise between overhead and load balancing                            |

* ``Method -> "CoarsestGrained"`` is suitable for computations involving many subunits, all of which take the same amount of time. It minimizes overhead, but does not provide any load balancing.

* ``Method -> "FinestGrained"`` is suitable for computations involving few subunits whose evaluations take different amounts of time. It leads to higher overhead, but maximizes load balancing.

* The ``DistributedContexts`` option specifies which symbols appearing in ``expr`` have their definitions automatically distributed to all available kernels before the computation.

* The default value is ``DistributedContexts :> $DistributedContexts`` with ``$DistributedContexts := $Context``, which distributes definitions of all symbols in the current context, but does not distribute definitions of symbols from packages.

* The ``ProgressReporting`` option specifies whether to report the progress of the parallel computation.

* The default value is ``ProgressReporting :> $ProgressReporting``.
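
As an illustration of the granularity settings, the ``"EvaluationsPerKernel"`` form can be sketched as follows (the summand is arbitrary, and the example assumes parallel kernels are running):

```wl
In[1]:= ParallelSum[i ^ 3, {i, 100}, Method -> "EvaluationsPerKernel" -> 2]

Out[1]= 25502500
```

With this setting, each kernel receives at most two batches of terms; the result $\left(\tfrac{100 \cdot 101}{2}\right)^2 = 25502500$ is the same regardless of the granularity chosen.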

---

## Examples (12)

### Basic Examples (1)

```wl
In[1]:= ParallelSum[i ^ 2, {i, 1000}]

Out[1]= 333833500
```
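
As noted in Details, the result agrees with the sequential ``Sum`` (apart from any side effects during the computation):

```wl
In[1]:= Sum[i ^ 2, {i, 1000}]

Out[1]= 333833500
```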

Longer computations display information about their progress and estimated time to completion:

```wl
In[2]:= ParallelSum[EulerPhi[i], {i, 2 * 10 ^ 7}]

Out[2]= 121585426956250

```

### Options (9)

#### Method (2)

Calculations with vastly differing runtimes should be parallelized as finely as possible:

```wl
In[1]:= ParallelSum[N[Fibonacci[i]], {i, 1, 1000}, Method -> "FinestGrained"]

Out[1]= 1.1379692539836027*^209
```

---

A large number of simple calculations should be distributed into as few batches as possible:

```wl
In[1]:= ParallelSum[N[i * Pi], {i, 10 ^ 6}, Method -> "CoarsestGrained"]

Out[1]= 1.5707978975912236*^12
```

#### DistributedContexts (5)

By default, definitions in the current context are distributed automatically:

```wl
In[1]:= remote[x_] := {$KernelID, x ^ 3}

In[2]:= ParallelSum[remote[i], {i, 4}]

Out[2]= {10, 100}
```

---

Do not distribute any definitions of functions:

```wl
In[1]:= local[x_] := {$KernelID, x ^ 2}

In[2]:= ParallelSum[local[i], {i, 4}, DistributedContexts -> None]

Out[2]= {0, 30}
```

---

Distribute definitions for all symbols in all contexts appearing in a parallel computation:

```wl
In[1]:= a`f[x_] := {$KernelID, x}

In[2]:= ParallelSum[a`f[i], {i, 4}, DistributedContexts -> Automatic]

Out[2]= {10, 10}
```

---

Distribute only definitions in the given contexts:

```wl
In[1]:= b`g[x_] := {$KernelID, -x}

In[2]:= ParallelSum[b`g[i], {i, 4}, DistributedContexts -> {"a`"}]

Out[2]= {0, -10}
```

---

Restore the value of the ``DistributedContexts`` option to its default:

```wl
In[1]:= SetOptions[ParallelSum, DistributedContexts :> $DistributedContexts]

Out[1]= {Method -> Automatic, DistributedContexts :> $DistributedContexts}
```

#### ProgressReporting (2)

Do not show a temporary progress report:

```wl
In[1]:= ParallelSum[EulerPhi[i], {i, 2 * 10 ^ 7}, ProgressReporting -> False]

Out[1]= 121585426956250
```

---

Show a temporary progress report even if the default setting ``$ProgressReporting`` is ``False``:

```wl
In[1]:= ParallelSum[EulerPhi[i], {i, 2 * 10 ^ 7}, ProgressReporting -> True]

Out[1]= 121585426956250

```

### Possible Issues (2)

Sums with trivial terms may be slower in parallel than sequentially:

```wl
In[1]:= AbsoluteTiming[ParallelSum[N[i], {i, 10 ^ 6}]]

Out[1]= {0.142296, 5.000005*^11}

In[2]:= AbsoluteTiming[Sum[N[i], {i, 10 ^ 6}]]

Out[2]= {0.668294, 5.000005*^11}
```

Splitting the computation into as few pieces as possible decreases the parallel overhead:

```wl
In[3]:= AbsoluteTiming[ParallelSum[N[i], {i, 10 ^ 6}, Method -> "CoarsestGrained"]]

Out[3]= {0.023303, 5.000005*^11}
```

---

``Sum`` may employ symbolic methods that are faster than an iterative addition of all terms:

```wl
In[1]:= AbsoluteTiming[ParallelSum[i, {i, 10 ^ 7}]]

Out[1]= {2.344636, 50000005000000}

In[2]:= AbsoluteTiming[Sum[i, {i, 10 ^ 7}]]

Out[2]= {0.542239, 50000005000000}
```
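
The speedup comes from the closed form that ``Sum`` can derive symbolically, which ``ParallelSum`` does not attempt; for example:

```wl
In[3]:= Sum[i, {i, n}]

Out[3]= 1/2 n (1 + n)
```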

## See Also

* [`Sum`](https://reference.wolfram.com/language/ref/Sum.en.md)
* [`Parallelize`](https://reference.wolfram.com/language/ref/Parallelize.en.md)
* [`ParallelTable`](https://reference.wolfram.com/language/ref/ParallelTable.en.md)
* [`ParallelCombine`](https://reference.wolfram.com/language/ref/ParallelCombine.en.md)

## Related Guides

* [Data Parallelism](https://reference.wolfram.com/language/guide/DataParallelism.en.md)
* [Parallel Computing](https://reference.wolfram.com/language/guide/ParallelComputing.en.md)

## History

* [Introduced in 2008 (7.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn70.en.md) \| [Updated in 2010 (8.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn80.en.md) ▪ [2021 (13.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn130.en.md)