---
title: "CramerVonMisesTest"
language: "en"
type: "Symbol"
summary: "CramerVonMisesTest[data] tests whether data is normally distributed using the Cramér–von Mises test. CramerVonMisesTest[data, dist] tests whether data is distributed according to dist using the Cramér–von Mises test. CramerVonMisesTest[data, dist, property] returns the value of property."
keywords: 
- Cramer-von Mises test
- cvm test
- cramer von mises test
- cramer vonMises
- test for normality
- normality
- goodness-of-fit
- goodness of fit
- goodness of fit test
- hypothesis tests
- null hypothesis
- alternative hypothesis
- edf based tests
- empirical distribution tests
canonical_url: "https://reference.wolfram.com/language/ref/CramerVonMisesTest.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Hypothesis Tests"
    link: "https://reference.wolfram.com/language/guide/HypothesisTests.en.md"
related_functions: 
  - 
    title: "HypothesisTestData"
    link: "https://reference.wolfram.com/language/ref/HypothesisTestData.en.md"
  - 
    title: "AndersonDarlingTest"
    link: "https://reference.wolfram.com/language/ref/AndersonDarlingTest.en.md"
  - 
    title: "KolmogorovSmirnovTest"
    link: "https://reference.wolfram.com/language/ref/KolmogorovSmirnovTest.en.md"
  - 
    title: "DistributionFitTest"
    link: "https://reference.wolfram.com/language/ref/DistributionFitTest.en.md"
  - 
    title: "JarqueBeraALMTest"
    link: "https://reference.wolfram.com/language/ref/JarqueBeraALMTest.en.md"
  - 
    title: "KuiperTest"
    link: "https://reference.wolfram.com/language/ref/KuiperTest.en.md"
  - 
    title: "MardiaCombinedTest"
    link: "https://reference.wolfram.com/language/ref/MardiaCombinedTest.en.md"
  - 
    title: "MardiaKurtosisTest"
    link: "https://reference.wolfram.com/language/ref/MardiaKurtosisTest.en.md"
  - 
    title: "MardiaSkewnessTest"
    link: "https://reference.wolfram.com/language/ref/MardiaSkewnessTest.en.md"
  - 
    title: "PearsonChiSquareTest"
    link: "https://reference.wolfram.com/language/ref/PearsonChiSquareTest.en.md"
  - 
    title: "ShapiroWilkTest"
    link: "https://reference.wolfram.com/language/ref/ShapiroWilkTest.en.md"
  - 
    title: "WatsonUSquareTest"
    link: "https://reference.wolfram.com/language/ref/WatsonUSquareTest.en.md"
---
# CramerVonMisesTest

CramerVonMisesTest[data] tests whether data is normally distributed using the Cramér–von Mises test.

CramerVonMisesTest[data, dist] tests whether data is distributed according to dist using the Cramér–von Mises test.

CramerVonMisesTest[data, dist, "property"] returns the value of "property".

## Details and Options

* ``CramerVonMisesTest`` performs the Cramér–von Mises goodness-of-fit test with null hypothesis $H_0$ that ``data`` was drawn from a population with distribution ``dist`` and alternative hypothesis $H_a$ that it was not.

* By default, a probability value or $p$-value is returned.

* A small $p$-value suggests that it is unlikely that the ``data`` came from ``dist``.

* The ``dist`` can be any symbolic distribution with numeric and symbolic parameters or a dataset.

* The ``data`` can be univariate ``{x1, x2, …}`` or multivariate ``{{x1, y1, …}, {x2, y2, …}, …}``.

* The Cramér–von Mises test assumes that the data came from a continuous distribution.

* The Cramér–von Mises test effectively uses a test statistic based on the expected value $\mathbb{E}\left[\left(\hat{F}(x)-F(x)\right)^2\right]$, where $\hat{F}(x)$ is the empirical CDF of ``data`` and $F(x)$ is the CDF of ``dist``.

* For univariate data, the test statistic is given by $\frac{1}{12 n}+\sum _{i=1}^n \left(\frac{2 i-1}{2 n}-F\left(x_i\right)\right)^2$, where the $x_i$ are the sorted data.

* For multivariate tests, the sum of the univariate marginal $p$-values is used and is assumed to follow a ``UniformSumDistribution`` under $H_0$.

* ``CramerVonMisesTest[data, dist, "HypothesisTestData"]`` returns a ``HypothesisTestData`` object ``htd`` that can be used to extract additional test results and properties using the form ``htd["property"]``.

* ``CramerVonMisesTest[data, dist, "property"]`` can be used to directly give the value of ``"property"``.

* Properties related to the reporting of test results include:

|                       |                                                            |
| --------------------- | ---------------------------------------------------------- |
| "PValue"              | $p$-value                    |
| "PValueTable"         | formatted version of "PValue"                              |
| "ShortTestConclusion" | a short description of the conclusion of a test            |
| "TestConclusion"      | a description of the conclusion of a test                  |
| "TestData"            | test statistic and $p$-value |
| "TestDataTable"       | formatted version of "TestData"                            |
| "TestStatistic"       | test statistic                                             |
| "TestStatisticTable"  | formatted "TestStatistic"                                  |

* The following properties are independent of which test is being performed.

* Properties related to the data distribution include:

|                                |                                 |
| ------------------------------ | ------------------------------- |
| "FittedDistribution"           | fitted distribution of data     |
| "FittedDistributionParameters" | distribution parameters of data |

* The following options can be given:

|                   |           |                                                                          |
| ----------------- | --------- | ------------------------------------------------------------------------ |
| Method            | Automatic | the method to use for computing $p$-values |
| SignificanceLevel | 0.05      | cutoff for diagnostics and reporting                                     |

* For a test for goodness of fit, a cutoff $\alpha$ is chosen such that $H_0$ is rejected only if $p<\alpha$. The value of $\alpha$ used for the ``"TestConclusion"`` and ``"ShortTestConclusion"`` properties is controlled by the ``SignificanceLevel`` option. By default, $\alpha$ is set to ``0.05``.

* With the setting ``Method -> "MonteCarlo"``, $k$ simulated datasets $s_i$ of the same length as the input are generated under $H_0$ using the fitted distribution. The empirical distribution of the statistics ``CramerVonMisesTest[si, dist, "TestStatistic"]`` is then used to estimate the $p$-value.
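
As an illustrative sketch (not part of the original documentation), the univariate statistic formula and the Monte Carlo $p$-value estimate described above can be reproduced directly; the symbol names below are arbitrary, and the test distribution is fully specified so no parameter estimation is involved:

```wl
(* Sketch: compute the univariate Cramér–von Mises statistic from its formula
   and compare with the built-in "TestStatistic" property. *)
dist = NormalDistribution[];
data = RandomVariate[dist, 50];
x = Sort[data];
n = Length[x];
stat = 1/(12 n) + Sum[((2 i - 1)/(2 n) - CDF[dist, x[[i]]])^2, {i, n}];

(* This difference should be zero up to numerical error: *)
stat - CramerVonMisesTest[data, dist, "TestStatistic"]

(* Sketch of the Monte Carlo p-value: simulate k datasets under H0 and
   locate the observed statistic in the empirical distribution. *)
k = 1000;
sims = Table[
   CramerVonMisesTest[RandomVariate[dist, n], dist, "TestStatistic"], {k}];
N[Count[sims, s_ /; s >= stat]/k]  (* estimated p-value *)
```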

---

## Examples (30)

### Basic Examples (3)

Perform a Cramér–von Mises test for normality:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 10^4];

In[2]:= CramerVonMisesTest[data]

Out[2]= 0.239818
```

Confirm the result using ``QuantilePlot``:

```wl
In[3]:= QuantilePlot[data]

Out[3]= [image]
```

---

Test the fit of some data to a particular distribution:

```wl
In[1]:= data = RandomVariate[LaplaceDistribution[1, 2], 10^3];

In[2]:= Show[SmoothHistogram[data, PlotStyle -> ColorData[97, 2]], Plot[PDF[LaplaceDistribution[1, 2], x], {x, -10, 10}, PlotStyle -> Dashed]]

Out[2]= [image]

In[3]:= CramerVonMisesTest[data, LaplaceDistribution[1, 2]]

Out[3]= 0.521342
```

---

Compare the distributions of two datasets:

```wl
In[1]:= data1 = RandomVariate[NormalDistribution[], 100];

In[2]:= data2 = RandomVariate[NormalDistribution[], 150];

In[3]:= SmoothHistogram[{data1, data2}]

Out[3]= [image]

In[4]:= CramerVonMisesTest[data1, data2]

Out[4]= 0.911377
```

### Scope (9)

#### Testing (6)

Perform a Cramér–von Mises test for normality:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[], 10^4];
data2 = RandomVariate[StudentTDistribution[3], 10^4];
```

The $p$-value for the normal data is large compared to the $p$-value for the non-normal data:

```wl
In[2]:= CramerVonMisesTest[data1]

Out[2]= 0.42603

In[3]:= CramerVonMisesTest[data2]

Out[3]= 0
```

---

Test the goodness of fit to a particular distribution:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[], 10^3];
data2 = RandomVariate[CauchyDistribution[0, 1], 10^3];

In[2]:= CramerVonMisesTest[data1, CauchyDistribution[0, 1]]

Out[2]= 1.1124434706744069`*^-13

In[3]:= CramerVonMisesTest[data2, CauchyDistribution[0, 1]]

Out[3]= 0.811966
```

---

Compare the distributions of two datasets:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[], 10^3];
data2 = RandomVariate[NormalDistribution[], 10^3];

In[2]:= CramerVonMisesTest[data1, data2]

Out[2]= 0.0696188
```

The two datasets do not have the same distribution:

```wl
In[3]:= data3 = RandomVariate[NormalDistribution[0, 1.25], 10^3];

In[4]:= CramerVonMisesTest[data1, data3]

Out[4]= 0.0495515
```

---

Test for multivariate normality:

```wl
In[1]:=
data1 = RandomVariate[BinormalDistribution[.5], 10^3];
data2 = RandomVariate[LaplaceDistribution[1, 2], {10^3, 2}];

In[2]:= CramerVonMisesTest[data1]

Out[2]= 0.506274

In[3]:= CramerVonMisesTest[data2]

Out[3]= 6.643040335972479`*^-11
```

---

Test for goodness of fit to any multivariate distribution:

```wl
In[1]:=
data1 = RandomVariate[BinormalDistribution[.5], 10^3];
data2 = RandomVariate[\[ScriptD] = LaplaceDistribution[1, 2], {10^3, 2}];

In[2]:= \[ScriptCapitalD] = ProductDistribution[\[ScriptD], \[ScriptD]];

In[3]:= CramerVonMisesTest[data1, \[ScriptCapitalD]]

Out[3]= 7.869503827161797`*^-29

In[4]:= CramerVonMisesTest[data2, \[ScriptCapitalD]]

Out[4]= 0.384889
```

---

Create a ``HypothesisTestData`` object for repeated property extraction:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 10];

In[2]:= ℋ = CramerVonMisesTest[data, Automatic, "HypothesisTestData"]

Out[2]=
HypothesisTestData[CramerVonMisesTest, {{-1.4814380493723753, -1.2732914451882245, 
   -1.0697336779689275, -0.5834556816187007, -0.3290991057518313, -0.2839094842887648, 
   0.0038058801400555104, 0.010791855763592414, 0.5928357029087279, 0.8015981788806918}, 
  "SampleToNull", 1, Rational[1, 20], {NormalDistribution[\[FormalX], \[FormalY]], Automatic}, Automatic, {}}, 
 {"Normality" -> 0, "EqualVariance" -> 0, "Symmetry" -> 0}]
```

The properties available for extraction:

```wl
In[3]:= ℋ["Properties"]

Out[3]= {"CramerVonMises", "FittedDistribution", "FittedDistributionParameters", "HypothesisTestData", "Properties", "PValue", "PValueTable", "ShortTestConclusion", "TestConclusion", "TestData", "TestDataTable", "TestEntries", "TestStatistic", "TestStatisticTable"}
```

#### Reporting (3)

Tabulate the results of the Cramér–von Mises test:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 100];

In[2]:= ℋ = CramerVonMisesTest[data, Automatic, "HypothesisTestData"];
```

The full test table:

```wl
In[3]:= ℋ["TestDataTable"]

Out[3]=
| ""                 | "Statistic" | "P‐Value" |
| :----------------- | :---------- | :-------- |
| "Cramér‐von Mises" | 0.043905    | 0.612816  |
```

A $p$-value table:

```wl
In[4]:= ℋ["PValueTable"]

Out[4]=
| ""                 | "P‐Value" |
| :----------------- | :-------- |
| "Cramér‐von Mises" | 0.612816  |
```

The test statistic:

```wl
In[5]:= ℋ["TestStatisticTable"]

Out[5]=
| ""                 | "Statistic" |
| :----------------- | :---------- |
| "Cramér‐von Mises" | 0.043905    |
```

---

Retrieve the entries from a Cramér–von Mises test table for custom reporting:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[], 100];
data2 = RandomVariate[NormalDistribution[], 100];

In[2]:= ℋ1 = CramerVonMisesTest[data1, Automatic, "TestData"]

Out[2]= {0.0278822, 0.883354}

In[3]:= ℋ2 = CramerVonMisesTest[data2, Automatic, "TestData"]

Out[3]= {0.0327414, 0.805451}
```

```wl
In[4]:= BarChart[{Labeled[ℋ1, "Set 1"], Labeled[ℋ2, "Set 2"]}, ChartLabels -> {"\[Omega]^2", "p-value"}]

Out[4]= [image]
```

---

Report test conclusions using ``"ShortTestConclusion"`` and ``"TestConclusion"``:

```wl
In[1]:= data = BlockRandom[SeedRandom[1];RandomVariate[ParetoDistribution[1.05, 2], 100]];

In[2]:= ℋ = CramerVonMisesTest[data, ParetoDistribution[1, 2], "HypothesisTestData"];

In[3]:= ℋ["ShortTestConclusion"]

Out[3]= "Reject"

In[4]:= ℋ["TestConclusion"]//TraditionalForm

Out[4]//TraditionalForm=
$$\text{The null hypothesis that the data is distributed according to the ParetoDistribution}[1,2]\text{ is rejected at the 5 percent level based on the Cramér–von Mises test.}$$
```

The conclusion may differ at a different significance level:

```wl
In[5]:= ℋ = CramerVonMisesTest[data, ParetoDistribution[1, 2], "HypothesisTestData", SignificanceLevel -> .001];

In[6]:= ℋ["ShortTestConclusion"]

Out[6]= "Do not reject"

In[7]:= ℋ["TestConclusion"]//TraditionalForm

Out[7]//TraditionalForm=
$$\text{The null hypothesis that the data is distributed according to the ParetoDistribution}[1,2]\text{ is not rejected at the 0.1 percent level based on the Cramér–von Mises test.}$$
```

### Options (3)

#### Method (3)

Use Monte Carlo-based methods or a computation formula:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 100];

In[2]:= CramerVonMisesTest[data, NormalDistribution[], Method -> "MonteCarlo"]

Out[2]= 0.314

In[3]:= CramerVonMisesTest[data, NormalDistribution[], Method -> Automatic]

Out[3]= 0.299948
```

---

Set the number of samples to use for Monte Carlo-based methods:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 100];

In[2]:= pts = Table[{i, CramerVonMisesTest[data, NormalDistribution[], Method -> {"MonteCarlo", "MonteCarloSamples" -> i}]}, {i, Range[5, 100, 5]}];
```

The Monte Carlo estimate converges to the true $p$-value with increasing samples:

```wl
In[3]:= pval = CramerVonMisesTest[data, NormalDistribution[]];

In[4]:= Show[ListLinePlot[pts, PlotRange -> {0, 1}, FrameLabel -> {"Samples", "P-Value"}, Frame -> True, AxesOrigin -> {0, 0}], Graphics[{Dashed, Line[{{0, pval}, {100, pval}}]}]]

Out[4]= [image]
```

---

Set the random seed used in Monte Carlo-based methods:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 100];

In[2]:= pts = Table[{i, CramerVonMisesTest[data, NormalDistribution[], Method -> {"MonteCarlo", "RandomSeed" -> i, "MonteCarloSamples" -> 50}]}, {i, Range[1, 10]}];
```

The seed affects the state of the generator and has some effect on the resulting $p$-value:

```wl
In[3]:= pval = CramerVonMisesTest[data, NormalDistribution[]];

In[4]:= Show[ListLinePlot[pts, PlotRange -> {Min[pts[[All, 2]]], Max[pts[[All, 2]]]}, FrameLabel -> {"Seed", "P-Value"}, Frame -> True, AxesOrigin -> {0, 0}], Graphics[{Dashed, Line[{{0, pval}, {100, pval}}]}]]

Out[4]= [image]
```

### Applications (3)

A power curve for the Cramér–von Mises test:

```wl
In[1]:= data = Table[RandomVariate[UniformDistribution[{-4, 4}], {500, i}], {i, n = {7, 10, 15, 20, 25, 30, 50}}];

In[2]:= ℋ = Table[CramerVonMisesTest[data[[i, j]], NormalDistribution[]], {i, Length[data]}, {j, Length[data[[i]]]}];

In[3]:= pC = Interpolation[Transpose[{n, Table[Probability[x ≤ 0.05, x\[Distributed]i], {i, ℋ}]}], InterpolationOrder -> 1];
```

Visualize the approximate power curve:

```wl
In[4]:= Plot[pC[x], {x, 7, 50}, PlotRange -> {0, 1}, Ticks -> {n, Automatic}, AxesOrigin -> {0, 0}]

Out[4]= [image]
```

Estimate the power of the Cramér–von Mises test when the underlying distribution is ``UniformDistribution[{-4, 4}]``, the test size is ``0.05``, and the sample size is 32:

```wl
In[5]:= pC[32.]

Out[5]= 0.982
```

---

Observations generated by a homogeneous Poisson process should exhibit complete spatial randomness, which implies that they were drawn from a uniform distribution. Determine if observations from the following images would be modeled well by a homogeneous Poisson process:

```wl
In[1]:= img1 = [image];img2 = [image];

In[2]:= m = MorphologicalComponents[Binarize[#, {0, .7}]]& /@ {img1, img2};
```

Find the centers of each point and rescale to $[0,1]$:

```wl
In[3]:= locs = ComponentMeasurements[#, "Centroid"][[All, 2]]\[Transpose]& /@ m;

In[4]:=
s1 = MapThread[Rescale[#1, {0, #2}]&, {locs[[1]], ImageDimensions[img1]}]\[Transpose];
s2 = MapThread[Rescale[#1, {0, #2}]&, {locs[[2]], ImageDimensions[img2]}]\[Transpose];
```

The points in the first image would be modeled well by a homogeneous Poisson process:

```wl
In[5]:= CramerVonMisesTest[s1, UniformDistribution[{{0, 1}, {0, 1}}], "TestDataTable"]

Out[5]=
| ""                 | "Statistic" | "P‐Value" |
| :----------------- | :---------- | :-------- |
| "Cramér‐von Mises" | 1.49282     | 0.871383  |
```

A model for the second group should account for dependence:

```wl
In[6]:= CramerVonMisesTest[s2, UniformDistribution[{{0, 1}, {0, 1}}], "TestDataTable"]

Out[6]=
| ""                 | "Statistic" | "P‐Value"  |
| :----------------- | :---------- | :--------- |
| "Cramér‐von Mises" | 0.131012    | 0.00858209 |
```

---

Find the parameters for distributions that minimize the Cramér–von Mises test statistic:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 100];

In[2]:= f[μ_ ? NumericQ, σ_ ? NumericQ, dist_] := CramerVonMisesTest[data, dist[μ, σ], "TestStatistic"]

In[3]:= par = FindMinimum[f[μ, σ, NormalDistribution], {{μ}, {σ}}]

Out[3]= {0.0173702, {μ -> 0.077949, σ -> 1.00415}}
```

Compare the results to ``FindDistributionParameters``:

```wl
In[4]:= estPar = FindDistributionParameters[data, NormalDistribution[μ, σ]]

Out[4]= {μ -> 0.0625017, σ -> 1.02514}

In[5]:=
Show[Plot3D[f[μ, σ, NormalDistribution], {μ, -3, 3}, {σ, 0, 6}, PlotStyle -> Directive[Opacity[0.5]], PlotRange -> {{-2, 2}, {0, 3}, {-10, 35}}], Graphics3D[{Red, Tube[{{μ, σ, 40}, {μ, σ, -20}} /. par[[2]], .05]}], 
	Graphics3D[{Green, Tube[{{μ, σ, 40}, {μ, σ, -20}} /. estPar, .05]}], AspectRatio -> 1]

Out[5]= [image]
```

### Properties & Relations (8)

By default, univariate data is compared to ``NormalDistribution``:

```wl
In[1]:= data = RandomVariate[NormalDistribution[2, 3], 10];

In[2]:= ℋ = CramerVonMisesTest[data, Automatic, "HypothesisTestData"];

In[3]:= ℋ["TestDataTable"]

Out[3]=
| ""                 | "Statistic" | "P‐Value" |
| :----------------- | :---------- | :-------- |
| "Cramér‐von Mises" | 0.013374    | 0.998512  |
```

The parameters have been estimated from the data:

```wl
In[4]:= ℋ["FittedDistribution"]

Out[4]= NormalDistribution[1.72295, 1.89089]
```

---

Multivariate data is compared to ``MultinormalDistribution`` by default:

```wl
In[1]:= data = RandomVariate[MultinormalDistribution[{1, 2, 3}, IdentityMatrix[3]], 1000];

In[2]:= ℋ = CramerVonMisesTest[data, Automatic, "HypothesisTestData"];

In[3]:= ℋ["TestDataTable"]

Out[3]=
| ""                 | "Statistic" | "P‐Value" |
| :----------------- | :---------- | :-------- |
| "Cramér‐von Mises" | 0.0322302   | 0.969     |

In[4]:= ℋ["FittedDistribution"]//TraditionalForm

Out[4]//TraditionalForm=
$$\text{MultinormalDistribution}\left[\{0.986002,2.01753,2.98711\},\left(
\begin{array}{ccc}
 0.995818 & 0.000784087 & -0.0111047 \\
 0.000784087 & 1.01179 & 0.0354541 \\
 -0.0111047 & 0.0354541 & 1.02612 \\
\end{array}
\right)\right]$$
```

---

The parameters of the test distribution are estimated from the data if not specified:

```wl
In[1]:= data = RandomVariate[NormalDistribution[1, 2], 1000];

In[2]:= CramerVonMisesTest[data, NormalDistribution[μ, σ], "FittedDistribution"]

Out[2]= NormalDistribution[1.07588, 1.97366]
```

Specified parameters are not estimated:

```wl
In[3]:= CramerVonMisesTest[data, NormalDistribution[μ, 2], "FittedDistribution"]

Out[3]= NormalDistribution[1.07588, 2]

In[4]:= CramerVonMisesTest[data, NormalDistribution[1, 2], "FittedDistribution"]

Out[4]= NormalDistribution[1, 2]
```

---

Maximum likelihood estimates are used for unspecified parameters of the test distribution:

```wl
In[1]:= data = RandomVariate[ExponentialDistribution[3], 10^3];

In[2]:= ℋ = CramerVonMisesTest[data, ExponentialDistribution[λ], "FittedDistribution"]

Out[2]= ExponentialDistribution[2.90822]

In[3]:= CramerVonMisesTest[data, ExponentialDistribution[λ]]

Out[3]= 0.187486
```

---

If the parameters are unknown, ``CramerVonMisesTest`` applies a correction when possible:

```wl
In[1]:= data = RandomVariate[NormalDistribution[3, 4], 10^4];

In[2]:= est = EstimatedDistribution[data, NormalDistribution[μ, σ]]

Out[2]= NormalDistribution[2.94836, 3.99893]
```

The parameters are estimated but no correction is applied:

```wl
In[3]:= CramerVonMisesTest[data, est]

Out[3]= 0.781992

In[4]:= ℋ = CramerVonMisesTest[data, NormalDistribution[μ, σ], "HypothesisTestData"];
```

The fitted distribution is the same as before and the $p$-value is corrected:

```wl
In[5]:= ℋ["FittedDistribution"]

Out[5]= NormalDistribution[2.94836, 3.99893]

In[6]:= ℋ["PValue"]

Out[6]= 0.33059
```

---

Independent marginal densities are assumed in tests for multivariate goodness of fit:

```wl
In[1]:= data = RandomVariate[MultinormalDistribution[{0, 0}, {{0.118, 0.252}, {0.252, 0.665}}], 100];

In[2]:= CramerVonMisesTest[data, MultinormalDistribution[{0, 0}, {{0.118, 0.252}, {0.252, 0.665}}], "TestStatistic"]

Out[2]= 0.389143
```

The test statistic is identical when independence is assumed:

```wl
In[3]:= CramerVonMisesTest[data, MultinormalDistribution[{0, 0}, {{0.118, 0}, {0, 0.665}}], "TestStatistic"]

Out[3]= 0.389143
```

---

The Cramér–von Mises statistic can be defined using ``NExpectation``:

```wl
In[1]:=
n = 10;
h0 = NormalDistribution[1, 2];
data = RandomVariate[h0, n];

In[2]:=
f[x_] := CDF[h0, x]
fHat[x_] := CDF[EmpiricalDistribution[data], x]

In[3]:= n NExpectation[(fHat[t] - f[t])^2, t \[Distributed] h0]

Out[3]= 0.123782

In[4]:= CramerVonMisesTest[data, h0, "TestStatistic"]

Out[4]= 0.123782
```

---

The Cramér–von Mises test works with the values only when the input is a ``TimeSeries``:

```wl
In[1]:=
ts = TemporalData[TimeSeries, {CompressedData["«1188»"], {{0, 100, 1}}, 1, {"Continuous", 1}, {"Discrete", 1}, 1, 
  {ValueDimensions -> 1, ResamplingMethod -> None}}, False, 10.1];

In[2]:= CramerVonMisesTest[ts]

Out[2]= 0.243512

In[3]:= CramerVonMisesTest[ts["Values"]]

Out[3]= 0.243512
```

### Possible Issues (3)

The Cramér–von Mises test is not intended for discrete distributions:

```wl
In[1]:= data = RandomVariate[PoissonDistribution[30], 35];

In[2]:= CramerVonMisesTest[data, PoissonDistribution[30]]
```

CramerVonMisesTest::dscrt: The p-value returned by the {CramerVonMises} test may not reflect the true size of the test for discrete test distributions.

```wl
Out[2]= 0.373681
```

The continuity correction typically does a good job of preserving the size of the test:

```wl
In[3]:= sim = RandomVariate[PoissonDistribution[30], {500, 35}];

In[4]:= p = Quiet[CramerVonMisesTest[#, PoissonDistribution[30]]]& /@ sim;

In[5]:= Show[ListLinePlot[Table[{α, Probability[pv ≤ α, pv\[Distributed]p]}, {α, .01, 1, .01}]], Plot[x, {x, 0, 1}, PlotStyle -> Dashed]]

Out[5]= [image]
```

This may not be the case in some situations:

```wl
In[6]:= sim = RandomVariate[DiscreteUniformDistribution[{1, 3}], {500, 35}];

In[7]:= p = Quiet[CramerVonMisesTest[#, DiscreteUniformDistribution[{1, 3}]]]& /@ sim;

In[8]:= Show[ListLinePlot[Table[{α, Probability[pv ≤ α, pv\[Distributed]p]}, {α, .01, 1, .01}]], Plot[x, {x, 0, 1}, PlotStyle -> Dashed]]

Out[8]= [image]
```

Use Monte Carlo methods or ``PearsonChiSquareTest`` in these cases:

```wl
In[9]:= CramerVonMisesTest[sim[[1]], DiscreteUniformDistribution[{1, 3}], Method -> "MonteCarlo"]
```

CramerVonMisesTest::dscrt: The p-value returned by the {CramerVonMises} test may not reflect the true size of the test for discrete test distributions.

```wl
Out[9]= 0.109

In[10]:= PearsonChiSquareTest[sim[[1]], DiscreteUniformDistribution[{1, 3}]]

Out[10]= 0.104649
```

---

The Cramér–von Mises test is not valid for some distributions when parameters have been estimated from the data:

```wl
In[1]:= data = RandomVariate[BetaDistribution[1, 2], 100];

In[2]:= CramerVonMisesTest[data, BetaDistribution[1, b]]
```

CramerVonMisesTest::estprm: The p-value returned by the Cramér-von Mises test is not valid for the BetaDistribution[1,b] when the parameters in {b} have been estimated from the data.

```wl
Out[2]= 0.521853
```

Provide parameter values if they are known:

```wl
In[3]:= CramerVonMisesTest[data, BetaDistribution[1, 2]]

Out[3]= 0.119484
```

Alternatively, use Monte Carlo methods to approximate the $p$-value:

```wl
In[4]:= CramerVonMisesTest[data, BetaDistribution[1, b], Method -> "MonteCarlo"]

Out[4]= 0.282
```

---

The Cramér–von Mises test must have sample sizes of at least 7 for valid $p$-values:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], 5];

In[2]:= DistributionFitTest[data, Automatic, "CramerVonMises"]
```

DistributionFitTest::htdrng: The {CramerVonMises} test is only valid for sample sizes between 7 and \[Infinity].

```wl
Out[2]= 0.967641
```

Use Monte Carlo methods to arrive at a valid $p$-value:

```wl
In[3]:= DistributionFitTest[data, Automatic, "CramerVonMises", Method -> "MonteCarlo"]
```

DistributionFitTest::htdrng: The {CramerVonMises} test is only valid for sample sizes between 7 and \[Infinity].

```wl
Out[3]= 0.994
```

### Neat Examples (1)

Compute the statistic when the null hypothesis $H_0$ is true:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], {2500, 100}];

In[2]:= T1 = CramerVonMisesTest[#, NormalDistribution[], "TestStatistic"]& /@ data;
```

The test statistic given a particular alternative:

```wl
In[3]:= T2 = CramerVonMisesTest[#, NormalDistribution[.5, 2], "TestStatistic"]& /@ data;
```

Compare the distributions of the test statistics:

```wl
In[4]:= SmoothHistogram[{T1, T2}, Filling -> Axis, PlotLegends -> {"H₀ is True", "H₀ is False"}]

Out[4]= [image]
```

## See Also

* [`HypothesisTestData`](https://reference.wolfram.com/language/ref/HypothesisTestData.en.md)
* [`AndersonDarlingTest`](https://reference.wolfram.com/language/ref/AndersonDarlingTest.en.md)
* [`KolmogorovSmirnovTest`](https://reference.wolfram.com/language/ref/KolmogorovSmirnovTest.en.md)
* [`DistributionFitTest`](https://reference.wolfram.com/language/ref/DistributionFitTest.en.md)
* [`JarqueBeraALMTest`](https://reference.wolfram.com/language/ref/JarqueBeraALMTest.en.md)
* [`KuiperTest`](https://reference.wolfram.com/language/ref/KuiperTest.en.md)
* [`MardiaCombinedTest`](https://reference.wolfram.com/language/ref/MardiaCombinedTest.en.md)
* [`MardiaKurtosisTest`](https://reference.wolfram.com/language/ref/MardiaKurtosisTest.en.md)
* [`MardiaSkewnessTest`](https://reference.wolfram.com/language/ref/MardiaSkewnessTest.en.md)
* [`PearsonChiSquareTest`](https://reference.wolfram.com/language/ref/PearsonChiSquareTest.en.md)
* [`ShapiroWilkTest`](https://reference.wolfram.com/language/ref/ShapiroWilkTest.en.md)
* [`WatsonUSquareTest`](https://reference.wolfram.com/language/ref/WatsonUSquareTest.en.md)

## Related Guides

* [Hypothesis Tests](https://reference.wolfram.com/language/guide/HypothesisTests.en.md)

## History

* [Introduced in 2010 (8.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn80.en.md)