---
title: "ContingencyTable"
language: "en"
type: "Method"
summary: "ContingencyTable (Machine Learning Method) Method for LearnDistribution. Use a table to store the probabilities of a nominal vector for each possible outcome. A contingency table models the probability distribution of a nominal vector space by storing a probability value for each possible outcome. If the data is unidimensional, the distribution corresponds to a categorical distribution. The following options can be given: If the data contains numerical values, they are discretized. The resulting distribution is still a valid distribution in the original space. Information[LearnedDistribution[\\[Ellipsis]],MethodOption] can be used to extract the values of options chosen by the automation system. LearnDistribution[\\[Ellipsis],FeatureExtractor->Minimal] can be used to remove most preprocessing and directly access the method."
keywords: 
- Machine Learning
- Distribution Learning
- Categorical distribution
canonical_url: "https://reference.wolfram.com/language/ref/method/ContingencyTable.html"
source: "Wolfram Language Documentation"
---
# "ContingencyTable" (Machine Learning Method)

* Method for ``LearnDistribution``.

* Use a table to store the probabilities of a nominal vector for each possible outcome.

---

## Details & Suboptions

* A contingency table models the probability distribution of a nominal vector space by storing a probability value for each possible outcome.

* If the data is unidimensional, the distribution corresponds to a categorical distribution.

* The following options can be given:

| option | default value | description |
| --- | --- | --- |
| "AdditiveSmoothing" | [`Automatic`](https://reference.wolfram.com/language/ref/Automatic.en.md) | value to be added to each count |

* If the data contains numerical values, they are discretized. The resulting distribution is still a valid distribution in the original space.

* ``Information[LearnedDistribution[…], "MethodOption"]`` can be used to extract the values of options chosen by the automation system.

* ``LearnDistribution[…, FeatureExtractor -> "Minimal"]`` can be used to remove most preprocessing and directly access the method.
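For unidimensional nominal data, the table entries reduce to smoothed relative frequencies. The following is a minimal sketch of that computation (an illustration, not the internal implementation), assuming the usual additive-smoothing rule in which a pseudocount α is added to each outcome count:

```wl
(* outcome counts for the dataset {"A", "A", "B", "B", "B"} *)
counts = <|"A" -> 2, "B" -> 3|>;
alpha = 1;           (* the "AdditiveSmoothing" value *)
k = Length[counts];  (* number of distinct outcomes *)
n = Total[counts];   (* number of training examples *)
N[(# + alpha)/(n + alpha k)] & /@ counts
(* <|"A" -> 0.428571, "B" -> 0.571429|> *)
```

With α = 1, this reproduces the probabilities returned by ``PDF`` in the basic examples below.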

---

## Examples (4)

### Basic Examples (3)

Train a contingency-table distribution on a nominal dataset:

```wl
In[1]:= ld = LearnDistribution[{"A", "A", "B", "B", "B"}, Method -> "ContingencyTable"]

Out[1]=
LearnedDistribution[Association["ExampleNumber" -> 5, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "Nominal"]], 
     "Output" -> Association["f1" -> Associati ... 87, -0.5581045242273198}, 
     "LeftBoundary" -> -0.8482875978539917, "LeftScale" -> 0.04164962654198675, 
     "LeftTailNorm" -> 0.10714285714285714]], 
  "Entropy" -> Around[0.6820133850147919, 0.020306153124905864], "EntropySampleSize" -> 56]]
```

Look at the distribution ``Information``:

```wl
In[2]:= Information[ld]

Out[2]=
MachineLearning`MLInformationObject[LearnedDistribution[Association["ExampleNumber" -> 5, 
   "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
     Association["Input" -> Association["f1" -> Association["Type" -> "Nominal"]], 
      "O ... .5581287164962436}, 
      "LeftBoundary" -> -0.8478507247066664, "LeftScale" -> 0.03621212688039488, 
      "LeftTailNorm" -> 0.10294117647058823]], "Entropy" -> Around[0.6774168640087993, 
     0.018115682147101412], "EntropySampleSize" -> 68]]]
```

Obtain options information:

```wl
In[3]:= Information[ld, "MethodOption"]

Out[3]= Method -> {"ContingencyTable", "AdditiveSmoothing" -> 1}
```

Obtain an option value directly:

```wl
In[4]:= Information[ld, "AdditiveSmoothing"]

Out[4]= 1
```

Compute the probabilities for the values ``"A"`` and ``"B"``:

```wl
In[5]:= PDF[ld, "A"]

Out[5]= 0.428571

In[6]:= PDF[ld, "B"]

Out[6]= 0.571429
```

Generate new samples:

```wl
In[7]:= RandomVariate[ld, 10]

Out[7]= {"A", "B", "A", "B", "A", "B", "B", "B", "A", "B"}
```

---

Train a contingency-table distribution on a numeric dataset:

```wl
In[1]:= ld = LearnDistribution[{1.2, 2.1, 3.5, 4.3}, Method -> "ContingencyTable"]

Out[1]=
LearnedDistribution[Association["ExampleNumber" -> 4, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "Numerical"]], 
     "Output" -> Association["f1" -> Associa ... 5946821074114402, -0.5943034615151463}, 
     "LeftBoundary" -> -1.4934068319758655, "LeftScale" -> 0.0008207380195640971, 
     "LeftTailNorm" -> 0.125]], "Entropy" -> Around[1.1647162356455172, 0.0882542889133266], 
  "EntropySampleSize" -> 24]]
```

Look at the distribution ``Information``:

```wl
In[2]:= Information[ld]

Out[2]=
MachineLearning`MLInformationObject[LearnedDistribution[Association["ExampleNumber" -> 4, 
   "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
     Association["Input" -> Association["f1" -> Association["Type" -> "Numerical"]], 
       ... 821074114402, -0.5943034615151463}, 
      "LeftBoundary" -> -1.4934068319758655, "LeftScale" -> 0.0008207380195640971, 
      "LeftTailNorm" -> 0.125]], "Entropy" -> Around[1.1647162356455172, 0.0882542889133266], 
   "EntropySampleSize" -> 24]]]
```

Compute the probability density for a new example:

```wl
In[3]:= PDF[ld, 1.3]

Out[3]= 0.550055
```

Plot the PDF along with the training data:

```wl
In[4]:= Show[Plot[PDF[ld, x], {x, -2, 8}, Filling -> Bottom], NumberLinePlot[{1.2, 2.1, 3.5, 4.3}, Spacings -> 0, PlotStyle -> Red]]

Out[4]= [image]
```

Generate and visualize new samples:

```wl
In[5]:= Histogram[RandomVariate[ld, 10000], 50]

Out[5]= [image]
```

---

Train a contingency-table distribution on a two-dimensional dataset:

```wl
In[1]:= iris = ExampleData[{"MachineLearning", "FisherIris"}, "Data"][[All, 1, {1, 3}]];

In[2]:= ld = LearnDistribution[iris, Method -> "ContingencyTable"]

Out[2]=
LearnedDistribution[Association["ExampleNumber" -> 150, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "NumericalVector", 
         "Length" -> 2]], "Output" ->  ... -> CompressedData["«620»"], 
     "LeftBoundary" -> -2.5125888451725027, "LeftScale" -> 0.408908418811996, 
     "LeftTailNorm" -> 0.02072538860103627]], 
  "Entropy" -> Around[2.268940957994351, 0.042260501401623164], "EntropySampleSize" -> 772]]
```

Plot the PDF along with the training data:

```wl
In[3]:= Show[ContourPlot[PDF[ld, {x, y}], {x, 4, 8}, {y, 1, 7}, PlotRange -> All, Contours -> 10], ListPlot[iris, PlotStyle -> Red]]

Out[3]= [image]
```

Use ``SynthesizeMissingValues`` to impute missing values using the learned distribution:

```wl
In[4]:= SynthesizeMissingValues[ld, {5.5, Missing[]}]

Out[4]= {5.5, 4.39872}

In[5]:= Histogram[Table[Last@SynthesizeMissingValues[ld, {5.5, Missing[]}], 1000]]

Out[5]= [image]
```
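A ``LearnedDistribution`` supports the other standard distribution operations as well; for instance, ``RarerProbability`` (listed in See Also) returns the probability of generating an example with a lower PDF value than the given one, which can serve as an anomaly score. An illustrative call (the output depends on the trained distribution):

```wl
RarerProbability[ld, {5.5, 4.4}]
```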

### Options (1)

#### "AdditiveSmoothing" (1)

Train a contingency-table distribution on a nominal dataset without any smoothing:

```wl
In[1]:= ld = LearnDistribution[{"A", "A", "B", "B", "B"}, Method -> {"ContingencyTable", "AdditiveSmoothing" -> 0}]

Out[1]=
LearnedDistribution[Association["ExampleNumber" -> 5, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "Nominal"]], 
     "Output" -> Association["f1" -> Associati ... 4, 
    "ProcessorType" -> "x86-64", "OperatingSystem" -> "MacOSX", "SystemWordLength" -> 64, 
    "Evaluations" -> {}], "LogPDFDistribution" -> Missing[], 
  "Entropy" -> Around[0.7072859632994796, 0.11770779776346137], "EntropySampleSize" -> 3]]
```

Compute the probabilities for the values ``"A"`` and ``"B"``:

```wl
In[2]:= PDF[ld, {"A", "B"}]

Out[2]= {0.4, 0.6}
```

Compare with the probabilities obtained after adding 1 and 10 counts to each outcome:

```wl
In[3]:= ld = LearnDistribution[{"A", "A", "B", "B", "B"}, Method -> {"ContingencyTable", "AdditiveSmoothing" -> 1}]

Out[3]=
LearnedDistribution[Association["ExampleNumber" -> 5, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "Nominal"]], 
     "Output" -> Association["f1" -> Associati ... 4, 
    "ProcessorType" -> "x86-64", "OperatingSystem" -> "MacOSX", "SystemWordLength" -> 64, 
    "Evaluations" -> {}], "LogPDFDistribution" -> Missing[], 
  "Entropy" -> Around[0.7003970336970156, 0.08281274035887244], "EntropySampleSize" -> 3]]

In[4]:= PDF[ld, {"A", "B"}]

Out[4]= {0.428571, 0.571429}

In[5]:= ld = LearnDistribution[{"A", "A", "B", "B", "B"}, Method -> {"ContingencyTable", "AdditiveSmoothing" -> 10}]

Out[5]=
LearnedDistribution[Association["ExampleNumber" -> 5, 
  "Preprocessor" -> MachineLearning`MLProcessor["ToMLDataset", 
    Association["Input" -> Association["f1" -> Association["Type" -> "Nominal"]], 
     "Output" -> Association["f1" -> Associati ... , 
    "ProcessorType" -> "x86-64", "OperatingSystem" -> "MacOSX", "SystemWordLength" -> 64, 
    "Evaluations" -> {}], "LogPDFDistribution" -> Missing[], 
  "Entropy" -> Around[0.7204235231045195, 0.013644591495560496], "EntropySampleSize" -> 3]]

In[6]:= PDF[ld, {"A", "B"}]

Out[6]= {0.48, 0.52}
```
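The probabilities above can be reproduced directly from the outcome counts; as the smoothing value grows, they are pulled toward the uniform distribution. A sketch, assuming the additive-smoothing rule (count + α)/(n + kα):

```wl
counts = {2, 3};  (* occurrences of "A" and "B" in the training data *)
smoothed[alpha_] := N[(counts + alpha)/(Total[counts] + alpha Length[counts])];
smoothed /@ {0, 1, 10}
(* {{0.4, 0.6}, {0.428571, 0.571429}, {0.48, 0.52}} *)
```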

## See Also

* [`LearnDistribution`](https://reference.wolfram.com/language/ref/LearnDistribution.en.md)
* [`LearnedDistribution`](https://reference.wolfram.com/language/ref/LearnedDistribution.en.md)
* [`AnomalyDetection`](https://reference.wolfram.com/language/ref/AnomalyDetection.en.md)
* [`SynthesizeMissingValues`](https://reference.wolfram.com/language/ref/SynthesizeMissingValues.en.md)
* [`RandomVariate`](https://reference.wolfram.com/language/ref/RandomVariate.en.md)
* [`PDF`](https://reference.wolfram.com/language/ref/PDF.en.md)
* [`RarerProbability`](https://reference.wolfram.com/language/ref/RarerProbability.en.md)
* [`DecisionTree`](https://reference.wolfram.com/language/ref/method/DecisionTree.en.md)
* [`KernelDensityEstimation`](https://reference.wolfram.com/language/ref/method/KernelDensityEstimation.en.md)
* [`NaiveBayes`](https://reference.wolfram.com/language/ref/method/NaiveBayes.en.md)

## Related Links

* [An Elementary Introduction to the Wolfram Language: Machine Learning](https://www.wolfram.com/language/elementary-introduction/22-machine-learning.html)

## History

* [Introduced in 2019 (12.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn120.en.md)