"Multinormal" (Machine Learning Method)

Details & Suboptions

  • "Multinormal" models the probability density of a numeric space using a multivariate normal distribution as in MultinormalDistribution.
  • The probability density for a vector x is proportional to exp(-½ (x-μ)ᵀ Σ⁻¹ (x-μ)), where μ and Σ are learned parameters. If n is the size of the input numeric vector, Σ is an n×n symmetric positive definite matrix called the covariance matrix, and μ is a size-n vector (the mean).
  • The following options can be given:
  • "CovarianceType" "Full"type of constraint on the covariance matrix
    "IntrinsicDimension" Automaticeffective dimensionality of the data to assume
  • Possible settings for "CovarianceType" include:
  • "Diagonal"only diagonal elements are learned (the others are set to 0)
    "Full"all n×n elements are learned
    "Spherical"only diagonal elements are learned and are set to be equal
  • Possible settings for "IntrinsicDimension" include:
  • Automatic     try several possible dimensions
    "Heuristic"   use a heuristic based on the data
    k             use the specified dimension
  • When "CovarianceType""Full" and "IntrinsicDimension"k, with k<n, a linear dimensionality reduction is performed on the data. A full k×k covariance matrix is used to model data in the reduced space (which can be interpreted as the "signal" part), while a spherical covariance matrix is used to model the n-k remaining dimensions (which can be interpreted as the "noise" part).
  • The value of "IntrinsicDimension" is ignored when "CovarianceType""Diagonal" or "CovarianceType""Spherical".
  • Information[LearnedDistribution[…],"MethodOption"] can be used to extract the values of options chosen by the automation system.
  • LearnDistribution[…,FeatureExtractor->"Minimal"] can be used to remove most preprocessing and directly access the method (see the sketch after this list).
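A rough sketch of how these pieces fit together; the dataset and the suboption values below are illustrative, not part of the original page:

    data = RandomVariate[MultinormalDistribution[{0, 0, 0}, IdentityMatrix[3]], 200]; (* illustrative 3D data *)
    ld = LearnDistribution[data,
       Method -> {"Multinormal", "CovarianceType" -> "Full", "IntrinsicDimension" -> 2},
       FeatureExtractor -> "Minimal"];
    Information[ld, "MethodOption"]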

Examples


Basic Examples  (3)

Train a "Multinormal" distribution on a numeric dataset:

Look at the distribution Information:
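A sketch of the query on the distribution trained above:

    Information[ld]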

Obtain options information:
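Using the "MethodOption" property mentioned in the Details section:

    Information[ld, "MethodOption"]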

Obtain an option value directly:
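A sketch, assuming the suboption name can be queried directly as an Information property:

    Information[ld, "CovarianceType"]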

Compute the probability density for a new example:
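For example, at an illustrative point:

    PDF[ld, 2.5]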

Plot the PDF along with the training data:
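One way to do this (plot ranges chosen for the illustrative data above):

    Show[
      Plot[PDF[ld, x], {x, -2, 6}, PlotRange -> All],
      ListPlot[Thread[{data, 0}], PlotStyle -> Red] (* training points along the axis *)
    ]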

Generate and visualize new samples:
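A sketch using RandomVariate on the learned distribution:

    samples = RandomVariate[ld, 1000];
    Histogram[samples]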

Train a "Multinormal" distribution on a two-dimensional dataset:

Plot the PDF along with the training data:
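For example (plot ranges chosen for the illustrative data):

    Show[
      ContourPlot[PDF[ld2, {x, y}], {x, -3, 3}, {y, -3, 3}],
      ListPlot[data2, PlotStyle -> Red]
    ]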

Use SynthesizeMissingValues to impute missing values using the learned distribution:
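A sketch, assuming the form of SynthesizeMissingValues that takes a learned distribution and an example containing Missing[] values; the example values are illustrative:

    SynthesizeMissingValues[ld2, {1.2, Missing[]}]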

Train a "Multinormal" distribution on a nominal dataset:

Because of the necessary preprocessing, the PDF computation is not exact:
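For instance:

    PDF[ldn, "A"]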

Use ComputeUncertainty to obtain the uncertainty on the result:
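A sketch, assuming ComputeUncertainty is passed as an option to PDF for the learned distribution:

    PDF[ldn, "A", ComputeUncertainty -> True]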

Increase MaxIterations to improve the estimation precision:
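For example, assuming MaxIterations is passed alongside ComputeUncertainty (the iteration count here is arbitrary):

    PDF[ldn, "A", ComputeUncertainty -> True, MaxIterations -> 10000]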

Options  (2)

"CovarianceType"  (1)

Train a "Multinormal" distribution with a "Full" covariance:

Evaluate the PDF of the distribution at a specific point:
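For example:

    PDF[ld, {0.5, -0.3}]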

Visualize the PDF obtained after training a multinormal with covariance types "Full", "Diagonal" and "Spherical":
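A possible comparison, training one distribution per covariance type on the same illustrative data:

    GraphicsRow[
      Table[
        With[{d = LearnDistribution[data2, Method -> {"Multinormal", "CovarianceType" -> type}]},
          ContourPlot[PDF[d, {x, y}], {x, -3, 3}, {y, -3, 3}, PlotLabel -> type]
        ],
        {type, {"Full", "Diagonal", "Spherical"}}
      ]
    ]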

"IntrinsicDimension"  (1)

Train a "Multinormal" distribution with a "Full" covariance and an intrinsic dimension of 1:

Visualize samples generated from the distribution:
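For example:

    ListPointPlot3D[RandomVariate[ld1, 500], BoxRatios -> {1, 1, 1}]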

Compare with samples generated with an intrinsic dimension of 3:
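A sketch using the same illustrative dataset:

    ld3 = LearnDistribution[data3,
       Method -> {"Multinormal", "CovarianceType" -> "Full", "IntrinsicDimension" -> 3}];
    ListPointPlot3D[RandomVariate[ld3, 500], BoxRatios -> {1, 1, 1}]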