Predict

Predict[{in1out1, in2out2, …}]
  generates a PredictorFunction[…] that attempts to predict outi from the example ini.
Predict[data, input]
  attempts to predict the output associated with input from the training examples given.
Details and Options




- Predict is used to model the relationship between a scalar variable and examples of many data types, including numerical, textual, sound and image data.
- This type of modeling, also known as regression analysis, is typically used for tasks like customer behavior analysis, healthcare outcome prediction, credit risk assessment and more.
- Complex expressions are automatically converted to simpler features like numbers or classes.
- The final model type and hyperparameter values are selected using cross-validation on the training data.
- The training data can have the following structure:
  {in1out1, in2out2, …}   a list of rules between inputs and outputs
  {in1, in2, …}{out1, out2, …}   a rule between a list of inputs and the corresponding outputs
  {list1, list2, …}n   the nth element of each List taken as the output
  {assoc1, assoc2, …}"key"   the "key" element of each Association taken as the output
  Dataset[…]column   the specified column of the Dataset taken as the output
  Tabular[…]column   the specified column of the Tabular taken as the output
- In addition, special forms of data include:
  "name"   a built-in prediction function
  FittedModel[…]   a fitted model converted into a PredictorFunction[…]
- Each example input ini can be a single data element, a list {feature1, …} or an association <|"feature1"value1, …|>.
- Each example output outi must be a numerical value.
- The prediction properties prop are the same as in PredictorFunction. They include:
  "Decision"   best prediction according to distribution and utility function
  "Distribution"   distribution of value conditioned on input
  "SHAPValues"   Shapley additive feature explanations for each example
  "SHAPValues"n   SHAP explanations using n samples
  "Properties"   list of all properties available
- "SHAPValues" assesses the contribution of features by comparing predictions with different sets of features removed and then synthesized. The option MissingValueSynthesis can be used to specify how the missing features are synthesized. SHAP explanations are given as deviations from the training output mean.
- Examples of built-in predictor functions include:
  "NameAge"   age of a person, given their first name
- The following options can be given:
  AnomalyDetector   None   anomaly detector used by the predictor
  AcceptanceThreshold   Automatic   rarer-probability threshold for the anomaly detector
  FeatureExtractor   Identity   how to extract features from which to learn
  FeatureNames   Automatic   feature names to assign to the input data
  FeatureTypes   Automatic   feature types to assume for the input data
  IndeterminateThreshold   0   the probability density below which to return Indeterminate
  Method   Automatic   which regression algorithm to use
  MissingValueSynthesis   Automatic   how to synthesize missing values
  PerformanceGoal   Automatic   aspects of performance to try to optimize
  RecalibrationFunction   Automatic   how to post-process the predicted value
  RandomSeeding   1234   how pseudorandom generators should be seeded internally
  TargetDevice   "CPU"   the target device on which to perform training
  TimeGoal   Automatic   how long to spend training the predictor
  TrainingProgressReporting   Automatic   how to report progress during training
  UtilityFunction   Automatic   utility as a function of the actual and predicted values
  ValidationSet   Automatic   data on which to validate the generated model
- Using FeatureExtractor"Minimal" indicates that the internal preprocessing should be as simple as possible.
- Possible settings for Method include:
  "DecisionTree"   predict using a decision tree
  "GradientBoostedTrees"   predict using an ensemble of trees trained with gradient boosting
  "LinearRegression"   predict from linear combinations of features
  "NearestNeighbors"   predict from nearest neighboring examples
  "NeuralNetwork"   predict using an artificial neural network
  "RandomForest"   predict from Breiman–Cutler ensembles of decision trees
  "GaussianProcess"   predict using a Gaussian process prior over functions
- Possible settings for PerformanceGoal include:
  "DirectTraining"   train directly on the full dataset, without model searching
  "Memory"   minimize the storage requirements of the predictor
  "Quality"   maximize the accuracy of the predictor
  "Speed"   maximize the speed of the predictor
  "TrainingSpeed"   minimize the time spent producing the predictor
  Automatic   an automatic tradeoff among speed, accuracy and memory
  {goal1, goal2, …}   automatically combine goal1, goal2, etc.
- The following settings for TrainingProgressReporting can be used:
  "Panel"   show a dynamically updating graphical panel
  "Print"   periodically report information using Print
  "ProgressIndicator"   show a simple ProgressIndicator
  "SimplePanel"   dynamically updating panel without learning curves
  None   do not report any information
- Information can be used on the PredictorFunction[…] obtained.



Examples
Basic Examples (2)
Summary of the most common use cases
Learn to predict the third column of a matrix using the features in the first two columns:

https://wolfram.com/xid/0ftugwn-b9dzea
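The linked cell is not reproduced here; a minimal sketch with made-up data illustrates the call, using the {list1, list2, …}n training format:

```wolfram
(* illustrative matrix; the third column is taken as the output *)
data = {{1, 4, 2.1}, {2, 5, 3.9}, {3, 6, 6.0}, {4, 7, 8.1}};
p = Predict[data -> 3]
```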

Predict the value of a new example, given its features:

https://wolfram.com/xid/0ftugwn-53n8

Predict the value of a new example that has a missing feature:

https://wolfram.com/xid/0ftugwn-h5bf4p

Predict the values of multiple examples at the same time:

https://wolfram.com/xid/0ftugwn-fdrke7

Train a linear regression on a set of examples:

https://wolfram.com/xid/0ftugwn-urg4gr

Get the conditional distribution of the predicted value, given the example feature:

https://wolfram.com/xid/0ftugwn-qj2n2y

Plot the probability density of the distribution:

https://wolfram.com/xid/0ftugwn-6q4emg

Plot the prediction with a confidence band together with the training data:

https://wolfram.com/xid/0ftugwn-q7b6fs

Scope (23)
Survey of the scope of standard use cases
Data Format (7)
Specify the training set as a list of rules between input examples and output values:

https://wolfram.com/xid/0ftugwn-n3lf9z
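A minimal sketch of this format (the data below is made up for illustration):

```wolfram
p = Predict[{1.0 -> 1.2, 1.9 -> 2.1, 3.1 -> 2.9, 4.0 -> 4.2}];
p[2.5]  (* predict the output for a new input *)
```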

Each example can contain a list of features:

https://wolfram.com/xid/0ftugwn-5i899g

Each example can contain an association of features:

https://wolfram.com/xid/0ftugwn-250cgd

Specify the training set as a rule between a list of inputs and a list of outputs:

https://wolfram.com/xid/0ftugwn-rqhpke

Specify all the data in a matrix and mark the output column:

https://wolfram.com/xid/0ftugwn-5kyd9k

Specify all the data in a list of associations and mark the output key:

https://wolfram.com/xid/0ftugwn-ihybis

Specify all the data in a dataset and mark the output column:

https://wolfram.com/xid/0ftugwn-7tfkv4

Data Types (12)
Numerical (3)
Predict a variable from a number:

https://wolfram.com/xid/0ftugwn-zhyqm6

Predict a variable from a numerical vector:

https://wolfram.com/xid/0ftugwn-phdgu5

Predict a variable from a numerical array of arbitrary depth:

https://wolfram.com/xid/0ftugwn-tdhm4r

Nominal (3)
Predict a variable from a nominal value:

https://wolfram.com/xid/0ftugwn-btibju

Predict a variable from several nominal values:

https://wolfram.com/xid/0ftugwn-9ljf1l


https://wolfram.com/xid/0ftugwn-qc5ur1

Predict a variable from a mixture of nominal and numerical values:

https://wolfram.com/xid/0ftugwn-23kul


https://wolfram.com/xid/0ftugwn-e4z5l2

Quantities (1)
Train a predictor on data including Quantity objects:

https://wolfram.com/xid/0ftugwn-beyyeu

Use the predictor on a new example:

https://wolfram.com/xid/0ftugwn-ljemp1

Predict the most likely price when only the "Neighborhood" is known:

https://wolfram.com/xid/0ftugwn-f6pdpt

Colors (1)
Images (1)
Sequences (1)
Missing Data (2)
Train on a dataset containing missing features:

https://wolfram.com/xid/0ftugwn-4isdtm

Train a predictor on a dataset with named features. The order of the keys does not matter. Keys can be missing:

https://wolfram.com/xid/0ftugwn-qqov9n

Predict examples containing missing features:

https://wolfram.com/xid/0ftugwn-hcil3x
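A sketch of this behavior with invented feature names (keys may be absent from individual examples and from queries):

```wolfram
p = Predict[{<|"x" -> 1, "y" -> 10|> -> 1., <|"x" -> 2|> -> 2., <|"x" -> 3, "y" -> 30|> -> 3.}];
p[<|"x" -> 2|>]  (* the missing "y" feature is synthesized automatically *)
```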

Information (4)
Extract information from a trained predictor:

https://wolfram.com/xid/0ftugwn-rwllrs
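As a sketch (the training data is illustrative, and the "Method" property name is the one documented for PredictorFunction):

```wolfram
p = Predict[{1 -> 1.1, 2 -> 1.9, 3 -> 3.2}];
Information[p]            (* summary of the trained predictor *)
Information[p, "Method"]  (* a single property, e.g. the chosen method *)
```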

Get information about the input features:

https://wolfram.com/xid/0ftugwn-zqh27x

Get the feature extractor used to process the input features:

https://wolfram.com/xid/0ftugwn-ly2pte

Get a list of the supported properties:

https://wolfram.com/xid/0ftugwn-m3ykix

Options (23)
Common values & functionality for each option
AcceptanceThreshold (1)
Create a predictor with an anomaly detector:

https://wolfram.com/xid/0ftugwn-xbvaeq

Change the value of the acceptance threshold when evaluating the predictor:

https://wolfram.com/xid/0ftugwn-rmozzx


https://wolfram.com/xid/0ftugwn-v8fyvh

Permanently change the value of the acceptance threshold in the predictor:

https://wolfram.com/xid/0ftugwn-5oakn


https://wolfram.com/xid/0ftugwn-oi7ss1

AnomalyDetector (1)
Create a predictor and specify that an anomaly detector should be included:

https://wolfram.com/xid/0ftugwn-qxo89p

Evaluate the predictor on a non-anomalous input:

https://wolfram.com/xid/0ftugwn-0j6dyn

Evaluate the predictor on an anomalous input:

https://wolfram.com/xid/0ftugwn-1h9pb9

The "Distribution" property is not affected by the anomaly detector:

https://wolfram.com/xid/0ftugwn-0f8nbd

Temporarily remove the anomaly detector from the predictor:

https://wolfram.com/xid/0ftugwn-8hvu12

Permanently remove the anomaly detector from the predictor:

https://wolfram.com/xid/0ftugwn-ro1s8l


https://wolfram.com/xid/0ftugwn-7pl50v

FeatureExtractor (2)
Generate a predictor function using FeatureExtractor to preprocess the data using a custom function:

https://wolfram.com/xid/0ftugwn-exe7l1

https://wolfram.com/xid/0ftugwn-tt0p1a

Add the "StandardizedVector" method to the preprocessing pipeline:

https://wolfram.com/xid/0ftugwn-txhwa

Use the predictor on new data:

https://wolfram.com/xid/0ftugwn-jf77m5

Create a feature extractor and extract features from a dataset:

https://wolfram.com/xid/0ftugwn-nh9mjj

Train a predictor on the extracted features:

https://wolfram.com/xid/0ftugwn-mh79x5

Join the feature extractor to the predictor:

https://wolfram.com/xid/0ftugwn-wqrrwd

The predictor can now be used on the initial input type:

https://wolfram.com/xid/0ftugwn-7kos20

FeatureNames (2)
Train a predictor and give a name to each feature:

https://wolfram.com/xid/0ftugwn-48k90i

Use the association format to predict a new example:

https://wolfram.com/xid/0ftugwn-f3cwzr

The list format can still be used:

https://wolfram.com/xid/0ftugwn-6665m0
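The three linked cells can be sketched together as follows (feature names and data are illustrative):

```wolfram
p = Predict[{{1, 5} -> 2., {2, 6} -> 4., {3, 7} -> 6.}, FeatureNames -> {"age", "size"}];
p[<|"age" -> 2, "size" -> 6|>]  (* association format *)
p[{2, 6}]                       (* the list format still works *)
```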

Train a predictor on a training set with named features and use FeatureNames to set their order:

https://wolfram.com/xid/0ftugwn-ida89d

Features are ordered as specified:

https://wolfram.com/xid/0ftugwn-mo755h

Predict a new example from a list:

https://wolfram.com/xid/0ftugwn-3pb471

FeatureTypes (2)
Train a predictor on textual and nominal data:

https://wolfram.com/xid/0ftugwn-cpvxh2

https://wolfram.com/xid/0ftugwn-tl4eix

The first feature has been wrongly interpreted as a nominal feature:

https://wolfram.com/xid/0ftugwn-77hwe0

Specify that the first feature should be considered textual:

https://wolfram.com/xid/0ftugwn-dyugcs


https://wolfram.com/xid/0ftugwn-z5h6yj


https://wolfram.com/xid/0ftugwn-qojwog

Train a predictor with named features:

https://wolfram.com/xid/0ftugwn-dbyvuq

https://wolfram.com/xid/0ftugwn-3dzzwj

Both features have been considered numerical:

https://wolfram.com/xid/0ftugwn-g00qh3

Specify that the feature "gender" should be considered nominal:

https://wolfram.com/xid/0ftugwn-3z3lut


https://wolfram.com/xid/0ftugwn-8tgdi

IndeterminateThreshold (1)
Specify a probability density threshold when training the predictor:

https://wolfram.com/xid/0ftugwn-xir5ql

Visualize the probability density for a given example:

https://wolfram.com/xid/0ftugwn-d9c02p


https://wolfram.com/xid/0ftugwn-jqvr

As no value has a probability density above 0.5, no prediction is made:

https://wolfram.com/xid/0ftugwn-e1alw5

Specifying a threshold when predicting supersedes the trained threshold:

https://wolfram.com/xid/0ftugwn-3xbgli

Update the value of the threshold in the predictor:

https://wolfram.com/xid/0ftugwn-ts9t4j


https://wolfram.com/xid/0ftugwn-g2ypu5
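A condensed sketch of the threshold behavior described above (illustrative data; per the text, a threshold given at prediction time supersedes the trained one):

```wolfram
p = Predict[{1 -> 1., 2 -> 2., 3 -> 3.}, IndeterminateThreshold -> 0.5];
p[2.5]                                (* predicted value, if the density is high enough *)
p[2.5, IndeterminateThreshold -> 10]  (* such a high threshold forces Indeterminate *)
```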

Method (4)

https://wolfram.com/xid/0ftugwn-d8jun1

https://wolfram.com/xid/0ftugwn-9j29zn

Train a nearest-neighbors predictor:

https://wolfram.com/xid/0ftugwn-wyev5w
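A minimal sketch of selecting this method explicitly (data invented for illustration):

```wolfram
pnn = Predict[{1 -> 1., 2 -> 2., 3 -> 3., 4 -> 4.}, Method -> "NearestNeighbors"];
pnn[2.5]
```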

Plot the predicted value as a function of the feature for both predictors:

https://wolfram.com/xid/0ftugwn-h0h59a

Train a random forest predictor:

https://wolfram.com/xid/0ftugwn-2rcb2u

https://wolfram.com/xid/0ftugwn-hruxbw

Find the standard deviation of the residuals on a test set:

https://wolfram.com/xid/0ftugwn-bv2blv

https://wolfram.com/xid/0ftugwn-e0jek4

In this example, using a linear regression predictor increases the standard deviation of the residuals:

https://wolfram.com/xid/0ftugwn-h9y3dv


https://wolfram.com/xid/0ftugwn-btdg1t

However, using a linear regression predictor reduces the training time:

https://wolfram.com/xid/0ftugwn-d8ix2a

Train a linear regression, neural network, and Gaussian process predictor:

https://wolfram.com/xid/0ftugwn-c0p8r5

https://wolfram.com/xid/0ftugwn-d2nzu

These methods produce smooth predictors:

https://wolfram.com/xid/0ftugwn-fn8xqh

Train a random forest and nearest-neighbor predictor:

https://wolfram.com/xid/0ftugwn-glulwk

These methods produce non-smooth predictors:

https://wolfram.com/xid/0ftugwn-bhmrux

Train a neural network, a random forest, and a Gaussian process predictor:

https://wolfram.com/xid/0ftugwn-fi8wke

https://wolfram.com/xid/0ftugwn-6pcffu

The Gaussian process predictor is smooth and handles small datasets well:

https://wolfram.com/xid/0ftugwn-vg7xd6

MissingValueSynthesis (1)
Train a predictor with two input features:

https://wolfram.com/xid/0ftugwn-uarmq6

Get the prediction for an example that has a missing value:

https://wolfram.com/xid/0ftugwn-00e8xt
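A sketch of the default behavior (illustrative data; Missing[] marks the absent feature):

```wolfram
p = Predict[{{1, 2} -> 3., {2, 3} -> 5., {3, 4} -> 7.}];
p[{2, Missing[]}]  (* the missing second feature is synthesized before predicting *)
```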

Set the missing value synthesis to replace each missing variable with its estimated most likely value given known values (which is the default behavior):

https://wolfram.com/xid/0ftugwn-e03411

Replace missing variables with random samples conditioned on known values:

https://wolfram.com/xid/0ftugwn-od0kq

Averaging over many random imputations is usually the best strategy, and it also yields an estimate of the uncertainty caused by the imputation:

https://wolfram.com/xid/0ftugwn-fqeata

Specify a learning method during training to control how the distribution of data is learned:

https://wolfram.com/xid/0ftugwn-lyqv3u

Predict an example with missing values using the "KernelDensityEstimation" distribution to condition values:

https://wolfram.com/xid/0ftugwn-hwf88u

Provide an existing LearnedDistribution at training to use it when imputing missing values during training and later evaluations:

https://wolfram.com/xid/0ftugwn-8ni5d


Specify an existing LearnedDistribution to synthesize missing values for an individual evaluation:

https://wolfram.com/xid/0ftugwn-z7m9zi

Control both the learning method and the evaluation strategy by passing an association at training:

https://wolfram.com/xid/0ftugwn-ktalq9

PerformanceGoal (1)
Train a predictor with an emphasis on training speed:

https://wolfram.com/xid/0ftugwn-9reyva

https://wolfram.com/xid/0ftugwn-hq956d


https://wolfram.com/xid/0ftugwn-5pj95i

Find the standard deviation of the residuals on a test set:

https://wolfram.com/xid/0ftugwn-vgczcx

https://wolfram.com/xid/0ftugwn-cqph3r

By default, a compromise between prediction speed and performance is sought:

https://wolfram.com/xid/0ftugwn-2udre2


https://wolfram.com/xid/0ftugwn-uqqrve


https://wolfram.com/xid/0ftugwn-gdh7l0

With the same data, train a predictor with an emphasis on training speed and memory:

https://wolfram.com/xid/0ftugwn-dctcxo

The predictor uses less memory, but is also less accurate:

https://wolfram.com/xid/0ftugwn-wk9ft5


https://wolfram.com/xid/0ftugwn-pq5t0t

RecalibrationFunction (1)
Load the Boston Homes dataset:

https://wolfram.com/xid/0ftugwn-w7tgbo
Train a predictor with model calibration:

https://wolfram.com/xid/0ftugwn-zudagx

Visualize the comparison plot on a test set:

https://wolfram.com/xid/0ftugwn-4mjeut

Remove the recalibration function from the predictor:

https://wolfram.com/xid/0ftugwn-hdakmq

Visualize the new comparison plot:

https://wolfram.com/xid/0ftugwn-8q81c8

TargetDevice (1)
Train a predictor on the system's default GPU using a neural network and look at the AbsoluteTiming:

https://wolfram.com/xid/0ftugwn-gnlysk
Compare the previous result with the one achieved by using the default CPU computation:

https://wolfram.com/xid/0ftugwn-lm8aa7
TimeGoal (2)
Train a predictor while specifying a total training time of 3 seconds:

https://wolfram.com/xid/0ftugwn-idsajs
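A minimal sketch of the option (illustrative data; a bare number is interpreted as seconds):

```wolfram
p = Predict[{1 -> 1., 2 -> 2., 3 -> 3., 4 -> 4.}, TimeGoal -> 3]  (* spend about 3 seconds training *)
```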


https://wolfram.com/xid/0ftugwn-c4k8t4

Load the "BostonHomes" dataset:

https://wolfram.com/xid/0ftugwn-kiebhl
Train a predictor while specifying a target training time of 0.1 seconds:

https://wolfram.com/xid/0ftugwn-ef9kd

The predictor reached a standard deviation of about 3.2:

https://wolfram.com/xid/0ftugwn-vomv25

Train a predictor while specifying a target training time of 5 seconds:

https://wolfram.com/xid/0ftugwn-eez9o1

The standard deviation of the predictor is now around 2.7:

https://wolfram.com/xid/0ftugwn-7ug6po

TrainingProgressReporting (1)
Load the "WineQuality" dataset:

https://wolfram.com/xid/0ftugwn-pac3cx
Show training progress interactively during training of a predictor:

https://wolfram.com/xid/0ftugwn-qc60eq
Show training progress interactively without plots:

https://wolfram.com/xid/0ftugwn-qt9tmk
Print training progress periodically during training:

https://wolfram.com/xid/0ftugwn-me00oh
Show a simple progress indicator:

https://wolfram.com/xid/0ftugwn-jyd4wa

https://wolfram.com/xid/0ftugwn-uhwsza
UtilityFunction (2)

https://wolfram.com/xid/0ftugwn-d5e35w

https://wolfram.com/xid/0ftugwn-fg10uw

Visualize the probability density for a given example:

https://wolfram.com/xid/0ftugwn-jhwx3g


https://wolfram.com/xid/0ftugwn-h525og

By default, the value with the highest probability density is predicted:

https://wolfram.com/xid/0ftugwn-fu4psv

This corresponds to a Dirac delta utility function:

https://wolfram.com/xid/0ftugwn-bceaom

Define a utility function that penalizes the predicted value's being smaller than the actual value:

https://wolfram.com/xid/0ftugwn-0p1sgu
Plot this function for a given actual value:

https://wolfram.com/xid/0ftugwn-ygfxmi
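A hypothetical utility of this shape can be sketched as follows; the function name and penalty factor are invented, and the arguments follow the documented order (actual value first, then predicted value):

```wolfram
(* underprediction is penalized twice as much as overprediction *)
u[actual_, predicted_] := If[predicted < actual, -2 (actual - predicted), -(predicted - actual)];
Plot[u[2, x], {x, 0, 4}]  (* utility around an actual value of 2 *)
```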

Train a predictor with this utility function:

https://wolfram.com/xid/0ftugwn-3onkk

The predictor decision is now changed despite the probability density's being unchanged:

https://wolfram.com/xid/0ftugwn-eeu5o8


https://wolfram.com/xid/0ftugwn-hu9x7e

Specifying a utility function when predicting supersedes the utility function specified at training:

https://wolfram.com/xid/0ftugwn-bcjxxn


https://wolfram.com/xid/0ftugwn-zf4loh


https://wolfram.com/xid/0ftugwn-dwqxyx

Visualize the distribution of age for the name "Claire" with the built-in predictor "NameAge":

https://wolfram.com/xid/0ftugwn-5timav


https://wolfram.com/xid/0ftugwn-xk6le0

The most likely value of this distribution is the following:

https://wolfram.com/xid/0ftugwn-za4le1
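A sketch of this built-in predictor usage (the name "Claire" follows the example above):

```wolfram
p = Predict["NameAge"];
p["Claire"]                  (* most likely age *)
p["Claire", "Distribution"]  (* full conditional distribution *)
```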

Change the utility function to predict the mean value instead of the most likely value:

https://wolfram.com/xid/0ftugwn-ec1roa

ValidationSet (1)
Train a linear regression predictor on the "WineQuality" data:

https://wolfram.com/xid/0ftugwn-cf6lt

https://wolfram.com/xid/0ftugwn-t2s4e8

Obtain the L2 regularization coefficient of the trained predictor:

https://wolfram.com/xid/0ftugwn-g8bxn1


https://wolfram.com/xid/0ftugwn-b4hon9

https://wolfram.com/xid/0ftugwn-9ptx3u

A different L2 regularization coefficient has been selected:

https://wolfram.com/xid/0ftugwn-dy1ept

Applications (6)
Sample problems that can be solved with this function
Basic Linear Regression (1)
Train a predictor that predicts the median value of properties in a neighborhood of Boston, given some features of the neighborhood:

https://wolfram.com/xid/0ftugwn-cs8a11

Generate a PredictorMeasurementsObject to analyze the performance of the predictor on a test set:

https://wolfram.com/xid/0ftugwn-fiomr4

Visualize a scatter plot of the values of the test set as a function of the predicted values:

https://wolfram.com/xid/0ftugwn-nf8vw6

Compute the root mean square of the residuals:

https://wolfram.com/xid/0ftugwn-rbo2u1

Weather Analysis (1)
Load a dataset of the average monthly temperature as a function of the city, the year, and the month:

https://wolfram.com/xid/0ftugwn-38fkv
Visualize a sample of the dataset:

https://wolfram.com/xid/0ftugwn-lw5ujc

Train a linear predictor on the dataset:

https://wolfram.com/xid/0ftugwn-oinmn3

Plot the predicted temperature distribution of the city "Lincoln" in 2020 for different months:

https://wolfram.com/xid/0ftugwn-826pqo

For every month, plot the predicted temperature and its error bar (standard deviation):

https://wolfram.com/xid/0ftugwn-iueu6k

https://wolfram.com/xid/0ftugwn-omfr74

https://wolfram.com/xid/0ftugwn-p6kay7

Quality Assessment (1)
Load a dataset of wine quality as a function of the wines' physical properties:

https://wolfram.com/xid/0ftugwn-nfgpho

https://wolfram.com/xid/0ftugwn-hwj5w4

Get a description of the variables in the dataset:

https://wolfram.com/xid/0ftugwn-pt87r

Visualize the distribution of the "alcohol" and "pH" variables:

https://wolfram.com/xid/0ftugwn-9geimf

Train a predictor on the training set:

https://wolfram.com/xid/0ftugwn-1jd9tq

Predict the quality of an unknown wine:

https://wolfram.com/xid/0ftugwn-pau4wm

https://wolfram.com/xid/0ftugwn-jkvhit

Create a function that predicts the quality of the unknown wine as a function of its pH and alcohol level:

https://wolfram.com/xid/0ftugwn-h7rybq
Plot this function to get a hint about how to improve this wine:

https://wolfram.com/xid/0ftugwn-e2ux40

Interpretable Machine Learning (1)
Load a dataset of wine quality as a function of the wines' physical properties:

https://wolfram.com/xid/0ftugwn-0262ca
Train a predictor to estimate wine quality:

https://wolfram.com/xid/0ftugwn-ytl4j2


https://wolfram.com/xid/0ftugwn-8yv81b

Predict the example bottle's quality:

https://wolfram.com/xid/0ftugwn-rfe9mh

Calculate how much higher or lower this bottle's predicted quality is than the mean:

https://wolfram.com/xid/0ftugwn-lqdk3d

Get an estimation for how much each feature impacted the predictor's output for this bottle:

https://wolfram.com/xid/0ftugwn-m7t0n8
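A self-contained sketch of requesting SHAP explanations (invented features, not the wine data; "SHAPValues" is the prediction property listed in the Details section):

```wolfram
p = Predict[{<|"a" -> 1, "b" -> 0|> -> 1., <|"a" -> 2, "b" -> 1|> -> 2., <|"a" -> 3, "b" -> 0|> -> 3.}];
p[<|"a" -> 2, "b" -> 0|>, "SHAPValues"]  (* per-feature deviations from the training output mean *)
```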

Visualize these feature impacts:

https://wolfram.com/xid/0ftugwn-7d1qat

Confirm that the Shapley values fully explain the predicted quality:

https://wolfram.com/xid/0ftugwn-08373t


Learn a distribution of the data that treats each feature as independent:

https://wolfram.com/xid/0ftugwn-ddmot3

Estimate SHAP value feature importance for 100 bottles of wine, using 5 samples for each estimation:

https://wolfram.com/xid/0ftugwn-euwga

Calculate how important each feature is to the model:

https://wolfram.com/xid/0ftugwn-otiw08

Visualize the model's feature importance:

https://wolfram.com/xid/0ftugwn-wsqujs

Visualize a nonlinear relationship between a feature's value and its impact on the model's prediction:

https://wolfram.com/xid/0ftugwn-mmlrh3

Computer Vision (1)
Generate images of gauges associated with their values:

https://wolfram.com/xid/0ftugwn-osbs00

https://wolfram.com/xid/0ftugwn-pg9hnr


https://wolfram.com/xid/0ftugwn-e6q3m3

Train a predictor on this dataset:

https://wolfram.com/xid/0ftugwn-hmr8ih

Predict the value of a gauge from its image:

https://wolfram.com/xid/0ftugwn-ps49ic

Interact with the predictor using Dynamic:

https://wolfram.com/xid/0ftugwn-nqwdji

Customer Behavior Analysis (1)
Import a dataset with data about customer purchases:

https://wolfram.com/xid/0ftugwn-ie3hz3

https://wolfram.com/xid/0ftugwn-3gsmgj

Train a "GradientBoostedTrees" model to predict the total spending based on the other features:

https://wolfram.com/xid/0ftugwn-kgu4su

Use the model to predict the most likely spending by location:

https://wolfram.com/xid/0ftugwn-63i5tc


https://wolfram.com/xid/0ftugwn-6x5scf

For the top three locations, estimate the spending amount as a function of the customer age:

https://wolfram.com/xid/0ftugwn-rv7xie


https://wolfram.com/xid/0ftugwn-hqdj3d
Compute the model predictions:

https://wolfram.com/xid/0ftugwn-y1q16g

https://wolfram.com/xid/0ftugwn-8wf93y

https://wolfram.com/xid/0ftugwn-fwmrvg

Properties & Relations (1)
Properties of the function, and connections to other functions
The linear regression predictor without regularization and LinearModelFit can train equivalent models:

https://wolfram.com/xid/0ftugwn-ettqhz


https://wolfram.com/xid/0ftugwn-e8k9t5


https://wolfram.com/xid/0ftugwn-e8qqrd


https://wolfram.com/xid/0ftugwn-p11huw

Fit and NonlinearModelFit can also be equivalent:

https://wolfram.com/xid/0ftugwn-ks38o


https://wolfram.com/xid/0ftugwn-3o0oq

Possible Issues (1)
Common pitfalls and unexpected behavior
The RandomSeeding option does not always guarantee reproducibility of the result:
Train several predictors on the "WineQuality" dataset:

https://wolfram.com/xid/0ftugwn-68c6ko

https://wolfram.com/xid/0ftugwn-hyhthp
Compare the results when tested on a test set:

https://wolfram.com/xid/0ftugwn-bxxs1r

https://wolfram.com/xid/0ftugwn-zox5dm

Neat Examples (1)
Surprising or curious use cases
Create a function to visualize the predictions of a given method after learning from 1D data:

https://wolfram.com/xid/0ftugwn-05eyuk
Try the function with the "GaussianProcess" method on a simple dataset:

https://wolfram.com/xid/0ftugwn-g8vc1q

Visualize the prediction of other methods:

https://wolfram.com/xid/0ftugwn-ydgebn

Wolfram Research (2014), Predict, Wolfram Language function, https://reference.wolfram.com/language/ref/Predict.html (updated 2025).