AssessmentFunction

AssessmentFunction[key]

represents a tool for assessing whether answers are correct according to the key.

AssessmentFunction[key,method]

uses the specified answer comparison method.

AssessmentFunction[key,f]

uses the function f to compare answers with the key.

AssessmentFunction[key,comp]

performs assessment using the custom assessment specification defined in the Association comp.

AssessmentFunction[obj]

represents an assessment function that performs assessment using the CloudObject obj.

AssessmentFunction[{obj,id}]

assesses the specified question within the CloudObject obj.

AssessmentFunction[…][answer]

gives an AssessmentResultObject representing the correctness of answer.

Details and Options

  • AssessmentFunction is commonly used within QuestionObject to define how to assess answers to a question.
  • The key accepts the following forms:
  • ans – answer matches the pattern ans
    {ans1,ans2,…} – answer is any of the ansi
    {{a1,a2,…}} – answer is the list {a1,a2,…}
  • Each possible answer ansi can have the following forms:
  • patt – pattern matching all correct responses
    patt->score – pattern and corresponding score to be awarded
    patt->ansspec – Association containing a complete answer specification
  • Using patt->score is equivalent to patt-><|"Score"->score|>.
  • The score should be either a Boolean or a numeric value. True and positive numeric scores denote correct answers, while False, zero and negative scores denote incorrect answers.
  • The patti can be exact answer values or patterns against which the values of answer are compared.
  • In AssessmentFunction[{patt1,patt2,…}], when no scores are provided, all the patti are treated as correct. If a single patti is set to True or a positive score, all other patti are treated as incorrect answers.
  • The full answer specification ansspec accepts the following keys (a brief sketch appears at the end of this section):
  • "Score" (required) – award given for matching answers
    "AnswerCorrect" – whether the ans is considered correct
    "Category" – category corresponding to the answer, for sorting questions
    "Explanation" – text to be provided to the user
  • Answer comparison methods supported in AssessmentFunction[key,"method"] include the following "method" values. Each uses the corresponding distance function to compare answers with the key against the Tolerance; None indicates that an exact match is required. The methods are also listed in the Listing of Assessment Comparison Methods.
  • "Number" – Norm[#1-#2]& – scalar numeric values
    "String" – EditDistance – strings
    "Expression" – None – any expression
    "HeldExpression" – None – expressions held without evaluation
    "ArithmeticResult" – None – answers to arithmetic exercises
    "PolynomialResult" – None – answers to polynomial exercises
    "CalculusResult" – None – answers to calculus exercises
    "AlgebraicValue" – None – answers to equation-solving exercises
    "CodeEquivalence" – None – code
    "Date" – DateDifference – dates
    "GeoPosition" – GeoDistance – geographic locations
    "Vector" – Norm[#1-#2]& – vectors
    "Color" – ColorDistance – color values
    "Quantity" – Norm[#1-#2]& – quantities with magnitude and unit
  • In AssessmentFunction[key,comp], comp is an Association. The accepted keys are:
  • "ComparisonMethod" – named comparison method "method"
    "Comparator" – custom function to compare provided answer to each pattern in key
    "Selector" – function to select matching pattern for provided answer
    "ListAssessment" – specify the method for assessing listed answers
    "ScoreCombiner" – function to combine elementwise "Score" values
    "AnswerCorrectCombiner" – function to combine elementwise "AnswerCorrect" values
  • AssessmentFunction[key,f] for a function f is equivalent to AssessmentFunction[key,<|"Comparator"->f|>].
  • Only one of "Comparator" or "Selector" should be provided. Using "Comparator"->compf computes compf[answer,patti] for each patti in the key in order and chooses the first patti that gives True. Common comparators include MatchQ, Greater, StringMatchQ and SameQ.
  • A custom comparator f that takes only the user's answer as input can be used without specifying a key. In this case, Automatic is accepted as a key. When f[answer] gives True, the assessment is marked as correct. When the key is not Automatic, f[answer, patti] is computed using the submitted answer and each patti in the answer key; as soon as any gives True, assessment is based on that patti.
  • Using "Selector"->selectf computes selectf[{patt1,patt2,…},answer] and returns the patt corresponding to the selected answer. Common selectors include SelectFirst, Composition[First,Nearest] and Composition[First,TakeLargestBy].
  • When assessing listed answers with AssessmentFunction[key,<|…,"ListAssessment"->method,…|>][{elem1,elem2,…}], the following values are supported for method:
  • "SeparatelyScoreElements" – assess each element of the answer against the key separately and combine the results
    "AllElementsOrdered" – check whether the elements of answer match the elements of key, with matching order
    "AllElementsOrderless" – check whether the elements of answer match the elements of key, in any order
    "WholeList" (default) – assess as an ordinary expression, applying the comparison to the full list {elem1,elem2,…}
  • For "SeparatelyScoreElements", each patt in the key should correspond to an individual element of the answer. This allows assigning a score for each element, as described below. For all other "ListAssessment" methods, each patt in the key should contain a list.
  • Using "ListAssessment"->"SeparatelyScoreElements" assesses listed answers one element at a time. The "Score" and "AnswerCorrect" results for each element are combined using the "ScoreCombiner" and "AnswerCorrectCombiner" functions, respectively. These combiner functions are only applied when "SeparatelyScoreElements" is used.
  • AssessmentFunction accepts the following options:
  • DistanceFunction – Automatic – distance metric to use
    Tolerance – Automatic – distance to accept when matching answers
    MaxItems – Infinity – limit on number of elements in elementwise assessment
  • AssessmentFunction[key] is equivalent to AssessmentFunction[key,Automatic] and infers an answer comparison type from key.
  • Each answer comparison type corresponds to a predefined comparator or selector function. Usually, when no built-in notion of distance exists for the comparison type, a "Comparator" of MatchQ is used.
  • When a notion of distance does exist for a comparison type, AssessmentFunction uses a "Selector" of First@*Nearest and accepts Tolerance and DistanceFunction options.
  • For separately scored elements, "AnswerCorrectCombiner" should take a list of Booleans representing the correctness of each element and return a single Boolean for the overall correctness of the answer. The default depends on the comparison method. The most common default value is AllTrue[#,TrueQ]&.
  • For separately scored elements, "ScoreCombiner" should take a list of numeric values representing the score of each element and return a total numeric score for the answer. The default depends on the comparison method. The most common default value is Total.
  • When separately scoring elements, if the number of elements given is greater than the value of MaxItems, AssessmentFunction gives a Failure.
  • Information works on AssessmentFunction and accepts the following prop values:
  • "DefaultQuestionInterface" – user interface implied by the key (e.g. "MultipleChoice", "ShortAnswer")
    "AnswerComparisonMethod" – expected type for the values (e.g. "Number", "GeoPosition")
    "Key" – key used to assess answers
  • Information[AssessmentFunction[…],"Properties"] provides a full list of available prop values.
  • AssessmentFunction[CloudObject[…]] performs the assessment remotely within the specified CloudObject. This prevents the user providing the answers from modifying the assessment.
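
For instance, a single key entry using the full answer specification might look like the following (the particular answer and field values are illustrative, not taken from the documentation):

AssessmentFunction["Paris" -> <|"Score" -> 2, "AnswerCorrect" -> True, "Explanation" -> "Paris is the capital of France."|>]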

Examples


Basic Examples  (4)

Create an assessment function that will check for the answer "Dog":
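A minimal sketch of such an input (the variable name af is illustrative):

af = AssessmentFunction["Dog"];
af["Dog"]   (* gives an AssessmentResultObject marking the answer as correct *)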

Define an assessment function that gives 10 points for any answer over 100:
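This could be set up with a conditioned pattern as the key and a rule giving the score (names and values are illustrative):

af = AssessmentFunction[(x_ /; x > 100) -> 10];
af[150]   (* matches the pattern, so a score of 10 is awarded *)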

Check the answer to a polynomial math question:

The factored form is marked incorrect:

An equivalent polynomial with reordered terms is correct:
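One possible setup for this example (the particular polynomial is illustrative):

af = AssessmentFunction[x^2 + 2 x + 1, "PolynomialResult"];
af[(x + 1)^2]       (* the factored form is marked incorrect *)
af[1 + 2 x + x^2]   (* reordered but identical polynomial, marked correct *)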

Create an assessment function that awards two points for an even number:
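For example, using a pattern together with a score (afEven is an illustrative name):

afEven = AssessmentFunction[_?EvenQ -> 2]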

Apply the assessment to an answer:
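Continuing the sketch above:

afEven[8]   (* an even number, so two points are awarded *)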

Scope  (28)

Answer Keys  (8)

Providing a single value as an answer key treats that value as the only correct answer:

Check an answer:

One point is awarded:

Provide a single correct answer and specify the score:

Positive scores are considered correct:

Provide a single correct answer and the associated assessment as an Association:

Apply it to a correct and incorrect answer:

All these AssessmentFunction inputs are equivalent:

Create an answer key with many correct answers:

Any of the values will be marked correct:

Assign scores for each available answer:

Negative scores are considered incorrect:

Create an AssessmentFunction for a categorization problem by specifying a "Category" for each item in the key:

The submitted answer should be provided as item->category:

Include an explanation for a correct answer:

Provide the correct answer and see the explanation:

The explanation is also available in a QuestionObject:

Named Comparison Methods  (4)

Specify a comparison method by name:
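For instance, comparing against a numeric key (the key 5 is illustrative):

afNumber = AssessmentFunction[5, "Number"];
afNumber[5.0]   (* compared as numbers, so 5.0 matches the key 5 *)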

Answers will be compared as numbers:

Choose a different comparison method for the same answer key:
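With the same illustrative key, an "Expression" comparison might look like this:

afExpression = AssessmentFunction[5, "Expression"];
afExpression[5.0]   (* 5.0 is not the same expression as 5, so it is marked incorrect *)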

As an expression, the answer is not equivalent:

The comparison method can be specified using only the name:

Or using an Association:

AssessmentFunction attempts to automatically determine an appropriate comparison method when one is not specified:

Use Information to retrieve the chosen method:

Answer key values must be consistent with the comparison method. Use vector values in the answer key for a "Vector" comparison:

The submitted answer must also be consistent:

The assessment is based on the custom distance measurement for vectors. When the Euclidean distance is more than the tolerance, it is marked as incorrect:

Custom Comparison Methods  (5)

Give a custom comparator instead of a named method, setting the answer key to Automatic:
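A sketch of this, using EvenQ as an illustrative single-argument comparator:

AssessmentFunction[Automatic, EvenQ][4]   (* EvenQ[4] gives True, so the answer is marked correct *)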

If the comparator function gives True when applied to the submitted answer, it is correct:

Alternatively, give the comparator in an Association:

Provide a custom comparator along with an answer key:
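For example, with Greater as the comparator and 100 as an illustrative key:

AssessmentFunction[100, Greater][120]   (* Greater[120, 100] gives True, so 120 is marked correct *)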

The comparator is applied to the submitted answer and the answer key to determine whether they match:

Create an AssessmentFunction with a custom selector to determine which value from the key matches the submitted answer:

Any answer closer to 3 than 4 or 1 will be marked correct:

Create two assessment functions for similar problems, one with a comparator and one with a selector:

The selector looks at all values in the answer key and selects the closest one. Note the score of 10 corresponding to the value 3 in the answer key:

The comparator compares the submitted answer to the values in the answer key in order and uses the first match. Note the score of 1 corresponding to the value 4 in the answer key:

Specify a comparator function to create an assessment for geolocations near cities:

Assess a location and see the full assessment:

Holding Values  (3)

Specify a "HeldExpression" comparison method using HoldPattern to define the answer key values:

Check an expression held by Hold. The expression does not evaluate:

Mathematical comparison methods like "AlgebraicValue" also accept held values:

Appropriate mathematical transformations, like basic arithmetic, are allowed within the Hold during assessment:

Full resolved values do not need to be held:

The "AlgebraicValue" comparison method does not allow functions like SolveValues to evaluate:

Specify a "CodeEquivalence" comparison method using HoldPattern to define the answer key:

Check code wrapped in Hold for equivalence. Transformations such as equivalence of arbitrary variable names are applied within the Hold:

List Assessment  (7)

Award points for each correct answer by specifying "ListAssessment"->"SeparatelyScoreElements":
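A sketch of such an assessment (the key values are illustrative):

afList = AssessmentFunction[{"cat", "dog", "fish"}, <|"ListAssessment" -> "SeparatelyScoreElements"|>];
afList[{"cat", "fish", "bird"}]   (* "cat" and "fish" match the key; "bird" does not *)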

The score represents the number of correct answers:

Use separately scored elements to award partial credit:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:

Define custom combining functions for combining the assessments of each element:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:

Create an AssessmentFunction with the setting "ListAssessment"->"AllElementsOrdered":
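For example, with an illustrative key list:

afOrdered = AssessmentFunction[{{1, 2, 3}}, <|"ListAssessment" -> "AllElementsOrdered"|>];
afOrdered[{1, 2, 3}]   (* same elements in the same order, marked correct *)
afOrdered[{3, 2, 1}]   (* different order, marked incorrect *)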

The submitted answer must contain the same elements in the same order:

Different orders are marked as incorrect:

Add a tolerance:

The comparison method is applied independently to each element:

Directly compute a comparison that is equivalent to the one in the "AllElementsOrdered" assessment:

Compare this to the default "WholeList" setting, which compares the submitted list to the answer in its entirety. Note that the "Vector" comparison method is chosen instead of "Number":

Directly compute an equivalent comparison to the internal "WholeList" assessment:

For a list of strings, create an "AllElementsOrdered" assessment function. Note that the "String" comparison method is inferred:

Create a "WholeList" assessment function for the same answer key. Note that the "Expression" comparison method is inferred for the list:

The Tolerance is supported in the "String" comparison on each element, allowing, for example, differences in capitalization:

The "Expression" comparison does not support Tolerance, and the answer is marked as incorrect:

Create an AssessmentFunction with the setting "ListAssessment"->"AllElementsOrderless":

The submitted answer must be a list containing all the elements, but any order is accepted:

The comparison method, in this case "CalculusResult", is applied to each element, allowing differences in arithmetic:

Use three different "ListAssessment" settings for the same answer key. Note that for "SeparatelyScoreElements", the answer key values are not lists:

All three compare elements independently. With the "CalculusResult" method, mathematically equivalent values are marked as correct:

Only "SeparatelyScoreElements" awards partial credit for partial answers:

Only "AllElementsOrdered" requires the elements to be in the same order as the answer key:

Cloud Deployment  (1)

Create an AssessmentFunction for geolocations in Florida:

Cloud deploy the assessment function using the resource function QuestionDeploy:

The deployed AssessmentFunction does not include the answer key:

Assess an answer. The assessment occurs securely in the deployed CloudObject:

Generalizations & Extensions  (2)

Easily create an answer key with only one correct answer, by specifying a single positive score:

Other values are marked incorrect:

Easily create an answer key with only one incorrect answer, by specifying a single negative score:

Other values are marked correct:

Options  (6)

DistanceFunction  (1)

Define an assessment function for a question about the distance to the Sun:

The tolerance is applied linearly:

Specify a DistanceFunction to apply the tolerance logarithmically:

MaxItems  (4)

Using "ListAssessment"->"SeparatelyScoreElements" computes the assessment for each element in the answer; this can be slow for some comparators:

Limit the number of elements to assess with MaxItems:
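For instance (the key pattern and limit are illustrative); as noted in the Details, submitting a list with more than MaxItems elements gives a Failure rather than a slow elementwise assessment:

AssessmentFunction[_?PrimeQ -> 1, <|"ListAssessment" -> "SeparatelyScoreElements"|>, MaxItems -> 5]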

Create an assessment function that scores each element separately. Note that the key contains scores for each possible value of the elements:

Assess a listed answer:

See the full result. Note that "ElementInformation" contains assessments for each element, and overall "Score" and "AnswerCorrect" values are computed for the full answer:

Create an assessment function that assesses a listed answer by comparing each element of the answer to the corresponding element of the key:

Assess an answer. Note that the tolerance is applied to each element:

Changing the order of the elements gives an incorrect result:

Create an assessment function that assesses a listed answer by comparing each element of the answer to any element of the key:

Assess answers with different element orders. The answer is correct as long as each element in the answer matches a distinct element in the key:

Tolerance  (1)

Create an assessment function asking to name the value of Pi:

Approximate answers are marked as incorrect:

Use the Tolerance option to allow approximate answers:
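A sketch of the tolerant version (the tolerance value is illustrative):

AssessmentFunction[Pi, "Number", Tolerance -> 0.01][3.14]   (* within the tolerance, so marked correct *)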

The answer is marked as correct:

Applications  (3)

Make an assessment function that checks if a user's code is equivalent to the answer key:
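A sketch along these lines (the code in the key is illustrative):

afCode = AssessmentFunction[HoldPattern[Table[i^2, {i, 5}]], "CodeEquivalence"];
afCode[Hold[Table[j^2, {j, 5}]]]   (* equivalent up to variable renaming, marked correct *)
afCode[Hold[Table[j^3, {j, 5}]]]   (* not equivalent, marked incorrect *)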

Code transformation rules attempt to determine if the code is equivalent:

Answers that are not equivalent are incorrect:

Define a grader for a calculus problem:

Mathematical transformation rules attempt to determine if the answer is equivalent:

Equivalent representations are also correct:

Attempting to give the unevaluated question as an answer is marked as incorrect:

Create a QuestionObject for a polynomial exercise:
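One way such a question might be assembled (the prompt and polynomial are illustrative):

QuestionObject["Expand (x + 1)^2.", AssessmentFunction[x^2 + 2 x + 1, "PolynomialResult"]]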

Properties & Relations  (5)

Create an assessment that checks for a list of values as a single item:

Elements of the list are incorrect answers:

Only the full list will match:

Answer keys support patterns:

See the assessment results:

Extract information about an AssessmentFunction using Information:

Retrieve specific values:
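For instance (afInfo and its key are illustrative):

afInfo = AssessmentFunction[3.5, "Number"];
Information[afInfo]
Information[afInfo, "AnswerComparisonMethod"]   (* the inferred or specified method, here "Number" *)
Information[afInfo, "Key"]                      (* the key used to assess answers *)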

When using "ListAssessment"->"AllElementsOrdered", the values in the answer key are lists. Each element in the answer key list is compared to the corresponding element in the submitted answer:

MatchQ[1,_Integer]
True
MatchQ["hello",_String]
True
MatchQ[3,_Integer]
True

When using "ListAssessment"->"AllElementsOrderless", more comparisons are performed:

MatchQ[1,_Integer]
True
MatchQ[1,_String]
False
MatchQ[1,_Integer]
True
MatchQ["hello",_Integer]
False
MatchQ["hello",_String]
True
MatchQ["hello",_Integer]
False
MatchQ[3,_Integer]
True
MatchQ[3,_String]
False
MatchQ[3,_Integer]
True

When using "ListAssessment"->"SeparatelyScoreElements", the answer key is a flat list of values, and the comparisons are made between the elements of the submitted answer and each value in the key:

MatchQ[1,_?OddQ]
True
MatchQ["hello",_?OddQ]
False
MatchQ["hello",_String]
True
MatchQ[6,_?OddQ]
False
MatchQ[6,_String]
False
MatchQ[6,_?EvenQ]
True

Create an "AllElementsOrderless" assessment function with overlapping values in the key:

If each element of the answer does not match a distinct element of the key, it is incorrect:

When a distinct element of the key can match each element of the answer, it is correct:

Text

Wolfram Research (2020), AssessmentFunction, Wolfram Language function, https://reference.wolfram.com/language/ref/AssessmentFunction.html (updated 2024).

CMS

Wolfram Language. 2020. "AssessmentFunction." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2024. https://reference.wolfram.com/language/ref/AssessmentFunction.html.

APA

Wolfram Language. (2020). AssessmentFunction. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/AssessmentFunction.html

BibTeX

@misc{reference.wolfram_2024_assessmentfunction, author="Wolfram Research", title="{AssessmentFunction}", year="2024", howpublished="\url{https://reference.wolfram.com/language/ref/AssessmentFunction.html}", note="[Accessed: 04-November-2024]"}

BibLaTeX

@online{reference.wolfram_2024_assessmentfunction, organization={Wolfram Research}, title={AssessmentFunction}, year={2024}, url={https://reference.wolfram.com/language/ref/AssessmentFunction.html}, note={[Accessed: 04-November-2024]}}