---
title: "ContentDetectorFunction"
language: "en"
type: "Symbol"
summary: "ContentDetectorFunction[...] represents a function generated by TrainImageContentDetector or TrainTextContentDetector that localizes and classifies contents in a piece of text or an image."
keywords: 
- entity detection
- content annotation
- named content recognition
- content tagging
- text auto-tagging
- natural language processing
- NLP
- image object detector
- object localization
- object recognition
- object classification
canonical_url: "https://reference.wolfram.com/language/ref/ContentDetectorFunction.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Supervised Machine Learning"
    link: "https://reference.wolfram.com/language/guide/SupervisedMachineLearning.en.md"
related_functions: 
  - 
    title: "TrainTextContentDetector"
    link: "https://reference.wolfram.com/language/ref/TrainTextContentDetector.en.md"
  - 
    title: "TrainImageContentDetector"
    link: "https://reference.wolfram.com/language/ref/TrainImageContentDetector.en.md"
  - 
    title: "TextContents"
    link: "https://reference.wolfram.com/language/ref/TextContents.en.md"
  - 
    title: "TextCases"
    link: "https://reference.wolfram.com/language/ref/TextCases.en.md"
  - 
    title: "ImageContents"
    link: "https://reference.wolfram.com/language/ref/ImageContents.en.md"
  - 
    title: "ImageCases"
    link: "https://reference.wolfram.com/language/ref/ImageCases.en.md"
  - 
    title: "ClassifierFunction"
    link: "https://reference.wolfram.com/language/ref/ClassifierFunction.en.md"
---
[EXPERIMENTAL]

# ContentDetectorFunction

ContentDetectorFunction[…] represents a function generated by TrainImageContentDetector or TrainTextContentDetector that localizes and classifies contents in a piece of text or an image.

## Details

* Content detection, also known as entity tagging (for text) and object detection (for images), is the process of finding and classifying subparts of text or images similar to those on which the content detector was originally trained.

* ``ContentDetectorFunction[…]`` is a function that can be applied to an image or a string and returns the position, class and other properties of the detected contents.

* ``ContentDetectorFunction[…][expr]`` returns detected contents in ``expr``.

* ``ContentDetectorFunction[…][{expr1, expr2, …}]`` detects contents in each of the ``expri``.

* ``ContentDetectorFunction[…][expr, prop]`` returns the specified property; available properties include:

|               |                                                     |
| ------------- | --------------------------------------------------- |
| "Class"       | the class of the detected object                    |
| "Position"    | the position of the detected object                 |
| "Probability" | estimated probability that the detection is correct |
| "Properties"  | a list of the available properties                  |
| {prop1, …}    | a list of property specifications                   |

* In addition, text detectors can return the following properties:

|                      |                                                |
| -------------------- | ---------------------------------------------- |
| "HighlightedSnippet" | a snippet with the detected string highlighted |
| "Snippet"            | a snippet around the detected string           |
| "String"             | string of the identified text                  |

* Similarly, image detectors can return the following properties:

|               |                                                |
| ------------- | ---------------------------------------------- |
| "BoundingBox" | the subimage bounding box given as a Rectangle |
| "Image"       | the identified subimage                        |

* ``ContentDetectorFunction[…][expr, …, opts]`` specifies that the detector should use the options ``opts`` when applied to ``expr``.

* The following options can be given:

|                      |           |                                       |
| -------------------- | --------- | ------------------------------------- |
| AcceptanceThreshold  | Automatic | identification acceptance threshold   |
| TargetDevice         | "CPU"     | the target device on which to compute |

* Depending on the detector type, other options may be available:

|                    |           |                                      |
| ------------------ | --------- | ------------------------------------ |
| MaxFeatures        | Automatic | maximum number of contents to return |
| MaxOverlapFraction | Automatic | maximum bounding box overlap         |

---

## Examples (5)

### Basic Examples (2)

Train a simple content detector on tagged text:

```wl
In[1]:=
detector = TrainTextContentDetector[{
	"I like banana" -> {{8, 13} -> "Fruit"}, 
	"I eat apples watching TV" -> {{7, 12} -> "Fruit"}, 
	"I am enjoying raspberries" -> {{15, 25} -> "Fruit"}, 
	"I play soccer" -> {{8, 13} -> "Sport"}, 
	"I watch TV" -> {}
	}]

Out[1]= ContentDetectorFunction[<|Type -> ClassifierFunction, InputType -> Text, Function -> NeuralFunctions`Private`file26Detect`textContentClassifierEvaluator, Tokenize -> (With[{NeuralFunctions`Private`file26Detect`tok = NaturalLanguageProcessing`TextTokenize[#1, <<1>>, Method -> <<6>>]}, {<<2>>}] & ), <<2>>, Classifiers -> {ClassifierFunction[…], ClassifierFunction[…]}, Version -> 1|>]
```

Apply the detector on new texts:

```wl
In[2]:= detector[{"I ate cranberries", "I like basketball"}]

Out[2]= {{<|"String" -> "cranberries", "Class" -> "Fruit"|>}, {<|"String" -> "basketball", "Class" -> "Sport"|>}}
```
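
The text-specific properties listed under Details can be requested in the same call. A sketch (output omitted):

```wl
(* request the matched string, a surrounding snippet and the detection probability *)
detector["I ate cranberries", {"String", "Snippet", "Probability"}]
```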

---

Train an object detector that works on images:

```wl
In[1]:=
df = TrainImageContentDetector[{
	[image] -> {Rectangle[{48, 25}, {160, 144}] -> "apple", Rectangle[{144, 24}, {279, 144}] -> "apple"},
	[image] -> {Rectangle[{84, 43}, {227, 155}] -> "strawberry"},
	[image] -> {Rectangle[{67, 60}, {140, 220}] -> "banana"},
	[image] -> {Rectangle[{60, 70}, {180, 212}] -> "strawberry"}},
	TimeGoal -> Quantity[15, "Minutes"]]

Out[1]= ContentDetectorFunction[<|Classes -> {apple, strawberry, banana}, GridSize -> {13, 13}, Anchors -> {{0.0440562, 0.0521065}, {0.144189, 0.158656}, {0.256802, 0.421103}, {0.606371, 0.271368}, {0.751578, 0.705252}}, Net -> NetChain[<2>], DataRange -> Full, Fitting -> Fit, <<3>>, Function -> NeuralFunctions`Private`file19YOLO`YOLOEvaluator, InputType -> Image, Architecture -> YOLO|>]
```

Apply the detector on a new image:

```wl
In[2]:=
testImage = [image];
df[testImage]

Out[2]= {<|"Image" -> [image], "Class" -> "apple"|>, <|"Image" -> [image], "Class" -> "apple"|>}
```

Highlight the detection on the input image:

```wl
In[3]:= HighlightImage[testImage, Legended[#BoundingBox, #Class]& /@ df[testImage, {"BoundingBox", "Class"}]]

Out[3]= [image]
```
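
The estimated detection probabilities can be inspected alongside the classes. A sketch (output omitted):

```wl
(* "Probability" gives the estimated probability that each detection is correct *)
df[testImage, {"Class", "Probability"}]
```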

### Options (3)

#### AcceptanceThreshold (1)

By default, the detected objects are automatically filtered by a probability threshold:

```wl
In[1]:= df = TrainImageContentDetector[{DynamicModule[«3»] -> {Rectangle[{89, 26}, {317, 217}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{228, 99}, {289, 158}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{86, 133}, {176, 207}] -> "heart", Rectangle[{170, 83}, {225, 130}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{147, 86}, {239, 186}] -> "heart"}}]

Out[1]= ContentDetectorFunction[<|Classes -> {heart}, GridSize -> {13, 13}, Anchors -> {{0.0440562, 0.0521065}, {0.144189, 0.158656}, {0.256802, 0.421103}, {0.606371, 0.271368}, {0.751578, 0.705252}}, Net -> NetChain[<2>], DataRange -> Full, Fitting -> Fit, Version -> 1., <<2>>, Function -> NeuralFunctions`Private`file19YOLO`YOLOEvaluator, InputType -> Image, Architecture -> YOLO|>]

In[2]:= i = [image];

In[3]:= df[i]

Out[3]= {<|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>}
```

Use ``AcceptanceThreshold -> t`` to return only detections with strength greater than ``t``:

```wl
In[4]:= df[i, AcceptanceThreshold -> .5]

Out[4]= {<|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>}
```

Using a low threshold may return more low-quality detections:

```wl
In[5]:= df[i, "Image", AcceptanceThreshold -> 10^-2]

Out[5]= {[image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image], [image]}
```

#### MaxFeatures (1)

By default, all the detections above the acceptance threshold are returned:

```wl
In[1]:= df = TrainImageContentDetector[{DynamicModule[«3»] -> {Rectangle[{89, 26}, {317, 217}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{228, 99}, {289, 158}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{86, 133}, {176, 207}] -> "heart", Rectangle[{170, 83}, {225, 130}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{147, 86}, {239, 186}] -> "heart"}}]

Out[1]= ContentDetectorFunction[<|Classes -> {heart}, GridSize -> {13, 13}, Anchors -> {{0.0440562, 0.0521065}, {0.144189, 0.158656}, {0.256802, 0.421103}, {0.606371, 0.271368}, {0.751578, 0.705252}}, Net -> NetChain[<2>], DataRange -> Full, Fitting -> Fit, Version -> 1., <<2>>, Function -> NeuralFunctions`Private`file19YOLO`YOLOEvaluator, InputType -> Image, Architecture -> YOLO|>]

In[2]:= i = [image];

In[3]:= df[i, AcceptanceThreshold -> 10^-3] // Length

Out[3]= 180
```

Use ``MaxFeatures -> n`` to return only the ``n`` strongest detections:

```wl
In[4]:= df[i, AcceptanceThreshold -> 10^-3, MaxFeatures -> 4]

Out[4]= {<|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>, <|"Image" -> [image], "Class" -> "heart"|>}
```

#### MaxOverlapFraction (1)

By default, detections are returned regardless of how much they overlap:

```wl
In[1]:= df = TrainImageContentDetector[{DynamicModule[«3»] -> {Rectangle[{89, 26}, {317, 217}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{228, 99}, {289, 158}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{86, 133}, {176, 207}] -> "heart", Rectangle[{170, 83}, {225, 130}] -> "heart"}, DynamicModule[«3»] -> {Rectangle[{147, 86}, {239, 186}] -> "heart"}}]

Out[1]= ContentDetectorFunction[<|Classes -> {heart}, GridSize -> {13, 13}, Anchors -> {{0.0440562, 0.0521065}, {0.144189, 0.158656}, {0.256802, 0.421103}, {0.606371, 0.271368}, {0.751578, 0.705252}}, Net -> NetChain[<2>], DataRange -> Full, Fitting -> Fit, Version -> 1., <<2>>, Function -> NeuralFunctions`Private`file19YOLO`YOLOEvaluator, InputType -> Image, Architecture -> YOLO|>]

In[2]:= i = [image];

In[3]:= HighlightImage[i, {Green, df[i, "BoundingBox"]}]

Out[3]= [image]
```

Find only non-overlapping objects:

```wl
In[4]:= HighlightImage[i, {Green, df[i, "BoundingBox", MaxOverlapFraction -> 0]}]

Out[4]= [image]
```

Allow up to 10 percent overlap:

```wl
In[5]:= HighlightImage[i, {Green, df[i, "BoundingBox", MaxOverlapFraction -> 0.1]}]

Out[5]= [image]
```

## See Also

* [`TrainTextContentDetector`](https://reference.wolfram.com/language/ref/TrainTextContentDetector.en.md)
* [`TrainImageContentDetector`](https://reference.wolfram.com/language/ref/TrainImageContentDetector.en.md)
* [`TextContents`](https://reference.wolfram.com/language/ref/TextContents.en.md)
* [`TextCases`](https://reference.wolfram.com/language/ref/TextCases.en.md)
* [`ImageContents`](https://reference.wolfram.com/language/ref/ImageContents.en.md)
* [`ImageCases`](https://reference.wolfram.com/language/ref/ImageCases.en.md)
* [`ClassifierFunction`](https://reference.wolfram.com/language/ref/ClassifierFunction.en.md)

## Related Guides

* [Supervised Machine Learning](https://reference.wolfram.com/language/guide/SupervisedMachineLearning.en.md)

## History

* [Introduced in 2021 (13.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn130.en.md)