---
title: "ImageCorrespondingPoints"
language: "en"
type: "Symbol"
summary: "ImageCorrespondingPoints[image1, image2] finds a set of matching interest points in image1 and image2 and returns their pixel coordinates."
keywords: 
- feature
- keypoint
- object recognition
- image matching
- stereovision
- multiview geometry
- image alignment
- image stitching
- image registration
- point of interest
- correspondence
- landmark
- ransac
- homography
- SIFT
- SURF
- FAST
- AGAST
- BRISK
- KAZE
- AKAZE
- ORB
- MSER
- Speeded-Up Robust Features
- Features from Accelerated Segment Test
- Adaptive and Generic Accelerated Segment Test
- Binary Robust Invariant Scalable Keypoints
- Binary Robust Independent Elementary Features
- BRIEF descriptor
- nonlinear scale-space detector and descriptor
- Accelerated KAZE
- Oriented FAST
- Rotated BRIEF
- feature detection
- feature descriptor
- HOG
canonical_url: "https://reference.wolfram.com/language/ref/ImageCorrespondingPoints.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Feature Detection"
    link: "https://reference.wolfram.com/language/guide/FeatureDetection.en.md"
  - 
    title: "Computer Vision"
    link: "https://reference.wolfram.com/language/guide/ComputerVision.en.md"
  - 
    title: "Image Computation: Update History"
    link: "https://reference.wolfram.com/language/guide/ImageComputation-UpdateHistory.en.md"
  - 
    title: "Geometric Operations"
    link: "https://reference.wolfram.com/language/guide/ImageGeometry.en.md"
  - 
    title: "Computational Photography"
    link: "https://reference.wolfram.com/language/guide/ComputationalPhotography.en.md"
  - 
    title: "Image Computation for Microscopy"
    link: "https://reference.wolfram.com/language/guide/ImageComputationForMicroscopy.en.md"
related_functions: 
  - 
    title: "ImageFeatureTrack"
    link: "https://reference.wolfram.com/language/ref/ImageFeatureTrack.en.md"
  - 
    title: "ImageKeypoints"
    link: "https://reference.wolfram.com/language/ref/ImageKeypoints.en.md"
  - 
    title: "FindGeometricTransform"
    link: "https://reference.wolfram.com/language/ref/FindGeometricTransform.en.md"
  - 
    title: "ImageAlign"
    link: "https://reference.wolfram.com/language/ref/ImageAlign.en.md"
  - 
    title: "ImageStitch"
    link: "https://reference.wolfram.com/language/ref/ImageStitch.en.md"
  - 
    title: "CornerFilter"
    link: "https://reference.wolfram.com/language/ref/CornerFilter.en.md"
  - 
    title: "EdgeDetect"
    link: "https://reference.wolfram.com/language/ref/EdgeDetect.en.md"
---
# ImageCorrespondingPoints

ImageCorrespondingPoints[image1, image2] finds a set of matching interest points in image1 and image2 and returns their pixel coordinates.

## Details and Options

* ``ImageCorrespondingPoints`` uses ``ImageKeypoints`` to find candidate corresponding points.

* ``ImageCorrespondingPoints[image1, image2]`` returns an expression of the form ``{points1, points2}``, where the ``pointsi`` are lists of pixel coordinates representing the matching points in ``imagei``.
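* The shape of the result can be checked with a small self-contained sketch; this assumes a standard test image from ``ExampleData``, and the exact coordinates returned depend on the Wolfram Language version and keypoint method:

```wl
img = ExampleData[{"TestImage", "House"}];
(* match the image against a rotated copy of itself *)
{pts1, pts2} = ImageCorrespondingPoints[img, ImageRotate[img, 10 Degree]];
(* pts1 and pts2 have equal length; pts1[[k]] corresponds to pts2[[k]] *)
Length[pts1] == Length[pts2]
```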

* The following options can be specified:

|                      |           |                                     |
| -------------------- | --------- | ----------------------------------- |
| KeypointStrength     | Automatic | minimum strength of the keypoints   |
| Masking              | All       | region of interest                  |
| MaxFeatures          | Automatic | maximum number of keypoints         |
| Method               | Automatic | the type of keypoint to use         |
| TransformationClass  | None      | geometrical relation between points |

* With the setting ``Masking -> roi``, the set of points is restricted so that the returned points ``points1`` of ``image1`` all lie within the region of interest.

* With the setting ``MaxFeatures -> n``, at most ``n`` corresponding points with the largest average keypoint strength are returned.

* By default, a suitable keypoint type is used to find corresponding points. Using ``Method -> method``, a specific keypoint type or a list of types can be specified.

* Possible settings for ``method`` include:

|                       |                                                                                    |
| --------------------- | ---------------------------------------------------------------------------------- |
| "AKAZE"               | Accelerated KAZE and binary descriptors                                            |
| "BRISK"               | Binary Robust Invariant Scalable Keypoints                                         |
| "KAZE"                | nonlinear scale-space detector and descriptor                                      |
| "ORB"                 | FAST detector and Binary Robust Independent Elementary Features (BRIEF) descriptor |
| "SIFT"                | Scale-Invariant Feature Transform detector and descriptor                          |
| "RootSIFT"            | SIFT keypoints with an improved descriptor                                         |
| "SURF"                | Speeded-Up Robust Features                                                         |
| {method1, method2, …} | combination of various keypoint correspondences                                    |

* Possible settings for ``TransformationClass`` include:

|               |                                                                                    |
| ------------- | ---------------------------------------------------------------------------------- |
| None          | no geometric constraints                                                           |
| "Translation" | translation only                                                                   |
| "Rigid"       | translation and rotation                                                           |
| "Similarity"  | translation, rotation, and scaling                                                 |
| "Affine"      | linear transformation and translation                                              |
| "Perspective" | linear fractional transformation                                                   |
| "Epipolar"    | epipolar transformation, mapping a point in one image to a line in the other image |
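
* As a sketch of how the constraint filters candidate matches, one can compare unconstrained and similarity-constrained correspondences between an image and a rotated, rescaled copy. This is a hypothetical setup using ``ExampleData``; the counts vary by version, but the constrained result typically contains fewer outliers:

```wl
img = ExampleData[{"TestImage", "House"}];
img2 = ImageResize[ImageRotate[img, 15 Degree], Scaled[0.8]];
(* number of matches without and with a geometric constraint *)
n1 = Length@First@ImageCorrespondingPoints[img, img2];
n2 = Length@First@
   ImageCorrespondingPoints[img, img2, TransformationClass -> "Similarity"];
{n1, n2}
```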

## Examples (25)

### Basic Examples (1)

Corresponding points of two different images of the same object:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{98.4792, 108.588}, {129.497, 49.5567}, {72.4483, 52.6461}, {72.2919, 61.582}, {143.513, 66.8625}, {183.855, 45.043}, {88.0629, 61.5811}, {127.884, 27.0976}, {84.7241, 65.7025}, {176.593, 65.5775}}, {{97.901, 111.134}, {129.233, 50.1133}, {70.4156, 56.3674}, {70.523, 64.5807}, {141.995, 71.3083}, {177.141, 42.3174}, {84.7602, 63.2292}, {126.469, 30.0437}, {82.4865, 68.1415}, {18.2546, 208.863}}}
```

### Scope (3)

Corresponding points in binary images:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{72.6415, 49.8696}, {55.1076, 41.0066}, {54.7814, 29.482}, {23.5307, 45.4759}}, {{26.9906, 29.81}, {44.8811, 39.0071}, {44.5768, 50.2094}, {78.1534, 33.6316}}}
```

---

Corresponding points in grayscale images:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{46.9532, 47.8183}, {61.8854, 61.6799}, {84.5018, 25.1445}, {39.7285, 43.1555}, {48.7129, 61.7104}, {23.4037, 45.7435}, {52.7303, 13.053}}, {{52.7975, 32.0367}, {38.2713, 18.3274}, {16.2887, 53.9563}, {60.3145, 36.9974}, {51.8848, 17.9542}, {78.0659, 33.4641}, {46.7959, 67.5645}}}
```

---

Corresponding points in color images:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{34.4243, 39.232}, {51.2255, 54.1668}, {59.0599, 12.526}, {41.2014, 34.2059}, {54.6823, 21.3054}, {48.9432, 28.4091}, {35.5914, 24.3267}, {74.8283, 38.2093}, {19.9302, 27.5284}, {30.1138, 54.2699}, {30.1359, 62.4735}}, {{64.936, 34.947}, {49.6803, 20.5996}, {40.6882, 62.6824}, {58.9781, 41.0412}, {45.2318, 53.3933}, {50.4537, 46.4056}, {66.5915, 51.1906}, {24.2183, 36.8424}, {79.9698, 47.3234}, {70.3343, 20.7836}, {69.1305, 12.272}}}
```

### Options (16)

#### KeypointStrength (3)

By default, all keypoints are used in finding corresponding points:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{98.4792, 108.588}, {129.497, 49.5567}, {72.4483, 52.6461}, {72.2919, 61.582}, {143.513, 66.8625}, {183.855, 45.043}, {88.0629, 61.5811}, {127.884, 27.0976}, {84.7241, 65.7025}, {176.593, 65.5775}}, {{97.901, 111.134}, {129.233, 50.1133}, {70.4156, 56.3674}, {70.523, 64.5807}, {141.995, 71.3083}, {177.141, 42.3174}, {84.7602, 63.2292}, {126.469, 30.0437}, {82.4865, 68.1415}, {18.2546, 208.863}}}
```

---

Use only keypoints with individual strength greater than or equal to a given threshold:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], KeypointStrength -> .001]

Out[1]= {{{98.4792, 108.588}, {100.002, 89.7356}, {129.497, 49.5567}}, {{97.901, 111.134}, {98.4657, 92.649}, {129.233, 50.1133}}}
```

---

Increasing the threshold typically results in detecting fewer corresponding points:

```wl
In[1]:= {i1, i2} = {[image], [image]};

In[2]:= ListLinePlot[Table[{s, Length[ImageCorrespondingPoints[i1, i2, KeypointStrength -> s][[1]]]}, {s, .0001, .005, .0001}], Frame -> True, FrameLabel -> {"threshold", "number of corresponding points"}]

Out[2]= [image]
```

#### Masking (1)

By default, with ``Masking -> All``, all detected corresponding points are returned:

```wl
In[1]:= {i1, i2} = {[image], [image]};

In[2]:= MapThread[HighlightImage[#1, #2]&, {{i1, i2}, ImageCorrespondingPoints[i1, i2]}]

Out[2]= {[image], [image]}
```

With ``Masking -> maskimage``, the returned corresponding points of ``image1`` all lie within ``maskimage``:

```wl
In[3]:=
mask = [image];
c = ImageCorrespondingPoints[i1, i2, Masking -> mask];
```

Display the detected correspondences for the first image:

```wl
In[4]:= HighlightImage[i1, {c[[1]], Orange, mask}]

Out[4]= [image]
```

#### MaxFeatures (2)

Return the best 5 correspondences:

```wl
In[1]:= c = ImageCorrespondingPoints[[image], [image], MaxFeatures -> 5]

Out[1]= {{{98.4792, 108.588}, {100.002, 89.7356}, {129.497, 49.5567}, {72.4483, 52.6461}, {143.513, 66.8625}}, {{97.901, 111.134}, {98.4657, 92.649}, {129.233, 50.1133}, {70.4156, 56.3674}, {141.995, 71.3083}}}
```

---

The number of returned correspondences may be less than the value of the ``MaxFeatures`` option:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], KeypointStrength -> .001, MaxFeatures -> 5]//First//Length

Out[1]= 3
```

#### Method (3)

By default, ``"SURF"`` keypoints are used to find correspondences:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]

Out[1]= {{{98.4792, 108.588}, {129.497, 49.5567}, {72.4483, 52.6461}, {72.2919, 61.582}, {143.513, 66.8625}, {183.855, 45.043}, {88.0629, 61.5811}, {127.884, 27.0976}, {84.7241, 65.7025}, {176.593, 65.5775}}, {{97.901, 111.134}, {129.233, 50.1133}, {70.4156, 56.3674}, {70.523, 64.5807}, {141.995, 71.3083}, {177.141, 42.3174}, {84.7602, 63.2292}, {126.469, 30.0437}, {82.4865, 68.1415}, {18.2546, 208.863}}}
```

---

Use ``"KAZE"`` keypoints:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], Method -> "KAZE"]

Out[1]= {{{86.2211, 34.0441}, {78.269, 34.5029}, {133.272, 54.9446}, {109.657, 25.0302}, {87.1842, 43.0154}, {108.103, 53.2507}, {129.419, 19.6988}, {102.701, 86.9468}, {92.8365, 53.929}, {95.1033, 87.1674}, {77.3634, 48.6172}, {96.445, 99.5094}, {131.47,  ... .4162}, {130.983, 68.6973}, {87.1599, 48.2602}, {64.2494, 69.4828}, {62.5835, 40.6916}, {42.1505, 30.3793}, {76.9744, 109.054}, {63.0794, 34.7891}, {82.5192, 38.8913}, {57.0898, 51.222}, {75.0836, 89.8124}, {75.0771, 95.4302}, {82.2334, 77.8962}}}
```

---

Use a combination of ``"SURF"`` and ``"BRISK"`` keypoints:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], Method -> {"SURF", "BRISK"}]//Short

Out[1]//Short= {{{98.4792, 108.588}, «69»}, {{«18», «19»}, «68», {«1»}}}
```

#### TransformationClass (7)

By default, the two detected point sets are not constrained geometrically:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image]]//First//Length

Out[1]= 28
```

---

Constrain the pair of point sets to be related by an epipolar transformation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Epipolar"]//First//Length

Out[1]= 27
```

---

Constrain the pair of point sets to be related by a linear fractional transformation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Perspective"]//First//Length

Out[1]= 19
```

---

Constrain the pair of point sets to be related by an affine transformation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Affine"]//First//Length

Out[1]= 20
```

---

Constrain the pair of point sets to be related by a similarity transformation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Similarity"]//First//Length

Out[1]= 20
```

---

Constrain the pair of point sets to be related by a rigid transformation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Rigid"]//First//Length

Out[1]= 0
```

---

Constrain the pair of point sets to be related by a translation:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Translation"]

Out[1]= {{{72.4483, 52.6461}, {72.2919, 61.582}, {127.884, 27.0976}, {84.7241, 65.7025}}, {{70.4156, 56.3674}, {70.523, 64.5807}, {126.469, 30.0437}, {82.4865, 68.1415}}}
```

### Applications (3)

Find matching positions for stereovision applications:

```wl
In[1]:= ImageCorrespondingPoints[[image], [image], TransformationClass -> "Perspective"]

Out[1]= {{{312.504, 200.649}, {336.565, 131.01}, {342.826, 251.787}, {322.347, 122.36}, {323.858, 136.409}, {333.054, 251.074}, {272.816, 125.816}, {309.307, 133.206}, {377.345, 242.983}, {286.268, 191.339}, {304.907, 157.558}, {334.111, 241.843}, {312.339 ... 8.743}, {92.4529, 179.79}, {200.396, 347.329}, {58.5127, 270.567}, {86.4391, 218.042}, {134.308, 347.071}, {99.1537, 254.797}, {48.5832, 246.308}, {172.953, 215.548}, {232.828, 245.131}, {66.6378, 186.438}, {129.064, 139.125}, {145.736, 329.441}}}
```

---

Extract the matching patches from two images:

```wl
In[1]:=
i1 = [image]; i2 = [image];
matches = ImageCorrespondingPoints[i1, i2, TransformationClass -> "Perspective"];
MapThread[ImageTrim, {{i1, i2}, matches}]

Out[1]= {[image], [image]}
```

---

Find the rotation angle between two images:

```wl
In[1]:=
i1 = [image]; i2 = [image];
matches = ImageCorrespondingPoints[i1, i2];
a1 = ArcTan @@@ (# - ImageDimensions[i1]/2 & /@ matches[[1]]);
a2 = ArcTan @@@ (# - ImageDimensions[i2]/2 & /@ matches[[2]]);
RootMeanSquare[Mod[a1 - a2, 2 Pi]]/Degree

Out[1]= 90.5895
```

### Properties & Relations (1)

``ImageCorrespondingPoints`` converts all images to grayscale:

```wl
In[1]:= {i1, i2} = {[image], [image]};

In[2]:= ImageCorrespondingPoints[i1, i2] == ImageCorrespondingPoints[Sequence@@ColorConvert[{i1, i2}, "Grayscale"]]

Out[2]= True
```
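
---

The matched point sets can be passed to ``FindGeometricTransform`` to recover the geometric relation between the two images, which is essentially what ``ImageAlign`` does internally. A sketch using an ``ExampleData`` image and a synthetic rotation; the recovered transform is approximate and depends on the detected keypoints:

```wl
img = ExampleData[{"TestImage", "House"}];
img2 = ImageRotate[img, 5 Degree];
{pts1, pts2} = ImageCorrespondingPoints[img, img2];
(* fit a similarity transform mapping pts2 onto pts1 *)
{err, tf} = FindGeometricTransform[pts1, pts2,
   TransformationClass -> "Similarity"];
(* apply the recovered transform to realign the second image *)
ImagePerspectiveTransformation[img2, tf, DataRange -> Full]
```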

### Neat Examples (1)

Find and visualize matching points in two images of the Moon:

```wl
In[1]:=
images = { [image], [image]};
matches = ImageCorrespondingPoints@@images;
MapThread[Show[#1, Graphics[{Yellow, MapIndexed[Inset[#2[[1]], #1]& , #2]}]]&, {images, matches}]

Out[1]= {[image], [image]}
```

## See Also

* [`ImageFeatureTrack`](https://reference.wolfram.com/language/ref/ImageFeatureTrack.en.md)
* [`ImageKeypoints`](https://reference.wolfram.com/language/ref/ImageKeypoints.en.md)
* [`FindGeometricTransform`](https://reference.wolfram.com/language/ref/FindGeometricTransform.en.md)
* [`ImageAlign`](https://reference.wolfram.com/language/ref/ImageAlign.en.md)
* [`ImageStitch`](https://reference.wolfram.com/language/ref/ImageStitch.en.md)
* [`CornerFilter`](https://reference.wolfram.com/language/ref/CornerFilter.en.md)
* [`EdgeDetect`](https://reference.wolfram.com/language/ref/EdgeDetect.en.md)

## Related Guides

* [Feature Detection](https://reference.wolfram.com/language/guide/FeatureDetection.en.md)
* [Computer Vision](https://reference.wolfram.com/language/guide/ComputerVision.en.md)
* [Image Computation: Update History](https://reference.wolfram.com/language/guide/ImageComputation-UpdateHistory.en.md)
* [Geometric Operations](https://reference.wolfram.com/language/guide/ImageGeometry.en.md)
* [Computational Photography](https://reference.wolfram.com/language/guide/ComputationalPhotography.en.md)
* [Image Computation for Microscopy](https://reference.wolfram.com/language/guide/ImageComputationForMicroscopy.en.md)

## History

* [Introduced in 2010 (8.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn80.en.md) \| [Updated in 2012 (9.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn90.en.md) ▪ [2014 (10.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn100.en.md) ▪ [2016 (11.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn110.en.md) ▪ [2017 (11.1)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn111.en.md) ▪ [2021 (13.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn130.en.md)