VideoMapTimeSeries
VideoMapTimeSeries[f,video]
applies f to each frame of the Video object video, returning a time series.
VideoMapTimeSeries[f,video,n]
applies f to overlapping partitions of n video frames.
VideoMapTimeSeries[f,video,n,d]
applies f to partitions with offset d.
VideoMapTimeSeries[f,{video1,video2,…},…]
applies f to a list of inputs extracted from each videoi.
Details and Options
- VideoMapTimeSeries can be used to detect temporal or spatial events in videos, such as object detection, motion detection or activity recognition.
- VideoMapTimeSeries returns a TimeSeries whose values are the results of applying f to an association holding partial video data and its properties, such as video frames, audio data and time stamps.
- The function f can access video and audio data using the following arguments:
#Image    video frames as Image objects
#Audio    a chunk of the audio as an Audio object
#Time    time from the beginning of the video
#TimeInterval    beginning and end time stamps for the current partition
#FrameIndex    index of the current output frame
#InputFrameIndex    index of the current input frame
- In VideoMapTimeSeries[f,{video1,video2,…},…], the data provided to each of the arguments is a list whose ith element corresponds to the data extracted from videoi.
- In VideoMapTimeSeries[f,video,n], the partition of n frames slides by one frame.
- The partition specifications n and d can each be given as an integer number of frames or as a time Quantity object.
- VideoMapTimeSeries supports video containers and codecs specified by $VideoDecoders.
- The following options can be given:
Alignment    Center    alignment of the time stamps with partitions
MetaInformation    None    include additional metainformation
MissingDataMethod    None    method to use for missing values
ResamplingMethod    "Interpolation"    the method to use for resampling paths
Examples
Basic Examples (2)
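As a minimal sketch (the file path below is a placeholder, not from this page), compute the mean intensity of every frame and plot the resulting time series:

video = Video["path/to/video.mp4"];  (* hypothetical local file *)
ts = VideoMapTimeSeries[ImageMeasurements[#Image, "MeanIntensity"] &, video];
ListLinePlot[ts]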
Scope (4)
Function Specification (2)
The function f receives an Association holding data for each partition:
Check the keys of the provided association:
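A sketch, assuming video is the Video object defined under Basic Examples; applying Keys records the available argument names at each time stamp:

ts = VideoMapTimeSeries[Keys, video];
ts["FirstValue"]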
Process individual video frames:
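For instance, a sketch measuring the fraction of bright pixels in each binarized frame (any other single-image computation could be substituted):

VideoMapTimeSeries[Mean[Flatten[ImageData[Binarize[#Image]]]] &, video]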
The function f can operate on the audio data, provided as an Audio object:
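A sketch computing the RMS amplitude of the audio chunk associated with each frame:

VideoMapTimeSeries[AudioMeasurements[#Audio, "RMSAmplitude"] &, video]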
Compute time-synchronous measurements on both image and audio data:
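For example, a sketch pairing an image measurement with an audio measurement at every time stamp:

VideoMapTimeSeries[
 {ImageMeasurements[#Image, "MeanIntensity"],
  AudioMeasurements[#Audio, "RMSAmplitude"]} &, video]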
Partition Specification (2)
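As an illustrative sketch, assuming #Image delivers the list of frames in each partition, apply f to partitions of 10 frames offset by 5 frames, or specify the partitions as time quantities; Length counts the frames delivered to each call:

VideoMapTimeSeries[Length[#Image] &, video, 10, 5]
VideoMapTimeSeries[Length[#Image] &, video, Quantity[1, "Seconds"], Quantity[0.5, "Seconds"]]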
Options (1)
Alignment (1)
By default, the time stamps are aligned with the center of each partition and correspond to the value of the "Time" key:
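A sketch, returning the "Time" value itself so that the time stamps and the stored values can be compared directly:

ts = VideoMapTimeSeries[#Time &, video, 10];
{ts["FirstTime"], ts["FirstValue"]}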
Use Alignment→Right to return the computed property at the end of each partition:
Use a custom alignment ranging from –1 (left) to 1 (right):
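Sketches of both settings, assuming the same 10-frame partitions as above:

VideoMapTimeSeries[#Time &, video, 10, Alignment -> Right]
VideoMapTimeSeries[#Time &, video, 10, Alignment -> 0.5]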
The boundaries of each partition are the start time for the first frame and the end time for the last frame of the partition. They can be queried using the "TimeInterval" key:
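A sketch extracting the boundaries of each 10-frame partition:

ts = VideoMapTimeSeries[#TimeInterval &, video, 10];
ts["FirstValue"]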
Applications (2)
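A possible motion-detection sketch (illustrative only, assuming #Image delivers the two frames of each two-frame partition); the image distance between consecutive frames rises when motion occurs:

motion = VideoMapTimeSeries[ImageDistance @@ #Image &, video, 2];
ListLinePlot[motion]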
Properties & Relations (1)
VideoMapTimeSeries returns the results along with corresponding times in a TimeSeries:
Use VideoMapList to get a list of results without time stamps:
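A sketch comparing the two with the same measurement function; the values should agree, and the TimeSeries additionally carries the time stamps:

ts = VideoMapTimeSeries[ImageMeasurements[#Image, "MeanIntensity"] &, video];
vals = VideoMapList[ImageMeasurements[#Image, "MeanIntensity"] &, video];
ts["Values"] == vals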
Possible Issues (1)
When the function returns a list, all lists should have the same dimensions:
Pad or trim the resulting lists to the same size to store them in the TimeSeries:
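For instance, ImageCorners typically detects a different number of corner positions in each frame (chosen here only for illustration); padding or trimming each result to a fixed 20x2 array makes the values compatible with a TimeSeries:

VideoMapTimeSeries[PadRight[ImageCorners[#Image], {20, 2}] &, video]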
Results may also be wrapped into other containers before being stored in a TimeSeries:
Use VideoMapList to return the lists of varying lengths:
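A sketch using VideoMapList with the same illustrative corner detection; the number of detected positions varies from frame to frame:

corners = VideoMapList[ImageCorners[#Image] &, video];
Length /@ corners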
Text
Wolfram Research (2020), VideoMapTimeSeries, Wolfram Language function, https://reference.wolfram.com/language/ref/VideoMapTimeSeries.html (updated 2021).
CMS
Wolfram Language. 2020. "VideoMapTimeSeries." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2021. https://reference.wolfram.com/language/ref/VideoMapTimeSeries.html.
APA
Wolfram Language. (2020). VideoMapTimeSeries. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/VideoMapTimeSeries.html