NAME
Paws::Rekognition::GetSegmentDetectionResponse
ATTRIBUTES
AudioMetadata => ArrayRef[Paws::Rekognition::AudioMetadata]
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
JobStatus => Str
Current status of the segment detection job.
Valid values are: "IN_PROGRESS", "SUCCEEDED", "FAILED"
NextToken => Str
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
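A pagination loop using this token might look like the following sketch. The region, job ID, and the attribute names on each segment (Type, StartTimestampMillis) are assumptions based on typical Paws::Rekognition usage; substitute the JobId returned by your own StartSegmentDetection call.

```perl
use strict;
use warnings;
use Paws;

# A minimal pagination sketch, assuming a completed segment detection job.
my $rekognition = Paws->service( 'Rekognition', region => 'us-east-1' );
my $job_id      = 'your-job-id';    # hypothetical placeholder

my $next_token;
do {
    my $response = $rekognition->GetSegmentDetection(
        JobId => $job_id,
        ( defined $next_token ? ( NextToken => $next_token ) : () ),
    );
    die $response->StatusMessage if $response->JobStatus eq 'FAILED';

    # Process the current page of segments.
    for my $segment ( @{ $response->Segments // [] } ) {
        printf "%s at %d ms\n", $segment->Type, $segment->StartTimestampMillis;
    }
    $next_token = $response->NextToken;
} while ( defined $next_token );
```

The loop keeps calling GetSegmentDetection until NextToken comes back undefined, which is the standard signal that the final page has been delivered.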
Segments => ArrayRef[Paws::Rekognition::SegmentDetection]
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type, the array is sorted by timestamp values.
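Because the array arrives grouped by type and then ordered by timestamp within each type, a single pass can split it without re-sorting. A self-contained sketch, using plain hashrefs in place of Paws::Rekognition::SegmentDetection objects:

```perl
use strict;
use warnings;

# Plain hashrefs standing in for SegmentDetection objects, already in the
# documented order: grouped by type, then by timestamp within each type.
my @segments = (
    { Type => 'TECHNICAL_CUE', StartTimestampMillis => 0 },
    { Type => 'TECHNICAL_CUE', StartTimestampMillis => 5000 },
    { Type => 'SHOT',          StartTimestampMillis => 0 },
    { Type => 'SHOT',          StartTimestampMillis => 2500 },
);

# One pass is enough to bucket segments by type.
my %by_type;
push @{ $by_type{ $_->{Type} } }, $_ for @segments;

printf "%d technical cues, %d shots\n",
    scalar @{ $by_type{TECHNICAL_CUE} }, scalar @{ $by_type{SHOT} };
# prints "2 technical cues, 2 shots"
```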
SelectedSegmentTypes => ArrayRef[Paws::Rekognition::SegmentTypeInfo]
An array containing the segment types requested in the call to StartSegmentDetection.
StatusMessage => Str
If the job fails, StatusMessage provides a descriptive error message.
VideoMetadata => ArrayRef[Paws::Rekognition::VideoMetadata]
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.