January 2014 / Show-Me Tech
WebRTC: Web Real-Time Communication
MediaStream
// 2014-era, Chrome-prefixed API: request camera access only (no microphone).
navigator.webkitGetUserMedia({
  audio: false,
  video: true
}, function(stream) {
  window.stream = stream; // keep a global reference so the stream stays alive
  var video = document.getElementById('video-gum-demo');
  // Attach the live camera stream to a <video> element via a blob URL.
  video.src = window.URL.createObjectURL(stream);
  video.play();
}, function(error) {
  // Called if the user denies permission or no camera is available.
  console.error('getUserMedia failed:', error);
});
RTCPeerConnection
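A minimal sketch of the offer side of RTCPeerConnection signaling, using the 2014-era Chrome-prefixed constructor. `sendToRemotePeer` is a hypothetical signaling callback (WebRTC leaves the signaling channel up to you), and the STUN server URL is an illustrative assumption:

```javascript
// Sketch only: wire a local MediaStream into a peer connection and
// generate an SDP offer plus ICE candidates for the remote peer.
function startCall(localStream, sendToRemotePeer) {
  // webkitRTCPeerConnection is the prefixed 2014 Chrome constructor.
  var pc = new webkitRTCPeerConnection({
    iceServers: [{ url: 'stun:stun.l.google.com:19302' }] // assumed STUN server
  });

  pc.addStream(localStream); // share the camera/mic stream with the peer

  pc.onicecandidate = function(event) {
    // Trickle each ICE candidate to the remote peer as it is discovered.
    if (event.candidate) {
      sendToRemotePeer({ candidate: event.candidate });
    }
  };

  pc.createOffer(function(offer) {
    // The SDP offer describes our media capabilities.
    pc.setLocalDescription(offer);
    sendToRemotePeer({ sdp: offer });
  }, function(err) {
    console.error('createOffer failed:', err);
  });

  return pc;
}
```

The remote peer would answer with `setRemoteDescription` and `createAnswer`, and each side feeds the other's candidates into `addIceCandidate`.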
RTCDataChannel
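RTCDataChannel carries arbitrary application data over the same peer connection. A hedged sketch, assuming a peer connection `pc` created as above; the channel label `'chat'` and the `onMessage` callback are illustrative:

```javascript
// Sketch only: open a reliable data channel on an existing peer connection
// and echo incoming messages to an application-supplied callback.
function openChatChannel(pc, onMessage) {
  // With no options, the channel is ordered and reliable (TCP-like).
  var channel = pc.createDataChannel('chat');

  channel.onopen = function() {
    channel.send('hello'); // send text once the channel is established
  };

  channel.onmessage = function(event) {
    onMessage(event.data); // hand received data to the application
  };

  return channel;
}
```

The answering peer receives the channel through the connection's `ondatachannel` event rather than calling `createDataChannel` itself.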
"While the specification does not mandate a maximum duration of a speech input stream, this suggestion [transcripts for live communication] is most appropriate for implementations utilizing a local recognizer. Allowing MediaStreams to be used as an input for a SpeechRecognition object, for example through a new 'inputStream' property as an alternative to the start, stop and abort methods, would enable authors to supply external input to be recognized. This may include, but is not limited to, prerecorded audio files and WebRTC live streams, both from local and remote parties."
— Peter Beverloo
Google Chrome Software Engineer, June 2013 email to the Public Speech API mailing list
"The intention is eventually to enable a MediaStream for any streaming data source, not just a camera or microphone. This would enable streaming from disc, or from arbitrary data sources such as sensors or other inputs."
— Sam Dutton
Developer Relations at Google, Getting Started with WebRTC
Slides, resources, and links at: