Les Litwin is the owner of Antrica and is keen to share his knowledge about encoders and decoders with everyone.
In today’s video, he explains exactly how to define latency, gives examples of its effects, and explains why it is so important to consider when making decisions about your encoders and decoders.
Okay, so let’s talk about latency first. Latency is the delay between an image being captured by a camera, let’s say, and that image appearing on a monitor. So for example, in a broadcast environment, imagine an outside broadcast where we’re streaming live to an audience who are watching it on TV.
So the latency is the time from when somebody waves in front of the camera to when that person watching the TV actually sees that person waving. So that’s the latency.
And in security and surveillance we want to minimize that as much as possible. The way we do that is, first of all, to consider the camera: the camera itself has to have low latency. Then we have to encode that video image and send it over a network, and at the other end we have to decode that image and display it on a screen, for example.
So our company makes encoders and decoders and we have some that are very low latency. That’s not the only element that allows you to have low latency; there’s the network itself.
So for example, if this were a network inside this room, the delay would be very short (it’s called ping delay), maybe 2-3 milliseconds. But if you then sent that image over the internet, you might find there’s one or two seconds of delay between me sending it from this house and someone else’s house receiving it. So when considering latency there are many elements involved.

The biggest element, though, apart from the network, is the decoder. Let’s assume you reduce the network delay to the minimum and the camera is okay; the display also has a latency, maybe only a few milliseconds, though it could be longer. It’s the decoder that’s the problem: the decoder introduces the most delay in a system. If you took something like VLC player, which is a piece of software that runs on a computer, it has a hundred milliseconds of delay built in, and you cannot make that much shorter because VLC just falls over. Generally, people make it longer: 200, 300, 500, even a thousand milliseconds.
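To make the idea of “ping delay” concrete, here is a minimal Python sketch (all names are my own, purely illustrative) that times one round trip over a loopback UDP socket. A real LAN ping would cross actual network hardware, but the principle is the same: you measure how long a packet takes to go out and come back.

```python
import socket
import time

def measure_rtt(payload: bytes = b"ping") -> float:
    """Time one UDP round trip on the loopback interface, analogous to a ping."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        server.bind(("127.0.0.1", 0))       # let the OS pick a free port
        addr = server.getsockname()
        start = time.perf_counter()
        client.sendto(payload, addr)        # the sending side transmits
        data, peer = server.recvfrom(1024)  # the far end receives...
        server.sendto(data, peer)           # ...and echoes straight back
        client.recvfrom(1024)               # the sender gets the echo
        return (time.perf_counter() - start) * 1000.0  # milliseconds
    finally:
        client.close()
        server.close()

print(f"loopback round trip: {measure_rtt():.3f} ms")
```

On loopback this typically comes out well under a millisecond; over a real LAN you would see the 2-3 milliseconds mentioned above, and over the internet potentially hundreds of milliseconds or more.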
All of these bits added together make up the total delay of the system. We have encoders that can take the video in and send it out in 13 milliseconds, which is stunning. Now, you could make that zero, but the only way to make it zero is by not compressing the video, in which case you have to have dedicated cables, fibre optics, that sort of thing, and that’s not practical in most cases. So typically we would say that if you can get 30 to 50 to 100 milliseconds, that’s what we would consider low delay, and the encoder and decoder are major elements in that, the decoder being the biggest. Then there’s the network in between, which we have no control over, plus the camera or video source and the display.
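The point that all of these bits add together can be sketched as a simple latency budget. The numbers below are illustrative assumptions built around the figures quoted above (the 13 ms encoder, a 2-3 ms LAN ping, a roughly 100 ms software decoder), not measurements of any particular product.

```python
# A rough end-to-end (glass-to-glass) latency budget.
# All values are illustrative assumptions, not measurements.
budget_ms = {
    "camera":  10,   # assumed low-latency camera
    "encoder": 13,   # the 13 ms encoder figure quoted above
    "network":  3,   # LAN ping delay of roughly 2-3 ms
    "decoder": 100,  # e.g. a software player's ~100 ms built-in buffer
    "display":  5,   # a few milliseconds of monitor latency
}

total_ms = sum(budget_ms.values())
print(f"glass-to-glass latency: {total_ms} ms")  # → 131 ms

# Largest contributors first: the decoder dominates the budget.
for stage, ms in sorted(budget_ms.items(), key=lambda kv: -kv[1]):
    print(f"  {stage:8s} {ms:4d} ms ({100 * ms / total_ms:.0f}%)")
```

Even with this fast encoder, the decoder accounts for the majority of the total, which is exactly why the decoder is the element to scrutinise first.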
In security and surveillance you would typically see somewhere in the 200 to 400 millisecond region. That’s a typical overall delay, because in security you need to control cameras, and if the delay is too long you can’t control the camera. Imagine a mouse: you move the mouse left and, a second later, it moves on your screen. It would be impossible to use a mouse like that, and it’s the same in security with cameras.
So that’s pretty much how you control latency: you have to choose your encoder and your decoder, manage the network in between, and manage the two elements on either side.