How many ms delay is OK?

category: general [glöplog]
When syncing visuals and music in a demo, the tighter the better. But how many ms of delay can we introduce before things feel out of sync? I have found some numbers in related areas. In music recording lore, artists feel out of sync if the loopback delay is >6ms. For TV broadcasts, 45ms is the maximum recommended by the ATSC, and lip-sync apparently goes bad at 22ms. What about demos, where beats and visuals are tightly synced? How many ms of audio delay can we get away with before it starts looking bad?
added on the 2017-03-10 11:46:52 by sigveseb sigveseb
"However, the ITU performed strictly controlled tests with expert viewers and found that the threshold for detectability is -125ms to +45ms." Probably the +/- 22 ms figure from the film industry is used to be on the safe side, since they have much higher standards and budget than the evening news. As for demos, I'd say "just try it", but it's not easy to know exactly how much delay you have in your test, due to buffers in the operating system and the hardware.
added on the 2017-03-10 12:29:10 by absence absence
Why would you settle for anything less than frame perfect sync?
added on the 2017-03-10 12:57:09 by Preacher Preacher
depends on the average distance between the PA and the audience. every meter adds about 3ms audio delay.
added on the 2017-03-10 13:08:19 by cupe cupe
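To put that figure in numbers: with sound travelling at roughly 343 m/s, the delay is just distance divided by speed. A minimal sketch (plain C++, nothing party-specific):

Code:
#include <cstdio>

int main() {
    const double speed_of_sound = 343.0;  // m/s in dry air at ~20 degrees C
    for (double distance = 1.0; distance <= 20.0; distance += 1.0) {
        double delay_ms = distance / speed_of_sound * 1000.0;  // seconds -> ms
        std::printf("%5.1f m -> %5.1f ms\n", distance, delay_ms);
    }
    return 0;
}

At 10 m from the PA you are already listening roughly 29 ms late, before any software latency is involved.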
You can view a high frame-rate video in VLC and test it yourself. There's an option for the audio sync offset that's usually set to "0.000s".
added on the 2017-03-10 13:24:42 by tomaes tomaes
Preacher: Can be a variety of reasons; double- or triple-buffering, audio engine latency, vsync...
added on the 2017-03-10 13:46:39 by Gargaj Gargaj
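As a rough back-of-the-envelope for those sources, you can add up the buffers each path sits behind; the numbers below are made-up example values, not anything a particular driver or engine reports:

Code:
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical example values; the real ones depend on OS, driver and engine.
    const double refresh_hz         = 60.0;     // display refresh rate
    const int    swap_chain_frames  = 2;        // double buffering with vsync
    const int    audio_buffer_size  = 1024;     // samples per mixer buffer
    const int    audio_buffer_count = 2;        // buffers queued ahead of the DAC
    const double sample_rate        = 44100.0;

    double video_latency_ms = swap_chain_frames * 1000.0 / refresh_hz;
    double audio_latency_ms = audio_buffer_size * audio_buffer_count * 1000.0 / sample_rate;

    // If both paths are late relative to the demo's internal clock, the
    // perceived A/V offset is roughly the difference between the two.
    std::printf("video: ~%.1f ms, audio: ~%.1f ms, perceived skew: ~%.1f ms\n",
                video_latency_ms, audio_latency_ms,
                std::fabs(video_latency_ms - audio_latency_ms));
    return 0;
}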
Related: for Windows textmode demos, can you even get a reliable vsync indication? I remember looking into that a few years ago and not finding anything other than "run it and see if it looks good." I may have missed something obvious, though.

Whatever tolerances Everyway uses are good enough! :)
added on the 2017-03-10 14:42:55 by cxw cxw
Quote:
Whatever tolerances Everyway uses are good enough! :)

Might be difficult to beat, as you generally get lower delay by accessing old school hardware directly than wading through abstraction layers on a modern computer...
added on the 2017-03-10 18:47:50 by absence absence
Good question! I would say that it depends. Lipsync is probably the scenario where it's least forgiving. If it's a demo with a dominant bassline tied to the visuals, I'd bet that you'd get away with a few frames... let's say in the range of 32-64ms.
added on the 2017-03-10 19:00:46 by quisten quisten
IMO, negative audio latency feels the worst. If sound is a bit late, it can be tolerable, but if the picture is late, it's very disturbing and unnatural.
added on the 2017-03-10 19:42:55 by yzi yzi
Quote:
What about about demos, where beats and visuals are tightly synced? How many ms of audio delay can we get away with before it starts looking bad?


over 5ms annoys.

if going off sync why not sync to something not so easily spotted like some sound on a track other than the kick..
added on the 2017-03-10 19:46:34 by 1in10 1in10
Quote:
over 5ms annoys.

How do you measure that? It sounds unlikely that you could even notice 5 ms delay, otherwise you would get annoyed by walking two steps relative to the source. (Sound travels at about 340 m/s, which is 1.7 m in 5 ms.)
added on the 2017-03-10 21:51:22 by absence absence
What cupe and yzi said. Our brains are used to sound coming a bit late because not everything takes place right in front of our faces. 20ms of video-to-audio delay or so is well within limits; everything above that depends on the content. But yeah, audio first, then video is evil.
added on the 2017-03-10 22:57:26 by kb_ kb_
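One common way to act on that in a demo is to drive the sync timeline from the audio clock and bias it so the picture never trails the sound. A minimal sketch, with placeholder names rather than any real audio API:

Code:
#include <cstdio>

// All names below are hypothetical placeholders, not any real audio API.
double sync_time_seconds(double samples_submitted,
                         double sample_rate,
                         double audio_output_latency_s,   // mixer + driver buffers
                         double video_pipeline_latency_s) // swap chain + scanout
{
    // What the audience is actually hearing right now:
    double audible_now = samples_submitted / sample_rate - audio_output_latency_s;
    // Render the state that will be on screen by the time this frame is shown,
    // so the flash lands with the beat instead of trailing it.
    return audible_now + video_pipeline_latency_s;
}

int main() {
    // Made-up example: 10 s of audio submitted, 46 ms audio latency, 33 ms video latency.
    std::printf("render time: %.3f s\n",
                sync_time_seconds(441000.0, 44100.0, 0.046, 0.033));
    return 0;
}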
absence, it is easier to notice than you think. You just need to be someone who is working on the sync, i.e. you need to watch the difference to see it. It is pretty objective when you do...

At the same time, 10-15ms error is not always horrible. It depends on what kind of event you are trying to synchronise. For fast, snappy cutting to sharp, typically percussive noises you'd want less error, for slower moving things and sounds with slow attack the tolerance is much higher.
added on the 2017-03-10 23:16:58 by introspec introspec
Quote:
How do you measure that?


small delays are used to change the feel of a track.
something that is hard to hear can still be measured in how it makes you feel.

what introspec said.
added on the 2017-03-10 23:28:39 by 1in10 1in10
It depends on the content. Here's a drumming test video I made with Ableton Live. The video clip repeats several times, but the audio - or MIDI events actually - is quantized with varying parameters. Does it feel believable? I don't know. I guess a lot depends on what you expect. Do you "want" to believe it's in sync?
https://www.youtube.com/watch?v=4SQxaZtOjF4
added on the 2017-03-11 09:13:30 by yzi yzi
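For anyone curious what "quantized with varying parameters" boils down to outside Ableton: it's just snapping each hit toward the nearest grid line, optionally only part of the way. A minimal sketch, nothing Ableton-specific:

Code:
#include <cmath>
#include <cstdio>

// Snap an event time toward the nearest grid line; strength 1.0 = hard snap.
double quantize(double time_s, double grid_s, double strength)
{
    double snapped = std::round(time_s / grid_s) * grid_s;
    return time_s + (snapped - time_s) * strength;
}

int main() {
    // A hit 23 ms late, snapped to a 1/16 grid at 120 BPM (0.125 s per step).
    double grid = 60.0 / 120.0 / 4.0;
    std::printf("%.3f s -> %.3f s\n", 1.023, quantize(1.023, grid, 1.0));
    return 0;
}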
FWIW in our game the audio system was running with a fixed latency of 100ms, and it was "okay".
added on the 2017-03-11 14:59:04 by Gargaj Gargaj
yzi: to me it's not that i "want" to believe it...but my brain simply adapts/"fixes" it for most of the timings in that vid. the moment i realize sth is wrong it feels correct again already.
I guess i am just old and too used to big latencies by now! ;)
(using a bluetooth headphone for about a year might have fucked my brain beyond repair...if it gets to more than ~250ms off i let it resync with a double-tap, but that's still a lot of time with huge delays, as it keeps desyncing, no matter what i do! ;) )
Quote:
absence, it is easier to notice than you think. You just need to be someone who is working on the sync, i.e. you need to watch the difference to see it. It is pretty objective when you do...

Quote:
small delays are used to change the feel of a track.
something that is hard to hear can still be measured in how it makes you feel.

You may notice changes in the delay, but what I wonder is how you measure that the absolute delay you experience in a specific situation is 5 ms, and not e.g. 50 ms.
added on the 2017-03-12 02:04:37 by absence absence
For me it's simply experience and good time sense. If you really want to measure it, you could do some recording using Fraps, ShadowPlay, Windows Game Mode or similar screen-grabbing tools, and then do a frame analysis in a video editor.
added on the 2017-03-12 11:30:27 by xTr1m xTr1m
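The arithmetic behind such a frame analysis is simple once you've located the flash frame and the matching click in an editor; the readings below are made-up example values:

Code:
#include <cstdio>

int main() {
    const double capture_fps  = 60.0;     // frame rate of the captured clip
    const double sample_rate  = 48000.0;  // audio sample rate of the capture
    const int    flash_frame  = 143;      // frame where the flash first appears
    const int    click_sample = 116640;   // sample where the click starts

    double video_s = flash_frame / capture_fps;
    double audio_s = click_sample / sample_rate;
    std::printf("audio trails video by %.1f ms\n", (audio_s - video_s) * 1000.0);
    return 0;
}

(As the next post points out, this only measures what ended up in the capture file, not what the speakers and screen actually delivered.)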
Unless you record what comes out of your speakers and what is displayed on your screen, it's not relevant to the actual delay you perceive. The tools you suggest don't do that, and can't tell you anything about how long it takes between when something appears on your screen and when sound comes out of your speakers. In other words, if you have a video file with audio synced at 5 ms, but you actually get 100 ms of delay when playing it back, the 5 ms figure means nothing. That's why I'm suspicious of the "5 ms is annoying" claims.
added on the 2017-03-12 13:09:25 by absence absence
absence, what you say is true for modern digital displays and audio cards. Watching an oldschool demo on a CRT with analogue speakers connected gives you exactly that, 100% certainty that what you see is what you get.

However, your point is very good. It does raise the question of how objective what we see at the party, on a projector, and also in our emulators really is.
added on the 2017-03-12 13:21:26 by introspec introspec
@introspec: there is still the delay for the sound waves to travel through the air to your ears. As mentioned several times in the thread, this is about 340 m/s, or roughly 3ms of delay per meter. I never saw anyone complain about that or think they'd need to compensate for it.

In a typical demoparty setup this already means at least 30ms of delay if you are at a reasonable distance from the audio source.

This is why it seems very unlikely that an extra 5ms would make a difference. Try it for yourself: watch your demo while near the computer, then watch it again standing 2m further away. Do you notice the audio delay difference?

Now, a *variation* of the delay of 5ms can be noticeable, but that's a different thing. And if you properly apply the Doppler effect to the sound in your demo, it would be interesting to see (hear) the result!
One frame is 16.7ms at 60Hz, which is the most typical rate in use.
When syncing stuff, 2 frames is about the maximum that still works out, but at that point some tighter syncs can start to fall apart. Around one frame is what you can typically assume for playback, as you will want the monitor to sync up as well, along with the rendering latency.

The good thing with demos is that they are not interactive, so you have more control over when to flash pretty colours at the desired points. As long as you can keep things consistent throughout, the viewer can always adapt a tiny bit.

But what gargaj described is the enemy.. buffering along with vsync will often cause problems at playback. I've heard tips that allowing minimal pre-rendered frames in the NVIDIA panel helps a bit with this when using vsync, but I haven't tried that yet.
added on the 2017-03-12 14:41:30 by oasiz oasiz
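For reference, the frame budgets oasiz mentions, at a few common refresh rates (a trivial sketch):

Code:
#include <cstdio>

int main() {
    const double refresh_rates[] = { 50.0, 60.0, 144.0 };
    for (double hz : refresh_rates) {
        double frame_ms = 1000.0 / hz;
        std::printf("%6.1f Hz: one frame = %5.2f ms, two-frame budget = %5.2f ms\n",
                    hz, frame_ms, 2.0 * frame_ms);
    }
    return 0;
}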
