pouët.net

negative timestamps? queryperformancecounter win32

category: code [glöplog]
 
I'm confused.
Can QueryPerformanceCounter on windows start at a negative value and consistently return negative, but increasing, timestamps? Does that indicate a problem with the system? An incorrect use of QueryPerformanceFrequency?

-r
added on the 2024-07-02 01:41:54 by rennis250 rennis250
added on the 2024-07-02 08:03:48 by NR4 NR4
I recognize that this isn’t answering your question, but in case it helps: Assuming you’re making a demo, getting the time from the clock is likely to cause sync issues if anything ever jitters. It’s generally better to use the current position of the soundtrack for the time, that way things will always be in sync. And, bonus: making pause/play/speedup/slowdown/seek becomes trivial, just do that with the track and the demo will automatically follow along.
added on the 2024-07-02 09:07:19 by skrebbel skrebbel
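
For illustration, a minimal sketch of that approach, assuming the soundtrack is played through winmm's waveOut API; the hWaveOut handle and sample rate are placeholders for whatever your player actually uses:

Code:
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

/* Demo time = current playback position of the soundtrack, in seconds. */
double GetDemoTimeSeconds(HWAVEOUT hWaveOut, DWORD sampleRate)
{
    MMTIME mmt;
    mmt.wType = TIME_SAMPLES;                  /* request the position in samples */
    waveOutGetPosition(hWaveOut, &mmt, sizeof(mmt));
    if (mmt.wType != TIME_SAMPLES)             /* the driver may fall back to another unit */
        return 0.0;                            /* handle that properly in real code */
    return (double)mmt.u.sample / (double)sampleRate;
}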
I think your advice is useful, skrebbel. Thanks for posting it.
added on the 2024-07-02 09:36:12 by Adok Adok
but guys, do you realize the URLs I posted are doing exactly that? One with winmm, fewer bytes; one with dsound, more accurate but bigger

...

shrug
added on the 2024-07-02 09:38:48 by NR4 NR4
While everything the others said is correct, they did not answer the question :). The start epoch of QPC is not documented, and thus could be anything, including negative, as far as I know. It should not matter anyway, because QPC does not provide any relation to any real-world clock and is thus only ever useful to measure time intervals (i.e. differences of 2 QPC values). As the starting point (and frequency) of QPC is not documented, you should also be prepared to handle it overflowing at some point. IIRC I did read somewhere (cannot find the source right now, probably some Microsoft blog) that there are Windows Kernel Debug features which explicitly set the starting point of the timing functions to "right before overflow" on system boot, specifically to be able to test for bugs.
On pre-Vista multi-core systems, QPC can also jump backwards between cores (Microsoft says "on broken hardware", if that's true there was a lot of broken hardware back then :D).
In any case, I suggest reading https://learn.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps before using QPC.
added on the 2024-07-02 09:44:03 by manx manx
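
To make the "differences of 2 QPC values" point concrete, here is a minimal interval measurement in plain Win32 (nothing assumed beyond the two API calls):

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   /* ticks per second, fixed at boot */
    QueryPerformanceCounter(&start);

    Sleep(100);                         /* the work being timed */

    QueryPerformanceCounter(&end);
    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %.6f s\n", seconds);
    return 0;
}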
To be fair, the question wasn't about syncing to audio or sizecoding.
added on the 2024-07-02 09:46:15 by Gargaj Gargaj
whoo manx! long time no see :) good to see you posting!

gargaj: correct, demoscene bbs and me being a size coder made me type the answer I typed.
added on the 2024-07-02 10:12:45 by NR4 NR4
My 2 cents: this bug seems to happen occasionally, so don't be alarmed. Maybe tying the time sync to another reference (using another function, as said above) would resolve your issue? :)
added on the 2024-07-02 11:21:28 by Defiance Defiance
Are audio timestamps 100% reliable nowadays? Because I remember having issues where audio timestamps would increment in steps of one second at a time, but that was more than 10 years ago.
added on the 2024-07-02 11:21:41 by Kabuto Kabuto
Quote:
My 2 cents, this bug seems to happen occassionaly so don't be alarmed.

Despite the erroneous title, that issue is about the delta between two calls to QPC being negative, so it's not related to the topic of this thread.
added on the 2024-07-02 11:42:24 by absence absence
Quote:
Is possible, ms bug

What bug is that? Do you have a reference?
added on the 2024-07-02 11:54:33 by absence absence
rennis250: You could ask about the issue here as well, see what answers you may pick up. :)
added on the 2024-07-02 13:00:27 by Defiance Defiance
Thanks all!
I am very thankful for all the tips/links and had not considered that approach to syncing with audio.
Always thankful to get sizecoding + demo advice :-)

I had looked at the "Acquiring high resolution timestamps" page before and did another (closer) read. I now found this line:

"QueryPerformanceCounter reads the performance counter and returns the total number of ticks that have occurred since the Windows operating system was started"

So, that indicates to me that it should be 0 after a reboot and remain positive, increasing, for a long time after that. In our case, the machine in question was recently rebooted and still showed negative (although increasing) timestamps from QPC. That's when I started to get suspicious.

I will look into that debug feature, manx, if only to test some things. Thanks!
added on the 2024-07-02 13:08:57 by rennis250 rennis250
Thanks Defiance.
I will post there as well.
added on the 2024-07-02 13:10:26 by rennis250 rennis250
Could it be that you're using the wrong data type? Can you reproduce this issue on any other machine? Maybe you have a memory access issue?
added on the 2024-07-02 16:58:53 by LJ LJ
It happens on some machines (sometimes) and not others.
Rebooting has no effect.

My teammates have been using this same code for ~10 years and the common consensus amongst them is that it is surprising, but normal, to get very negative (like -318596), but increasing, timestamps from QPC.

They said that this is true for high-performance timers on all systems, like CLOCK_MONOTONIC on Linux and mach_absolute_time on MacOS, as they all have an "arbitrary start point". But I grew suspicious, so I started to do some probing and questioning.

My understanding was that "arbitrary" means "depends on when you booted the computer", not "it is just purely random and could be negative".
The Microsoft docs imply that at least on Windows, it should be arbitrary but positive if you recently rebooted.

I mean QPC is being used via Cython in the code in question, so it could be that a conversion is going wrong somewhere. I will check and do some testing.

The info that all of you have provided here and the docs have already cleared up some of my confusion and oriented me better. Many thanks again!
added on the 2024-07-02 17:36:17 by rennis250 rennis250
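
As a hedged illustration only (this is not the actual Cython code in question): one kind of conversion bug that yields negative-but-increasing values is squeezing the 64-bit counter through a signed 32-bit type somewhere in the FFI layer, e.g.:

Code:
#include <windows.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER c;
    QueryPerformanceCounter(&c);
    /* Hypothetical bug: the FFI declares the counter as a signed 32-bit int. */
    int32_t truncated = (int32_t)c.QuadPart;
    printf("64-bit QPC:       %lld\n", (long long)c.QuadPart);
    printf("truncated to i32: %ld\n", (long)truncated);  /* can be negative, yet keeps increasing locally */
    return 0;
}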
Quote:
My teammates have been using this same code for ~10 years...


Did you happen to convert said code from x86 to x64? Like the code was meant for the 32-bit architecture and you compiled the binary as 64-bit? I had something similar happen to me on a project of mine. You could also try writing some sample code using QueryPerformanceCounter in plain C/C++ and see if the issue still happens on those machines.

absence: Maybe you could enlighten us about the nature of this issue...!
added on the 2024-07-02 20:50:37 by Defiance Defiance
Quote:
absence: Maybe you could enlighten us about the nature of this issue...!

I don't know anything about it, I just noticed that OP and the issue you linked to describe two different problems. But since OP calls QPC through a foreign function interface, I'd start by verifying that a hello world-like C program behaves in the same way, to remove as much unrelated code as possible.
added on the 2024-07-03 10:50:16 by absence absence
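
Something along these lines should do as the hello world-like check (no FFI, no scaling, just the raw values):

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER f, c;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&c);
    /* On a freshly booted system the raw counter should already be positive. */
    printf("QPF = %lld ticks/s\n", (long long)f.QuadPart);
    printf("QPC = %lld ticks (~%.1f s of counter time)\n",
           (long long)c.QuadPart, (double)c.QuadPart / (double)f.QuadPart);
    return 0;
}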
I strongly suspect there's something wrong on your end.

First,
Quote:
My teammates have been using this same code for ~10 years and the common consensus amongst them is that it is surprising, but normal, to get very negative (like -318596), but increasing, timestamps from QPC.


-318596, if that's an actual number, or at least in the ballpark of what you get, is not "very negative" in QPC terms. The perf counter runs at least in the MHz range so that particular number would become positive in a fraction of a second. If that were the kind of value you get from QPC itself, I'd say the odds of always hitting almost the exact point where it becomes positive are astronomically low.

So I guess that value is post scaling to whatever time resolution comes out of your code. Usually you want timestamps that are not dependent on whatever QPF reports so there's probably some piece of code a la "real_timestamp = qpc_value * our_timebase / qpf_value" in there. In here lies the first trap: This calculation has to be done either the lazy way with double precision floating point, or the proper way in 128 bit integer. 64bit ints aren't enough and give you weird overflow errors that suspiciously look like what you're reporting [source: I and people I know screwed that up several times], and even using doubles can have the same error because whoops, we did the cast to double a little too late and thanks to the compiler parts of the formula were still done in 64bit int.

Also always disregard the "QPC starts at boot" bit. Windows nowadays suspends to disk instead of shutting down (unless you manually restart or power cycle), and if it "boots" up again, the QPC value will happily continue from where you left it*. Same with sleep. There's no relation to wall clock time or anything, really. So QPC is only ever meant for measuring time intervals with a defined start and stop point - its absolute value doesn't mean shit. Fine for short lived durations (and even during a demo) but please use the good old system clock for anything that might cross a "user shuts their laptop" or similar boundary.

(* and if you just use your PC long enough without an explicit reboot, it'll easily get into the range where your math overflows)
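
To put a rough number on that overflow (assuming, purely for illustration, a 10 MHz counter frequency and a microsecond timebase, neither taken from the post above):

Code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int64_t qpf      = 10000000;   /* assumed: 10 MHz performance counter */
    const int64_t timebase = 1000000;    /* microseconds */
    /* qpc * timebase overflows int64 once qpc exceeds INT64_MAX / timebase */
    const int64_t limit_ticks = INT64_MAX / timebase;
    printf("64-bit overflow after %lld ticks = about %.1f days of counter time\n",
           (long long)limit_ticks,
           (double)limit_ticks / (double)qpf / 86400.0);
    return 0;
}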

So if you still want to use QPC, I'd suggest the following: Always use a time delta from your last measurement (first measurement inits and yields 0), scale, and add that to your output time. This decouples the output value from whatever the performance counter did before, and has the added advantage that you usually don't need 128 bits math if you call it regularly because the delta fits into 32 bits easily.

Here's some basic looking but battle tested code:
Code:
uint64_t GetMicroseconds()
{
    LARGE_INTEGER c;
    static LARGE_INTEGER f = {};
    static int64_t lastpc = 0;
    static uint64_t d = 0;
    static uint64_t time = 0;
    const uint64_t timeBase = 1000 * 1000;

    if (!f.QuadPart)
    {
        QueryPerformanceFrequency(&f);
        QueryPerformanceCounter(&c);
        lastpc = c.QuadPart;
    }

    QueryPerformanceCounter(&c);
    int64_t dpc = c.QuadPart - lastpc;
    lastpc = c.QuadPart;
    if (dpc > 0)
        d += dpc;
    uint64_t dt = d * timeBase / f.QuadPart;
    d -= dt * f.QuadPart / timeBase;
    time += dt;
    return time;
}
added on the 2024-07-03 12:25:04 by kb_ kb_
Is that essentially (c.QuadPart - startTime)*timeBase / f.QuadPart, and all the extra stuff works around potential hardware bugs that prevent QPC from being monotonic? Is that still an issue these days, or just a matter of not touching battle tested code that works?
added on the 2024-07-03 12:52:34 by absence absence
Oh right, and because of the 128-bit thing you mentioned.
added on the 2024-07-03 13:01:44 by absence absence
It's probably a bit more paranoid than strictly necessary nowadays but yeah, the 128 bits thing, and also we're talking possible years of uptime for the system as well as the application so we made sure it also survives the 63/64bit wraparound of the raw QPC value. Other than that, it's what you said, yeah. :)
added on the 2024-07-03 13:27:59 by kb_ kb_
