C++/Get time in milliseconds in Windows
I need to calculate time differences (in Win2k) in milli- or microseconds. I tried the GetSystemTime() and GetLocalTime() APIs but have no clue how to take a difference. The exact problem is something like this: each node in a linked list has to hold the current time and a structure, and I have to calculate the time difference, which is less than a second (milli/microseconds), between the first and next nodes. It's easy to get this on Unix using the gettimeofday() function, but I have no idea how on Windows. Please guide me on this.
Well, I have not looked into these areas for a while but a quick peek at the MSDN library documentation on the SYSTEMTIME structure reveals under the See Also section at the end of the article a couple of interesting things:
- the FILETIME structure
- the SystemTimeToFileTime function
In fact the remarks section specifically points out how to go about doing addition and subtraction from SYSTEMTIME values:
"It is not recommended that you add and subtract values from the SYSTEMTIME structure to obtain relative times. Instead, you should
- Convert the SYSTEMTIME structure to a FILETIME structure.
- Copy the resulting FILETIME structure to a ULARGE_INTEGER structure.
- Use normal 64-bit arithmetic on the ULARGE_INTEGER value."
The FILETIME structure is similar to the timeval structure used by gettimeofday, except that it stores the time as a 64-bit integer value split across low and high 32-bit integer fields. The value represents the number of 100-nanosecond intervals since 1 January 1601 (UTC).
You can therefore do something like the following (assuming <windows.h> and <iostream> have been included):
SYSTEMTIME     systemTime;
FILETIME       fileTime;
ULARGE_INTEGER uli;
GetSystemTime( &systemTime );
SystemTimeToFileTime( &systemTime, &fileTime );
uli.LowPart  = fileTime.dwLowDateTime;   // could use memcpy here!
uli.HighPart = fileTime.dwHighDateTime;
ULONGLONG systemTimeIn_ms( uli.QuadPart / 10000 );
std::cout << "System time in ms since 1 January 1601 (UTC): "
          << systemTimeIn_ms << '\n';
Note that if you do not have a copy of the MSDN library locally then it can be found online at http://msdn2.microsoft.com/
On the other hand, I often find the C/C++ standard clock() function useful (include <time.h> for C or <ctime> for standard C++). It returns an integer type aliased as clock_t that represents the number of ticks since the process started. The number of ticks per second for a given implementation is given by the macro CLOCKS_PER_SEC (or CLK_TCK on older compilers, specifically Visual C++ before version 6.0). On MS Win32 systems CLOCKS_PER_SEC is 1000.
The good thing about clock() is that it is part of the standard ANSI C and C++ libraries, so it should be available on all hosted C/C++ implementations. The bad thing about clock() is that it measures different things on different platforms. On some it is the so-called 'wall-clock' time in ticks; on others it is the CPU time used. This shows up if you put the process to sleep between taking timings with clock(): on a system that uses wall-clock time the difference will be roughly the time slept, while on those that use CPU time the difference will be near zero. This can be seen in the following example program (which can be built using MS Visual C++ on Windows or a compiler such as g++ on a UNIX/Linux platform):
#include <iostream>
#include <ctime>
#ifdef _MSC_VER    // Identifies MS compilers (crude test for Windows!)
# include <windows.h>
#else              // UNIX/Linux: nanosleep lives in <time.h>
# include <time.h>
#endif
void millisleep( unsigned int milliseconds )
{
#ifdef _MSC_VER    // If it's Visual C++ call the Win32 Sleep function
    Sleep( milliseconds );
#else              // Else assume a UNIX/Linux system with nanosleep function
    timespec ts;
    ts.tv_sec = milliseconds / 1000;
    ts.tv_nsec = (milliseconds - ts.tv_sec*1000) * 1000000;
    nanosleep( &ts, 0 );
#endif
}
int main()
{
    clock_t start( clock() );
    millisleep( 1000 );
    clock_t finish( clock() );
    std::cout << "Process sleep clock difference: "
              << finish-start << '\n';
    int const CountStart(100000000);
    int count( CountStart );
    start = clock();
    while ( --count )
        ;
    finish = clock();
    std::cout << CountStart << " iteration busy loop clock difference: "
              << finish-start << '\n';
}
I built the above using MS Visual C++ 8.0 (a.k.a. 2005) under Windows XP (in fact the x64 edition) and g++ 4.0.1 under SuSE Linux 10.0 64-bit running as a VMWare virtual machine on the XP system.
On my system under Windows XP it gave the following output:
Process sleep clock difference: 1000
100000000 iteration busy loop clock difference: 328
However, when run under Linux in the VM it gave:
Process sleep clock difference: 0
100000000 iteration busy loop clock difference: 320000
This shows two things: first, the Linux g++ value of CLOCKS_PER_SEC is larger than that used by Visual C++ 8.0 (i.e. the ticks are of shorter duration on my Linux platform); second, the Linux g++ implementation of clock() measures CPU time used by the process, whereas the Visual C++ implementation measures elapsed (wall-clock) time for the process.
Hope one of these methods will be of use.