Keywords: C programming | time measurement | gettimeofday function
Abstract: This article provides a comprehensive guide to measuring computation time in C using the gettimeofday function. It explains the fundamental workings of gettimeofday and the timeval structure, focusing on how to calculate time intervals through simple subtraction and convert results to milliseconds. The discussion includes strategies for selecting appropriate data types based on interval length, along with considerations for precision and overflow. Through detailed code examples and comparative analysis, readers gain deep insights into core timing concepts and best practices for accurate performance measurement.
Fundamentals of Time Measurement
Accurately measuring program execution time is a common and essential task in C programming. The gettimeofday() function, provided by the POSIX standard, is a widely used tool for this purpose. This function returns a struct timeval containing two key fields: tv_sec represents seconds since the UNIX epoch (January 1, 1970, 00:00:00 UTC), while tv_usec provides microsecond-level precision.
Core Calculation Method
Calculating time intervals is fundamentally a subtraction problem. The tv_sec field stores the number of seconds elapsed since the epoch. By calling gettimeofday() before and after a computational task, two timestamps are obtained; subtracting the earlier timestamp from the later one yields the execution time in seconds.
struct timeval start_time, end_time;
gettimeofday(&start_time, NULL);
// Execute the computational task to be measured
gettimeofday(&end_time, NULL);
time_t elapsed_seconds = end_time.tv_sec - start_time.tv_sec;
Implementing Millisecond Precision
While direct subtraction gives seconds, practical applications often require higher precision in milliseconds or microseconds. This necessitates calculations involving both seconds and microseconds fields. A complete formula for millisecond-level intervals is:
long long elapsed_microseconds = (end_time.tv_sec - start_time.tv_sec) * 1000000LL
+ (end_time.tv_usec - start_time.tv_usec);
long elapsed_milliseconds = elapsed_microseconds / 1000;
Note the potential "borrowing" scenario for the microseconds field. If the end time's microsecond value is smaller than the start time's, a borrow from the seconds field is conceptually required. The formula above handles this edge case automatically: the microsecond difference may be negative, but adding it to the seconds difference (already scaled to microseconds) still produces the correct total.
Data Types and Overflow Considerations
Data type selection becomes critical for longer time intervals. On platforms where long is 32 bits, a microsecond count overflows after LONG_MAX / 1,000,000 ≈ 2147 seconds, roughly 35 minutes. Therefore, for scenarios involving long-running tasks, long long (guaranteed to be at least 64 bits) is recommended to ensure computational accuracy.
Practical Application Example
The following complete example demonstrates measuring the execution time of a sorting algorithm:
#include <sys/time.h>
#include <stdio.h>
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j+1]) {
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
            }
        }
    }
}

int main() {
    struct timeval start, end;
    int data[10000];

    // Initialize test data in descending order (worst case for bubble sort)
    for (int i = 0; i < 10000; i++) {
        data[i] = 10000 - i;
    }

    gettimeofday(&start, NULL);
    bubble_sort(data, 10000);
    gettimeofday(&end, NULL);

    long long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000LL
                         + (end.tv_usec - start.tv_usec);
    printf("Sorting execution time: %lld microseconds (%lld milliseconds)\n",
           elapsed_us, elapsed_us / 1000);
    return 0;
}
Precision and Alternative Approaches
While gettimeofday() offers microsecond resolution, it reads the wall clock, which can jump when the system time is adjusted (for example by NTP or a manual change). For scenarios requiring a stable time source, consider clock_gettime() with the CLOCK_MONOTONIC clock, which is immune to such adjustments and better suited to performance measurement.
Additionally, for extremely short intervals (nanosecond level), platform-specific hardware counters or specialized profiling tools may be necessary. However, for most applications, the precision provided by gettimeofday() is sufficient and offers good cross-platform compatibility.
Conclusion
Using gettimeofday() for time measurement is a straightforward and effective approach. The key lies in understanding the subtractive nature of timestamps, correctly combining seconds and microseconds fields, and selecting appropriate data types based on measurement duration. In practice, considering precision requirements and potential alternatives ensures accurate and reliable time measurements tailored to specific needs.