I'm using clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) to measure the CPU time of the current thread, but on an M1 Mac the value it returns is far too small. On iPhones and on Intel Macs the same code behaves as expected:
#import <Foundation/Foundation.h>
#include <math.h>
#include <time.h>

// CPU time consumed by the calling thread, in seconds.
double CPUTime()
{
    struct timespec ts;
    if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) != 0) return 0.0;
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

// Monotonic wall-clock time, in seconds.
double RealTime()
{
    struct timespec ts;
    if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) return 0.0;
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
// Burn CPU on the calling thread for roughly one second of wall time.
void longTask()
{
    double begin = RealTime();
    double limit = 1.0;
    static double g_result = 0;
    while (1) {
        for (int i = 0; i < 1000000; i++) {
            g_result = sin(g_result + 5);
        }
        double now = RealTime();
        if (now - begin >= limit) break;
    }
}
// Time longTask() twice: once with the per-thread CPU clock,
// once with the monotonic clock.
void testClocks()
{
    double t0 = CPUTime();
    longTask();
    double t1 = CPUTime();
    double cpuTimeCost = t1 - t0;

    double t10 = RealTime();
    longTask();
    double t11 = RealTime();
    double realTimeCost = t11 - t10;

    NSLog(@"\nthread cpu: %lfs\nreal: %lfs", cpuTimeCost, realTimeCost);
}
On the M1, the output is:
thread cpu: 0.024395s
real: 1.006627s
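What strikes me is that the CPU figure is roughly 1/41 of the real time, which is close to the Mach timebase ratio (mach_timebase_info is the documented way to convert mach ticks to nanoseconds; I believe it reports numer/denom = 125/3 ≈ 41.7 on Apple Silicon, though that value is an assumption on my part). It is almost as if the clock were returning raw ticks instead of nanoseconds. A minimal sketch to print the ratio for comparison:

#include <mach/mach_time.h>

void printTimebase()
{
    mach_timebase_info_data_t tb;
    if (mach_timebase_info(&tb) == KERN_SUCCESS) {
        // numer/denom is the factor that converts mach ticks to nanoseconds.
        NSLog(@"timebase: %u/%u = %.3f", tb.numer, tb.denom,
              (double)tb.numer / tb.denom);
    }
}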
By contrast, on an iPhone or an Intel Mac, the two values are almost identical.
Any help here? I've also tried running under Rosetta, and the problem is the same.
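For what it's worth, per-thread CPU time can also be read through Mach's thread_info with THREAD_BASIC_INFO; a minimal sketch of that cross-check (fields as declared in <mach/thread_info.h>):

#include <mach/mach.h>

// CPU time (user + system) of the calling thread via Mach, in seconds.
double machCPUTime()
{
    mach_port_t thread = mach_thread_self();
    thread_basic_info_data_t info;
    mach_msg_type_number_t count = THREAD_BASIC_INFO_COUNT;
    kern_return_t kr = thread_info(thread, THREAD_BASIC_INFO,
                                   (thread_info_t)&info, &count);
    mach_port_deallocate(mach_task_self(), thread);
    if (kr != KERN_SUCCESS) return 0.0;
    return info.user_time.seconds   + info.user_time.microseconds   / 1e6
         + info.system_time.seconds + info.system_time.microseconds / 1e6;
}

Even if that sidesteps the issue, I'd still like to understand why the clock_gettime path is off on Apple Silicon.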