Inexplicable low memory crash (jetsam/per-process-limit)

hello,


I'm helping on a C++ game that crashes randomly after a while.


Incident Identifier: 9A342532-A340-4866-9709-FCB0C8E8F695
CrashReporter Key:   key
Hardware Model:      iPhone6,1
OS Version:          iPhone OS 8.1.2 (12B440)
Kernel Version:      Darwin Kernel Version 14.0.0: Mon Nov  3 22:23:57 PST 2014; root:xnu-2783.3.22~1/RELEASE_ARM64_S5L8960X
Date:                2016-08-10 17:33:46 -0400
Time since snapshot: 347 ms

Free pages:                              8149
Active pages:                            92924
Inactive pages:                          40026
Speculative pages:                       6601
Throttled pages:                         0
Purgeable pages:                         0
Wired pages:                             65621
File-backed pages:                       27128
Anonymous pages:                         112423
Compressions:                            698575
Decompressions:                          261500
Compressor Size:                         48900
Uncompressed Pages in Compressor:        96707
Page Size:                               16384
Largest process:   GameName


Processes
Name     | <UUID>     | CPU Time | rpages | purgeable | recent_max | lifetime_max | fds | [reason]            | (state)
GameName | <app_uuid> | 645.725  | 166401 | 0         | -          | 107755       | 100 | [per-process-limit] | (audio) (frontmost) (resume)


A page is 4 KB (or is it 16 KB?), which means 166401 * 4096 ≈ 650 MB.

So the OS is killing us when we reach roughly 650 MB.
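
(As a quick sanity check on the arithmetic, here is the conversion for both possible page sizes; with 16 KB pages the total would be more than 2.5 GB, which is more than the whole device has, so 4 KB looks like the unit rpages is counted in here.)

#include <cstdio>

int main() {
    const unsigned long long rpages = 166401;                          // from the jetsam report
    std::printf("4 KB pages:  %llu MB\n", (rpages * 4096ULL)  >> 20);  // ~650 MB
    std::printf("16 KB pages: %llu MB\n", (rpages * 16384ULL) >> 20);  // ~2600 MB
    return 0;
}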


However, when I use Instruments (Allocations, Activity Monitor, VM Tracker), it shows that we never use more than 400 MB.


What could explain this? Fragmentation?


Some more info:

We are using a 32-bit build at the moment (the 64-bit build still has bugs).

The crash happens only on devices with 1 GB of RAM (it works fine on devices with 2 GB or more).


Also, I'm having a hard time understanding all the various memory counters in Instruments:

there is Resident / VM Size in Activity Monitor

there is Persistent / Total Bytes in Allocations

there is Dirty / Resident Size in VM Tracker


Finally, is there an Instruments tool that lets us track the rpages value mentioned above?


Can anyone help?

Hi!


The "Dirty Size" number you see in the VM Tracker instrument should be close to the number the kernel tracks for your process. It won't be perfect because it is polled, so you could miss transient spikes. The best way to look at memory is to use the Allocations and VM Tracker Instruments paired together. This will allow you to see your allocations, and to see the backing dirty pages, which appear as "rpages" in the log you pasted.

hello,


There are no transient spikes. Allocations doesn't show any spike in memory allocation/usage before crashing. We also log every allocation to a file, and besides a few hundred KB of memory, nothing major is allocated prior to crashing.
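
For reference, the allocation logging is along these lines (a simplified sketch of the approach, not the exact code): global operator new/delete overrides that keep a running total of live heap bytes, and in the real code each allocation is also written to the log file.

#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

// Running total of live bytes allocated through the global operator new.
static std::atomic<unsigned long long> g_live_bytes{0};

// Small, alignment-preserving header in front of each allocation so the
// size can be recovered again in operator delete.
static const std::size_t kHeader = alignof(std::max_align_t);

void* operator new(std::size_t size) {
    void* raw = std::malloc(size + kHeader);
    if (!raw) throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = size;                   // remember the size
    g_live_bytes.fetch_add(size, std::memory_order_relaxed);
    return static_cast<char*>(raw) + kHeader;
}

void operator delete(void* ptr) noexcept {
    if (!ptr) return;
    void* raw = static_cast<char*>(ptr) - kHeader;
    g_live_bytes.fetch_sub(*static_cast<std::size_t*>(raw), std::memory_order_relaxed);
    std::free(raw);
}

A counter like g_live_bytes can be dumped periodically and compared against what Allocations reports.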


So jetsam says we are using 166401 pages (650 MB), while VM Tracker says we use about 350 MB of dirty memory:


% of Res | Type    | #Regs | Path       | Resident   | Dirty      | Virtual    | Res %
86%      | *Dirty* | 3,218 | <multiple> | 348.29 MiB | 346.92 MiB | 723.18 MiB | 48%

What could explain this 300 MB difference (650 vs. 350)?

thanks

So the issue with VM Tracker is that it polls the system, and because of this it can miss spikes. I bet what is happening is that something, perhaps the kernel, is mapping a large amount of memory into your process. The memory could also be compressed out of your process, so it doesn't appear in the dirty column.
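
One way to check the "something is mapping memory into your process" theory is to walk the process's VM regions yourself (roughly what VM Tracker does when it samples) and log anything large, along with its user tag. A rough sketch, assuming mach_vm_region_recurse is callable from the process; the 32 MB threshold is arbitrary:

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <cstdio>

// Enumerate the task's VM regions and print any mapping of 32 MB or more so
// an unexpectedly large mapping stands out. user_tag identifies which
// subsystem created the region (malloc, graphics, mapped files, ...).
static void DumpLargeRegions() {
    mach_vm_address_t addr = 0;
    natural_t depth = 0;
    for (;;) {
        mach_vm_size_t size = 0;
        vm_region_submap_info_data_64_t info;
        mach_msg_type_number_t count = VM_REGION_SUBMAP_INFO_COUNT_64;
        kern_return_t kr = mach_vm_region_recurse(
            mach_task_self(), &addr, &size, &depth,
            reinterpret_cast<vm_region_recurse_info_t>(&info), &count);
        if (kr != KERN_SUCCESS) break;   // walked past the end of the address space
        if (info.is_submap) {            // descend into submaps rather than skipping them
            ++depth;
            continue;
        }
        if (size >= 32ULL * 1024 * 1024) {
            std::printf("region %#llx  size %llu MB  tag %u  resident %u  dirty %u (pages)\n",
                        static_cast<unsigned long long>(addr),
                        static_cast<unsigned long long>(size >> 20),
                        info.user_tag, info.pages_resident, info.pages_dirtied);
        }
        addr += size;                    // advance to the next region
    }
}

Running this as the footprint approaches the limit should show whether a large region exists that Allocations never saw.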
