Hello Apple Forum,
We were testing out the following entitlements: 'Increased Memory Limit', 'Extended Virtual Addressing', and 'Extended Virtual Addressing (Debug)' when we realized the maximum amount of memory we could allocate had dropped from 32GB in our previous tests to 16GB. We were testing on the following devices:
- iPad Pro 12.9" (6th Gen), iPadOS 18.4 (Beta, 22E4232a)
- iPad Pro 11" (M4), iPadOS 18.3.2
Each device has 16GB of physical RAM, and because we are able to reach 16GB of allocation before the app crashes, we believe the entitlements are applied correctly. We tested each device both while charging and on battery, at 60, 80, and 100% charge. We understand that memory allocation is complex and that the OS is optimized for battery efficiency, which could limit maximum memory usage. However, using the same testing method we used on iPadOS 18.3 and 18.4, we were able to allocate 31~32GB of RAM in testing done in January of this year (on iPadOS 18.0, or maybe 18.1?). This makes us wonder: has there been a change in the core OS's memory allocation behavior since 18.0? And what could cause this drop?
The code we ran for our memory testing is as follows:
private var allocatedMemory: [UnsafeMutableRawPointer] = []

public func allocate1GBMemory() {
    let sizeInBytes = 1024 * 1024 * 1024
    if let pointer = malloc(sizeInBytes) {
        // Touch every byte so the pages are actually dirtied, not just reserved.
        for i in 0..<sizeInBytes {
            pointer.advanced(by: i).storeBytes(of: UInt8(i % 256), as: UInt8.self)
        }
        allocatedMemory.append(pointer)
        logger.print("Allocated 1GB of memory. Total allocated: \(allocatedMemory.count)GB")
    } else {
        logger.print("Failed to allocate memory")
    }
}
Each function call increases memory usage by allocating 1GB of space; we called this function repeatedly until usage reached 16GB, at which point the app crashed.
First off, let me clarify here:
We understand that memory allocation is complex and that the OS is optimized for battery efficiency, which could limit maximum memory usage.
The issue here isn't really about battery (or power). Architecturally, most modern operating systems have the same broad "attitude" toward RAM, which is that it's a CPU resource that should be "consumed", not saved. Limiting app memory usage doesn't inherently save power, as the system was going to use that memory for SOMETHING no matter what. The actual issue here is "what should the kernel do when it's running out of physical memory and still receiving requests for new memory?". The classic solution has been to write dirty pages out to disk; iOS, however, terminates apps instead. This is partly because of issues like flash write limits, but mostly because being unable to make phone calls because your phone is hung in swap death is completely ridiculous.
Understanding the role of "swap death" is critical here because, on its own, simply "disabling swap" wouldn't have actually prevented it. Conceptually (though they've become more specialized over time), swap files are simply another form of memory-mapped I/O, so, in theory, it's always been possible for an iOS app to:
- Create a large file on disk.
- mmap the file into memory.
- Use that allocation as "general" memory.
...at which point you've just recreated the same general memory architecture as macOS, complete with all of the same downsides.
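As a sketch (not a recommendation), those three steps look roughly like this in Swift; the function name, path handling, and error handling here are mine, not an Apple API:

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Illustrative sketch of the file-backed "general memory" approach
// described above.
func mapScratchFile(at path: String, size: Int) -> UnsafeMutableRawPointer? {
    // 1. Create a large file on disk.
    let fd = open(path, O_RDWR | O_CREAT, 0o600)
    guard fd >= 0 else { return nil }
    guard ftruncate(fd, off_t(size)) == 0 else { close(fd); return nil }

    // 2. mmap the file into the app's address space.
    // MAP_FAILED is ((void *)-1); spelled out here for portability.
    let mapFailed = UnsafeMutableRawPointer(bitPattern: -1)
    guard let ptr = mmap(nil, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0),
          ptr != mapFailed else {
        close(fd)
        return nil
    }
    close(fd) // the mapping holds its own reference to the file

    // 3. Use the mapping as "general" memory; the kernel pages it to disk.
    return ptr
}
```

The result is exactly the swap-backed memory described above: dirty pages in the mapping get written back to the file when the kernel needs the physical pages for something else.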
By default, we prevent that issue by artificially limiting address space and terminating the app when it hits that limit. That is, you can use the architecture above, but doing so won't actually give you access to significantly more memory than you'd otherwise have, removing most of the benefit this architecture would have provided.
That leads to here:
We were testing out the following entitlements: 'Increased Memory Limit', 'Extended Virtual Addressing', and 'Extended Virtual Addressing (Debug)'
What those entitlements do is remove or modify the address space restrictions. How much memory that actually gives you access to is a far less predictable question, but, in general, they will give you access to "more".
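For reference, the first two of those correspond to documented entitlement keys in the app's .entitlements file; a minimal sketch (I'm citing the key names from memory, so verify them against the current entitlement documentation before relying on them):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.developer.kernel.increased-memory-limit</key>
    <true/>
    <key>com.apple.developer.kernel.extended-virtual-addressing</key>
    <true/>
</dict>
</plist>
```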
With all that background, let me shift to here:
Each device has 16GB of physical RAM, and because we are able to reach 16GB of allocation before the app crashes,
How did you actually "crash"? There are two different failure points in the VM system, with very different causes:
- If you hit an address space limit, you'll crash in the VM system (in this case, inside malloc) when it "rejects" your address space request. This also means that ACTUAL memory usage is irrelevant, so you'll crash in exactly the same place even if you're not touching any pages.
- You can also crash because you've pushed the VM system too "hard", so it terminated your app to free memory. I haven't tested this in many, many years, but historically it was ENTIRELY possible for an app to do this by consuming memory very rapidly, even as the foreground app, which would otherwise have had priority.
Expanding on #2, the typical situation would be:
- The device had a lot of apps suspended in the background, filling up memory.
- The app launched into the foreground and immediately started consuming memory very quickly.
- Memory spiked "high" faster than the system could terminate those suspended apps, forcing it to terminate the foreground app.
The key point here is that how FAST memory is consumed matters just as much as the total memory being used.
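The two failure points can be separated in code by splitting "reserving address space" from "dirtying pages". A minimal sketch (the function names are mine, not an API):

```swift
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Failure point #1 lives here: malloc returns nil when the address-space
// request is rejected, regardless of how much memory is actually in use.
func reserveOnly(_ bytes: Int) -> UnsafeMutableRawPointer? {
    malloc(bytes) // consumes address space; no physical pages dirtied yet
}

// Failure point #2 lives here: touching the pages is what dirties them.
// Dirtying faster than the system can free memory gets the app killed
// outright rather than producing a recoverable error.
func reserveAndTouch(_ bytes: Int) -> UnsafeMutableRawPointer? {
    guard let p = malloc(bytes) else { return nil } // failure point #1
    memset(p, 0xAB, bytes)                          // failure point #2
    return p
}
```

Calling `reserveOnly` in a loop probes only the address-space limit; `reserveAndTouch` is the version that creates real memory pressure.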
The speed of allocation matters because of how it relates to this:
we were able to allocate 31~32GB of RAM in testing done in January of this year.
Assuming there weren't any issues in your code, the reason this worked is that iOS was compressing your old pages faster than you were dirtying new ones. You're writing very "patterned" data (so it should compress well), so, in theory, that could let you write a lot more data than you'd otherwise expect. I'm not sure how USEFUL that would actually be, but that's how it could work. I'll also note that it's VERY likely that compressed memory is already playing a significant role here:
we are able to reach 16GB of allocation usage
I'm not sure how exact your accounting was here, but a "16GB" iPad is incapable of giving your app "16GB" of physical memory. I don't have the exact numbers at hand, but I'd expect the "rest" of the system to be consuming at LEAST ~2GB, leaving you with at MOST ~14GB of physically available memory.
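One way to see how much the compressor is contributing to your numbers: replace the patterned `i % 256` fill with random bytes, which compress poorly and should produce a much lower, more realistic allocation ceiling. A sketch of such a fill (the helper is mine; `count` is assumed to be a multiple of 8 for simplicity):

```swift
// Fills a buffer with random bytes so the VM compressor gains little or
// nothing from compressing the resulting pages.
func fillWithRandomBytes(_ pointer: UnsafeMutableRawPointer, count: Int) {
    var rng = SystemRandomNumberGenerator()
    let words = pointer.bindMemory(to: UInt64.self, capacity: count / 8)
    for i in 0..<(count / 8) {
        words[i] = rng.next() // 8 random bytes per iteration
    }
}
```

If the gap between "patterned" and "random" runs is large, compression was doing a lot of the work in your 31~32GB result.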
has there been a change in the core OS's memory allocation behavior since 18.0?
This is a bit of a trick question, as the VM system is basically under constant development. It's one of the few places in the system where virtually ANY performance improvement will result in significant real-world gains. What I will say is:
- I'm not aware of any specific change that would explain what you're seeing, but we don't really document these internal changes, and it's certainly possible I've overlooked something.
- There's a LOT of space here for changes in its implementation to create this kind of issue, particularly in this kind of artificial test.
- I think letting your app get to 30+ GB of memory was VERY likely a mistake. While the compressor might have let you get that far, it's not particularly "safe". The compressor works by "stuffing" multiple VM pages into each physical page, which also means that jumping around in those pages and/or modifying them can cause large spikes that will quickly get your app killed.
And what could cause this drop?
The place to start here is the crash log. If you crashed in malloc, then it's likely you hit a memory limit. If you crashed when touching memory, then it's likely you pushed the VM system too hard. Either way, post the log and I'll see if that tells me more.
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware