Increased Memory Limit, Extended Virtual Addressing effects on recent iPadOS 18.3

Hello Apple Forum,

We were testing the following entitlements: 'Increased Memory Limit', 'Extended Virtual Addressing', and 'Extended Virtual Addressing (Debug)' when we realized that the maximum amount of memory we could allocate had dropped from 32GB in our previous test to 16GB. We were testing on the following devices:

  • iPad Pro 12.9 (6th Gen), iPadOS 18.4 (Beta, 22E4232a)
  • iPad Pro 11 (M4), iPadOS 18.3.2

Each device has 16GB of physical RAM, and because we are able to reach 16GB of allocation usage before the app crashes, we believe the entitlements are applied correctly. Each test device was tested both while charging and on battery, at 60, 80, and 100% charge. We understand that memory allocation is complex and that the OS is optimized for battery efficiency, which could limit maximum memory usage. However, with the same testing method we are now using on iPadOS 18.3 and 18.4, we were able to allocate 31~32GB of RAM in testing done in January this year (on iPadOS 18.0, or maybe 18.1?). This makes us wonder: has there been a change in the core OS handling of memory allocation since 18.0? And what could be the cause of this drop?

The code we ran for our memory testing is as follows:

import Foundation

private var allocatedMemory: [UnsafeMutableRawPointer] = []

/// Allocates 1GB with malloc and touches every byte so the pages are actually dirtied.
public func allocate1GBMemory() {
    let sizeInBytes = 1024 * 1024 * 1024
    if let pointer = malloc(sizeInBytes) {
        // Write a repeating byte pattern so every page becomes resident, not just reserved.
        for i in 0..<sizeInBytes {
            pointer.advanced(by: i).storeBytes(of: UInt8(i % 256), as: UInt8.self)
        }
        allocatedMemory.append(pointer)
        logger.print("Allocated 1GB of memory. Total allocated: \(allocatedMemory.count)GB")
    } else {
        logger.print("Failed to allocate memory")
    }
}

Each function call increases memory usage by allocating 1GB of space, and we called this function repeatedly until usage reached 16GB and the app crashed.
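
For reference, here is a minimal sketch of how the test drives that function, building on the snippet above; the one-second interval is arbitrary and a Timer is just one way to produce a "tick":

// One 1GB allocation per tick until the OS terminates the process.
let allocationTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    allocate1GBMemory()
}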

Answered by DTS Engineer in 830459022

First off, let me clarify here:

We understand that memory allocation is complex and that the OS is optimized for battery efficiency, which could limit maximum memory usage.

The issue here isn't really about battery (or power). Architecturally, most modern operating systems have the same broad "attitude" toward RAM, which is that it's a CPU resource that should be "consumed", not saved. Limiting app memory usage doesn't inherently save power, as the system was going to use memory for SOMETHING no matter what. The actual issue here is "what should the kernel do when it's running out of physical memory and still receiving requests for new memory?". The classic solution has been to write dirty pages out to disk; iOS, however, terminates apps instead. That's partly because of issues like flash write limitations, but mostly because being unable to make phone calls because your phone is hung in swap death is completely ridiculous.

Understanding the role of "swap death" is critical here because, on its own, simply "disabling swap" wouldn't have actually prevented "swap death". Conceptually (though they've become more specialized over time), swap files are simply another form of memory-mapped I/O, so, in theory, it's always been possible for an iOS app to:

  • Create a large file on disk.
  • mmap the file into memory.
  • Use that allocation as "general" memory.

...at which point you've just recreated the same general memory architecture as macOS, complete with all of the same downsides.
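
As a rough illustration, those three steps map onto standard POSIX calls; the path, size, and error handling in this sketch are placeholders, not a recommendation:

import Darwin

/// Hypothetical sketch only: map a large on-disk file and hand it out as "general" memory.
func mapScratchFile(atPath path: String, byteCount: Int) -> UnsafeMutableRawPointer? {
    // 1. Create a large file on disk.
    let fd = open(path, O_RDWR | O_CREAT, 0o600)
    guard fd >= 0, ftruncate(fd, off_t(byteCount)) == 0 else {
        if fd >= 0 { close(fd) }
        return nil
    }

    // 2. mmap the file into the process address space.
    let mapping = mmap(nil, byteCount, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
    close(fd) // the mapping keeps the file contents reachable

    // 3. Use the mapping as "general" memory; dirty pages are written back to the file.
    guard mapping != UnsafeMutableRawPointer(bitPattern: -1) else { return nil } // MAP_FAILED
    return mapping
}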

By default, we prevented that issue by artificially limiting address space and terminating the app when it hits that limit. That is, you can use the architecture above but doing so won't actually give you access to significantly more memory than you would otherwise have access to, removing most of the benefit this architecture would have provided.

That leads to here:

We were testing the following entitlements: 'Increased Memory Limit', 'Extended Virtual Addressing', and 'Extended Virtual Addressing (Debug)'

What those entitlements do is remove or modify the address space restrictions. How much memory that actually gives you access to is a far less predictable question but, in general, they will give you access to "more".
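
One rough way to see what limit you actually ended up with at runtime is to compare installed RAM against os_proc_available_memory() (declared in <os/proc.h>, iOS 13+). This is a minimal sketch; the numbers will vary with device, OS version, entitlements, and current load:

import Foundation
import os // os_proc_available_memory()

/// Logs installed RAM next to the remaining per-app limit at this moment.
func logMemoryHeadroom() {
    let installed = ProcessInfo.processInfo.physicalMemory // bytes of physical RAM
    let remaining = os_proc_available_memory()             // bytes this app may still use
    print("Installed RAM: \(installed / 1_048_576) MB, remaining app limit: \(remaining / 1_048_576) MB")
}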

With all that background, let me shift to here:

Each device has 16GB of physical RAM, and because we are able to reach 16GB of allocation usage before the app crashes,

How did you actually "crash"? There are actually two different failure points in the VM system, with very different causes:

  1. If you hit an address space limit, you'll crash in the VM system (in this case, inside malloc) when it "rejects" your address space request. This also means that ACTUAL memory usage is irrelevant, so you'll crash in exactly the same place even if you're not touching any pages.

  2. You can also crash because you've pushed the VM system too "hard", so it terminated your app to free memory. I haven't tested this in many, many years, but historically it was ENTIRELY possible for an app to do this by very rapidly consuming memory, even as the foreground app, which would otherwise have had priority.

Expanding on #2, the typical situation would be:

  • The device had a lot of apps suspended in the background, filling up memory.

  • The app launched into the foreground and immediately started consuming memory very quickly.

  • Memory spiked "high" faster than the system could terminate those suspended apps, forcing it to terminate the foreground app.

The key point here is that how FAST memory was used matters just as much as the total memory being used.
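
As an aside, here is a sketch of one way a test like this could notice pressure before jetsam steps in: the standard DispatchSource memory-pressure source delivers warning/critical events (the reactions below are placeholders):

import Dispatch

// Observe system memory-pressure events so an aggressive allocator can throttle itself.
let pressureSource = DispatchSource.makeMemoryPressureSource(eventMask: [.warning, .critical],
                                                             queue: .main)
pressureSource.setEventHandler {
    let event = pressureSource.data
    if event.contains(.critical) {
        print("Critical memory pressure: stop allocating and release what you can")
    } else if event.contains(.warning) {
        print("Memory pressure warning: slow the allocation rate")
    }
}
pressureSource.activate()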

The speed of allocation matters because of how it relates to this:

we were able to allocate 31~32GB of RAM in testing done in January this year.

Assuming there weren't any issues in your code, the reason this worked is that iOS was compressing your old pages faster than you were dirtying new ones. You're writing very "patterned" data (so it should compress well), so, in theory, that could let you write a lot more data than you'd otherwise expect. I'm not sure how USEFUL that would actually be, but that's how it could work. I'll also note that it's VERY likely that compressed memory is already playing a significant role here:

we are able to reach 16GB of allocation usage

I'm not sure how exact your accounting was here, but a "16GB" iPad is incapable of giving your app "16GB" of physical memory. I don't have the exact numbers at hand, but I'd expect the "rest" of the system to be consuming at LEAST ~2 GB, leaving you with at MOST ~14GB of physically available memory.

has there been a change in the core OS handling of memory allocation since 18.0?

This is a bit of a trick question, as the VM system is basically under constant development. It's one of the few places in the system where virtually ANY performance improvement will result in significant real-world gains. What I will say is:

  • I'm not aware of any specific change that would specifically explain what you're seeing, but we don't really document any of these internal changes and it's certainly possible I've overlooked something.

  • There's a LOT of space here for changes in its implementation to create this kind of issue, particularly in this kind of artificial test.

  • I think letting your app get to 30+ GB of memory was VERY likely a mistake. While the compressor might have let you get that far, it's not particularly "safe". The compressor works by "stuffing" multiple VM pages into each physical page, but that also means that jumping around in those pages and/or modifying them can cause large spikes that will quickly get your app killed.

And what could be the cause of this drop?

The place to start here is the crash log. If you crashed in malloc, then it's likely that you hit a memory limit. If you crashed when touching memory, then it's likely that you pushed the VM system too hard. Either way, post the log and I'll see if that tells me more.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

Hello @DTS Engineer,

I sincerely appreciate your detailed response. It has helped tremendously in getting a handle on how iOS might be handling memory allocations. I was able to find JetsamEvent files, which contain memory-related information logged at the time of the app crash. Please let me know if there is any other source that might help you.

The following is a partial snippet of that log:

{
  "states" : [
    "active",
    "frontmost"
  ],
  "genCount" : 0,
  "purgeable" : 2,
  "rpages" : 856759,
  "physicalPages" : {
    "internal" : [
      15583,
      840732
    ]
  },
  "mem_regions" : 408,
  "uuid" : "{{ uuid }}",
  "fds" : 50,
  "name" : "{{ Test App }}",
  "priority" : 100,
  "csTrustLevel" : 4,
  "cpuTime" : 5.742299,
  "freeze_skip_reason:" : "none",
  "age" : 517739576,
  "coalition" : 2140,
  "csFlags" : 570434309,
  "reason" : "vm-compressor-space-shortage",
  "pid" : 5235,
  "lifetimeMax" : 856764,
  "killDelta" : 1271598
},

And when I run the test app in the Release configuration from Xcode, the app crashes with the following message:

The app “{{ Test App }}” has been killed by the operating system because it is using too much memory.
Domain: IDEDebugSessionErrorDomain
Code: 11
Recovery Suggestion: Use a memory profiling tool to track the process memory usage.
User Info: {
DVTErrorCreationDateKey = "2025-03-24 04:42:36 +0000";
IDERunOperationFailingWorker = DBGLLDBLauncher;
}
--
Event Metadata: com.apple.dt.IDERunOperationWorkerFinished : {
"device_identifier" = "00008112-001239143678A01E";
"device_isCoreDevice" = 1;
"device_model" = "iPad14,5";
"device_osBuild" = "18.4 (22E5232a)";
"device_platform" = "com.apple.platform.iphoneos";
"device_thinningType" = "iPad14,5-B";
"dvt_coredevice_version" = "397.28";
"dvt_coresimulator_version" = "993.7";
"dvt_mobiledevice_version" = "1759.93.3";
"launchSession_schemeCommand" = Run;
"launchSession_state" = 2;
"launchSession_targetArch" = arm64;
"operation_duration_ms" = 50332;
"operation_errorCode" = 11;
"operation_errorDomain" = IDEDebugSessionErrorDomain;
"operation_errorWorker" = DBGLLDBLauncher;
"operation_name" = IDERunOperationWorkerGroup;
"param_debugger_attachToExtensions" = 0;
"param_debugger_attachToXPC" = 1;
"param_debugger_type" = 3;
"param_destination_isProxy" = 0;
"param_destination_platform" = "com.apple.platform.iphoneos";
"param_diag_113575882_enable" = 0;
"param_diag_MainThreadChecker_stopOnIssue" = 0;
"param_diag_MallocStackLogging_enableDuringAttach" = 0;
"param_diag_MallocStackLogging_enableForXPC" = 1;
"param_diag_allowLocationSimulation" = 1;
"param_diag_checker_tpc_enable" = 1;
"param_diag_gpu_frameCapture_enable" = 0;
"param_diag_gpu_shaderValidation_enable" = 0;
"param_diag_gpu_validation_enable" = 0;
"param_diag_guardMalloc_enable" = 0;
"param_diag_memoryGraphOnResourceException" = 0;
"param_diag_mtc_enable" = 1;
"param_diag_queueDebugging_enable" = 1;
"param_diag_runtimeProfile_generate" = 0;
"param_diag_sanitizer_asan_enable" = 0;
"param_diag_sanitizer_tsan_enable" = 0;
"param_diag_sanitizer_tsan_stopOnIssue" = 0;
"param_diag_sanitizer_ubsan_enable" = 0;
"param_diag_sanitizer_ubsan_stopOnIssue" = 0;
"param_diag_showNonLocalizedStrings" = 0;
"param_diag_viewDebugging_enabled" = 1;
"param_diag_viewDebugging_insertDylibOnLaunch" = 1;
"param_install_style" = 2;
"param_launcher_UID" = 2;
"param_launcher_allowDeviceSensorReplayData" = 0;
"param_launcher_kind" = 0;
"param_launcher_style" = 99;
"param_launcher_substyle" = 0;
"param_runnable_appExtensionHostRunMode" = 0;
"param_runnable_productType" = "com.apple.product-type.application";
"param_structuredConsoleMode" = 1;
"param_testing_launchedForTesting" = 0;
"param_testing_suppressSimulatorApp" = 0;
"param_testing_usingCLI" = 0;
"sdk_canonicalName" = "iphoneos18.2";
"sdk_osVersion" = "18.2";
"sdk_variant" = iphoneos;
}

Seeing from the first log, 'vm-compressor-space-shortage' sounds like there might be some issue related to #2 of the VM system's failure points. I have tested several times, both cranking up memory right at launch and playing around in the app for a while first, but I'm not sure. In every case, we were allocating 1GB of memory per tick and reached 15GB within 1~5 minutes of app launch.

Also, looking at how our code has changed since January when we first tested: back then we were calling memset(_, 0, _), writing the same kind of "patterned" data you described. That may be why we were able to allocate so much memory, though reproducing it was not possible on the current iOS version (18.3).
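
As an aside, one way to take compression mostly out of the equation in a test like this would be to dirty each allocation with random bytes instead of memset/patterned data. A rough sketch, for illustration only:

import Foundation

/// Same 1GB allocation as before, but filled with incompressible random bytes
/// so the VM compressor can't mask the real physical footprint.
func allocate1GBIncompressible() -> UnsafeMutableRawPointer? {
    let sizeInBytes = 1024 * 1024 * 1024
    guard let pointer = malloc(sizeInBytes) else { return nil }
    arc4random_buf(pointer, sizeInBytes) // random data compresses poorly
    return pointer
}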

Anyhow, thank you for your support again.

So, breaking down some of the numbers:

"rpages" : 856759,

This is the total number of pages your process had. This includes pages from things like the shared library caches, so it's not really all that useful (aside from overall "magnitude").

"physicalPages" : {
"internal" : [
15583,
840732
]

Those numbers more directly correspond to "your" memory, with the first number being uncompressed pages and the second being compressed. Converting those page counts into gigabytes:

page count x page size = size
15583 x 16 KiB = ~243 MiB (uncompressed)
840732 x 16 KiB = ~12.8 GiB (compressed)
856315 x 16 KiB = ~13.0 GiB (total)

Which is broadly in line with my previous post on how much physical memory was available.
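
For reference, a small sketch of that conversion in Swift; it assumes the 16 KiB page size of recent arm64 devices, which is what the vm_kernel_page_size global reports there:

import Darwin

/// Converts a jetsam page count to GiB using the kernel's VM page size.
func gibibytes(fromPageCount pages: UInt64) -> Double {
    let pageSize = UInt64(vm_kernel_page_size) // 16384 on recent arm64 devices
    return Double(pages * pageSize) / Double(1 << 30)
}

// gibibytes(fromPageCount: 840_732) ≈ 12.8, matching the compressed figure above.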

That leads to here:

Seeing from the first log, 'vm-compressor-space-shortage' sounds like there might be some issue related to #2 of the VM system's failure points.

vm-compressor-space-shortage was added in iOS 18 and is triggered by restrictions in the VM system on the total volume of compressed pages in the system. The underlying issue here is that accessing a compressed page is obviously much slower than accessing an uncompressed page. If the compressed page set is unconstrained, this can eventually cause the system to fail, as the act of touching a previously compressed page forces the system to compress an existing page in order to free memory to decompress the requested page.

Note that while your test avoids this (it only touches pages at initialization), that's obviously not how real-world usage looks, nor a usage pattern the system cares about supporting*.

*You're basically testing the system's ability to compress abandoned memory.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

Ahh, I see.

I should have added this in the question: our results were based on values derived from task_vm_info_data_t, converted to MB:

let usedMemoryMB = vmInfo.phys_footprint / (1024 * 1024)
let virtualMemoryMB = vmInfo.virtual_size / (1024 * 1024)
let swappedMemoryMB = vmInfo.compressed / (1024 * 1024)
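
For context, a minimal sketch of how that structure is typically filled in, using the standard task_info(TASK_VM_INFO) pattern; error handling is kept to the bare minimum:

import Darwin

func currentVMInfo() -> task_vm_info_data_t? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(MemoryLayout<task_vm_info_data_t>.stride
                                       / MemoryLayout<integer_t>.stride)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { rebound in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), rebound, &count)
        }
    }
    return kr == KERN_SUCCESS ? info : nil
}

// if let vmInfo = currentVMInfo() {
//     let usedMemoryMB = vmInfo.phys_footprint / (1024 * 1024)
//     let swappedMemoryMB = vmInfo.compressed / (1024 * 1024)
// }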

What I meant by "we've allocated 32GB" were the values from .compressed and phys_footprint. In your previous answer, you mentioned that when the OS is pressured to allocate more memory, the VM system on iOS terminates apps instead. Because vmInfo.compressed had a value of 32GB, we suspected disk swap was occurring, but that may not have been the case. Since we were writing patterned data during that test, the VM system may simply have been able to compress it better.

Nevertheless, thanks to you, we understand that our testing strategy was misleading with respect to our actual use case. Our use case is to see how much memory the app can handle during a 3D reconstruction phase, which would mean handling much more complicated data.

Anyhow, thank you again for the help. It has helped us better understand the workings of iOS.

What I meant by "we've allocated 32GB" were the values from .compressed and phys_footprint. In your previous answer, you mentioned that when the OS is pressured to allocate more memory, the VM system on iOS terminates apps instead. Because vmInfo.compressed had a value of 32GB, we suspected disk swap was occurring, but that may not have been the case.

So, just to be clear, iOS is still not doing disk swap as such. What iOS 18 added was an additional protection to prevent memory compression growth (which still happens entirely in physical memory) from impairing (and eventually collapsing) the overall user experience.

The underlying issue here is what would happen to overall performance if the system were allowed to simply compress more and more memory:

  1. Initially, the performance impact is fairly minimal. Strictly speaking, access to the small number of compressed pages is "slower", but the cost and frequency are low enough that it would be difficult to differentiate those delays from normal scheduling variability.

  2. As more and more pages are compressed, the impact does increase but is still basically invisible. For example, memory from suspended apps might be compressed first since, by definition, those pages tend to be "old". That usage is basically "invisible" to the user and actually improves overall performance, since waking a suspended app is both faster and more power efficient than launching it from scratch, even if "all" of that app's memory is compressed.

  3. As page compression continues to increase, it will eventually begin applying to more and more "live" pages, becoming more and more visible to the user. The exact details of when that point is reached depend heavily on memory usage patterns, but the overall result is a general slowdown as the VM system is forced to spend more and more time compressing/decompressing memory.

  4. Eventually you'd reach the point where the VM system is forced to spend "all" of its time decompressing pages (so they're usable) and recompressing pages (so it has space to decompress). At this point performance completely collapses and the device is no longer really usable.

  5. The kernel panics, either because one of the user-space watchdog processes fails to check in (more likely) or because the VM system "gives up".

That leads back to "vm-compressor-space-shortage". That jetsam kill basically has two goals:

  • The primary goal is to make sure the system stays away from the "bad" side of 3 and can "never" reach 4.

  • The secondary goal is to "discourage" memory patterns that aren't necessarily "dangerous", but aren't really a great idea either.

Expanding on the second point, this started with a case where ~30GB of memory was being manipulated. Even if the usage pattern made it "safe", using compressed memory to nearly double the device's real memory isn't really what compressed memory is "for". It's only possible if the usage pattern is very stable (one page at a time, relatively slow access, no backtracking) AND the data compresses really well. Those conditions don't apply to most apps and, if they do, then there is almost certainly a better approach than simply pushing everything into memory.

Expanding on that idea, when you're talking about using very large amounts of memory, that usage generally falls into one of two "camps":

  1. The data itself isn't really "valuable" (that is, you could recreate it at any time); you just don't WANT to recreate it unless you "have" to. This is things like video game textures, where having them in memory makes everything faster but the data itself can be retrieved at will. This is what purgeable memory (or possibly mmap'ing) is "for": it lets you keep data in memory and the system will throw it away if it has to (see the sketch after this list).

  2. The data IS "valuable", meaning this is user data you need to preserve and keep safe. This kind of data can't simply be left in memory; it needs to be saved/stored/etc. so that it can't be lost. How exactly that's handled is an app-specific implementation detail, but the key point here is that any data you've written to disk immediately becomes type #1 data.
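
As a rough sketch of camp #1, NSPurgeableData keeps recreatable content in purgeable storage so the system can discard it under pressure instead of terminating the app. The wrapper type below is hypothetical:

import Foundation

final class RecreatableBuffer {
    private let storage = NSPurgeableData()

    init(byteCount: Int) {
        storage.length = byteCount // NSPurgeableData is created in the "accessed" state...
        storage.endContentAccess() // ...so end access to make it discardable
    }

    /// Runs `body` if the content is still resident; returns nil if it was purged
    /// and needs to be recreated from its original source.
    func withContentIfAvailable<T>(_ body: (NSPurgeableData) -> T) -> T? {
        guard storage.beginContentAccess() else { return nil }
        defer { storage.endContentAccess() }
        return body(storage)
    }
}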

Anyhow, thank you again for the help. It has helped us better understand the workings of iOS.

Happy to help!

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

Hi Kevin,

Thank you for this interesting post. A decade ago I was struggling with an app that tried to use large memory-mapped read-only data files.

By default, we prevented that issue by artificially limiting address space and terminating the app when it hits that limit. That is, you can use the architecture above but doing so won't actually give you access to significantly more memory than you would otherwise have access to, removing most of the benefit this architecture would have provided.

So now I know that my inability to map large (multi-gigabyte) files for reading was because you didn't want people mapping large files for read/write as a kind of swap. I don't think anyone said that at the time.

It's much too late now, but I do wonder whether you could have used some method other than simply restricting the virtual memory size, one that would still have permitted read-only mappings.
