Clarification on Disk Write Limits (bug_type 145) and Cross-Volume Write Amplification

Hello Apple Developer Support and Community,

I am a senior software engineer investigating a Disk Writes Resource Violation (bug_type 145) for a photo-management application (BeePhotos v2.3.0). We observed a violation where the app dirtied approximately 1GB of file-backed memory in just 48 seconds, triggering a resource report.

[Core Diagnostic Data]

The following data is extracted from the .crash report:

  • Event: disk writes
  • Action taken: none
  • Writes caused: 1073.96 MB over 48.28s (Average: 22.24 MB/second)
  • System Limit: 1073.74 MB over 86,400 seconds (Daily limit)
  • Device: iPhone 15 Pro (iPhone16,2)
  • OS Version: iOS 26.4 (Build 23E244)
  • Free Space: 3852.25 MB (Approx. 3.8 GB)

[Implementation Details]

Our application performs the following sequence for a 1GB video download:

  1. Download: Uses NSURLSessionDownloadTask to download the file to the system-provided location URL (in the /tmp or com.apple.nsurlsessiond directory).
  2. Move: In didFinishDownloadingToURL, we move the file to the App’s sandbox Library/Caches directory using FileManager.default.moveItem(at:to:).
  3. Save: We then add the file to the Photo Library via PHAssetCreationRequest.addResource(with:fileURL:options:) using the local URL in Library/Caches.
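As a minimal sketch, the three steps above look roughly like this (delegate wiring, authorization checks, and error handling simplified; names like DownloadDelegate are illustrative):

```swift
import Foundation
import Photos

// Sketch of the download -> move -> save sequence described above.
final class DownloadDelegate: NSObject, URLSessionDownloadDelegate {
    func urlSession(_ session: URLSession,
                    downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        let caches = FileManager.default.urls(for: .cachesDirectory,
                                              in: .userDomainMask)[0]
        let dest = caches.appendingPathComponent(location.lastPathComponent)
        do {
            // Step 2: move the file out of the nsurlsessiond-managed
            // location before this callback returns.
            try FileManager.default.moveItem(at: location, to: dest)
            // Step 3: ingest the local copy into the Photo Library.
            PHPhotoLibrary.shared().performChanges({
                let request = PHAssetCreationRequest.forAsset()
                request.addResource(with: .video, fileURL: dest, options: nil)
            }, completionHandler: nil)
        } catch {
            // A failed move leaves nothing at `dest`; surface the error.
            print("Ingest failed: \(error)")
        }
    }
}
```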

[Technical Questions]

I suspect the 1GB download is being "amplified" into ~3GB of total physical writes, and I would like to confirm the following:

  1. Cross-Volume Move: Does moving a file from the nsurlsessiond managed temporary directory to the App’s sandbox Library/Caches constitute a Cross-Volume Move on APFS? If so, does this effectively double the write count (1GB download + 1GB copy-on-move)?
  2. PHPhotoLibrary Ingestion: When using PHAssetCreationRequest, does the system perform another 1:1 data copy of the source file into the assets database? Would this result in a 3rd GB of writing?
  3. Low Disk Space Impact: Given the device only had 3.85 GB free, does the system’s "low disk space" state (near the 150MB threshold) increase the overhead for metadata updates or physical write amplification that counts towards this limit?
  4. Best Practices: To stay within the daily 1GB budget for high-resolution media, is it recommended to call PHAssetCreationRequest directly using the location URL from didFinishDownloadingToURL to avoid intermediary copies? Are there any permission or lifecycle risks with this approach?

Any insights from the Apple engineering team or the community on how to minimize the write footprint during high-speed ingestion would be highly appreciated.

Best regards,

The following data is extracted from the .crash report:

So, a few points from the log itself:

Action taken: none

In other words, the system generated this log and then did... nothing. Your app did not crash, nor was there any user-visible impact. We use the same format for a variety of different events, and many of those logs are informational/diagnostic, not disruptive to the user.

Writes caused: 1073.96 MB over 48.28s (Average: 22.24 MB/second)

Note that the data rate there (22.24 MB/s) is EXTREMELY slow relative to the device's disk I/O performance. Whatever is going on here is tied almost entirely to network I/O, not the disk itself.

Next, I want to start here:

...To stay within the daily 1GB budget

Staying below this budget shouldn't necessarily be one of your app’s "goals". Storage and I/O are resources that exist to perform work on the user’s behalf. If the user tells you to download a 5GB file, then your app should download that 5GB file as quickly as possible, not stagger the download over 5 days to avoid blowing past the budget.

The point of this log is to catch cases where I/O performance is being WASTED on unnecessary work, not to restrict useful I/O. The article "Reducing disk writes" is full of excellent guidance, but one of the most important points is this one:

"Assess whether the amount of data recorded seems reasonable for your app. If the numbers are greater than what you expect, you may be writing data too frequently. For example, if your app’s files total 100 KB and your app writes 500 MB of data to disk every day, you might want to investigate how many times you’re writing the same data to disk each day."

The key words there are "greater than what you expect". Do whatever optimization you can, but what ultimately matters here is that you're using disk in ways that directly serve the user's needs and interests, not staying below a fairly arbitrary* limit.

*Yes, the limit is basically "arbitrary". The goal was to pick a number big enough that most "typical" apps would only exceed it if/when they made a mistake that caused them to generate lots of unnecessary I/O.

I suspect the 1GB download is being "amplified" into ~3GB of total physical writes, and I would like to confirm the following:

I'd need to manually check each of the code paths you mentioned to be sure, but no, I don't think that's what's going on. If you want to validate that for yourself, you can compare the "volumeIdentifierKey" of both URLs to confirm they're on the same volume. As long as they’re on the same volume, all of our high-level APIs will use file cloning, which means the actual storage growth will be minimal[1].
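If it helps, that volume comparison can be done with a small helper like this (a sketch, assuming both URLs are reachable file URLs):

```swift
import Foundation

// Sketch: returns true when both file URLs are on the same APFS volume,
// i.e. when a move stays a logical rename and a copy can be a clone.
func onSameVolume(_ a: URL, _ b: URL) throws -> Bool {
    let idA = try a.resourceValues(forKeys: [.volumeIdentifierKey]).volumeIdentifier
    let idB = try b.resourceValues(forKeys: [.volumeIdentifierKey]).volumeIdentifier
    guard let idA, let idB else { return false }
    return idA.isEqual(idB)
}
```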

However, I think it's pretty clear from the "raw" transfer rate that you're not getting the 3x multiplier you think you are. Your data rate of 22 MB/s is about what I'd expect for bulk network I/O over "common" consumer networks. A 2x or 3x multiplier would imply a much slower (11 MB/s or ~7 MB/s) underlying network connection, at which point I think you'd be getting "outlier" logs with much larger numbers in cases where the app was being run on much faster networks.

Putting that in more real-world terms:

Our application performs the following sequence for a 1GB video download:

Per the raw numbers, that means:

  1. No multiplier -> ~48s to download the video.

  2. 2x multiplier -> ~96s/1m 30s to download the video

  3. 3x multiplier -> ~144s/2m 24s to download the video

All of those are theoretically possible, but #1 is the case I'd expect to see most of the time.
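Spelled out, a sketch of that arithmetic, using the numbers from the log:

```swift
import Foundation

// Back-of-the-envelope check: if physical writes were amplified by some
// multiplier, the implied network rate (and download time) shifts accordingly.
let writtenMB = 1073.96                      // writes reported in the log
let windowSeconds = 48.28                    // reporting window
let writeRate = writtenMB / windowSeconds    // ~22.24 MB/s of physical writes
for multiplier in [1.0, 2.0, 3.0] {
    let networkRate = writeRate / multiplier    // implied download speed
    let downloadTime = writtenMB / networkRate  // time to fetch the ~1 GB file
    print(String(format: "%.0fx amplification -> %.1f MB/s network, ~%.0f s download",
                 multiplier, networkRate, downloadTime))
}
```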

A few other comments:

We move the file to the App’s sandbox Library/Caches directory using

It's ultimately a judgment call, but I'm not sure whether I'd use Caches for this. Large file downloads are a substantial resource commitment (time/network/storage), so if you're performing a specific "cycle" like this, it could be worth protecting that asset so you're not forced to download the full file all over again.

I don't know the specifics of your app, but my intuition would be to do something like this:

  1. Download file
  2. Move file to your "own" designated directory inside your app’s data container.
  3. Save file into Photo Library
  4. (maybe) Move file to Caches so you can reuse the file again if the user "asks" you to.

Your designated directory can be excluded from backup so you don't need to mark individual files, but this flow makes sure you don't risk losing a large download if/when something goes wrong at exactly the wrong time.
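A sketch of that directory setup, assuming a directory named "Downloads" inside Application Support (the name is illustrative):

```swift
import Foundation

// Sketch: create a dedicated downloads directory inside the app's data
// container and exclude the whole directory from backup, so individual
// files never need to be marked.
func makeDownloadsDirectory() throws -> URL {
    let support = try FileManager.default.url(for: .applicationSupportDirectory,
                                              in: .userDomainMask,
                                              appropriateFor: nil,
                                              create: true)
    var dir = support.appendingPathComponent("Downloads", isDirectory: true)
    try FileManager.default.createDirectory(at: dir,
                                            withIntermediateDirectories: true)
    var values = URLResourceValues()
    values.isExcludedFromBackup = true
    try dir.setResourceValues(values)
    return dir
}
```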

Also, #4 assumes that there is some reason/benefit to your app having access to the file again "in the future". If there isn't, then it's better to just delete the file once you're done.

Low Disk Space Impact: Given the device only had 3.85 GB free, does the system’s "low disk space" state (near the 150MB threshold) increase the overhead for metadata updates or physical write amplification that counts towards this limit?

First off, in terms of the immediate issue, no, it won't really have any effect. The limit is basically a "fixed" value that you'd generally hit regardless of the larger volume state.

Having said that, did that number come from the log itself or did your app retrieve it? Setting the immediate issue aside, you should be checking free space through NSURL and adjusting your app's behavior based on the result. Whether or not 3 GB is a concern depends on which key you read: that's about right for volumeAvailableCapacityKey (actual volume usage), but it's around the point where I'd start throttling (volumeAvailableCapacityForOpportunisticUsageKey) or even warning the user (volumeAvailableCapacityForImportantUsageKey) that they're short on storage.
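For reference, a sketch reading all three capacity values from the app's home directory:

```swift
import Foundation

// Sketch: read the three capacity values from the app's home directory.
// The "important"/"opportunistic" variants are only meaningful for the
// home directory URL, not arbitrary paths.
let home = URL(fileURLWithPath: NSHomeDirectory())
let keys: Set<URLResourceKey> = [.volumeAvailableCapacityKey,
                                 .volumeAvailableCapacityForImportantUsageKey,
                                 .volumeAvailableCapacityForOpportunisticUsageKey]
if let values = try? home.resourceValues(forKeys: keys) {
    print("Raw free space:", values.volumeAvailableCapacity ?? 0)
    print("For important (user-initiated) usage:",
          values.volumeAvailableCapacityForImportantUsage ?? 0)
    print("For opportunistic (background) usage:",
          values.volumeAvailableCapacityForOpportunisticUsage ?? 0)
}
```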

[1] For the curious, in APFS, the actual tracking of I/O usage is done within the lowest levels of the VFS driver at the point where it's actually committing physical blocks to disk, so primarily "logical" operations like file cloning are largely invisible.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware
