Q&A Summary - Fortify your app: Essential strategies to strengthen security


This is a recap of the Q&A from the Meet with Apple activity Fortify your app: Essential strategies to strengthen security. If you attended the event and asked questions, thank you for coming and participating! If you weren’t able to join us live, we hope this recap is useful.

Memory Integrity Enforcement (MTE)

What is Memory Integrity Enforcement and which devices support it?

Memory Integrity Enforcement is supported on A19, A19 Pro, M5, M5 Pro, and M5 Max chips, which power iPhone 17e, the new MacBook Air (M5), and the new MacBook Pro (M5 Pro or M5 Max). Starting in the 26.4 OS versions, applications that enable MTE (checked-allocations) as part of Enhanced Security will also run with MTE enabled in the simulator when running on macOS hardware that supports MTE.

How can I use Memory Integrity Enforcement with third-party SDKs?

Third-party SDKs linked into your app will generally use the system allocator and therefore benefit from Memory Integrity Enforcement automatically. If there are memory corruption bugs in those SDKs that Memory Integrity Enforcement features like MTE detect and turn into crashes, you'll want to work with the developers of those SDKs to have them fix the underlying bugs. You could use MTE soft mode to avoid having those memory corruptions crash your app while you wait for fixes from the developers, at the cost of the relative reduction in security that entails.

Why does my app crash on launch with MTE enabled, with tags showing as 0?

Tag-check violations where the ltag (logical tag) is 0 and the atag (actual tag) is non-zero can be caused by code patterns that strip the high bits that the ltag is stored in and fail to restore them before use. Additionally, arm64 binaries produced by older versions of clang may have issues where the tag is incorrectly stripped from the pointer. Recompiling the binary with a recent compiler should remediate the issue.

Can I use Memory Integrity Enforcement with older Swift versions?

Yes, Memory Integrity Enforcement can be used with any Swift version.

Pointer Authentication (PAC)

How does Pointer Authentication work and why is it opt-in?

PAC is an opt-in feature because, although adopting PAC is frequently as easy as turning on the compiler flag, some software is not trivially compatible. For example, while using memcpy to copy a C++ object mostly works on arm64, it is invalid and generates fatal exceptions on arm64e. Additionally, PAC is a compile-time change, as it requires different instructions throughout the program.

Pointer authentication makes it more difficult to create a pointer (from an integer) or to modify an existing pointer. This complements technologies such as MTE (which can catch many bounds and lifetime errors) and typed allocation (which mitigates the effects of memory reuse).

Where are the cryptographic keys for Pointer Authentication stored?

The keys used for generating PAC signatures are stored in the CPU itself as specified by the ARM architecture. These keys are ephemeral and can change across process launches and boots, depending on which PAC key is used. The signatures are, however, stored in the upper bits of the pointer itself.

How does Pointer Authentication work with Objective-C method swizzling?

When you use the functions provided by the ObjC runtime, they ensure that any necessary pointer signing is correctly handled.

What deployment targets and OS versions support Pointer Authentication?

PAC is tied to the arm64e architecture. arm64e is first supported in iOS 17.4, and generally supported starting with iOS 26 and macOS 26. Universal binaries can be built for arm64e + arm64, and arm64 will be used when arm64e isn't supported. When building the universal binary, both architectures can be compiled for an older deployment target, but keep in mind that arm64e will only be used on newer iOS versions.

How do I enable Pointer Authentication in modular apps?

arm64e is indeed required, and every target that contributes binary code that's linked or dynamically loaded into an app does need to have arm64e added as an architecture. When enabling the Enhanced Security capability, Xcode adds the ENABLE_POINTER_AUTHENTICATION build setting (that adds arm64e) as needed, but you may need to add that separately as well.

Bounds Safety and Annotations

How do bounds safety checks work in Clang?

With -fbounds-safety enabled, Clang will emit bounds checks wherever pointers are dereferenced or reassigned (exception: assigning to __bidi_indexable does not trigger a bounds check, since __bidi_indexable can track the fact that the pointer is out of bounds and defer the bounds check). If the bounds check fails, the program will jump to an instruction that traps the process. Clang uses a combination of static analysis and runtime checks to enforce that pointer bounds are respected.

How can I work with libraries that don't have bounds annotations?

Forging safe pointers at the boundary (using __unsafe_forge_single, etc.) is the recommended approach when interoperating with libraries that do not have bounds annotations, as it makes it explicit that you're interacting with unsafe code. This also makes it easy to grep for "unsafe" in your code base when doing a security audit.

If you are confident that the API adheres to a bounds safe interface but simply lacks the annotations, you can redeclare the signature in your local header with added bounds annotations, like this:

//--- system_header.h
bar_t * /* implicitly __unsafe_indexable */ foo();

//--- project_header.h
#include <ptrcheck.h>
#include <system_header.h>
bar_t * __single foo();

How can I safely pass Swift data to C/C++ functions?

This is a great question! Automatically generated wrapper functions that safely unwrap Span types and pass along the pointer to C/C++ is a feature available since Xcode 26 when the experimental feature SafeInteropWrappers is enabled. This requires annotating std::span<T> parameters with __noescape, or pointer parameters with both __noescape and __counted_by/__sized_by, directly in the header or using API notes. Note that this is only safe if Swift can accurately track the lifetime of the unwrapped pointer, which is why the Span wrapper is not generated without the __noescape annotation.

Since this is an experimental feature with ongoing development, questions and feedback on the Swift forums are extra welcome to help us shape and stabilize this feature!



Swift and Memory Safety

What enhanced security features are useful for fully Swift apps?

Enabling enhanced security features like PAC, EMTE, and typed allocations is still useful in Swift apps. In certain language modes, Swift apps that do not use unsafe constructs can still have memory corruption issues due to data races (concurrently modifying a reference can cause reference counting errors). Similarly, although your application may be fully safe Swift, it may interact with libraries (provided by Apple or third parties) which are not fully memory safe, so turning on enhanced security features will help protect you against issues not caused by your code.

How can I use unsafe Swift methods safely?

Methods with "unsafe" in the name can be used safely, but it requires more care and attention to do so. You won't always be able to avoid using such methods, but you can carefully isolate such code and of course make sure to devote more effort to reviewing and verifying any code that uses unsafe methods or types.

How can I protect unsafe code in memory-safe languages like Swift or Rust?

On Apple platforms, EMTE is a great way to mitigate issues caused by this gap. Even kernel accesses to user memory are tag checked, and so out-of-bounds/use-after-free accesses to user memory will still be reported according to the process' enforcement mode.

Other techniques, such as fuzzing with libFuzzer with either ASan or EMTE enabled, can also serve as strategies to gain confidence that unsafe code has the desired memory safety properties.

Standard Library Hardening

When should I enable standard library hardening?

We recommend enabling at least the fast hardening mode, even in production builds. It has been designed to have minimal performance impact, but if you have very performance-sensitive workloads, as always: benchmark before and after. For configurations with optimization disabled (level 0, normally Debug configurations), __LIBRARY_HARDENING_DEFAULT_VALUE defaults to "debug" (the more extensive checks).

For optimized configurations (e.g. Release), __LIBRARY_HARDENING_DEFAULT_VALUE defaults to "fast" if Enhanced Security is enabled, "none" otherwise.

Typed Allocators

Do type and alignment interact in typed allocators?

Mostly, yes, but not entirely. The typed allocators segregate and isolate allocations by size class and, within each size class, by type space partition. Let's call the (size class, type space partition) combination the "type bucket" that serves a particular allocation. Requesting aligned allocations (e.g. via aligned_alloc()) can change the effective size class of an allocation because of implementation details of the allocators, and so can change the type bucket that the allocation is served from.

Development Tools and Testing

How can I verify if an app has adopted enhanced security features?

If you have the application bundle available to you, you can use the codesign tool to view an application's entitlements:

codesign -d --entitlements - /path/to/application/binary

As EMTE is controlled by entitlement, you can use this technique to see if EMTE is enabled for a given executable in the app.

Can I see memory tags in Instruments or memgraphs?

When running under the Allocations template in Instruments, the memory tags do show up in object addresses. In memgraphs and CLI tools like heap, these tags aren't currently exposed, and the addresses correlate directly to their containing VM regions. If you're interested in seeing which regions have tagging enabled, that information is available with vmmap --attributes <target>.

How can I enable enhanced security features in Swift Package Manager?

Any clang compiler option can be included in cSettings. One caveat: prior to Swift 6.2, only some clang options were allowed; starting with Swift 6.2, you can pass any option.
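For example, a hypothetical Package.swift target could pass such options through cSettings using unsafeFlags (the target name and flag here are illustrative, not from the original answer):

```swift
.target(
    name: "MyCLibrary",
    cSettings: [
        // Illustrative: forward a clang option to C compilations
        // in this target (Swift 6.2 tools or later for arbitrary options).
        .unsafeFlags(["-fbounds-safety"])
    ]
)
```

Note that packages using unsafeFlags are subject to SwiftPM's usual restrictions on unsafe build settings.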

What Xcode version do I need to test enhanced security features?

Most of the capabilities of Enhanced Security are supported on and can be tested on Apple OS versions starting from 26.0.

The "hard mode" of MTE (under which tag-check violations result in an immediate crash) is only supported beginning in 26.4, so to test with that capability on hardware with MTE support you'll need to test with 26.4 or later OS versions. The new enhanced-security-version-string and platform-restrictions-string entitlements described in the Xcode 26.4 release notes are set automatically by Xcode 26.4, but can be set manually in your entitlements plist using a text editor if you need to stay on an earlier Xcode version.

Third-Party Libraries and Auditing

How can I evaluate third-party libraries for security vulnerabilities?

Evaluating third-party libraries can be tricky and require a lot of domain expertise as there are a wide variety of issues to look for. This is especially true if you only have access to the libraries in a binary form. Certain features—like EMTE—serve as mitigations and can be applied (in some cases) without a recompile, which can help in cases where you cannot easily audit the source.

When auditing the source, some good hints for finding security relevant bugs are:

  1. Memory unsafe code (C/C++, Swift code which uses unsafe, etc.), especially decoders and deserialization routines (e.g. bespoke file/request format parsing).
  2. File path construction and archive expansion (zip files, etc.) are a common source of problems due to path traversal.
  3. Whether or not the code is memory safe, take special care to understand where it comes from and whether you trust it.


Data Storage Security

How should I handle data storage in UserDefaults and plist files?

UserDefaults should be used to store "preferences", not "user data". The issue here isn't really security; it's that UserDefaults is an indirect API where you don't really control how the data is stored or managed. That's fine for "preference" data like "where was the user the last time I ran", but it isn't really the right way to store actual "user data".

plist files, or more specifically data written and read using NSPropertyListSerialization, are a very reasonable way to store user data of the same types/formats you'd store in UserDefaults (Strings/Data/Array/Dictionary/etc.). They shouldn't be used to store large amounts of data ("megabytes"), but that's primarily because that use case doesn't really "fit" the API's role, not because of any fundamental flaw.

In terms of pure data security, both approaches are fairly similar. The data is stored in the file system and is protected the same way as any other file. However, using your own plist file does mean you can directly set the file's protection level, which is another reason to avoid storing sensitive data in UserDefaults.

Similarly, both use the same on-disk format and data parser. Theoretically, that's an attack vector, just like any file parser. However, this particular parser is relatively simple compared to other formats (for example, SQL) and so widely used throughout the entire system that it should be considered quite safe.

What's the recommended baseline for protecting sensitive data in iOS apps?

The baseline for most data should be NSFileProtectionCompleteUntilFirstUserAuthentication (for files) and kSecAttrAccessibleAfterFirstUnlock (keychain). The issue with storing data at higher security levels is that data stored at "Complete" can really only be safely accessed while your app is in the foreground. More specifically, the data will only be accessible if the device is unlocked, but that state changes independently from background execution, meaning data can suddenly become inaccessible anytime your app is running in the background.

You can and should use "Complete" whenever you can, but it's a choice that needs to be made as part of your app's overall design, not as an automatic default.

Cryptographic Keys and Secure Enclave

How can I verify that a public key was generated in the Secure Enclave?

As general guidance, to learn more about generating, protecting, and obtaining extractable cryptographic keys in the Security and Apple CryptoKit frameworks, please review the following resources:

Generating New Cryptographic Keys

Protecting keys with the Secure Enclave

Getting an Existing Key

Storing Keys as Data

BlastDoor and File Processing

How does BlastDoor protect against file-based vulnerabilities?

Properly validating images can itself be a perilous task, and so it's one best not performed inside a (relatively) privileged process like Messages. BlastDoor validates and unpacks attachments in a highly restricted environment and then passes a much safer, simpler version of the content to Messages. So, for example, Messages may give BlastDoor a JPG and BlastDoor will ultimately hand back a simple bitmap representation of it. This insulates Messages from memory corruption bugs in the JPG parser.

Messages allows sending (mostly) arbitrary files but—notably—it does not parse/preview/unpack arbitrary files. Messages and BlastDoor work together to ensure that attachments which are processed are processed in a way that's safe. It is generally safe to accept arbitrary files so long as they are not processed.
