Hi Apple Developer Community,

I'm implementing MetricKit launch performance tracking in our iOS app and need clarification on two properties: histogrammedTimeToFirstDraw and histogrammedOptimizedTimeToFirstDraw.

The documentation problem: the official MetricKit documentation provides minimal explanation of these properties beyond their names. Based on naming conventions, I initially assumed:
- histogrammedTimeToFirstDraw = cold launches
- histogrammedOptimizedTimeToFirstDraw = warm/optimized launches

Based on our measurements:
- The "optimized" metric appears only in a small fraction of launches
- The optimized metric is actually slower
- The naming suggests the opposite behavior

Questions:
1. What specific launch conditions does each metric measure?
2. Why would optimized launches be slower and less frequent?
3. Is histogrammedOptimizedTimeToFirstDraw related to iOS app pre-warming or prediction features?
4. If these metrics don't correspond to cold vs. warm launch times, is there an alternative way to measure them accurately?
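For context, here is a minimal sketch (class and log names are illustrative, not our production code) of how we read both histograms from an MXMetricManagerSubscriber:

```swift
import MetricKit

final class LaunchMetricsSubscriber: NSObject, MXMetricManagerSubscriber {
    func start() {
        // Register to receive daily metric payloads from the system.
        MXMetricManager.shared.add(self)
    }

    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            guard let launch = payload.applicationLaunchMetrics else { continue }
            log(histogram: launch.histogrammedTimeToFirstDraw, label: "timeToFirstDraw")
            if #available(iOS 15.2, *) {
                log(histogram: launch.histogrammedOptimizedTimeToFirstDraw,
                    label: "optimizedTimeToFirstDraw")
            }
        }
    }

    private func log(histogram: MXHistogram<UnitDuration>, label: String) {
        // Each bucket covers a duration range and a count of launches in it.
        for case let bucket as MXHistogramBucket<UnitDuration> in histogram.bucketEnumerator {
            print("\(label): \(bucket.bucketStart)-\(bucket.bucketEnd): \(bucket.bucketCount) launches")
        }
    }
}
```

In practice both histograms are often sparse or absent on a given day, which makes the frequency difference between the two metrics hard to interpret without knowing what launch conditions each one covers.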
Search results for "SwiftUI List performance": 50,611 results found
I’m running into a problem with SwiftUI/AppKit event handling on macOS Tahoe 26.2. I have a layered view setup:

- Bottom: AppKit NSView (NSViewRepresentable)
- Middle: SwiftUI view in an NSHostingView with drag/tap gestures
- Top: Another SwiftUI view in an NSHostingView

On macOS 26.2, the middle NSHostingView no longer receives mouse or drag events when the top NSHostingView is present. Events pass through to the AppKit view below. Removing the top layer immediately restores interaction. Everything works correctly on macOS Sequoia.

I’ve posted a full reproducible example and detailed explanation on Stack Overflow, including a single-file demo: https://stackoverflow.com/q/79862332

I also found a related older discussion here, but couldn’t get the suggested workaround to apply: https://developer.apple.com/forums/thread/759081

Any guidance would be appreciated. Thanks!
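The full demo is on Stack Overflow; as a rough, hypothetical reduction of the layering described above (view contents here are placeholders, not the real app):

```swift
import SwiftUI
import AppKit

final class LayeredViewController: NSViewController {
    override func loadView() {
        // Bottom layer: plain AppKit view.
        let root = NSView(frame: NSRect(x: 0, y: 0, width: 400, height: 300))

        // Middle layer: SwiftUI content with gestures, hosted in AppKit.
        let middle = NSHostingView(rootView:
            Rectangle()
                .fill(Color.blue.opacity(0.3))
                .gesture(DragGesture().onChanged { _ in print("middle drag") })
        )
        middle.frame = root.bounds
        root.addSubview(middle)

        // Top layer: another hosted SwiftUI view. On macOS 26.2 its mere
        // presence reportedly stops the middle layer from receiving events.
        let top = NSHostingView(rootView: Color.clear)
        top.frame = root.bounds
        root.addSubview(top)

        view = root
    }
}
```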
Project Background: I am developing a third-party custom keyboard for iOS whose primary feature is real-time voice input. In my current design, responsibilities are split as follows:

1. The container (main) app is responsible for:
- Audio recording
- Speech recognition (ASR)

2. The keyboard extension is responsible for:
- Providing the keyboard UI
- Initiating the voice input workflow
- Receiving transcription results via an App Group
- Inserting recognized text into the active text field using textDocumentProxy.insertText(_:)

Intended User Flow:
1. The user is typing in a third-party app (for example, WeChat) using my custom keyboard.
2. The user taps a “Voice Input” button in the keyboard extension.
3. The keyboard extension activates the container app so that audio recording and ASR can begin.
4. After recording has started, control returns to the original app where the user was typing.
5. The container app continues running in the background, maintaining active audio recording and ASR. Recognized text is co
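The App Group hand-off between the two processes can be sketched as below (the group identifier and key are hypothetical; both targets would need the App Groups capability enabled with that ID):

```swift
import Foundation

// Hypothetical App Group identifier shared by the container app
// and the keyboard extension.
let groupID = "group.com.example.voicekeyboard"

// Container app side: publish the latest transcription result.
func publishTranscription(_ text: String) {
    UserDefaults(suiteName: groupID)?.set(text, forKey: "latestTranscription")
}

// Keyboard extension side: read the latest transcription, then insert it
// into the active text field via textDocumentProxy.insertText(_:).
func fetchTranscription() -> String? {
    UserDefaults(suiteName: groupID)?.string(forKey: "latestTranscription")
}
```

Note that UserDefaults does not notify the other process when a value changes, so the extension typically has to poll or use a cross-process signal (such as Darwin notifications) to learn that new text is available.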
Hi, I understand that AVPlayer/AVFoundation doesn’t natively play MPEG-DASH manifests (.mpd) today, while HLS is supported and widely documented by Apple. I’m not asking for roadmap commitments, but I’d like to understand whether there is any publicly documented rationale for not supporting DASH/MPD in AVFoundation (e.g., technical constraints, platform integration, DRM ecosystem, power/performance considerations, etc.).

Questions:
1. Is there any Apple statement or documentation explaining why DASH (MPD) isn’t supported in AVFoundation?
2. Is Apple’s recommended approach still “provide HLS for Apple clients” (potentially sharing CMAF segments and generating separate manifests)?
3. If there’s no public rationale, is filing Feedback Assistant the best channel for requesting MPD playback support?

Thanks!
To whom it may concern regarding bugs in SwiftUI for iOS 26:

I inadvertently discovered a bug that duplicates a ToolbarItem, in any placement, when navigationBarBackButtonHidden is set to true.

    .toolbar {
        ToolbarItem(placement: .confirmationAction) {
            Button("Stop", systemImage: "stop.fill") {
                // some action
            }
        }
    }
    .navigationBarBackButtonHidden(true)

Expected behavior: the ToolbarItem is shown once.
Actual behavior: items in that placement are duplicated.

Thank you.
The first step is to confirm that your decoration is added, using fileproviderctl evaluate. Then make sure that the decoration is listed under the NSFileProviderDecorations key in the extension's Info.plist.
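As an illustrative (not authoritative) sketch of the shape of that entry; the identifier and values here are hypothetical, so check the NSFileProviderDecorations documentation for the exact keys your decoration requires:

```xml
<!-- In the File Provider extension's Info.plist -->
<key>NSFileProviderDecorations</key>
<array>
    <dict>
        <key>Identifier</key>
        <string>com.example.decoration.shared</string> <!-- hypothetical -->
        <key>Category</key>
        <string>Badge</string>
        <key>Label</key>
        <string>Shared Item</string>
    </dict>
</array>
```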
Topic: App & System Services · SubTopic: Core OS
I started using Xcode to package and submit my app yesterday, but it failed. Halfway through the process I received a prompt: “Your app must be registered with App Store Connect before it can be uploaded. Xcode will create an app record with the following properties.” After filling it out and submitting, I received another prompt: “App Record Creation Error: App Record Creation failed as you do not have permission to perform requests of this type.” I have already listed several apps on the App Store. For the other apps, I also tried creating new versions, packaging, and submitting them, but I encountered the same error.
Hello,

We operate a subscription-based app and have noticed some users canceling after seeing higher-than-expected or multiple Apple charges on their bank statements. In many cases, these appear to be aggregated App Store charges rather than the cost of our subscription alone. Because bank statements show a single Apple charge without an app-level breakdown, some users assume the full amount came from our app and cancel before contacting support.

We’ve observed that Google Play lists charges separately with the app name, which seems to reduce this type of confusion. A more granular breakdown or clearer labeling of charges per app could help improve user clarity and avoid churn.

I’m interested to know:
- Whether other developers have experienced similar user confusion
- Whether there are recommended best practices to set clearer expectations for users
- Whether Apple has shared any guidance on mitigating this from a UX or communication standpoint

Appreciate any insights or shared experiences. Thank you.
Topic: App Store Distribution & Marketing · SubTopic: General
[quote='869492022, yuvalishay, /thread/809084?answerId=869492022#869492022, /profile/yuvalishay'] is there some mechanism to filter packets in advance? [/quote] Just to be sure we’re on the same page here: You want to implement an NE packet filter, but only be called for specific packets. For example, you might want to apply a filter that means you only see TCP packets with a specific remote port. Is that right? If so, then, no, the current packet filter provider has nothing like that. [quote='869492022, yuvalishay, /thread/809084?answerId=869492022#869492022, /profile/yuvalishay'] do you think there is a point in opening a feature request? [/quote] Yes. The industry has a history of providing general filter support like this (for example, BPF). I don’t know if such an approach will actually improve performance given the constraints of the NE architecture, but it’s a perfectly reasonable request. Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1
Topic: App & System Services · SubTopic: Networking
Not sure I understand Apple Intelligence being unusable. I always include the following in the session instructions: “The person's locale is \(locale.identifier).” The response then comes back in the expected language, which must be one of the languages supported by Apple Intelligence (which is not the same as Siri's language list).

    /// Returns locale instructions for the model, following Apple's recommended approach.
    private static func localeInstructions() -> String {
        guard let preferredLanguage = Bundle.main.preferredLocalizations.first else {
            return ""
        }
        let locale = Locale(identifier: preferredLanguage)
        if Locale.Language(identifier: "en_US").isEquivalent(to: locale.language) {
            return ""
        } else {
            return "The person's locale is \(locale.identifier)."
        }
    }
Topic: Machine Learning & AI · SubTopic: General
I want to use the Observations AsyncSequence on some SwiftData @Model instances to determine whether internal calculations need to be done. When a simple property is linked to the Observations sequence, it fires continuously even though no change is made to the model property. Also, when I try to observe a property that is a list of another @Model type, the Observations sequence does not fire when I add or remove items. I am hoping to use swift-async-algorithms' merge function to combine all the associated sequences, so that the calculation runs whenever any of the observed properties changes.
[quote='871218022, App Review, /thread/812004?answerId=871218022#871218022'] Thank you for your post. We've begun investigating but we've been unable to locate your app submission to provide further assistance. Can you provide the name and App ID associated with the app? These can be found in App Store Connect in the App Information tab. [/quote] Thank you for getting back to us. The app name is BabyNena, and the App ID is 6505036261. These details are listed in App Store Connect under the App Information tab. Please let us know if you need any additional information from our side. We appreciate your assistance and look forward to your guidance. Best regards,
Topic: App Store Distribution & Marketing · SubTopic: App Review
One of your targets is likely including the Info.plist file in the Copy Bundle Resources step.

1. Select your project in the project navigator. It's the first item at the top, with the little blue icon.
2. Select your main iOS target under the TARGETS section.
3. In the big main pane, select the Build Settings tab.
4. In the search field in the top-right, enter info, then scroll down to the Packaging section. You likely have Generate Info.plist File set to Yes. This is fine; leave it as it is.

For each target you have (you may have only one):

1. Select the target from the TARGETS section.
2. In the big main pane, select the Build Phases tab.
3. Expand the Copy Bundle Resources item.
4. If Info.plist is listed there, remove it. (Do not remove InfoPlist.strings!)

Hope this helps.
Topic: Developer Tools & Services · SubTopic: Xcode
I'm building a voice-to-text keyboard extension that needs to open the main app briefly for audio recording (since keyboard extensions can't record audio), then return the user to their original app.

The flow I'm trying to achieve:
1. User is in WhatsApp (or Messages, Slack, etc.)
2. User taps the Voice button in my keyboard
3. My main app opens via deep link (myapp://keyboard/dictation)
4. App starts recording
5. App automatically returns the user to WhatsApp

I cannot find a way to detect which app the keyboard is running inside, or which app opened my main app via the deep link. What I've tried:
- UIInputViewController.textDocumentProxy - no host app information available
- UIApplication.OpenURLOptionsKey.sourceApplication in application(_:open:options:) - when opened from a keyboard extension, does this return the host app bundle ID or the keyboard extension bundle ID?
- Private APIs (for research only, not production): _hostBundleID on UIInputViewController - blocked/returns nil on iOS 18; KVC approaches - all blocked
- Hardcoded app support - Works but
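For the sourceApplication question, this is the delegate method I'm testing with (a minimal sketch; I don't yet know what value arrives when the open is triggered from an extension):

```swift
import UIKit

// App delegate method called when the app is opened via the deep link.
// For openings triggered from an extension, sourceApplication may be nil
// or may not identify the host app the keyboard was actually running in.
func application(_ app: UIApplication,
                 open url: URL,
                 options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    let source = options[.sourceApplication] as? String
    print("Opened via \(url) from \(source ?? "unknown")")
    return true
}
```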
Code signing uses various different identifier types, and I’ve seen a lot of folks confused as to which is which. This post is my attempt to clear up that confusion. If you have questions or comments, put them in a new thread, using the same topic area and tags as this post. Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple let myEmail = eskimo + 1 + @ + apple.com Code Signing Identifiers Explained An identifier is a short string that uniquely identifies a resource. Apple’s code-signing infrastructure uses identifiers for various different resource types. These identifiers typically use one of a small selection of formats, so it’s not always clear what type of identifier you’re looking at. This post lists the common identifiers used by code signing, shows the expected format, and gives references to further reading. Unless otherwise noted, any information about iOS applies to iOS, iPadOS, tvOS, visionOS, and watchOS. Formats The code-signing identifiers discussed here use on