Hello everyone, I am currently developing an app for my Swift Student Challenge submission that focuses on human motion analysis using the Vision framework. To effectively demonstrate the app's technical capabilities during the review process, I need to include a sample video showing a person performing specific movements. However, I want to ensure that my submission strictly adheres to all intellectual property guidelines.

Instead of using existing copyrighted videos or public social media clips, I am considering using Generative AI to create an original, royalty-free sample video. This video would feature a character performing movements designed specifically to test my app's pose estimation and feedback logic.

I have a few questions regarding this approach: Is it acceptable to use AI-generated sample assets (like video clips) to demonstrate technical features when it's difficult to record high-quality personal footage due to environmental constraints? If I clearly disclose the tools used
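For context, this is the kind of pose-estimation pass such a sample video would exercise. A minimal sketch, assuming frames extracted from the video as CGImages; the function name is illustrative, not part of the actual submission:

import Vision
import CoreGraphics

// Runs body-pose detection on a single frame; the app's feedback logic
// would then score the joints in each observation.
func detectPose(in frame: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])
    // Each observation exposes joints via recognizedPoints(.all).
    return request.results ?? []
}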
Search results for “SwiftUI List performance”: 50,607 results found
Hi Kevin,

Thank you for your valuable insights. Following your advice, we have refactored our driver to use a wrapper pattern where UserProcessBundledParallelTasks serves as a high-performance entry point that forwards commands to our core dispatch logic. To eliminate potential race conditions, we have also moved our interrupt handling to the Default Dispatch Queue using kIOServiceDefaultQueueName. This ensures that command submission and completion are strictly serialized.

Here are the key implementation details showing how we unified the dispatch logic:

1. Legacy Entry Point (Single Task): We extracted our core logic into a helper method, DispatchTaskInternal, passing 0xFFFF as a placeholder for the Slot Index.

kern_return_t MyDriver::UserProcessParallelTask_Impl(
    SCSIUserParallelTask parallelRequest,
    uint32_t *response,
    OSAction *completion)
{
    // Forward to unified internal dispatcher with no slot index (Legacy Mode)
    return DispatchTaskInternal(parallelRequest, response, completion, 0xFFFF);
}

2.
Topic:
App & System Services
SubTopic:
Drivers
Tags:
"\(duration.localized()) timer started." This works great as long as these two languages are set to the same language on the user's device:

[UI language] Settings → General → Language & Region → Preferred Language
[Siri language] Settings → Apple Intelligence & Siri → Language

However, when they differ, even this method doesn't yield correct results. This behavior makes sense to me. Assume that:

- Your device has the UI language set to English, the region set to United States, and the Siri language set to German.
- duration is set to 10 seconds.

I believe this is how it works: Siri creates an instance of your app intent (StartTimerIntent) and runs its perform method in your app's process. The perform method creates an IntentDialog, which triggers localized(). There, .autoupdatingCurrent returns the current preferred locale (en_US), so localized() returns “10 seconds timer started.”. The string is used as a key to create the localized string resource (LocalizedStringResource) for the
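For reference, a hedged reconstruction of the intent being described; everything beyond StartTimerIntent, perform, and IntentDialog is an assumption, and the poster's localized() helper is replaced by a plain interpolation:

import AppIntents

struct StartTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Timer"

    @Parameter(title: "Duration")
    var duration: Int  // seconds; the original code wraps this in a localized() helper

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The interpolated string resolves against Locale.autoupdatingCurrent
        // (the UI language), which is why it can diverge from the Siri language.
        .result(dialog: IntentDialog("\(duration) seconds timer started."))
    }
}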
Topic:
App & System Services
SubTopic:
Automation & Scripting
Tags:
If a previous reviewer could access the IAP screen, why would the current reviewer claim it doesn't exist? I had my apps rejected for the same reason at least twice last year.

(1) A reviewer may not have scrolled all the way to the bottom to see the list of IAP products. In this case, I placed a button at the very top of the store sheet labeled 'Jump to in-app purchases' to make sure the reviewer could go directly to the product list (see the sketch below).

(2) Right after the company introduced iOS 26, I submitted a new app for review, and it was rejected. Initially, I had tested the app on actual devices running iOS 17.x and iOS 18.x. When I installed iOS 26.0 on my iPhone 14, I found that all toolbar buttons with asset images had shrunk to 2 × 2 px dots. I somehow managed to stabilize the toolbar image size; while I was working on that workaround, the company released iOS 26.0.1 to the public.

These are the 2 reasons I can think of for the inconsistent results for now.
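For what it's worth, a minimal sketch of the jump-button approach from (1), assuming a SwiftUI store sheet; StoreSheet and ProductListView are illustrative names, not the actual app's code:

import SwiftUI

struct ProductListView: View {
    var body: some View {
        Text("IAP product list goes here")
    }
}

struct StoreSheet: View {
    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                Button("Jump to in-app purchases") {
                    // Scrolls straight to the product list so a reviewer
                    // cannot miss it below the fold.
                    withAnimation { proxy.scrollTo("iap", anchor: .top) }
                }
                // ... marketing content, screenshots, etc. ...
                ProductListView()
                    .id("iap") // scroll target for the jump button
            }
        }
    }
}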
Topic:
App Store Distribution & Marketing
SubTopic:
App Review
Tags:
Hello fahad-sh, Thank you for your question and your sample code. You may indeed need to use pagination or filter predicates to yield a smaller dataset. Have you experimented with using .fetchLimit in conjunction with .fetchOffset on your FetchDescriptor to see if performance improves? Thank you for your patience, Richard Yeh Developer Technical Support
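For anyone following along, a minimal sketch of the suggested pagination, assuming a SwiftData model named Item and an existing ModelContext (both names are illustrative):

import SwiftData

@Model
final class Item {
    var name: String
    init(name: String) { self.name = name }
}

func fetchPage(_ page: Int, pageSize: Int, from context: ModelContext) throws -> [Item] {
    var descriptor = FetchDescriptor<Item>(sortBy: [SortDescriptor(\.name)])
    descriptor.fetchLimit = pageSize          // cap the size of each fetch
    descriptor.fetchOffset = page * pageSize  // skip rows from earlier pages
    return try context.fetch(descriptor)
}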
Topic:
App & System Services
SubTopic:
iCloud & Data
Tags:
This

extension View {
    public func renderSomething() async throws {
        let renderer = ImageRenderer(content: self)
        // renderer.colorMode = .linear
        renderer.render { size, context in
            print(size)
            // ...
        }
        // ...
    }
}

let view: some View = ...
view.renderSomething() // → exception

will cause the program to exit with EXC_BREAKPOINT deep inside SwiftUI. This worked with previous versions of Xcode, SwiftUI, and macOS. Xcode Version 26.2 (17C52), macOS Sequoia 15.7.3, using the toolchain bundled with Xcode.
I am using swift-subprocess, and I need to disable the SubprocessSpan trait because Xcode 26.2 does not ship with a bundled version of libswiftCompatibilitySpan.dylib, causing everything built with Xcode that happens to use Span to crash. However, I cannot disable that trait by doing any of the following:

.package(
    url: "https://github.com/swiftlang/swift-subprocess.git",
    branch: "main",
    traits: []
),

.package(
    url: "https://github.com/swiftlang/swift-subprocess.git",
    branch: "main",
    traits: [.trait(name: "SubprocessFoundation")]
),

Note that SubprocessSpan is a default trait in subprocess:

// Enable SubprocessFoundation by default
var defaultTraits: Set<String> = ["SubprocessFoundation"]
#if compiler(>=6.2)
// Enable SubprocessSpan when Span is available [except it is not]
defaultTraits.insert("SubprocessSpan")
#endif

The package still builds with SubprocessSpan enabled. This is not an issue with the subprocess package. According to this, I should use swift build on the command line, yet this isn't -- as is upgrading to Ta
Who do you ask the questions to? If the spec is not published (I assume you did perform a web search), developers here will likely not have the information. If you want to ask Apple directly, note that these forums are for developers and may not be the best place for academic verification.
Topic:
Spatial Computing
SubTopic:
ARKit
Hardware Specifications

Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification:

- Point Cloud Density / Resolution: the effective resolution of the depth map.
- Sampling Frequency: the sensor's refresh rate.
- Accuracy Metrics: official tolerance levels for depth accuracy relative to distance (specifically within the 0.5 m – 2 m range).

Data Acquisition Methodology

For a scientific thesis requiring high data integrity: Does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) to access raw depth data? I need to confirm whether third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
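For reference, per-frame depth is available directly from ARKit without third-party post-processing; a minimal sketch, assuming you own the ARSession (class and property names are illustrative):

import ARKit

final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        // .sceneDepth is the per-frame depth; .smoothedSceneDepth applies the
        // temporal smoothing that an error analysis would want to avoid.
        config.frameSemantics = .sceneDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        // depthMap is a Float32 CVPixelBuffer of distances in meters;
        // confidenceMap rates each pixel low/medium/high.
        let depthMap = depth.depthMap
        let confidenceMap = depth.confidenceMap
        _ = (depthMap, confidenceMap)
    }
}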
Topic:
Spatial Computing
SubTopic:
ARKit
@Frameworks Engineer:
"Could you provide the version of your operating system where this issue was seen?"
I'm using the iOS simulator (iOS 26.2) running on Tahoe 26.1. I could try updating to Tahoe 26.2, if you think that would help? I have not tested this on device.

@Apple Designer:
"The best work-around I can offer for now is just add a verification in your tool call itself."
Thanks, yes, I experimented with this, but I found that the model sometimes just persistently keeps calling the tool with the same invalid argument, sometimes appearing to get stuck in an infinite loop. I've also tried listing the valid section names in the instructions, but even with that I've observed that the model will still try to call the tool with invalid arguments. (This new world of non-deterministic engineering sure is an adventure!)

More general question (assuming there was no bug): would you recommend listing the valid arguments in the instructions? Or would that be redundant because the valid arguments are listed
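For illustration, the kind of in-tool verification being discussed can look like the following. This is plain Swift, not the FoundationModels Tool API; the section names and function are hypothetical:

let validSections: Set<String> = ["introduction", "methods", "results"]

func sectionContents(for name: String) -> String {
    guard validSections.contains(name) else {
        // Echoing the valid options back in the tool result gives the model
        // concrete material to self-correct with, instead of retrying blindly.
        return "Unknown section '\(name)'. Valid sections: \(validSections.sorted().joined(separator: ", "))."
    }
    return "Contents of \(name)…"
}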
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello, I have a question regarding the lifecycle of user consent and tokens in Sign in with Apple. Specifically, I would like to understand the behavior of the auth/revoke API in relation to App Store Connect status changes.

Impact of App Status Changes
If an app is Removed from Sale or Deleted from App Store Connect, does Apple automatically revoke all associated user tokens and consent? Or is it still the developer's responsibility to programmatically revoke each user's token via the REST API to ensure the app is removed from the user's Apps Using Apple ID list?

API Availability after Removal
Once an app is no longer available on the App Store (or its record is deleted in App Store Connect), is the auth/revoke REST API still accessible? I want to ensure that a developer can still perform the necessary privacy clean-up tasks (revoking consent) even if the app is not currently distributed.

Specific User Impacts of Non-Revocation
If we do not call the revocation API, besides the app remaining in
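For reference, a minimal sketch of calling the auth/revoke REST endpoint from Swift; the client secret is a signed JWT whose generation is omitted here, and all parameter values are placeholders:

import Foundation

func revokeToken(clientID: String, clientSecret: String, refreshToken: String) async throws {
    var request = URLRequest(url: URL(string: "https://appleid.apple.com/auth/revoke")!)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
    // Percent-encoding is skipped for brevity; these values are URL-safe as-is.
    let params = [
        "client_id": clientID,          // the app's bundle identifier
        "client_secret": clientSecret,  // signed JWT (generation not shown)
        "token": refreshToken,
        "token_type_hint": "refresh_token",
    ]
    request.httpBody = params
        .map { "\($0.key)=\($0.value)" }
        .joined(separator: "&")
        .data(using: .utf8)
    let (_, response) = try await URLSession.shared.data(for: request)
    // A 200 status indicates the token was revoked.
    print((response as? HTTPURLResponse)?.statusCode ?? -1)
}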
I am profiling a simple SwiftUI test app on my new iPhone through my new MacBook Pro and everything is version 26.2 (iOS, macOS, Xcode). I run Instruments with the SwiftUI template using all of the default settings and get absolutely zero data after interacting with the app for about 20 seconds. Using the Time Profiler template yields trace data. Trying the SwiftUI template again with the sample Landmarks app has the same issue as my app.
I've developed a Multiplatform app under Xcode 26 (currently using 26.2, 17C52). The current destinations of the single target are Mac, iPad, and Mac (Designed for iPad). The minimum deployments are macOS 15.6 and iOS 18.6. All destinations build and perform correctly on physical devices (running OS 26 versions). The macOS version has been submitted successfully to the App Store for TestFlight usage. However, the iPad version shows a submission validation failure:

Missing Info.plist value. A value for the key “WKApplication”, or “WKWatchKitApp” if your project has a WatchKit App Extension target, is required in “xxxxx.app/xxxxx.app” bundle. For details, see: https://developer.apple.com/documentation/watchkit/creating_independent_watchos_apps/setting_up_a_watchos_project (ID: 4911506c-39c3-4b69-a8bb-5e5dcd3dc2fb)

The app has no WatchKit version (although one's planned for a future release). The target's Build Settings include a watchOS Deployment Target and Info.plist values related to WatchKit. The Build
Hi, I'm running into a weird notarization issue and wanted to see if anyone else has seen something similar. I have one main macOS app that keeps doing the following:

- The notarization sits in "In Progress" for a few days.
- Then it flips to Rejected with error code 7000.
- The notarytool log shows no issues and no ticket info.

At the same time, smaller test apps on the same Apple Developer account notarize successfully, though they also take around 2-3 days. So it doesn't seem like an account or certificate problem. It looks like something about this specific app causes it to go into a long review and then fail with that vague 7000 error. The app is fairly large (Python + Qt, lots of bundled libraries), so I'm wondering if that triggers deeper scanning or some kind of policy check.

Has anyone else seen:

- Multi-day notarization jobs?
- Error 7000 that only affects one particular app?
- Rejections with no issues listed?

If so, did you find a way around it? Also for context, my Apple Developer account was created recently. I have co
Topic:
Code Signing
SubTopic:
Notarization
I have a new app I am working on. It uses a container ID like com.me.mycompany.FancyApp.prod, and the description in the app is My Fancy App. When I deploy the app via TestFlight on a real device, the sync seems to work, but when I view iCloud → Storage list, I see my app icon and the name prod. Where did the name prod come from? It should be My Fancy App, which is the actual name of the app.