I'm using Swift 6 and tasks to concurrently process multiple PDF files for rendering, and it's working well.
But currently I'm manually limiting the number of simultaneous tasks to 2 out of fear that the system might run many tasks concurrently without having enough RAM to do the PDF processing.
Testing on a variety of devices, I've tried increasing the task limit and haven't seen any crashes, but I'm quite concerned about the possibility. Any given device might be using a lot of RAM at any moment, and any given PDF might strain resources more than the average PDF.
Is there a recommended technique for handling this kind of scenario?
Should I not worry about it and just go ahead and start a high number of tasks, trusting that the system won't run too many concurrently and therefore won't run out of RAM?
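For context, this is roughly the pattern I'm using for the manual cap (a minimal sketch; render(_:) and RenderedPDF stand in for my actual rendering code):

func renderAll(_ pdfURLs: [URL], maxConcurrent: Int = 2) async -> [RenderedPDF] {
    await withTaskGroup(of: RenderedPDF.self) { group in
        var results: [RenderedPDF] = []
        var remaining = pdfURLs.makeIterator()

        // Seed the group with at most `maxConcurrent` child tasks.
        for _ in 0..<maxConcurrent {
            if let url = remaining.next() {
                group.addTask { await render(url) }
            }
        }
        // Each completed render frees a slot for the next PDF.
        while let rendered = await group.next() {
            results.append(rendered)
            if let url = remaining.next() {
                group.addTask { await render(url) }
            }
        }
        return results
    }
}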
Hi,
I'll start by saying that I'm a new developer for Apple devices, especially Apple Watch, so please have mercy.
I'm trying to create an app for Apple Watch (watchOS 10+) and I have a problem: my interface is made up of 3 buttons, one at the bottom of the screen and two smaller ones at the top, each in its respective corner (one at the top right and the other at the top left).
With the buttons positioned at the top of the screen, the top-right button is covered by the system time display, and I would like to move the time to the center, creating two side buttons with the time between them.
I'm also asking whether there's a way to remove the time entirely, since it's not useful to me, but from reading some forums it seems that without it the app won't pass App Review, so I'm waiting for your advice.
The only app I've seen on the App Store with a centered clock is Petey.
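For reference, the interface is built roughly like this (a minimal sketch on watchOS 10; the icons and actions are placeholders):

import SwiftUI

// Minimal sketch: one large button at the bottom, two small buttons in the
// top corners via toolbar placements (placeholder icons and actions).
struct ContentView: View {
    var body: some View {
        NavigationStack {
            Button("Main Action") { }
                .frame(maxHeight: .infinity, alignment: .bottom)
                .toolbar {
                    ToolbarItem(placement: .topBarLeading) {
                        Button { } label: { Image(systemName: "gearshape") }
                    }
                    ToolbarItem(placement: .topBarTrailing) {
                        Button { } label: { Image(systemName: "plus") }
                    }
                }
        }
    }
}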
I'm working on adding CarPlay support to an audio app and I'd like to mimic the behavior of the Apple Music app on launch.
Forgive me, but I think using Gherkin syntax here will help to best describe the desired behavior:
GIVEN the Apple Music app is in a cold state (not launched or in memory)
AND another audio app is actively playing audio
WHEN I launch the Apple Music app from CarPlay
THEN the Now Playing template is shown via a push
AND the appropriate Now Playing info is shown
AND the Now Playing button is shown on the tab bar
AND the actively playing audio from another audio app is NOT interrupted
The current behavior I see in my own app is that I can push the Now Playing template and fill out the MPNowPlayingInfoCenter info dictionary, but it won't render the info or show the Now Playing button on the tab bar until I start playing audio.
Also, is there a way to hide the Now Playing button after the queue of content has finished playing? I'm able to pop the Now Playing template, but the Now Playing button is still present and tapping it will navigate the user to the now blank Now Playing template.
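For reference, this is roughly what I'm doing today (a trimmed-down sketch; the real app builds a fuller tab bar and uses real metadata):

import CarPlay
import MediaPlayer

final class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate {
    var interfaceController: CPInterfaceController?

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didConnect interfaceController: CPInterfaceController) {
        self.interfaceController = interfaceController

        // Set a minimal root tab bar, then push the shared Now Playing template.
        let tabBar = CPTabBarTemplate(templates: [CPListTemplate(title: "Library", sections: [])])
        interfaceController.setRootTemplate(tabBar, animated: false, completion: nil)
        interfaceController.pushTemplate(CPNowPlayingTemplate.shared, animated: true, completion: nil)

        // Fill out the Now Playing info (placeholder metadata).
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "Episode Title",
            MPMediaItemPropertyArtist: "Show Name",
            MPMediaItemPropertyPlaybackDuration: 1800.0,
            MPNowPlayingInfoPropertyElapsedPlaybackTime: 0.0
        ]
    }
}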
We have an application that uses the PTT framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user frequently hits an issue where the recorded audio contains only silence.
I've checked almost everything I can imagine but couldn't find any possible cause.
Conditions:
AVAudioRecorder uses the following configuration:
[
AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
AVFormatIDKey : kAudioFormatMPEG4AAC,
AVNumberOfChannelsKey: 1,
AVSampleRateKey: 16000.0
]
The app waits for both the didBeginTransmitting and didActivate audioSession callbacks from PTChannelManager (the audio session has the playback category at that moment)
The app changes the AVAudioSession category to playAndRecord
The app receives routeChangeNotification with reason categoryChange and category playAndRecord
There are no interruption notifications from AVAudioSession during recording
There are no error notifications from AVAudioRecorder
Any idea what exactly I'm doing wrong? Is there anything else I should check?
Thanks in advance.
P.S. It looks like recording audio with an AudioUnit has the same issue, but let's exclude that from the question for now, for simplicity.
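For reference, the recording part of that flow looks roughly like this (a simplified sketch, assuming it runs from the didActivate callback; recordingURL is a placeholder and error handling is trimmed):

import AVFoundation

func startRecording(on audioSession: AVAudioSession, to recordingURL: URL) throws -> AVAudioRecorder {
    // The PTT framework hands the session over with the playback category;
    // switch it before recording.
    try audioSession.setCategory(.playAndRecord, mode: .default)

    let settings: [String: Any] = [
        AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 16000.0
    ]
    let recorder = try AVAudioRecorder(url: recordingURL, settings: settings)
    recorder.record()
    return recorder
}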
I set the device format and color space to Apple Log and turn off HDR, so why is the movie output still in HDR format rather than ProRes Log?
Full runnable demo here:
https://github.com/SpaceGrey/ColorSpaceDemo
session.sessionPreset = .inputPriority

// Get the back camera.
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
backCamera = deviceDiscoverySession.devices.first!

// Disable HDR and select the Apple Log format and color space.
try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false
let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))
backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

// Add the input.
do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// Add the output.
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(output.outputSettings(for: connection))
/*
["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080, "AVVideoCodecKey": apch, <----- ProRes is enabled.
 "AVVideoCompressionPropertiesKey": {
    AverageBitRate = 220029696;
    ExpectedFrameRate = 30;
    PrepareEncodedSampleBuffersForPaddedWrites = 1;
    PrioritizeEncodingSpeedOverQuality = 0;
    RealTime = 1;
 }]
*/

previewSource = DefaultPreviewSource(session: session)

queue.async {
    self.session.startRunning()
}
}
We distribute an app via an iOS enterprise certificate. One particular user could use the app signed with this certificate normally before upgrading to iOS 18. However, after updating to iOS 18 (currently on 18.3), the app crashes immediately upon launch. Real-time logs indicate that the application fails to start. This issue is unique to this user; other users on the same iOS 18.3 version do not experience the problem.
console log
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to preview the audio in real time, but I can't figure out how to select the input device. I can only hear my own microphone in real time.
So far I'm using AVCaptureAudioPreviewOutput to hear the audio in real time, but I think it has a delay.
On iOS this works easily with AVAudioEngine, but on macOS, not so much.
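To be concrete, what I'm hoping for is something along these lines (a sketch of the idea only, assuming selectedDeviceID comes from my own device list; I don't know whether this is the intended approach):

import AVFoundation
import AudioToolbox
import CoreAudio

// Point AVAudioEngine's input at a specific AudioDeviceID before starting
// the engine (macOS only). `selectedDeviceID` is assumed to come from an
// existing device-selection UI.
func setInputDevice(_ selectedDeviceID: AudioDeviceID, on engine: AVAudioEngine) throws {
    guard let audioUnit = engine.inputNode.audioUnit else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(kAudioUnitErr_Uninitialized))
    }
    var deviceID = selectedDeviceID
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &deviceID,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    guard status == noErr else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}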
I use a HomePod mini as the gateway and have bound 9 Matter lights and 7 Matter switches. Each of my switches has 12 buttons, and each button supports three functions: single click, double click, and long press. Therefore, 7 × 12 × 3 = 252 actions could be configured in total. Currently, 71 actions are configured across the 7 Matter switches. When configuring the 72nd action, the app prompts that the operation cannot be completed. But if I delete a few previously configured actions, down to 68 for example, I can configure 3 more actions (69, 70, 71). However, as soon as I try to configure the 72nd action, the app again prompts that the operation cannot be completed, as if the available space is used up. What is the reason for this?
In response to inquiries from users, we have confirmed the following phenomenon.
If you select "Private email address" during new user registration with an Apple ID, the verification code email is not received when performing two-factor authentication.
■User impact
If you use your Apple ID to link an external account without making your email address public, you do not receive the verification code during two-factor authentication and cannot proceed. When the impact started is currently unknown.
◎Impact 1: New registration
If "Private email address" is selected when registering a new user with an Apple ID, the verification code is not received during two-factor authentication and registration cannot be completed.
◎Impact 2: Login to an existing account
When two-factor authentication is required for an existing account registered with an Apple ID set to "Private email address," the verification code is not received and the user cannot log in.
→If no login method other than Apple ID has been registered for the relevant account, there is no other way to log in.
■Workarounds
・We thought this issue could be avoided by canceling the Apple ID's private email setting, but we were unable to do so.
→There is currently no workaround for existing users who are experiencing this issue.
・However, the scope of impact is limited.
■Cause investigation status
Premise: For an Apple ID whose email address is not made public, the two-factor authentication code email follows the route below.
① CDC/GIGYA
② miraiz-persol.jp (SendGrid)
③ Apple's email server (a relay server that hides the user's real email address)
④ The user's mailbox
→Since ① is working, the problem seems to have occurred at or after the connection from ② or ③.
(At this stage, we cannot determine who is at fault: the user, MIRAIZ, or Apple. We are currently investigating.)
◎Hypotheses
・Is there something wrong with Apple's mail server?
・Is the email not delivered because the user's mailbox is full?
■Questions, research, and responses we would like to receive
Please check the following two points and reply.
1st point
As shown in the attached image, there seems to be no problem with the SPF settings.
Is it possible to check whether any errors have occurred on Apple's mail server?
2nd point
Are there any cases where emails still cannot be received even after deactivating the Apple ID?
We would like to know whether, based on past inquiries or the overall specifications, there are any known patterns in which emails are not delivered.
I am creating a new Apple ID. I am able to verify the email, but when I get to mobile OTP verification, I just keep seeing this error.
I am really frustrated. I tried with multiple PCs and browsers, thinking that maybe my IP is blocked.
I want to deploy an app to the App Store but I'm not able to create a basic Apple account.
This is really driving me mad!
Why can't I apply for an Apple Developer account successfully? I always get the error that says 'We are unable to process your request'.
Wow, you are unable to process my request. Why? Are you such a coward? You won't even let me become a developer.
I can't understand why you prevent me from becoming a developer. Please give me an explanation as soon as possible.
Hi
Could you recommend a video, sample code, or YouTube tutorial for implementing an app with In-App Purchases? Alternatively, how can we set up a subscription plan within the app?
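To make the question concrete, this is roughly the minimal StoreKit 2 flow I've pieced together so far (the product identifier is a placeholder), and I'd like to understand whether this is the right direction for a subscription:

import StoreKit

// Minimal sketch: fetch one subscription product and purchase it.
// "com.example.app.monthly" is a placeholder identifier that would be
// configured in App Store Connect.
func purchaseMonthlySubscription() async throws {
    let products = try await Product.products(for: ["com.example.app.monthly"])
    guard let product = products.first else { return }

    let result = try await product.purchase()
    switch result {
    case .success(let verification):
        if case .verified(let transaction) = verification {
            // Unlock the subscription content, then finish the transaction.
            await transaction.finish()
        }
    case .userCancelled, .pending:
        break
    @unknown default:
        break
    }
}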
Hi, I have a problem when I want to attach my grayscale depth map image to the real image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it is not very detailed, so I might be missing a step.
I tried to:
convert my depth map into a pixel buffer
create an image destination ref and add the image there
add the auxData (depth map dict)
This is the output:
There is some black space there, and my RGB image's colour changes.
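For reference, those steps look roughly like this in code (a simplified sketch; the URLs are placeholders and the depth data is assumed to already match the image's orientation and aspect ratio, since there's no cameraCalibration to rely on):

import AVFoundation
import ImageIO
import UniformTypeIdentifiers

// Copy the RGB image into a new HEIC destination and attach the depth map
// as auxiliary data.
func writeImage(withDepth depthData: AVDepthData, from imageURL: URL, to outputURL: URL) {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.heic.identifier as CFString,
                                                            1, nil) else { return }

    // Copy the primary image (and its metadata) into the destination.
    CGImageDestinationAddImageFromSource(destination, source, 0, nil)

    // Ask AVDepthData for its auxiliary-data representation and attach it.
    var auxType: NSString?
    if let auxInfo = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
       let auxType {
        CGImageDestinationAddAuxiliaryDataInfo(destination, auxType as CFString, auxInfo as CFDictionary)
    }
    CGImageDestinationFinalize(destination)
}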
Problem Description: On the iPad (10th generation), when a keyboard is connected to a USB 2.0 port provided by a USB 2.0 hub attached to the Type-C port, the keyboard cannot wake the system when the system version is 18.0 or later. This problem does not occur on systems earlier than 18.0.
Note: To keep the product's power consumption low, if no key is pressed on the keyboard for 10 minutes, power to some functions of the hub is cut off, but power to the keyboard remains on. When a key is pressed, power to all functions of the hub is restored.
Hi team, when I log into my app in Safari and try to enroll or challenge the MFA security key option, I see a pop-up that gives me the option to pick either passkeys or external security keys.
However, my team member, who's using the same version of Safari, can only see the external security key option.
Why is this?
I'm currently running into an issue during the App Store review process: my reviewer objects to how the Screen Time API is being used to hide apps.
For some context, my app uses the Managed Settings and Device Activity frameworks in the Screen Time API to allow users to set restrictions on their personal devices and save those restrictions into a preference object that they can switch between. This was detailed as my app's primary purpose in my Family Controls & Personal Device Usage Entitlement Request, which was approved last year.
After around a year of working on this app, it's finally done and ready for submission to the App Store. However, my App Reviewer recently rejected the app with this single complaint:
Guideline 2.5.1 - Performance - Software Requirements
The app uses public APIs in an unapproved manner, which does not comply with guideline 2.5.1.
Specifically, your app uses ScreenTime API to hide apps.
Since there is no accurate way of predicting how an API may be modified and what effects those modifications may have, unapproved uses of public APIs in apps is not allowed.
Next Steps
Please revise the app to ensure that documented APIs are used in the manner prescribed in the documentation.
All I'm doing is passing a set of Application objects to ManagedSettings.ApplicationSettings.blockedApplications; I'm not doing anything special. The documentation for this API itself states:
The system hides blocked applications and prevents the user from launching them.
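For reference, the call in question is essentially just this (a trimmed-down sketch; the Set of Application values comes from the standard FamilyActivityPicker flow elsewhere in the app):

import ManagedSettings

func applyRestrictions(_ selectedApplications: Set<Application>) {
    let store = ManagedSettingsStore()
    store.application.blockedApplications = selectedApplications
}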
In my reply, I let the reviewer know:
Regarding Guideline 2.5.1, I believe my use of the Screen Time API appears to align with Apple's documented intended functionality. The specific API I'm using, ManagedSettings.ApplicationSettings.blockedApplications, is explicitly documented by Apple as: "The system hides blocked applications and prevents the user from launching them."
This is why I used the term "hide" in my app's marketing and functionality descriptions - I was directly referencing Apple's own terminology for this feature. The documentation clearly indicates this is an approved capability of this API.
The source for this documentation can be found here: https://developer.apple.com/documentation/managedsettings/applicationsettings/blockedapplications-swift.property. I've also provided a screenshot of this documentation below.
Despite providing a link to the documentation and a screenshot that shows the text from Apple explicitly stating "The system hides blocked applications", the App Reviewer just copy-and-pasted the same text in their reply and rejected the app.
I should also note that we don't have control over how the system handles the Application set we pass into ManagedSettings.ApplicationSettings.blockedApplications, the system will always try to "hide" these apps as specified in the documentation. We can't change this behavior.
Has anyone else faced this sort of rejection before? Is using ManagedSettings.ApplicationSettings.blockedApplications now considered an illegal use of the API? Or are we not allowed to use the words noted in the documentation of this API? The app rejection suggested I "consult with fellow developers and Apple engineers on the Apple Developer Forums." Any guidance here would be much appreciated as I continue to appeal this. For any Apple staff members reading this post, I can provide the Submission ID of the App Review privately if needed to help resolve this issue.
Dear Apple Support,
I'm experiencing an unusual issue with the developer name displayed for my app on the App Store. Here are the details:
App Submission: The app was created and uploaded using our team's Apple Developer account (Team Name).
App Store Display: However, the developer name shown on the App Store is my personal name, not our team/company name.
Account Access: In App Store Connect, I have Admin access to the team account. I can see and manage the app, but I cannot find where to change the displayed developer name.
Expected Behavior: The App Store should display our team/company name (Team Name) as the developer, not my personal name.
Attempted Solutions: I've reviewed the app's information in App Store Connect, checked our account settings, and verified the build settings in Xcode. I cannot find the source of this discrepancy.
I'm seeking assistance to:
Understand why my personal name is being displayed instead of our team name.
Learn how to correct this so that our team/company name appears as the developer on the App Store.
Ensure this doesn't happen with future app submissions.
Thank you for your help in resolving this issue.
Since I updated to the iOS 18 beta (22A5326f), the perform() method of my SetFocusFilterIntent isn't called when the Focus mode is toggled on or off on my iPhone.
I can reproduce the issue for my app, but focus filters are also broken for any third-party apps installed on my phone, so I guess it's not specific to how I've implemented my filter intent.
This used to work perfectly on iOS 17. I didn't change a single line of code, and it broke completely on the latest iOS 18 beta.
I've filed a bug report including a sysdiagnose (FB14715113).
To the developers out there, is this something you are also observing in your apps?
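For reference, the shape of my filter intent is nothing unusual; it's essentially a minimal SetFocusFilterIntent like this sketch (the parameter and the app group identifier are placeholders):

import AppIntents
import Foundation

struct ExampleFocusFilter: SetFocusFilterIntent {
    static var title: LocalizedStringResource = "Set Example Filter"

    @Parameter(title: "Use Work Profile", default: false)
    var useWorkProfile: Bool

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "Example Filter")
    }

    func perform() async throws -> some IntentResult {
        // Persist the choice for the main app to pick up.
        UserDefaults(suiteName: "group.com.example.app")?
            .set(useWorkProfile, forKey: "useWorkProfile")
        return .result()
    }
}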
Hi everyone,
We've been struggling for weeks to update our app in the App Store, but we keep receiving the same rejection.
Here's the exact response from Apple:
"The app provides loan services but the domains listed on the app's Product Pages are still not clearly under your control or ownership. Since users may use these domains to contact you to request support, the domains used on the Product Page for loan apps must be under your control or ownership.
Update the Product Page metadata in App Store Connect to only include domains that are associated with Apple Accounts registered to your developer account."
The issue is that we've always listed the same domain that we used when registering our Apple Developer account. If you visit our website, the company name and address are clearly displayed in the footer.
We’ve tried asking for clarification, but we keep receiving the same generic response without any specific details.
This is blocking a critical update that fixes technical issues, and the situation is extremely stressful for us.
We've seen at least two similar threads (1, 2) here on the forum, but they didn’t provide any clear resolution.
Today, we also reached out through the "Contact Us" module, but we would really appreciate any guidance from the community. If anyone has successfully resolved this issue, could you share how you confirmed that your domain is associated with your Apple Developer account?
Any help would be greatly appreciated!
Thanks in advance.
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used".
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.