Search results for "Apple Maps Guides": 151,865 results found

Post | Replies | Boosts | Views | Activity

Reply to KeyChain Sharing with App Extensions
Hi, thanks for following up. Platform: iOS (tested on iOS 17.x and iOS 18 betas). Extension type: Apple MatterSupport “Matter Add Device Extension” (Accessory Setup extension). We’re not building a Network Extension provider—our earlier “Network Extension” tag was a mistake. Our setup is a standard iOS app (com.infibrite…) plus the Matter setup extension (com.infibrite…MatterSetupExtension). Both targets need to share Matter fabric credentials via a single keychain access group (com.infibrite.matter.shared) so the extension can commission devices while the main app reuses the stored fabric. App Groups and other capabilities are enabled correctly, but the “Keychain Sharing” toggle never appears for either App ID in the portal. Because the provisioning profiles can’t include that entitlement, the OS returns errSecMissingEntitlement whenever we reference kSecAttrAccessGroup, so the extension can’t read the credentials. Could you enable Keychain Sharing for these iOS App IDs (main app + Matter setup extension
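For context, a minimal sketch of what the shared-credential code could look like once the entitlement is in place. The access group string comes from the post; the service and account names are placeholders, and at runtime the group is normally qualified with the team's app identifier prefix.

```swift
import Foundation
import Security

// Sketch only: placeholder service/account names; the access group is from the post and
// is typically prefixed at runtime, e.g. "TEAMID.com.infibrite.matter.shared".
let sharedAccessGroup = "com.infibrite.matter.shared"

func storeFabricCredentials(_ credentials: Data) -> OSStatus {
    let attributes: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "MatterFabric",           // placeholder service name
        kSecAttrAccount as String: "fabric-credentials",     // placeholder account name
        kSecAttrAccessGroup as String: sharedAccessGroup,    // shared keychain access group
        kSecAttrAccessible as String: kSecAttrAccessibleAfterFirstUnlock,
        kSecValueData as String: credentials
    ]
    // errSecMissingEntitlement here means the profile still lacks keychain-access-groups.
    return SecItemAdd(attributes as CFDictionary, nil)
}

func loadFabricCredentials() -> Data? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "MatterFabric",
        kSecAttrAccount as String: "fabric-credentials",
        kSecAttrAccessGroup as String: sharedAccessGroup,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    let status = SecItemCopyMatching(query as CFDictionary, &item)
    return status == errSecSuccess ? item as? Data : nil
}
```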
2w
The "com.apple.developer.web-browser" entitlement has no effect on our iOS app
Hi, I was sent here from my Apple developer account; it seems this is the only option for me, so your help is very much appreciated! Basically, we are building a Chromium-based browser on iOS. We applied for the com.apple.developer.web-browser entitlement, and it shows up in our identifier, profile, etc. The app is signed with the new entitlement and published to the App Store. However, it is not listed as an option for the default browser, no matter which device I try. I did verify that the Info.plist contains the http/https URL schemes as required. In fact, a few of us have checked all the available documentation multiple times and still can't see why.
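For anyone comparing their setup, here is a small runtime sanity check, a sketch rather than an App Review checklist: it reads CFBundleURLTypes from the built product to confirm the http and https schemes the post mentions really are declared alongside com.apple.developer.web-browser.

```swift
import Foundation

// Sketch: list the URL schemes the built app actually declares in its Info.plist.
func declaredURLSchemes() -> Set<String> {
    let urlTypes = Bundle.main.object(forInfoDictionaryKey: "CFBundleURLTypes") as? [[String: Any]] ?? []
    let schemes = urlTypes.compactMap { $0["CFBundleURLSchemes"] as? [String] }.flatMap { $0 }
    return Set(schemes.map { $0.lowercased() })
}

let schemes = declaredURLSchemes()
if !schemes.isSuperset(of: ["http", "https"]) {
    print("Missing http/https in CFBundleURLSchemes; found: \(schemes)")
}
```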
1 | 0 | 90 | 2w
System Data
System data is beyond a joke now. There are constant logs being generated and stored somewhere, with no way to access, delete, or clear them. I've even tried the date-change hack, but that only freed a little space from System and mostly just offloaded my apps (a feature I've now disabled, as virtually everything except essential items gets offloaded). Apple Support says to wipe and start again; surely that's not a solution? Has anyone figured out how to resolve this? You also can't force videos into the cloud, which is frustrating when I pay for iCloud and have zero space left on my phone. It's supposed to be intelligent.
4 | 0 | 322 | 2w
Reply to System Data
Please consider filing a bug report about this so our engineering teams can investigate the issue. A resolution may involve changes to Apple's software. If you post the Feedback number here, I'll check the status next time I do a sweep of forum posts where I've suggested bug reports. Bug Reporting: How and Why? has tips on creating your bug report.
2w
Reply to Apple Vision Pro is too costly to develop for.
We appreciate your interest in participating in the forums! These forums are for questions about developing software and accessories for Apple platforms. Your question seems related to a consumer feature, such as pricing, and is better suited for the Apple Support Communities (https://discussions.apple.com/welcome). Hope this helps.
Albert Pascual
Worldwide Developer Relations
Topic: Community | SubTopic: Apple Developers
2w
Reply to CallKit VoIP → App launch → Auto WebRTC/MobileRTC connection: Does Apple allow this flow?
So, the first thing to understand is that what you're describing here:

Our app receives a CallKit VoIP call. When the user taps “Answer”, the app launches and automatically connects to a real-time audio session using WebRTC or MobileRTC.

...is NOT what actually happens on iOS. Your app doesn't receive a CallKit call, nor is CallKit something that really controls how your app works. This is how incoming VoIP pushes actually work:

1. The device receives a VoIP push for your app.
2. The system either launches or wakes your app (depending on whether or not your app is running).
3. Your app receives the VoIP push.
4. Your app reports a new call into CallKit.
5. The system presents the incoming call UI, sending updates back to your app about the actions the user takes in that UI.
6. If the user answers the call, the system activates the audio session you previously configured.

The critical thing to understand here is that CallKit is best understood as an interface framework (albeit a very narrowly focused one), NOT a VoIP calling
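To make steps 3 and 4 above concrete, here is a minimal sketch of receiving the VoIP push and reporting the call to CallKit. PushKit is assumed for the push, and "callerName" is a placeholder payload key; this is only the report-to-CallKit step, not a full calling stack.

```swift
import PushKit
import CallKit

// Sketch of steps 3 and 4: receive the VoIP push, then immediately report a new
// incoming call so the system can show the incoming-call UI.
final class VoIPPushHandler: NSObject, PKPushRegistryDelegate {
    private let provider = CXProvider(configuration: CXProviderConfiguration())  // iOS 14+ initializer
    private let registry = PKPushRegistry(queue: DispatchQueue.main)

    override init() {
        super.init()
        registry.delegate = self
        registry.desiredPushTypes = [.voIP]
    }

    func pushRegistry(_ registry: PKPushRegistry,
                      didUpdate pushCredentials: PKPushCredentials,
                      for type: PKPushType) {
        // Send pushCredentials.token to the VoIP push server (omitted here).
    }

    func pushRegistry(_ registry: PKPushRegistry,
                      didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType,
                      completion: @escaping () -> Void) {
        // On iOS 13 and later, every VoIP push must be reported to CallKit before returning.
        let update = CXCallUpdate()
        let caller = payload.dictionaryPayload["callerName"] as? String ?? "Unknown"  // placeholder key
        update.remoteHandle = CXHandle(type: .generic, value: caller)

        provider.reportNewIncomingCall(with: UUID(), update: update) { _ in
            completion()
        }
    }
}
```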
Topic: App & System Services | SubTopic: General
2w
CallKit VoIP → App launch → Auto WebRTC/MobileRTC connection: Does Apple allow this flow?
Our app receives a CallKit VoIP call. When the user taps “Answer”, the app launches and automatically connects to a real-time audio session using WebRTC or MobileRTC. We would like to confirm whether the following flow (“CallKit Answer → app opens → automatic WebRTC or MobileRTC audio session connection”) complies with Apple’s VoIP Push / CallKit policy. In addition, our service also provides real-time video-class functionality using the Zoom Meeting SDK (MobileRTC). When an incoming CallKit VoIP call is answered, the app launches and the user is automatically taken to the Zoom-based video lesson flow: the app opens → the user lands in the Zoom Meeting pre-meeting room → MobileRTC initializes immediately. In the pre-meeting room, audio and video streams can already be active and MobileRTC establishes a connection, but the actual meeting is not joined until the user explicitly taps “Join”. We would like to confirm whether this flow for video lessons (“CallKit Answer → app opens → pre-meetin
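For reference, the "Answer → connect" part of this kind of flow usually takes the shape of a CXProviderDelegate. A minimal sketch follows, with connectMedia and startAudio as placeholders standing in for the WebRTC or MobileRTC calls; it illustrates the mechanics only, not App Review policy.

```swift
import AVFoundation
import CallKit

// Sketch of the "Answer → connect" mechanics. connectMedia/startAudio are placeholders
// for the WebRTC or MobileRTC setup.
final class CallController: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Tear down any in-flight calls and media here.
    }

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // The user tapped "Answer" in the system UI: set up signaling/connection now,
        // but hold off on audio I/O until didActivate fires below.
        connectMedia(for: action.callUUID)   // placeholder for WebRTC/MobileRTC connect
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // CallKit has activated the audio session; start the actual audio here.
        startAudio()                          // placeholder
    }

    private func connectMedia(for callUUID: UUID) { /* signaling / session setup */ }
    private func startAudio() { /* begin capture and playback */ }
}
```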
1 | 0 | 47 | 2w
It takes a long time to create a developer account as a company.
We registered as a developer company (NGO) more than 50 days ago and it's still under review. We've already sent the documents, we've confirmed by phone, and every time we contact them we receive no information. They just tell us it's under review and there's no deadline for when this review will be completed. On the same day we registered with Apple, we registered with Google and our app has been published on Google for more than 30 days. We have no support or answers. Could someone help us to at least know a deadline for this review?
2 | 0 | 96 | 2w
Reply to Are read-only filesystems currently supported by FSKit?
[1] If you're curious, it was an old Apple Watch I had that got stuck in a boot loop after restarting it to try to fix some crashy app behavior, and IIRC there were low storage alerts shortly before that happened. Definitely way after the iOS 8 days.

Hmm... Maybe? Turns out my memory was wrong and APFS was actually adopted in iOS 10.3, not 8.3. watchOS did adopt it before that, so it's possible you could have hit something then.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware
Topic: App & System Services | SubTopic: Core OS
2w
Are read-only filesystems currently supported by FSKit?
I'm writing a read-only filesystem extension. I see that the documentation for loadResource(resource:options:replyHandler:) claims that the --rdonly option is supported, which suggests that this should be possible. However, I have never seen this option provided to my filesystem extension, even if I return usableButLimited as a probe result (where it doesn't mount at all - FB19241327) or pass the -r or -o rdonly options to the mount(8) command. Instead I see those options on the volume's activate call. But other than saving that readonly state (which, in my case, is always the case) and then throwing on all write-related calls I'm not sure how to actually mark the filesystem as read-only. Without such an indicator, the user is still offered the option to do things like trash items in Finder (although of course those operations do not succeed, since I throw an EROFS error in the relevant calls). It also seems like the FSKit extensions that come with the system handle read-only strangely. For example, fo
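Until there is a real read-only indicator, the workaround described above boils down to failing every mutating call with EROFS. Purely as an illustration, and with the type and method names below being hypothetical placeholders rather than FSKit API, the error itself can be produced like this:

```swift
import Foundation

// Illustration only: ReadOnlyGuard and createItem(named:using:) are hypothetical
// placeholders, not FSKit API. The point is simply that each mutating entry point
// throws EROFS ("read-only file system").
struct ReadOnlyGuard {
    let isReadOnly = true   // this filesystem is always read-only

    func ensureWritable() throws {
        if isReadOnly { throw POSIXError(.EROFS) }
    }
}

// Hypothetical write-path entry point in the extension.
func createItem(named name: String, using guardState: ReadOnlyGuard) throws {
    try guardState.ensureWritable()   // always throws EROFS on this volume
    // ...real create logic would go here on a writable volume
}
```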
11 | 0 | 292 | 2w
Reply to Behavior of BGContinuedProcessingTask on Failure
Based off that, it feels like I should NOT rely on the BGContinuedProcessingTask notifications (or whatever they are called) to communicate state. It seems like instead what I should do is something like local notifications to communicate state and handle it more in my app, is that correct?

This is one of those questions where the right answer really depends ENTIRELY on the exact details of what you're trying to do. The simple end of the spectrum here is things like short-lived, single jobs, where putting up "extra" UI to manage a single task which isn't going to take very long anyway might be unnecessary. At the other end of things, I think there are lots of situations where the work the app is doing doesn't map nicely onto the task model and you'll absolutely want to use other tools/APIs to tell the user what's going on. As one example, if an app is doing lots of small network transfers, using an individual processing task for each transfer is probably a mistake. At a technical level, the
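For what it's worth, the "local notification" option mentioned here is a small amount of code. A minimal sketch follows, assuming notification authorization has already been granted; the strings and identifier are placeholders.

```swift
import UserNotifications

// Sketch: when the background work finishes (or fails), post a local notification so the
// user sees the outcome even if the app's own UI isn't frontmost. Requires that
// requestAuthorization(options:) was already granted elsewhere in the app.
func notifyWorkOutcome(succeeded: Bool) {
    let content = UNMutableNotificationContent()
    content.title = succeeded ? "Export finished" : "Export failed"      // placeholder text
    content.body = succeeded ? "Your file is ready." : "Tap to retry from inside the app."

    // A nil trigger delivers the notification immediately.
    let request = UNNotificationRequest(identifier: "work-outcome",      // placeholder identifier
                                        content: content,
                                        trigger: nil)
    UNUserNotificationCenter.current().add(request) { error in
        if let error { print("Failed to schedule notification: \(error)") }
    }
}
```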
2w