Prioritize user privacy and data security in your app. Discuss best practices for data handling, user consent, and security measures to protect user information.

Post | Replies | Boosts | Views | Activity

Dynamic XCFramework that uses a Required Reason API and does not declare it inside its Privacy Manifest
Will an app be rejected after the 1st of May 2024 if it contains an embedded dynamic XCFramework that uses a Required Reason API and does not declare that usage inside its Privacy Manifest? Important note: I am asking about dynamic XCFrameworks that are NOT on Apple's list of commonly used SDKs. I ask because I am only getting warnings about missing API declarations for the main app binary and app extensions; I do not get any warnings for the embedded dynamic XCFrameworks that I have in my app.
0
0
914
Mar ’24
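Regarding the XCFramework question above: for reference, a minimal sketch of what a Required Reason API declaration looks like inside a framework's PrivacyInfo.xcprivacy. The API category and reason code here are illustrative examples only, not a statement about any particular SDK.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>NSPrivacyAccessedAPITypes</key>
        <array>
            <dict>
                <!-- Example: the framework reads from UserDefaults -->
                <key>NSPrivacyAccessedAPIType</key>
                <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
                <key>NSPrivacyAccessedAPITypeReasons</key>
                <array>
                    <!-- CA92.1: access info from the same app only -->
                    <string>CA92.1</string>
                </array>
            </dict>
        </array>
    </dict>
    </plist>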
Privacy manifests for app extensions?
I thought I read somewhere in the privacy manifest documentation that manifests are not required for app extensions because extensions inherit the privacy info from their parent app and SDKs, but now I can't find a reference for that. If that is the case, I don't think it is working correctly: as far as I can tell, we are getting warnings about missing API declarations for things that should be covered by an app or SDK manifest.
6
2
2.8k
Mar ’24
Privacy Manifest Warning Email is missing SDKs
We submitted an app to TestFlight and received the expected warning email. However, the email did not mention any of the SDK frameworks that were in the app; it only mentioned the app itself and the app's extensions. We expected to get warnings for our frameworks that use Required Reason APIs, and also for frameworks on the "list of commonly used third-party SDKs". Why are the warnings not as expected? Is this because TestFlight does not generate the same kind of warning emails that will be created for the App Store?
4
0
1.8k
Mar ’24
Xamarin Forms - API Declaration not working
Morning all, I just wanted a little help with my Xamarin.Forms app. When I publish to TestFlight for a public test build, I always receive the email about ITMS-91053: Missing API declaration. I have followed the steps and created a PrivacyInfo.xcprivacy in Xcode, and I can see it in my Xamarin iOS project, but I still get the email saying it is missing. Is there something I am missing or need to reference in the Info.plist, etc.? My PrivacyInfo.xcprivacy looks like the following:
2
0
1.5k
Mar ’24
NSPrivacyTracking and NSPrivacyTrackingDomains
My app uses the Advertising Data type to track, but it relies on third-party ads SDKs to do so. I added NSPrivacyCollectedDataTypeAdvertisingData, with NSPrivacyCollectedDataTypeTracking set to true, to my app's manifest file. Those third-party ads SDKs have their own manifests declaring their NSPrivacyTracking value and tracking domains. In this case, do I need to set NSPrivacyTracking to true and add the domains those SDKs connect to in the host app's privacy manifest? My guess is no, since all manifests are ultimately merged into a single report.
1
2
953
Mar ’24
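As a point of comparison for the post above, a minimal sketch of the tracking-related keys in a host app's privacy manifest, shown as a fragment of the top-level dict; the domain is a made-up placeholder. Whether the SDKs' own tracking domains must be duplicated here is exactly the open question.

    <key>NSPrivacyTracking</key>
    <true/>
    <key>NSPrivacyTrackingDomains</key>
    <array>
        <!-- placeholder; each SDK normally lists its own domains in its own manifest -->
        <string>tracking.example.com</string>
    </array>
    <key>NSPrivacyCollectedDataTypes</key>
    <array>
        <dict>
            <key>NSPrivacyCollectedDataType</key>
            <string>NSPrivacyCollectedDataTypeAdvertisingData</string>
            <key>NSPrivacyCollectedDataTypeLinked</key>
            <false/>
            <key>NSPrivacyCollectedDataTypeTracking</key>
            <true/>
            <key>NSPrivacyCollectedDataTypePurposes</key>
            <array>
                <string>NSPrivacyCollectedDataTypePurposeThirdPartyAdvertising</string>
            </array>
        </dict>
    </array>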
Login issue with SocialiteProviders in Laravel
Hi all, I built a Laravel web app with Sign in with Apple. Here is my app information and the packages I use: Laravel 10.x, PHP 8.1, login package: https://socialiteproviders.com/. The call to appleid.apple.com/auth/authorize to authenticate the user with their Apple ID works; the response is below. In the next step I call https://appleid.apple.com/auth/token to verify the token, but the response is below. I tried with Postman, but the response is the same (invalid_client). Everything is correct (client_id, team_id, private_key). I used https://jwt.io/#debugger to test verifying the token, and the result is "Signature Verified". Can you help me figure out what the issue is? Which client is invalid? Thank you so much. P.S.: Sorry for my poor English.
1
0
2.1k
Mar ’24
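On the invalid_client error above: that response from appleid.apple.com/auth/token usually means the client_secret JWT is malformed (wrong kid, wrong aud, a sub that does not match the client_id/Services ID used for /auth/authorize, or an expired exp), rather than a problem with the authorization code. Below is a minimal Swift/CryptoKit sketch of the JWT Apple expects, purely for comparison against whatever the Laravel package generates; teamID, keyID, servicesID, and the .p8 contents are placeholders.

    import Foundation
    import CryptoKit

    // Base64url without padding, as required for JWS segments.
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }

    func makeClientSecret(teamID: String, keyID: String, servicesID: String, p8PEM: String) throws -> String {
        // Header: ES256, signed with the Sign in with Apple key identified by kid.
        let header = try JSONSerialization.data(withJSONObject: ["alg": "ES256", "kid": keyID])
        let now = Int(Date().timeIntervalSince1970)
        // Claims: iss = Team ID, sub = Services ID (the client_id), fixed aud, short exp.
        let claims = try JSONSerialization.data(withJSONObject: [
            "iss": teamID,
            "sub": servicesID,
            "aud": "https://appleid.apple.com",
            "iat": now,
            "exp": now + 60 * 60 * 24
        ] as [String: Any])
        let signingInput = base64URL(header) + "." + base64URL(claims)
        let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM) // contents of AuthKey_<keyID>.p8
        let signature = try key.signature(for: Data(signingInput.utf8))  // raw r||s, which is what ES256 JWS uses
        return signingInput + "." + base64URL(signature.rawRepresentation)
    }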
NSPrivacyTrackingDomains and WebView based functionality
Hello Apple, we have read your guide at https://developer.apple.com/documentation/bundleresources/privacy_manifest_files#4284009 and it is unclear how NSPrivacyTrackingDomains affects the WebView functionality of an app. We have WebView-based functionality that we use for signup/login of customers in the app, and it can potentially track users. The documentation states that if the user has not granted tracking permission through the App Tracking Transparency framework, network requests to these domains fail and your app receives an error. However, based on our testing, the domains listed in NSPrivacyTrackingDomains have no effect on network requests happening in the WebView if the user declines tracking via the App Tracking Transparency prompt (e.g. pages are loaded, and network requests to the listed tracking domains still happen). Can you confirm whether this is the case and what should be done about it? Right now we have a custom implementation on our side that passes the result of the App Tracking Transparency prompt to the WebView, instructing it whether it can send requests to tracking domains or not.
0
0
642
Mar ’24
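One possible shape for the custom workaround mentioned in the post above (a rough sketch, not Apple guidance): compile the same domains you list in NSPrivacyTrackingDomains into a WKContentRuleList and attach it to the WKWebView whenever App Tracking Transparency authorization has not been granted. The domain below is a placeholder.

    import WebKit
    import AppTrackingTransparency

    func applyTrackingBlockIfNeeded(to webView: WKWebView) {
        // Only block when the user has not authorized tracking.
        guard ATTrackingManager.trackingAuthorizationStatus != .authorized else { return }

        // Block requests to the same domains declared in NSPrivacyTrackingDomains.
        let rules = #"""
        [{
            "trigger": { "url-filter": "^https?://([^/]+\\.)?tracker\\.example\\.com/" },
            "action": { "type": "block" }
        }]
        """#

        WKContentRuleListStore.default().compileContentRuleList(
            forIdentifier: "tracking-domains",
            encodedContentRuleList: rules
        ) { ruleList, _ in
            guard let ruleList else { return }
            DispatchQueue.main.async {
                webView.configuration.userContentController.add(ruleList)
            }
        }
    }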
eventDidReachThreshold is working as expected only when the app is in debug mode
As per our code, apps should be shielded whenever the threshold is reached. For this use case, our code in the DeviceActivity extension looks something like:

    override func eventDidReachThreshold(_ event: DeviceActivityEvent.Name, activity: DeviceActivityName) {
        super.eventDidReachThreshold(event, activity: activity)
        defaults?.setValue(event.rawValue, forKey: "appLimitEventName")
        defaults?.setValue(true, forKey: "appLimitReached")
        defaults?.synchronize()
        // using darwinNotificationCenter to trigger a callback in the application
        let darwinNotificationCenter = DarwinNotificationsManager.sharedInstance()
        darwinNotificationCenter.postNotification(withName: "nextAppLimitInitiated")
        // using notifications to debug, since print doesn't work in the extension
        scheduleNotification(with: "interval threshold reached")
    }

And in our application, we have the shielding logic in place:

    init() {
        let darwinNotificationCenter = DarwinNotificationsManager.sharedInstance()
        darwinNotificationCenter.register(forNotificationName: "nextAppLimitInitiated") {
            print("callback received")
            let appLimitReached = self.defaults?.bool(forKey: "appLimitReached")
            let appLimitEventName = self.defaults?.string(forKey: "appLimitEventName")
            if appLimitReached ?? false, appLimitEventName != "" {
                // this sends a notification when the callback is received
                self.scheduleNotification(with: "init start")
                self.defaults?.setValue(false, forKey: "appLimitReached")
                guard var dataArray = self.defaults?.array(forKey: "appLimitdataArray"), !dataArray.isEmpty else { return }
                let appLimitData = dataArray.first as! NSDictionary
                let appLimitKey = appLimitData["appLimitId"] as! String
                let data = self.getSchedule(key: appLimitEventName ?? "")
                if let appTokens = data?.applicationTokens {
                    for token in appTokens where !self.applicationTokens.contains(token) {
                        self.applicationTokens.insert(token)
                    }
                }
                self.store.shield.applications = self.applicationTokens
                self.store.shield.applicationCategories = ShieldSettings.ActivityCategoryPolicy.specific(self.categoryTokens, except: Set())
                dataArray.removeFirst()
                //dataArray.append(appLimitData)
                self.defaults?.set(dataArray, forKey: "appLimitdataArray")
                self.initiateMonitoring(initiateAgain: true)
                self.scheduleNotification(with: "init end")
            }
        }
    }

This works as expected for multiple app limits, but only while the device is connected to Xcode. If we disconnect the device from Xcode, stop the application from Xcode, or run a release build, the callback from the extension never reaches the app's init block. While the device is connected to Xcode, apps that hit the threshold are shielded automatically; when it is disconnected or the app is in release mode, they are not shielded automatically even after the threshold is reached, and only get shielded later, after our app is opened once. Please let me know if I'm doing anything wrong in receiving the callback or in my shielding logic. If I need to place the shielding logic in the extension itself, please tell me how I can handle multiple appTokens.
2
0
1k
Mar ’24
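On the last question in the post above (moving the shielding into the extension): the DeviceActivityMonitor extension can write to a ManagedSettingsStore directly, which avoids depending on a Darwin notification reaching an app process that may not be running once Xcode is detached. A minimal sketch, in which the app-group suite name, the per-event FamilyActivitySelection storage, and the loadSelection helper are assumptions rather than part of the original code:

    import DeviceActivity
    import ManagedSettings
    import FamilyControls
    import Foundation

    class AppLimitMonitor: DeviceActivityMonitor {
        let store = ManagedSettingsStore()

        override func eventDidReachThreshold(_ event: DeviceActivityEvent.Name,
                                             activity: DeviceActivityName) {
            super.eventDidReachThreshold(event, activity: activity)

            // Hypothetical helper: the host app serialized one FamilyActivitySelection
            // per event into shared app-group defaults under the event's raw value.
            guard let selection = loadSelection(for: event) else { return }

            // Merge with whatever is already shielded so multiple limits accumulate.
            var shielded = store.shield.applications ?? []
            shielded.formUnion(selection.applicationTokens)
            store.shield.applications = shielded
            store.shield.applicationCategories = .specific(selection.categoryTokens, except: Set())
        }

        private func loadSelection(for event: DeviceActivityEvent.Name) -> FamilyActivitySelection? {
            guard let defaults = UserDefaults(suiteName: "group.example.appLimits"),
                  let data = defaults.data(forKey: event.rawValue) else { return nil }
            return try? JSONDecoder().decode(FamilyActivitySelection.self, from: data)
        }
    }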
PrivacyInfo.xcprivacy in a .bundle for a static library does not seem to be taken into account by Apple
Hello, when you integrate a framework that is linked statically, the usual practice is for the framework to provide a bundle in which it puts its PrivacyInfo.xcprivacy file. If you decompress an .ipa file you submit to Apple, you can see this bundle at the root. The problem is that the PrivacyInfo.xcprivacy files inside such bundles do not seem to be scanned by Apple in the privacy process, so Apple sends us issues about missing privacy declarations. Have you already heard about this problem? Probably linked to what I am saying: Firebase issue #12557. Thank you very much for your feedback!
0
2
897
Mar ’24
[Privacy Manifests] Framework with Alamofire in podfile
I have developed a framework that uses Alamofire, which is included in the list of third-party SDKs that require a Privacy Manifest: https://developer.apple.com/support/third-party-SDK-requirements/ The latest version of Alamofire already includes its PrivacyInfo.xcprivacy file, and it is visible from my own framework. My question is whether it is necessary to add a PrivacyInfo.xcprivacy to my framework in that case and, if so, whether it should be the same as Alamofire's. Wouldn't that be redundant? My framework does not use any API that has to be declared. If another framework were to use my framework that uses Alamofire, should it also create a PrivacyInfo.xcprivacy? Thank you.
0
1
920
Apr ’24
Several situations where it is difficult to apply the Privacy Manifest?
Existing external libraries are distributed as framework files. If the company providing a library delays the patch that adds a Privacy Manifest, how can I handle this situation? Will my app just keep getting rejected? And in an app that pins a specific commit of an open-source branch, what should I do if the Privacy Manifest is only added in the latest version? For various reasons, including functional stability, the open-source dependency cannot be updated to the latest version.
0
0
353
Apr ’24
Privacy issue regarding my submission
I received this from Apple during review; what am I supposed to change?

Guideline 5.1.2 - Legal - Privacy - Data Use and Sharing

The app privacy information you provided in App Store Connect indicates you collect data in order to track the user, including Browsing History, Other Diagnostic Data, Crash Data, Performance Data, Name, Search History, Physical Address, Customer Support, and Other Data Types. However, you do not use App Tracking Transparency to request the user's permission before tracking their activity. Apps need to receive the user's permission through the AppTrackingTransparency framework before collecting data used to track them. This requirement protects the privacy of users.

Next Steps

Here are two ways to resolve this issue:
- If you do not currently track, or decide to stop tracking, update your app privacy information in App Store Connect. You must have the Account Holder or Admin role to update app privacy information.
- If you track users, you must implement App Tracking Transparency and request permission before collecting data used to track. When you resubmit, indicate in the Review Notes where the permission request is located.
1
0
675
Apr ’24
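For the second resolution path in the rejection above (keep tracking, but ask first), the permission request itself is small. The sketch below gates a placeholder TrackingSDK.start() call on the App Tracking Transparency prompt and assumes NSUserTrackingUsageDescription is already set in Info.plist:

    import Foundation
    import AppTrackingTransparency

    func requestTrackingThenStart() {
        // Call once the app is active; the prompt is only shown while the app is in the foreground.
        ATTrackingManager.requestTrackingAuthorization { status in
            DispatchQueue.main.async {
                if status == .authorized {
                    // Only begin collecting data used to track after explicit consent.
                    TrackingSDK.start() // placeholder for the ads/analytics SDK doing the tracking
                }
                // .denied, .restricted, .notDetermined: do not track.
            }
        }
    }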
Passkeys authenticatorAttachment and transports in macOS 14.4 and iOS 17.4
In the new macOS and iOS updates (14.4 and 17.4 respectively), something has changed with regard to passkey creation:

- Any passkey created from Safari doesn't have any transports, and the authenticatorAttachment is always set to platform, irrespective of whether a cross-platform authentication method is used, such as a hardware security key.
- All passkeys saved in iCloud Keychain, created from any browser, have authenticatorAttachment always set to platform and empty authenticator transports.

authenticatorAttachment always set to platform

According to the WebAuthn specification (Section 5.4.5), the authenticatorAttachment descriptor plays a crucial role in guiding the client (browser or platform) to create or use an authenticator of a specific type. The options are platform for a built-in authenticator or cross-platform for a roaming authenticator. Some relying parties mandate a cross-platform method for the first passkey or as a second authentication factor, to ensure users do not find themselves locked out when they try to sign in from a device that doesn't have access to the non-roaming WebAuthn credential. Unfortunately, the current implementation in Sonoma 14.4 forces the authenticatorAttachment to platform, thus preventing the creation of passkeys that comply with such policies on websites. For comparison, browsers like Chrome correctly return a cross-platform authenticatorAttachment when a hardware security key is used, and the same used to happen on previous macOS and iOS versions from Safari.

Authenticator transports missing

The absence of transport data (WebAuthn Section 5.8.4) for all passkeys created via Safari, and for iCloud Keychain passkeys created from any browser, further complicates the scenario. The transport hint is crucial for informing relying parties about the preferred transport method for the authenticator, be it USB, NFC, BLE, HYBRID, or internal. This omission could lead to inefficiencies and a diminished user experience, as the system cannot optimize the authentication process based on the authenticators available to the user.

These issues jeopardize the utility and adoption of passkeys across various platforms and browsers, a primary goal of WebAuthn and FIDO2 for widespread secure authentication practices. What is the rationale behind this choice, and is there any workaround to be considered? Thanks for all the help and clarification!
2
2
1.1k
Apr ’24
Apple rejected my app over user-generated content
Hello, my dear colleagues. I'm a new iOS developer (actually I'm a senior Android dev), so this is my first time publishing to the App Store. I created an app for memes, where users can create memes, share them, and judge them. I already have terms of use, a privacy policy, registration, and reporting (because I want to build a stable product), but Apple has its own opinion:

- Require that users agree to terms (EULA) and these terms must make it clear that there is no tolerance for objectionable content or abusive users — okay, I will add the EULA to my links, but it already contains the rules for creating content.
- A method for filtering objectionable content — blocking happens automatically based on user reports. I explained this to the reviewer, but he ignored it and repeated this point (all points) again. Memes that receive 10 or more report marks are hidden from content delivery. What else does he want? How would "filters" solve this when the content is already hidden? What exactly should these "filters" be?
- A mechanism for users to flag objectionable content — the same. What else does he want?
- A mechanism for users to block abusive users — this one just seems ridiculous to me. Users cannot write to each other and cannot communicate with each other; they can only create and judge memes. I'm not sure the reviewer really looked at my app. Maybe for 30 seconds? So how can I address his points if he doesn't listen and doesn't check? Add a fake feature? That would be a shame!
- The developer must act on objectionable content reports within 24 hours by removing the content and ejecting the user who provided the offending content — the same. The blocking happens automatically; we don't have dedicated moderators and cannot moderate this manually (there are only 2 members on the team).

I really don't understand why Apple makes my life harder. Google and Huawei have already published the app for internal testing without these, to my mind, useless remarks. As I understand it, this situation is normal behaviour for Apple. Anyway, I want to resolve these points and finish the publishing process — users are waiting. Please help me do it correctly — I don't have experience with Apple support, and so far it feels like a circus! P.S. Links to the terms of use and privacy policy are available on the registration screen.
2
0
878
Apr ’24
Nonce handling in CryptoKit’s HPKE Sender & Recipient
G'day all, I'm working through a cross-platform decryption implementation for CryptoKit's HPKE and wish to use the Sender and Recipient types. I have been able to reproduce the derived key, but the missing link is the nonce that is created and used by HPKE.Sender.seal(). I understand that I could do the key exchange and build the sealed box myself and set my own random nonce, but I want to be able to use the HPKE.Sender.seal() functions to assist with this, as well as create ciphertext data externally that can be opened with HPKE.Recipient.open(). Looking at Apple's open-source code available here, it seems to export a key based on a "base_nonce" label on the context, which I think is what HPKE.Sender's exportSecret(context:outputByteCount:) can achieve. However, using the sender's exportSecret(context:outputByteCount:) in the following way:

    let noncedata = try hpkeSender.exportSecret(context: Data("base_nonce".utf8), outputByteCount: 12)

even for just one message (so the sequence number would be 0 and this data block unchanged), the AES-GCM implementation still returns a "cipher: message authentication failed" error. This is specifically in Go, but it can be replicated easily in Python. I'm confident that the derived key is correct and is being fed to AES-GCM with the ciphertext correctly, and it's just the nonce generation that is not understood.
1
0
879
Apr ’24
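A note on the question above: in RFC 9180 (which CryptoKit's HPKE implements), base_nonce is derived in the key schedule via LabeledExpand with the literal label "base_nonce", while exportSecret(context:outputByteCount:) goes through the separate exporter_secret and the "sec" label, so the export interface cannot reproduce the nonce that seal() uses. The per-message nonce is then base_nonce XOR the big-endian sequence number. A small sketch of that last step, assuming base_nonce is obtained from a key-schedule implementation on the non-Swift side:

    import Foundation

    /// RFC 9180 ComputeNonce: nonce = base_nonce XOR I2OSP(seq, Nn).
    /// Nn is 12 bytes for the AES-GCM and ChaCha20-Poly1305 suites.
    func computeNonce(baseNonce: Data, sequenceNumber: UInt64) -> Data {
        precondition(baseNonce.count == 12, "expected a 12-byte base_nonce")
        // Encode the sequence number as a 12-byte big-endian integer (I2OSP).
        var seqBytes = [UInt8](repeating: 0, count: 12)
        var seq = sequenceNumber
        for i in stride(from: 11, through: 4, by: -1) {
            seqBytes[i] = UInt8(truncatingIfNeeded: seq)
            seq >>= 8
        }
        // XOR with base_nonce to get this message's nonce (seq 0 yields base_nonce itself).
        return Data(zip(baseNonce, seqBytes).map { $0 ^ $1 })
    }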