Search results for

“iPhone 16 pro”

80,734 results found

Post

Replies

Boosts

Views

Activity

Admin not appearing in "Choose a Candidate" list for Account Holder Transfer
Hello, I am trying to transfer the Account Holder role to a different person on our team, but I am stuck in a loop. Despite the recipient being an active Admin on the team for some time, their name does not appear in the Choose a Candidate dropdown menu on the Apple Developer website. Here is our current setup and what we have tried:
Recipient Role: The person I am transferring to is already an Admin (verified in Users and Access).
Security: 2FA is enabled and active on the recipient’s Apple ID.
Account Type: This is for an Organization/Company membership.
The Issue: When I (the current Account Holder) go to Membership Details > Update Your Information > Transfer Account Holder Role, the candidate list is empty or does not show this specific Admin.
We have investigated the Identity Verification requirement, but hit a dead end: The recipient logged into the Apple Developer app on an iPhone (signed into the same Apple ID and primary iCloud account on the device). In the Account tab, there is no V
0
0
37
Feb ’26
modifierFlags Monterey
Hello, Using a MacBook Pro Intel with macOS 12.7.6 Monterey, I test in Objective-C an event for the option key but it is not working:

- (void)keyDown:(NSEvent *)event {
    printf("%s %p\n", __FUNCTION__, self);
    BOOL altFlag = [event modifierFlags] & NSEventModifierFlagOption;
    if (altFlag) {
        // UpdateClass
        printf("option pressed\n");
    } else {
        printf("option not pressed\n");
    }
}

The same in Swift works fine:

override func keyDown(with event: NSEvent) {
    if event.modifierFlags.contains(.option) {
        print("option pressed")
    } else {
        print("option NOT pressed")
    }
}

The Obj-C code works fine on a MacBook Air Tahoe 26.3. Any idea why it does not work on the macOS 12.7.6 Intel? Many Thanks, Jean
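One possible explanation, offered as a sketch rather than a confirmed diagnosis: on Intel Macs, Objective-C's BOOL is a typedef for signed char, while on Apple silicon it is a true C bool. NSEventModifierFlagOption is 1 << 19, so assigning the masked result to a BOOL on Intel truncates it to its low 8 bits, which are all zero, and the flag always reads as NO; the bool conversion on Apple silicon keeps any nonzero value, which would explain why the same code works on the MacBook Air. A pure-Swift illustration of the suspected truncation:

```swift
// Illustration (assumption: Intel BOOL == signed char, Apple silicon BOOL == C bool).
let optionMask = 1 << 19            // NSEventModifierFlagOption's raw value (0x80000)

// `BOOL altFlag = flags & NSEventModifierFlagOption;` on Intel behaves like
// truncating the result to 8 bits -- the set bit is lost:
let intelStyleBOOL = Int8(truncatingIfNeeded: optionMask)   // 0, reads as NO

// A C bool (Apple silicon) keeps any nonzero value:
let boolStyle = (optionMask != 0)                           // true

// The portable fix is an explicit comparison in the Obj-C code:
// BOOL altFlag = ([event modifierFlags] & NSEventModifierFlagOption) != 0;
print(intelStyleBOOL, boolStyle)  // 0 true
```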
2
0
164
Feb ’26
Long delays in App Review
I'm reaching out here in hopes of finding some guidance regarding our app submission, which has now been in Waiting for Review status for 16 days. We submitted our app on February 9, 2026. On February 20, with no movement in sight, I submitted an expedited review request. This was followed by an email to Apple Developer Support, then a phone call yesterday, and another this morning. To date, we have received no direct response, no update, and no change in status from the App Review team. To be fair, the Apple Support personnel have been great, very sympathetic, and did all that they could, but could only provide limited information, although they did confirm that the expedited request had been approved (progress!). This delay is particularly impactful for us. This is not a standalone digital product — the app is directly and inseparably tied to a physical hardware installation that is happening as I write this, with another planned for tomorrow. The app serves as the essential companion
3
0
561
Feb ’26
Reply to localnetwork issue from local device.
[quote='816770021, sardinelee, /thread/816770, /profile/sardinelee'] At the OS level, the only error codes we receive are: … -1005 … -1001 [/quote] Right. These are NSURLErrorNetworkConnectionLost and NSURLErrorTimedOut, respectively. They are transport errors, meaning that the HTTP request failed because something went wrong with the underlying TCP/IP transport connection (TCP for HTTP/2 and earlier, QUIC for HTTP/3). There’s more info about the -1005 error in QA1941 Handling “The network connection was lost” Errors. The -1001 is pretty straightforward: The transport connection stopped moving data and eventually the HTTP request timed out. Given the above, you need to look for problems deeper in the TCP/IP stack. I usually start an investigation like this with an RVI packet trace, which lets you see what’s happening on the ‘wire’ as far as iOS’s TCP/IP stack is concerned. I suspect you’ll find that the Wi-Fi has simply stopped delivering packets. The critical factor here is that it’s limited to iPhone
Feb ’26
Is Xcode Intelligence Ready for Production? My Experience and a Quest for Better Tools
I am looking to optimize my AI-assisted workflow within Xcode. Previously, my process was inefficient:
Manually selecting and copying code snippets from Xcode into Gemini.
Asking a specific question (e.g., "Modify this to show an alertError message box").
Copying the result back into Xcode.
I attempted to switch to the new native Intelligence feature in Xcode to streamline this, but I found significant shortcomings:
Latency: The response time is noticeably slow — much slower than asking directly on Gemini 3 Pro.
Lack of Context: The AI often fails to grasp the full project context. For example, it frequently claims it cannot see the code for ScannerView even though it is part of the project. I often have to prompt it multiple times before it finally finds the file.
Is Xcode's Intelligence feature actually production-ready yet? If not, what tools do you recommend that integrate well with iOS development? To be clear, I am not looking for vibe coding. I have a clear grasp of the problem and the high-level
1
0
79
Feb ’26
Reply to Core Image for depth maps & segmentation masks: numeric fidelity issues when rendering CIImage to CVPixelBuffer (looking for Architecture suggestions)
The problem might be this: Core Image uses 16-bit float RGBA as the default working format. That means that, whenever it needs an intermediate buffer for the rendering, it will create a 4-channel 16-bit float surface to render into. This also means that your 1-channel unsigned integer values will automatically be mapped to float values in 0.0...1.0. That's probably where you lose precision. There are a few options to circumvent this:
You could set the workingFormat context option to .L8 or .R8. However, this means all intermediate buffers will have that format. If you want to mix processing of the segmentation mask with other images, this won't work. If you only want to process the mask separately, you can set up a separate CIContext with this option. Note, however, that most built-in CIFilters assume a floating-point working format and might not perform well with this format.
You can process your segmentation map with Metal (as you suggested) as part of your CIFilter pipeline using a CIImag
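The "separate CIContext" option described above could look like the following minimal sketch. The workingColorSpace setting and the maskImage/pixelBuffer names are illustrative assumptions, not from the original reply:

```swift
import CoreImage

// Sketch: a dedicated context for mask processing whose intermediate
// buffers use a 1-channel 8-bit format, so integer label values are not
// round-tripped through 16-bit float RGBA.
let maskContext = CIContext(options: [
    .workingFormat: CIFormat.L8,     // 8-bit single-channel intermediates
    .workingColorSpace: NSNull()     // assumed: disable color matching for label data
])

// Render the mask with this context instead of the app's general-purpose one;
// `maskImage` (CIImage) and `pixelBuffer` (CVPixelBuffer) are hypothetical placeholders.
// maskContext.render(maskImage, to: pixelBuffer)
```

Keeping this context separate from the main rendering context avoids forcing the .L8 format onto filters that expect a floating-point working space.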
Topic: Machine Learning & AI SubTopic: General
Feb ’26
Reply to BGContinuedProcessingTask GPU access — no iPhone support?
Good question! I haven't seen the new GPU background entitlements. Good to know! My guess is that they don't want to support iPhone here because they prioritize battery over expensive processing on the mobile devices. The iPads on the other hand are more targeted towards pro use cases (and have a bigger battery). What we are currently doing in our apps is to prevent the screen from locking during video export (using UIApplication.shared.isIdleTimerDisabled = true). We also pause the processing when the app is backgrounded and resume when it's active again.
Topic: Graphics & Games SubTopic: Metal
Feb ’26
Unable to load a quantized Qwen 1.7B model on an iPhone SE 3
I am trying to benchmark and see if the Qwen3 1.7B model can run on an iPhone SE 3 [4 GB RAM]. My core problem is: even with weight quantization, the SE 3 is not able to load the model into memory.
What I've tried: I am converting a Torch model to the Core ML format using coremltools. I have tried the following combinations of quantization and context length:
8 bit + 1024
8 bit + 2048
4 bit + 1024
4 bit + 2048
All the above quantizations are done with a dynamic shape, with the default being [1,1], in the hope that the whole context length does not get allocated in memory.
The 4-bit model is approximately 865 MB on disk.
The 8-bit model is approximately 1.7 GB on disk.
During load: With the int4 quantization the memory spikes a lot during initial load. Could this be because many operations are converted to int8 or fp16, as Core ML does not perform operations natively on int4? With int8, on the profiler the memory does not go above 2 GB (only 900 MB) but it is still not able to load, as it shows the following error. 2GB is
2
0
232
Feb ’26
PHAssetCreationRequest merges new Burst Photos into "Recently Deleted" instead of Library
Description: I am observing a critical issue when saving burst photos using the Photos Framework. If a burst photo with the same burstIdentifier already exists in the Recently Deleted album, any new assets saved via PHAssetCreationRequest are automatically merged into that deleted entry instead of appearing in the main Library or All Photos.
Environment: Photos Framework (iOS). API: [[PHPhotoLibrary sharedPhotoLibrary] performChanges:...]
Code Snippet: The following logic is used to save the burst assets:

[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    // 'paths' is a custom object providing the creation request
    PHAssetCreationRequest *assetCreationRqst = [paths assetCreationRqst];
    assetCreationRqst.favorite = [FavorManager.shared isSetDownloadedAssetFavorite:self.curItem];
    PHObjectPlaceholder *placeHolder = assetCreationRqst.placeholderForCreatedAsset;
    localIdentifier = placeHolder.localIdentifier;
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    if (success) {
        // The handler re
6
0
373
Feb ’26
Reply to Can NWConnection.receive(minimumIncompleteLength:maximumLength:) return nil data for UDP while connection remains .ready?
I agree with what you stated above, and I apologize for not mentioning that we will prepare the datagram while keeping the maximum MTU in mind. The datagram size will be less than the maximum MTU. I had two queries following that:
What are the pros and cons of using receive and receiveMessage, and when should each be used?
We understand that in the case of receiveMessage, we will only receive nil data if some kind of error has occurred. However, if we use receive, in what situations can the data be nil?
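For the receive-vs-receiveMessage question above, a minimal sketch of the datagram-oriented call, under the assumption that `connection` is a ready UDP NWConnection created elsewhere: receiveMessage delivers one complete datagram per callback, matching UDP's message boundaries, whereas receive(minimumIncompleteLength:maximumLength:) is byte-stream oriented and a less natural fit for UDP.

```swift
import Network

// Sketch: reading one UDP datagram at a time with receiveMessage.
// The connection setup is assumed, not taken from the thread.
func receiveDatagrams(from connection: NWConnection) {
    connection.receiveMessage { data, _, isComplete, error in
        if let error = error {
            print("receive failed: \(error)")   // transport-level problem
            return
        }
        // Assumption worth verifying: nil `data` with no error can show up
        // for an empty datagram, so check `error` before treating nil as failure.
        print("datagram: \(data?.count ?? 0) bytes, complete: \(isComplete)")
        receiveDatagrams(from: connection)      // re-arm for the next datagram
    }
}
```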
Feb ’26
Reply to 22 days in waiting for review for my app
I excitedly check every notification I get on my phone, thinking it's from App Store Connect, but unfortunately, it turns out to be a notification from another app :(( (20 days)
Feb ’26
Reply to macOS 26.4 Dev Beta 2 Install Fails
Yes, also seeing this on an M2 Pro.
Topic: Community SubTopic: Apple Developers
Feb ’26
Reply to macOS 26.4 Dev Beta 2 Install Fails
I have the same issue on a MacBook Pro M4.
Topic: Community SubTopic: Apple Developers
Feb ’26
Can't compose in Gmail app after iOS 26.4 beta update
I installed the new iOS 26.4 beta and now can't compose or reply to emails in the Gmail app. I tried restarting my phone and uninstalling and reinstalling the app, but nothing helps. How can I fix this?
0
0
65
Feb ’26
Reply to macOS 26.4 Dev Beta 2 Install Fails
I have the same issue on a MacBook Pro M4 Pro.
Topic: Community SubTopic: Apple Developers
Feb ’26