iOS is the operating system for iPhone.

Posts under iOS tag

200 Posts


Unable to discover a BLE device that is in range
We have an iOS application that connects to and monitors Wiser smart devices. The application also supports changing the Wi-Fi access point. When the Wi-Fi network changes, the Wiser devices enter BLE mode for 3 minutes to allow re-pairing. When BLE devices are being commissioned for the first time, CoreBluetooth discovers all of them. But when we try to change the Wi-Fi, previously connected BLE devices are no longer discoverable, even though they are in pairing mode.
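For anyone trying to reproduce this, a minimal CoreBluetooth scan sketch; the nil service filter and the allow-duplicates option are my assumptions, not from the original post. Scanning with a service-UUID filter, or relying on previously retrieved peripherals instead of a fresh scan, are common reasons an already-paired device stops showing up:

import CoreBluetooth

final class BLEScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Scan with no service filter and duplicates allowed, so peripherals
        // that re-enter advertising (e.g. the 3-minute re-pairing window)
        // are reported even if the system has seen them before.
        central.scanForPeripherals(
            withServices: nil, // hypothetical: pass the device's service UUID in production
            options: [CBCentralManagerScanOptionAllowDuplicatesKey: true]
        )
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Discovered \(peripheral.identifier), RSSI \(RSSI)")
    }
}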
2 replies · 0 boosts · 231 views · Aug ’25
Does URLSession support ticket-based TLS session resumption?
My company has a server that supports ticket-based TLS session resumption (per RFC 5077). Wireshark captures show that our iOS client app, which uses URLSession for REST and WebSocket connections to the server, is not sending the TLS "session_ticket" extension in the Client Hello, which is necessary to enable ticket-based resumption with the server. Is it expected that URLSession does not support ticket-based TLS session resumption? If yes, is there any way to tell URLSession to enable it? The lower-level API sec_protocol_options_set_tls_tickets_enabled() hints that the overall TLS/HTTP stack on iOS does support ticket-based resumption, but I can't see how to use that low-level API with URLSession. I can provide (lots) more technical details if necessary, but hopefully this is enough context to determine whether ticket-based TLS resumption is supported with URLSession. Any tips or clarifications would be greatly appreciated.
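For reference, the flag mentioned above is reachable from the Network framework, where TLS options are exposed directly; whether URLSession can be made to route through it is exactly the open question here, so treat this as a sketch of the lower-level path only (the host is a placeholder):

import Network

// Enable TLS session tickets on an NWConnection.
// URLSession does not expose sec_protocol_options, so this only
// demonstrates the lower-level API, not a URLSession solution.
let tlsOptions = NWProtocolTLS.Options()
sec_protocol_options_set_tls_tickets_enabled(
    tlsOptions.securityProtocolOptions, true)

let params = NWParameters(tls: tlsOptions)
let connection = NWConnection(host: "example.com", port: 443, using: params)
connection.stateUpdateHandler = { state in
    print("Connection state: \(state)")
}
connection.start(queue: .main)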
6 replies · 2 boosts · 700 views · Aug ’25
BGContinuedProcessingTask compatibility with background URLSession
My app does really large uploads, like several GB. We use the AWS SDK to upload to S3. It seemed like using BGContinuedProcessingTask to complete a set of uploads for a particular item might improve UX as well as performance and reliability. When I tried to get BGContinuedProcessingTask working with the AWS SDK, I found that the task would fail after maybe 30 seconds. It looked like this was because the app stopped receiving updates from the AWS upload, and the task wants consistent updates. The AWS SDK always uses a background URLSession, and this is not configurable. I understand the background URLSession runs in a separate process from the app, and maybe that is why progress updates did not continue when the app was in the background. Is it expected that BGContinuedProcessingTask and background URLSession are not really compatible? It would not be shocking, since they are two separate background APIs. Would the Apple recommendation be to use a normal URLSession for this, in which case AWS would need to change their SDK? Or does Apple think that BGContinuedProcessingTask should simply not be used with uploads, and an upload-specific API should be used instead? Thanks!
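For anyone experimenting with the same combination, a sketch of the "normal URLSession" direction, where an in-process upload task's progress feeds the continued-processing task. The BGContinuedProcessingTask surface is new in the iOS 26 beta and this is untested against it; the identifier, title, and subtitle are placeholders:

import BackgroundTasks
import Foundation

// Sketch only: verify the BGContinuedProcessingTask API against your SDK.
// The identifier must also be declared (wildcards are reportedly allowed
// for continued-processing tasks) in BGTaskSchedulerPermittedIdentifiers.
func submitContinuedUpload(fileURL: URL, request: URLRequest) throws {
    let taskID = "com.example.app.upload" // hypothetical identifier

    _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: taskID, using: nil) { task in
        guard let task = task as? BGContinuedProcessingTask else { return }

        // Use a standard in-process session so progress callbacks keep
        // flowing while the app is backgrounded under this task.
        let session = URLSession(configuration: .default)
        let upload = session.uploadTask(with: request, fromFile: fileURL) { _, _, error in
            task.setTaskCompleted(success: error == nil)
        }

        // Feed URLSessionTask.progress into the BG task's progress,
        // which the system expects to be updated regularly.
        task.progress.totalUnitCount = 1
        task.progress.addChild(upload.progress, withPendingUnitCount: 1)

        task.expirationHandler = { upload.cancel() }
        upload.resume()
    }

    let bgRequest = BGContinuedProcessingTaskRequest(
        identifier: taskID,
        title: "Uploading item",       // user-visible
        subtitle: "Several GB remaining")
    try BGTaskScheduler.shared.submit(bgRequest)
}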
2 replies · 0 boosts · 156 views · Aug ’25
iOS 26 Beta 6 in Simulator
Is iOS 26 Beta 6 available in the iOS Simulator? I saw that the latest betas launched earlier this week. I updated my dev machine to macOS 26 Beta 6; however, this release had no corresponding Xcode 26 Beta 6. So I am still running Xcode 26 Beta 5, and when I go into Settings -> Components within Xcode there is no option to install iOS 26 Beta 6; it still offers Beta 5 for macOS and iOS. Is that correct? My current assumption is that iOS 26 Beta 6 is for physical devices only.
1 reply · 0 boosts · 295 views · Aug ’25
UISegmentedControl Not Switching Segments on iOS 26 Beta
While testing my application on the iOS 26 beta, I am experiencing issues with the native UISegmentedControl component from UIKit. After implementing the control, I noticed that I am unable to switch to the second segment option; the selection remains fixed on the first segment regardless of user interaction. I have already reviewed the initial configuration of the control, the addition of the segments, and the implementation of the target-action, but the issue persists. I would like to understand what could be causing this behavior and whether there are any specific adjustments or workarounds for iOS 26. I created a minimal application containing only a UISegmentedControl to clearly demonstrate the issue; a sketch along those lines is below.
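A minimal repro in the spirit of the post (the names and layout are mine, since the original sample isn't attached):

import UIKit

// One segmented control, one action. On earlier iOS releases tapping
// "Second" selects it; the report above says selection stays on "First"
// under the iOS 26 beta.
final class SegmentedReproViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground

        let control = UISegmentedControl(items: ["First", "Second"])
        control.selectedSegmentIndex = 0
        control.addTarget(self, action: #selector(changed(_:)), for: .valueChanged)

        control.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(control)
        NSLayoutConstraint.activate([
            control.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            control.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
    }

    @objc private func changed(_ sender: UISegmentedControl) {
        print("Selected segment:", sender.selectedSegmentIndex)
    }
}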
18 replies · 4 boosts · 845 views · Aug ’25
Unable to find Slider/UISlider neutralValue
At WWDC25 a neutral value was demonstrated that allows the Slider to be 'coloured in' from the neutral value to the current thumb position. Trying to use this in developer beta 1, I get errors saying no such modifier exists. Was this functionality released in developer beta 1, or am I using it incorrectly?
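For reference, the WWDC25 session showed the neutral value as an initializer parameter rather than a modifier; the spelling below is a recollection of the session, and the neutralValue: parameter name is an assumption that may not match what shipped in beta 1:

import SwiftUI

// Sketch from memory of the WWDC25 demo; `neutralValue:` is the assumed
// parameter name and may not exist in early betas.
struct BalanceSlider: View {
    @State private var balance = 0.3

    var body: some View {
        // The track fills from the neutral point (0) to the thumb,
        // instead of from the minimum edge.
        Slider(value: $balance, in: -1...1, neutralValue: 0)
    }
}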
7 replies · 2 boosts · 189 views · Aug ’25
How to detect which entity was tapped?
Hi, I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario: I tap the iPhone screen to select an Entity that I want to drag. If an Entity was tapped, it should then be possible to drag it left, right, etc.

SceneKit solution:

func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
    let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
    return view.unprojectPoint(locationWithz)
}

and then I was calling:

SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly: true])

inside of the UIPanGestureRecognizer in my UIViewController. Could I reuse that code, or should I go with the SwiftUI approach, something like this?

var body: some View {
    RealityView { .... }
        .gesture(TapGesture().onEnded { })
}

I already have this code:

@State private var location: CGPoint?

.onTapGesture { location in
    self.location = location
}

I'm trying to identify the entity that was tapped within the RealityView like this:

RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
    content.add(box)
    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)
    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        // TODO: find tapped entity, so that it could be dragged inside of the DragGesture()
    }
}

Any help would be appreciated. I also noticed that if I create a TapGesture like this:

TapGesture(count: 1)
    .targetedToAnyEntity()

and add it to my view using .gesture(), then it is not triggered.
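A likely missing piece in the snippets above (an assumption, not confirmed by the thread): entity-targeted gestures only fire for entities that have collision shapes and an InputTargetComponent. A sketch under that assumption; the box mesh and names are placeholders:

import SwiftUI
import RealityKit

struct TappableBoxView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial()])
            // Gestures are only routed to entities that carry both
            // collision shapes and an InputTargetComponent.
            box.generateCollisionShapes(recursive: true)
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the tapped entity; stash it and
                    // drive the DragGesture from there.
                    print("Tapped:", value.entity.name)
                }
        )
    }
}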
2 replies · 0 boosts · 188 views · Aug ’25
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually; the audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.

Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.

Details: The audio session is already active and configured with .playAndRecord. The input tap is already installed when the engine is started. When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.

Assumptions / Current Setup: Because the audio session is active in play-and-record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, occurring only when switching from a receive state to simultaneously talking.

Questions:
- Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
- Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
- Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
- Would using separate engines for input and output improve responsiveness?

I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using the PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately, without the delay seen in my current implementation.

Relevant code snippets.

Engine setup:

func setup() {
    let input = audioEngine.inputNode
    do {
        try input.setVoiceProcessingEnabled(true)
    } catch {
        print("Could not enable voice processing \(error)")
        return
    }
    input.isVoiceProcessingAGCEnabled = false

    let output = audioEngine.outputNode
    let mainMixer = audioEngine.mainMixerNode
    audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(mainMixer, to: output, format: outputFormat)

    // Initialize converters
    converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
    f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!

    audioEngine.prepare()
}

Input tap installation:

func installTap() {
    guard AudioHandler.shared.checkMicrophonePermission() else {
        print("Microphone not granted for recording")
        return
    }
    guard !isInputTapped else {
        print("[AudioEngine] Input is already tapped!")
        return
    }

    let input = audioEngine.inputNode
    let microphoneFormat = input.inputFormat(forBus: 0)
    let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
    let desiredFormat = outputFormat
    let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)

    input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
        guard let self = self else { return }
        // Output buffer: 1920 frames at 16kHz
        guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat,
                                                  frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
        outputBuffer.frameLength = outputBuffer.frameCapacity

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }

        var error: NSError?
        let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if converterResult != .haveData {
            DebugLogger.shared.print("Downsample error \(converterResult)")
        } else {
            self.handleDownsampledBuffer(outputBuffer)
        }
    }
    isInputTapped = true
}
4 replies · 0 boosts · 407 views · Aug ’25
Leading swipe action in UIPageViewController is not working when it is pushed onto a UINavigationController
When a UIPageViewController is pushed onto a UINavigationController, a leading swipe that starts in the middle of the screen dismisses the page view controller instead of going to the previous page.

When the example code is run with Xcode 16.4.0:
✅ A left swipe from the left screen edge dismisses the page view controller.
✅ A left swipe from the middle of the screen goes to the previous page in the page view controller.

When the example code is run with Xcode 26.0 Beta 6:
✅ A left swipe from the left screen edge dismisses the page view controller.
❌ A left swipe from the middle of the screen sometimes goes to the previous page and sometimes dismisses the page view controller.

Example code that reproduces the issue:

import Foundation
import UIKit
import PlaygroundSupport

PlaygroundPage.current.setLiveView(
    UINavigationController(rootViewController: RootViewController())
)

class RootViewController: UIViewController {
    lazy var pageVCButton: UIButton = {
        let button = UIButton()
        button.setTitle("Open Page VC", for: .normal)
        button.setTitleColor(.label, for: .normal)
        button.addAction(UIAction(handler: { [weak self] _ in
            self?.didTapPageVCButton()
        }), for: .touchUpInside)
        return button
    }()

    lazy var pageContainerViewController = PageContainerViewController(startIndex: 3)

    func didTapPageVCButton() {
        print("didTapPageVCButton")
        navigationController?.pushViewController(pageContainerViewController, animated: true)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        addOpenPageVCButton()
    }

    private func addOpenPageVCButton() {
        view.addSubview(pageVCButton)
        pageVCButton.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            pageVCButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            pageVCButton.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
    }
}

class PageContainerViewController: UIViewController {
    lazy var pageViewController: UIPageViewController = {
        let pageViewController = UIPageViewController(
            transitionStyle: .scroll,
            navigationOrientation: .horizontal,
            options: nil
        )
        pageViewController.dataSource = self
        pageViewController.delegate = self
        return pageViewController
    }()

    lazy var pages: [ColouredViewController] = [
        ColouredViewController(backgroundColor: .red),
        ColouredViewController(backgroundColor: .blue),
        ColouredViewController(backgroundColor: .green),
        ColouredViewController(backgroundColor: .yellow),
        ColouredViewController(backgroundColor: .brown),
        ColouredViewController(backgroundColor: .link),
        ColouredViewController(backgroundColor: .cyan),
    ]

    var startIndex = 0

    init(startIndex: Int) {
        super.init(nibName: nil, bundle: nil)
        self.startIndex = startIndex
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        navigationController?.title = "Page View Controller"
        print(pageViewController.gestureRecognizers)
        setupPageViewController()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
    }

    private func setupPageViewController() {
        addChild(pageViewController)
        view.addSubview(pageViewController.view)
        pageViewController.didMove(toParent: self)
        pageViewController.view.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            pageViewController.view.topAnchor.constraint(equalTo: view.topAnchor),
            pageViewController.view.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            pageViewController.view.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            pageViewController.view.trailingAnchor.constraint(equalTo: view.trailingAnchor),
        ])
        pageViewController.setViewControllers([pages[startIndex]], direction: .forward, animated: true)
    }
}

extension PageContainerViewController: UIPageViewControllerDataSource {
    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerBefore viewController: UIViewController) -> UIViewController? {
        print("Leading Swipe")
        guard let viewController = viewController as? ColouredViewController else { return nil }
        guard let currentPageIndex = pages.firstIndex(of: viewController) else { return nil }
        if currentPageIndex == 0 { return nil }
        return pages[currentPageIndex - 1]
    }

    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerAfter viewController: UIViewController) -> UIViewController? {
        print("Trailing Swipe")
        guard let viewController = viewController as? ColouredViewController else { return nil }
        guard let currentPageIndex = pages.firstIndex(of: viewController) else { return nil }
        if currentPageIndex == pages.count - 1 { return nil }
        return pages[currentPageIndex + 1]
    }
}

extension PageContainerViewController: UIPageViewControllerDelegate {}

class ColouredViewController: UIViewController {
    var backgroundColor: UIColor?

    init(backgroundColor: UIColor) {
        super.init(nibName: nil, bundle: nil)
        self.backgroundColor = backgroundColor
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = backgroundColor
    }
}
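A workaround worth trying (my own untested suggestion, not anything from the thread): restrict the navigation controller's interactive pop gesture to touches that start near the leading edge, so mid-screen swipes are left to the pager. Taking over the pop recognizer's delegate is a known but fragile technique:

// Hypothetical workaround sketch; call installPopGestureWorkaround()
// after the page container is on the navigation stack.
extension PageContainerViewController: UIGestureRecognizerDelegate {
    func installPopGestureWorkaround() {
        navigationController?.interactivePopGestureRecognizer?.delegate = self
    }

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        // Only let the pop gesture see touches within 44 points of the edge.
        touch.location(in: view).x < 44
    }
}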
1 reply · 0 boosts · 98 views · Aug ’25
SwiftUI iOS 16 TabView PageTabViewStyle index behavior is wrong for right-to-left layoutDirection
TabView's page control element has a bug on iOS 16 if the TabView is configured as RTL with PageTabViewStyle.

iOS 16 issues found:
- Page indicators display dots in reverse order (the control appears to treat the layout as LTR while rendering RTL).
- Index selection is reversed: tapping indicators selects the wrong pages.
- Using the page control directly to navigate eventually breaks the index binding.
- The underlying index-counting logic conflicts with the visual presentation.

iOS 18 behavior: works as expected, with correct dot order and index selection.

Xcode version: 16.3 (16E140)

Conclusion:
- Confirmed broken on iOS 16.
- Confirmed working on iOS 18.
- iOS 17 and earlier versions not yet tested.

I opened a Feedback Assistant ticket quite a while ago, but there has been no answer; there's a code example and a video attached to it. Has anyone else run into this particular bug? Here's the code:

public struct PagingView<Content: View>: View {

    // MARK: - Public Properties

    let pages: (Int) -> Content
    let numberOfPages: Int
    let pageMargin: CGFloat
    @Binding var currentPage: Int

    // MARK: - Object's Lifecycle

    public init(currentPage: Binding<Int>, pageMargin: CGFloat = 20, numberOfPages: Int, @ViewBuilder pages: @escaping (Int) -> Content) {
        self.pages = pages
        self.numberOfPages = numberOfPages
        self.pageMargin = pageMargin
        _currentPage = currentPage
    }

    // MARK: - View's Layout

    public var body: some View {
        TabView(selection: $currentPage) {
            ForEach(0..<numberOfPages, id: \.self) { index in
                pages(index)
                    .padding(.horizontal, pageMargin)
            }
        }
        .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
        .ignoresSafeArea()
    }
}

// MARK: - Previews

struct ContentView: View {
    @State var currentIndex: Int = 0

    var body: some View {
        ZStack {
            Rectangle()
                .frame(height: 300)
                .foregroundStyle(Color.gray.opacity(0.2))
            PagingView(
                currentPage: $currentIndex.onChange({ index in
                    print("currentIndex: ", index)
                }),
                pageMargin: 20,
                numberOfPages: 10) { index in
                    ZStack {
                        Rectangle()
                            .frame(width: 200, height: 200)
                            .foregroundStyle(Color.gray.opacity(0.2))
                        Text("\(index)")
                            .foregroundStyle(.brown)
                            .background(Color.yellow)
                    }
                }
                .frame(height: 200)
        }
    }
}

#Preview("ContentView") {
    ContentView()
}

extension Binding {
    @MainActor
    func onChange(_ handler: @escaping (Value) -> Void) -> Binding<Value> {
        Binding(
            get: { self.wrappedValue },
            set: { newValue in
                self.wrappedValue = newValue
                handler(newValue)
            }
        )
    }
}
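An untested mitigation sketch for the iOS 16 behavior described above: mirror the selection index manually when running RTL on iOS 16, and pass it through unchanged on iOS 17 and later (where the report suggests the bug may already be fixed; iOS 17 is untested, so adjust the availability check accordingly):

import SwiftUI

// Hypothetical workaround: flip the index only where the bug reproduces.
struct RTLPagingWorkaround<Content: View>: View {
    @Environment(\.layoutDirection) private var layoutDirection
    @Binding var currentPage: Int
    let numberOfPages: Int
    @ViewBuilder let pages: (Int) -> Content

    private var needsMirroring: Bool {
        if #available(iOS 17, *) { return false }
        return layoutDirection == .rightToLeft
    }

    private var selection: Binding<Int> {
        guard needsMirroring else { return $currentPage }
        // Mirror the index so the visual page order matches the binding.
        return Binding(
            get: { numberOfPages - 1 - currentPage },
            set: { currentPage = numberOfPages - 1 - $0 }
        )
    }

    var body: some View {
        TabView(selection: selection) {
            ForEach(0..<numberOfPages, id: \.self) { index in
                pages(index)
            }
        }
        .tabViewStyle(.page(indexDisplayMode: .always))
    }
}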
0 replies · 0 boosts · 122 views · Aug ’25
SpeechTranscriber extremely slow (14+ seconds) despite proper locale allocation and optimization
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.

Environment:
- iOS: 26.0 beta
- Xcode: Beta 5
- Device: iPhone 16 Pro
- Sample app: official Apple SwiftTranscriptionSampleApp from WWDC 2025

Configuration tested:
- Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
- Setup: all optimizations applied (preheating, high priority, model retention)

I started testing in my own app to replace the SFSpeech API and include speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results. I added some logs to check the specific timing:

🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)

Questions:
- Is this expected performance for the iOS 26 beta, given that the old SFSpeech API is far faster?
- Are there additional optimization steps for SpeechTranscriber?
- Should we expect significant performance improvements in later betas?
1 reply · 0 boosts · 190 views · Aug ’25
CGColorRef is NOT a struct
The documentation for CGColorRef (https://developer.apple.com/documentation/coregraphics/cgcolorref?language=objc) clearly shows that it is a struct. However, when I store a cell's border color using CGColorRef originalColor = self.bg.layer.borderColor and inspect what happens in the debugger, both that property and its copy have the same address. And later, when I try to restore the border color, the copy still has the same address but is no longer valid and causes a crash on assignment (originalColor is actually an instance variable...). This is all object behavior, not struct behavior. If CGColorRef really were a struct, the contents would have been copied, the instance variable would have had its own address that would never have changed, and the value would have remained valid indefinitely, letting me copy it back without a problem. Why is this documented incorrectly? Was this a recent change? I actually had this code working at some point, and now it's broken.
9 replies · 0 boosts · 1.1k views · Aug ’25
Setting Required Capabilities for Foundation Models
Cross-posting this from https://developer.apple.com/forums/thread/795707 per ask from DTS Engineer: Is there any way to ensure iOS apps we develop using Foundation Models can only be purchased/downloaded on the App Store by folks with capable devices? I would have thought there would be a Required Capability that the App Store would hook into for Apple Intelligence-capable devices, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities The closest seems to be iphone-performance-gaming-tier, as that seems to target all M1-and-above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set the minimum deployment target to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable, since we really want the devices whose Apple Neural Engine is sufficient for the Apple Intelligence features in the SDKs (like Foundation Models, Image Playgrounds, audio transcription, etc.). While I understand that the majority of apps will want to selectively add Apple Intelligence features, and so can remain usable by folks whose devices don't support them, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, your device is not capable of Apple Intelligence and so can't use this app." I've created a Feedback Assistant ticket tracking the question/ask here: FB19366221
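Until there is a store-level capability for this, a runtime gate is the usual fallback. A minimal sketch, assuming the iOS 26 FoundationModels surface shown at WWDC25 (SystemLanguageModel.default and its availability property; verify against the shipping SDK):

import FoundationModels
import SwiftUI

// Runtime fallback sketch: gate the experience on model availability.
// This does not solve store-level gating; it only avoids a broken UX
// on devices without Apple Intelligence.
struct GatedRootView: View {
    private let model = SystemLanguageModel.default

    var body: some View {
        switch model.availability {
        case .available:
            Text("Model ready") // hypothetical: the real experience goes here
        default:
            Text("This app requires an Apple Intelligence-capable device.")
        }
    }
}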
0 replies · 0 boosts · 108 views · Aug ’25
Setting Required Capabilities for Foundation Models
Is there any way to ensure iOS apps we develop using Foundation Models can only be purchased/downloaded on the App Store by folks with capable devices? I would have thought there would be a Required Capability that the App Store would hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities The closest seems to be iphone-performance-gaming-tier, as that seems to target all M1-and-above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set the minimum deployment target to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable. While I understand that the majority of apps will want to selectively add Apple Intelligence features, and so can remain usable by folks whose devices don't support them, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable."
2 replies · 2 boosts · 243 views · Aug ’25
Issue with beta Declared Age Range
I'm trying to work with the beta version of the Declared Age Range framework based on an article's tutorial, but I'm getting the following error:

[C:1-3] Error received: Invalidated by remote connection.

and AgeRangeService.Error.notAvailable is being thrown on the call to requestAgeRange. I'm using Xcode 26 beta 5 and my simulator is running the 26.0 beta. The iCloud account that I have signed into the simulator has a DOB set as well. This is my full ContentView where I'm trying to accomplish this:

struct ContentView: View {
    @Environment(\.requestAgeRange) var requestAgeRange
    @State var advancedFeaturesEnabled = false

    var body: some View {
        VStack {
            Button("Advanced Features") {}
                .disabled(!advancedFeaturesEnabled)
        }
        .task {
            await requestAgeRangeHelper()
        }
    }

    func requestAgeRangeHelper() async {
        do {
            let ageRangeResponse = try await requestAgeRange(ageGates: 16)
            switch ageRangeResponse {
            case let .sharing(range):
                if let lowerBound = range.lowerBound, lowerBound >= 16 {
                    advancedFeaturesEnabled = true
                }
            case .declinedSharing:
                break // Handle declined sharing
            default:
                break
            }
        } catch AgeRangeService.Error.invalidRequest {
            print("Invalid request") // Handle invalid request (e.g., age range < 2 years)
        } catch AgeRangeService.Error.notAvailable {
            print("Not available") // Handle device configuration issues
        } catch {
            print("Other")
        }
    }
}
1 reply · 0 boosts · 259 views · Aug ’25
Large title is not visible in iOS 26
I am using the code below to change the navigation bar's background colour, but the title text is hidden when displayed as a large title. It works fine in previous versions. Kindly refer to the code below and the attached images.

Code:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    navigationController?.navigationBar.prefersLargeTitles = true
    navigationItem.largeTitleDisplayMode = .always

    let appearance = UINavigationBarAppearance()
    appearance.backgroundColor = UIColor(
        red: 0.101961,
        green: 0.439216,
        blue: 0.388235,
        alpha: 1.0
    )
    navigationController?.navigationBar.standardAppearance = appearance
    navigationController?.navigationBar.scrollEdgeAppearance = appearance
    navigationController?.navigationBar.compactAppearance = appearance
}

Referenced images:
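One thing worth trying (a guess, not a confirmed fix for the iOS 26 behavior): start from an explicit opaque configuration and set the large-title text attributes yourself, so the title color doesn't fall back to a default that disappears against the custom background:

let appearance = UINavigationBarAppearance()
appearance.configureWithOpaqueBackground()
appearance.backgroundColor = UIColor(
    red: 0.101961, green: 0.439216, blue: 0.388235, alpha: 1.0)
// Make both title styles explicit instead of relying on defaults.
appearance.titleTextAttributes = [.foregroundColor: UIColor.white]
appearance.largeTitleTextAttributes = [.foregroundColor: UIColor.white]

navigationController?.navigationBar.standardAppearance = appearance
navigationController?.navigationBar.scrollEdgeAppearance = appearance
navigationController?.navigationBar.compactAppearance = appearance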
3 replies · 2 boosts · 350 views · Aug ’25