This topic area is about the programming languages themselves, not about any specific API or tool. If you have an API question, go to the top level and look for a subtopic for that API. If you have a question about Apple developer tools, start in the Developer Tools & Services topic.
For Swift questions:
If your question is about the SwiftUI framework, start in UI Frameworks > SwiftUI.
If your question is specific to the Swift Playground app, ask over in Developer Tools & Services > Swift Playground.
If you’re interested in the Swift open source effort, which includes the evolution of the language, the open source tools and libraries, and Swift on non-Apple platforms, check out Swift Forums.
If your question is about the Swift language, that’s on topic for Programming Languages > Swift, but you might have more luck asking it in Swift Forums > Using Swift.
General:
Forums topic: Programming Languages
Swift:
Forums subtopic: Programming Languages > Swift
Forums tags: Swift
Developer > Swift website
Swift Programming Language website
The Swift Programming Language documentation
Swift Forums website, and specifically Swift Forums > Using Swift
Swift Package Index website
Concurrency Resources, which covers Swift concurrency
How to think properly about binding memory Swift Forums thread
Other:
Forums subtopic: Programming Languages > Generic
Forums tags: Objective-C
Programming with Objective-C archived documentation
Objective-C Runtime documentation
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
Swift
Swift is a powerful and intuitive programming language for Apple platforms and beyond.
Posts under Swift tag
Hi there! I'm making an app that stores the user's profile data in SwiftData. I was originally going to use UserDefaults, but I thought SwiftData could save Images natively. That turned out not to be true, so I could switch back to UserDefaults and save images as Data, but I'd like to try to get this working first. Essentially, I have text fields and I save their values through a class, allProfileData. Here's the code for that:
import SwiftData
import SwiftUI

@Model
class allProfileData {
    var profileImageData: Data?
    var email: String
    var bio: String
    var username: String

    var profileImage: Image {
        if let data = profileImageData,
           let uiImage = UIImage(data: data) {
            return Image(uiImage: uiImage)
        } else {
            return Image("DefaultProfile")
        }
    }

    init(email: String, profileImageData: Data?, bio: String, username: String) {
        self.profileImageData = profileImageData
        self.email = email
        self.bio = bio
        self.username = username
    }
}
To save this, I create a new instance (I think; I'm new to this) and save it through the ModelContext:
import SwiftUI
import SwiftData

struct CreateAccountView: View {
    @Query var profiledata: [allProfileData]
    @Environment(\.modelContext) private var modelContext
    @State private var email = ""   // bound to a TextField elsewhere in the real view

    let newData = allProfileData(email: "", profileImageData: nil, bio: "", username: "")

    var body: some View {
        Button("Button") {
            newData.email = email
            modelContext.insert(newData)
            try? modelContext.save()
            print(newData.email)
        }
    }
}
To fetch the data, I originally thought that @Query would do it for me, but I saw that it fetches asynchronously, so I also attempted to fetch it manually. Both approaches returned nothing:
import SwiftData
import SwiftUI

@Query var profiledata: [allProfileData]
@Environment(\.modelContext) private var modelContext

let fetchRequest = FetchDescriptor<allProfileData>()
let fetchedData = try? modelContext.fetch(fetchRequest)
print("Fetched count: \(fetchedData?.count ?? 0)")

if let imageData = profiledata.first?.profileImageData,
   let uiImage = UIImage(data: imageData) {
    profileImage = Image(uiImage: uiImage)
} else {
    profileImage = Image("DefaultProfile")
}
No errors. Thanks in advance
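For comparison, here is a minimal end-to-end sketch of the same save-and-fetch flow. It assumes the model type is registered on the WindowGroup with .modelContainer(for:); when that registration is missing, or two views end up with different containers, @Query and manual fetches can both come back empty. The app and view names here are hypothetical.

import SwiftData
import SwiftUI

// Hedged sketch: one container registered at the app level, shared by every view.
@main
struct ProfileDemoApp: App {   // hypothetical app entry point
    var body: some Scene {
        WindowGroup {
            ProfileListView()
        }
        .modelContainer(for: allProfileData.self)   // same container everywhere
    }
}

struct ProfileListView: View {
    @Query private var profiledata: [allProfileData]   // reads from the injected container
    @Environment(\.modelContext) private var modelContext

    var body: some View {
        VStack {
            List(profiledata) { profile in
                Text(profile.email.isEmpty ? "(no email)" : profile.email)
            }
            Button("Add test profile") {
                let newData = allProfileData(email: "test@example.com",
                                             profileImageData: nil,
                                             bio: "", username: "test")
                modelContext.insert(newData)   // @Query updates automatically after insert
            }
        }
    }
}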
Hi all,
I have a working macOS (Intel) system extension app that currently uses only a Content Filter (NEFilterDataProvider). I need to capture/log HTTP and HTTPS traffic in plain text, and I understand NETransparentProxyProvider is the right extension type for that.
For HTTPS I will need TLS inspection / a MITM proxy — I’m new to that and unsure how complex it will be.
For DNS data (in plain text), can I use the same extension, or do I need a separate extension type such as NEPacketTunnelProvider, NEFilterPacketProvider, or NEDNSProxyProvider?
Current architecture:
Two Xcode targets: MainApp and a SystemExtension target.
The SystemExtension target contains multiple network extension types.
MainApp ↔ SystemExtension communicate via a bidirectional NSXPC connection.
I can already enable two extensions (Content Filter and TransparentProxy). With the NETransparentProxy, I still need to implement HTTPS capture.
Questions I’d appreciate help with:
Can NETransparentProxy capture the DNS fields I need (dns_hostname, dns_query_type, dns_response_code, dns_answer_number, etc.), or do I need an additional extension type to capture DNS in plain text?
If a separate extension is required, is it possible or problematic to include that extension type (Packet Tunnel / DNS Proxy / etc.) in the same SystemExtension Xcode target as the TransparentProxy?
Any recommended resources or guidance on TLS inspection / MITM proxy setup for capturing HTTPS logs?
There are multiple DNS transport types — am I correct that capturing DNS over UDP (port 53) is not necessarily sufficient? Which DNS types should I plan to handle?
I’ve read that TransparentProxy and other extension types (e.g., Packet Tunnel) cannot coexist in the same Xcode target. Is that true?
Best approach for delivering logs from multiple extensions to the main app (is it feasible)? Or what’s the best way to capture logs so an external/independent process (or C/C++ daemon) can consume them?
Required data to capture (not limited to):
All HTTP/HTTPS (request, body, URL, response, etc.)
DNS fields: dns_hostname, dns_query_type, dns_response_code, dns_answer_number, and other DNS data — all in plain text.
I’ve read various resources but remain unclear which extension(s) to use and whether multiple extension types can be combined in one Xcode target. Please ask if you need more details.
Thank you.
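On question 1: a DNS proxy extension is the type that sees DNS flows directly. Below is a hedged sketch of what an NEDNSProxyProvider subclass looks like; the class name and logging helper are placeholders, and the parsing step is left empty because extracting fields such as dns_hostname, dns_query_type, dns_response_code, and dns_answer_number from the raw datagram is something you would implement (or pull in a DNS parsing library for) yourself. A real provider must also forward queries to a resolver and write the responses back.

import Foundation
import NetworkExtension

// Hedged sketch of a DNS proxy provider.
class DNSProxyProvider: NEDNSProxyProvider {

    override func startProxy(options: [String: Any]? = nil,
                             completionHandler: @escaping (Error?) -> Void) {
        completionHandler(nil)
    }

    override func stopProxy(with reason: NEProviderStopReason,
                            completionHandler: @escaping () -> Void) {
        completionHandler()
    }

    override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
        // Classic DNS arrives as UDP datagrams; DNS over TLS/HTTPS arrives as
        // TCP/TLS flows and is not readable in plain text here.
        if let udpFlow = flow as? NEAppProxyUDPFlow {
            udpFlow.open(withLocalEndpoint: nil) { error in
                guard error == nil else { return }
                udpFlow.readDatagrams { datagrams, _, readError in
                    guard let datagrams, readError == nil else { return }
                    for datagram in datagrams {
                        // Placeholder: parse the DNS header/question/answer sections
                        // to recover hostname, query type, response code, answer count.
                        logDNSQuery(datagram)
                    }
                    // A real provider also forwards the datagrams to a resolver
                    // and writes the responses back to the flow.
                }
            }
            return true
        }
        return false
    }
}

// Hypothetical logging hook.
func logDNSQuery(_ datagram: Data) {
    NSLog("DNS datagram of \(datagram.count) bytes")
}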
Topic:
Privacy & Security
SubTopic:
Sign in with Apple
Tags:
Swift
Frameworks
Network Extension
System Extensions
Problem Description
(1) I am using ARKit in an iOS app to provide AR capabilities. Specifically, I'm trying to use the ARSession's captureHighResolutionFrame(using:) method to capture a high-resolution frame along with its corresponding depth data:
open func captureHighResolutionFrame(using photoSettings: AVCapturePhotoSettings?) async throws -> ARFrame
(2) However, when I attempt to do so, the call fails at runtime with the following error, which I captured from the Xcode debugger:
[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] settings.depthDataDeliveryEnabled must be NO if self.isDepthDataDeliveryEnabled is NO
Code Snippet Explanation
(1) ARConfig and ARSession Initialization
The following code configures the ARConfiguration and ARSession. A key part of this setup is setting the videoFormat to the one recommended for high-resolution frame capturing, as suggested by the documentation.
func start(imagesDirectory: URL, configuration: Configuration = Configuration()) {
    // ... basic setup ...
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.planeDetection = [.horizontal, .vertical]

    // Enable various frame semantics for depth and segmentation
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
        arConfig.frameSemantics.insert(.smoothedSceneDepth)
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        arConfig.frameSemantics.insert(.sceneDepth)
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        arConfig.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Set the recommended video format for high-resolution captures
    if let videoFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        arConfig.videoFormat = videoFormat
        print("Enabled: High-Resolution Frame Capturing by selecting recommended video format.")
    }

    arSession.run(arConfig, options: [.resetTracking, .removeExistingAnchors])
    // ...
}
(2) Capturing the High-Resolution Frame
The code below is intended to manually trigger the capture of a high-resolution frame. The goal is to obtain both a high-resolution color image and its associated high-resolution depth data. To achieve this, I explicitly set the isDepthDataDeliveryEnabled property of the AVCapturePhotoSettings object to true.
func requestImageCapture() async {
    // ... guard statements ...
    print("Manual image capture requested.")

    if #available(iOS 16.0, *) { // Assuming 16.0+ for this API
        if let defaultSettings = arSession.configuration?.videoFormat.defaultPhotoSettings {
            // Create a mutable copy from the default settings, as recommended
            let photoSettings = AVCapturePhotoSettings(from: defaultSettings)
            // Explicitly enable depth data delivery for this capture request
            photoSettings.isDepthDataDeliveryEnabled = true
            do {
                let highResFrame = try await arSession.captureHighResolutionFrame(using: photoSettings)
                print("Successfully captured a high-resolution frame.")
                if let initialDepthData = highResFrame.capturedDepthData {
                    // Process depth data...
                } else {
                    print("High-resolution frame was captured, but it contains no depth data.")
                }
            } catch {
                // The exception is caught here
                print("Error capturing high-resolution frame: \(error.localizedDescription)")
            }
        }
    }
    // ...
}
Issue Confirmation & Question
(1) Through debugging, I have confirmed the following behavior: If I call captureHighResolutionFrame without providing the photoSettings parameter, or if photoSettings.isDepthDataDeliveryEnabled is set to false, the method successfully returns a high-resolution ARFrame, but its capturedDepthData is nil.
(2) The error message clearly indicates that settings.depthDataDeliveryEnabled can only be true if the underlying AVCapturePhotoOutput instance's own isDepthDataDeliveryEnabled property is also true.
(3) However, within the context of ARKit and ARSession, I cannot find any public API that would allow me to explicitly access and configure the underlying AVCapturePhotoOutput instance that ARSession manages.
(4) My question is:
Is there a way to configure the ARSession's internal AVCapturePhotoOutput to enable its isDepthDataDeliveryEnabled property? Or, is simultaneously capturing a high-resolution frame and its associated depth data simply not a supported use case in the current ARKit framework?
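For reference, this is how depth delivery is normally enabled when you own the capture pipeline; a hedged sketch using plain AVFoundation rather than ARKit, since the assertion is about exactly the output-level property that ARSession does not expose. The function names are illustrative.

import AVFoundation

// Hedged sketch: owning the AVCaptureSession yourself (not possible through ARSession).
func makeDepthCapableOutput(for session: AVCaptureSession) -> AVCapturePhotoOutput {
    let output = AVCapturePhotoOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
    // isDepthDataDeliverySupported is true only when the session's input device
    // supports depth (TrueDepth or a multi-camera) and the session is already configured.
    if output.isDepthDataDeliverySupported {
        // Depth must first be enabled on the output itself...
        output.isDepthDataDeliveryEnabled = true
    }
    return output
}

func captureWithDepth(using output: AVCapturePhotoOutput,
                      delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // ...and only then can per-photo settings request it without asserting.
    settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
    output.capturePhoto(with: settings, delegate: delegate)
}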
In our app, there is a UIWindow makeKeyAndVisible crash; so far it has appeared once. Crash stack:
The crash detail:
crash.txt
In the RCWindowSceneManager class's makeWindowKeyAndVisible method, we check and set the window's windowScene and then call makeKeyAndVisible:
public func makeWindowKeyAndVisible(_ window: UIWindow?) {
    guard let window else {
        return
    }
    if let currentWindowScene {
        if window.windowScene == nil || window.windowScene != currentWindowScene {
            window.windowScene = currentWindowScene
        }
        window.makeKeyAndVisible()
    }
}
I also set a breakpoint in a normal, non-crashing flow; that stack is:
Why does it crash, and how can we avoid it? Thank you.
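One thing worth ruling out, purely as a hedge (the attached stack will tell the real story): UIWindow and makeKeyAndVisible must only be touched on the main thread, so a cheap assertion in this helper can confirm whether any caller reaches it from a background queue.

public func makeWindowKeyAndVisible(_ window: UIWindow?) {
    // Hedged defensive check: UIKit windows must be manipulated on the main thread.
    assert(Thread.isMainThread, "makeWindowKeyAndVisible called off the main thread")
    guard let window else { return }
    if let currentWindowScene {
        if window.windowScene == nil || window.windowScene != currentWindowScene {
            window.windowScene = currentWindowScene
        }
        window.makeKeyAndVisible()
    }
}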
In one of my apps, I am using .glassEffect(_:in:) to add a glass effect to various elements. The app always crashes when a UI element with the glassEffect(_:in:) modifier is being rendered. This only happens on a device running the iOS 26 public beta; I know this for certain because I connected that particular device to Xcode and ran the app on it. When I comment out the glassEffect modifier, the app doesn't crash.
Is it possible to check for particular releases with #available? If not, how should something like this be handled? Also, how do I handle such OS-level errors without the app crashing? Thanks.
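On the #available question: yes, availability checks can name minor releases (for example iOS 26.1), so a specific update can be gated. Below is a hedged sketch of that pattern; the 26.1 number is only there to illustrate the syntax, and the material background is an assumed fallback so the UI still renders something when the modifier is skipped.

import SwiftUI

// Hedged sketch: gate the modifier behind a specific OS release and fall back otherwise.
struct GlassIfAvailable: ViewModifier {
    func body(content: Content) -> some View {
        if #available(iOS 26.1, *) {          // minor versions are allowed in #available
            content.glassEffect()             // only applied on 26.1 and later
        } else {
            content.background(.ultraThinMaterial, in: Capsule())  // assumed fallback
        }
    }
}

extension View {
    func glassIfAvailable() -> some View {
        modifier(GlassIfAvailable())
    }
}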
Announcing the Swift Student Challenge 2026
Every year, Apple’s Swift Student Challenge celebrates the creativity and ingenuity of student developers from around the world, inviting them to use Swift and Xcode to solve real-world problems in their own communities and beyond.
Learn more → https://developer.apple.com/swift-student-challenge/
Submissions for the 2026 challenge will open February 6 for three weeks, and students can prepare with new Develop in Swift tutorials and Meet with Apple code-along sessions.
The Apple Developer team is here to help you along the way, from idea to app. Post your questions at any stage of your development here in this forum board, or be sure to add the Swift Student Challenge tag to your technology-specific forum question.
Your designs. Your apps. Your moment.
I have made a screensaver for Mac in Swift, but I couldn't find how to add an icon (the logo image that shows up on the .saver file) or a thumbnail (the cover image that shows up in the screensaver catalogue).
Currently, it just shows a default blue spiral galaxy thumbnail and no icon image.
Starting with iOS 18, UITabBarController no longer updates tab bar item titles when localized strings are changed or reassigned at runtime.
This behavior worked correctly in iOS 17 and earlier, but in iOS 18 the tab bar titles remain unchanged until the app restarts or the view controller hierarchy is reset. This regression appears to be caused by internal UITabBarController optimizations introduced in iOS 18.
Steps to Reproduce
Create a UITabBarController with two or more tabs, each having a UITabBarItem with a title.
Localize the tab titles using NSLocalizedString():
tabBar.items?[0].title = NSLocalizedString("home_tab", comment: "")
tabBar.items?[1].title = NSLocalizedString("settings_tab", comment: "")
Run the app.
Change the app’s language at runtime (without restarting), or manually reassign the localized titles again:
tabBar.items?[0].title = NSLocalizedString("home_tab", comment: "")
tabBar.items?[1].title = NSLocalizedString("settings_tab", comment: "")
Observe that the tab bar titles do not update visually.
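A hedged workaround sketch, not a confirmed fix: instead of mutating tabBar.items directly, push the localized titles through each child view controller's tabBarItem, and recreate the UITabBarItem instances rather than reassigning .title. Whether this sidesteps the iOS 18 behavior is something to verify on-device.

import UIKit

// Hedged sketch: rebuild the items through the child view controllers.
func refreshTabTitles(of tabBarController: UITabBarController) {
    let titles = [
        NSLocalizedString("home_tab", comment: ""),
        NSLocalizedString("settings_tab", comment: "")
    ]
    for (index, viewController) in (tabBarController.viewControllers ?? []).enumerated()
        where index < titles.count {
        // Recreating the item (instead of mutating .title) is the part to verify on iOS 18.
        viewController.tabBarItem = UITabBarItem(
            title: titles[index],
            image: viewController.tabBarItem.image,
            selectedImage: viewController.tabBarItem.selectedImage
        )
    }
}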
Hi everyone,
I’m building a full-screen Map (MapKit + SwiftUI) with persistent top/bottom chrome (menu buttons on top, session stats + map controls on bottom). I have three working implementations and I’d like guidance on which pattern Apple recommends long-term (gesture correctness, safe areas, Dynamic Island/home indicator, and future compatibility).
Version 1 — overlay(alignment:) on Map
Idea: Draw chrome using .overlay(alignment:) directly on the map and manage padding manually.
Map(position: $viewModel.previewMapCameraPosition, scope: mapScope) {
    UserAnnotation {
        UserLocationCourseMarkerView(angle: viewModel.userCourse - mapHeading)
    }
}
.mapStyle(viewModel.mapType.mapStyle)
.mapControls {
    MapUserLocationButton().mapControlVisibility(.hidden)
    MapCompass().mapControlVisibility(.hidden)
    MapPitchToggle().mapControlVisibility(.hidden)
    MapScaleView().mapControlVisibility(.hidden)
}
.overlay(alignment: .top) { mapMenu }         // manual padding inside
.overlay(alignment: .bottom) { bottomChrome } // manual padding inside
Version 2 — ZStack + .safeAreaPadding
Idea: Place the map at the back, then lay out top/bottom chrome in a VStack inside a ZStack, and use .safeAreaPadding(.all) so content respects safe areas.
ZStack(alignment: .top) {
    Map(...).ignoresSafeArea()
    VStack {
        mapMenu
        Spacer()
        bottomChrome
    }
    .safeAreaPadding(.all)
}
Version 3 — .safeAreaInset on the Map
Idea: Make the map full-bleed and then reserve top/bottom space with safeAreaInset, letting SwiftUI manage insets
Map(...).ignoresSafeArea()
    .mapStyle(viewModel.mapType.mapStyle)
    .mapControls {
        MapUserLocationButton().mapControlVisibility(.hidden)
        MapCompass().mapControlVisibility(.hidden)
        MapPitchToggle().mapControlVisibility(.hidden)
        MapScaleView().mapControlVisibility(.hidden)
    }
    .safeAreaInset(edge: .top) { mapMenu }         // manual padding inside
    .safeAreaInset(edge: .bottom) { bottomChrome } // manual padding inside
Question
I noticed:
Safe-area / padding behavior
– Version 2 requires the least extra padding and seems to create a small but partial safe-area spacing automatically.
– Version 3 still needs roughly the same manual padding as Version 1, even though it uses safeAreaInset. Why doesn’t safeAreaInset fully handle that spacing?
Rotation crash (Metal)
When using Version 3 (safeAreaInset + ignoresSafeArea), rotating the device portrait↔landscape several times triggers a Metal crash:
failed assertion 'The following Metal object is being destroyed while still required… CAMetalLayer Display Drawable'
The same crash can happen with Version 1, though less often. I haven’t tested it much with Version 2.
Is this a known issue or race condition between Map’s internal Metal rendering and view layout changes?
Expected behavior
What’s the intended or supported interaction between safeAreaInset, safeAreaPadding, and overlay when embedding persistent chrome inside a SwiftUI Map?
Should safeAreaInset normally remove the need for manual padding, or is that by design?
If the app is launched from LockedCameraCapture and if the settings button is tapped, I need to launch the main app.
CameraViewController:
func settingsButtonTapped() {
    #if isLockedCameraCaptureExtension
    // App is launched from the Lock Screen
    // Launch main app here...
    #else
    // App is launched from the Home Screen
    self.showSettings(animated: true)
    #endif
}
In this document:
https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen
Apple asks you to use:
func launchApp(with session: LockedCameraCaptureSession, info: String) {
    Task {
        do {
            let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
            activity.userInfo = [UserInfoKey: info]
            try await session.openApplication(for: activity)
        } catch {
            StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
        }
    }
}
However, the documentation states that this should be placed within the extension code (LockedCameraCapture). If I do that, how can I call it all the way down from the main app's CameraViewController?
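Since session.openApplication(for:) needs the LockedCameraCaptureSession that only the extension receives, one hedged approach is to hand the shared view controller a callback when the extension creates it. Everything below (the closure name, the way the extension constructs the controller, the userInfo key) is illustrative, not the documented API surface.

import UIKit
import LockedCameraCapture

// In the shared CameraViewController (compiled into both the app and the extension):
final class CameraViewController: UIViewController {
    // Injected only when running inside the LockedCameraCapture extension.
    var openMainAppHandler: (() -> Void)?

    func settingsButtonTapped() {
        if let openMainAppHandler {
            openMainAppHandler()          // extension path: ask the session to open the app
        } else {
            showSettings(animated: true)  // normal in-app path
        }
    }

    func showSettings(animated: Bool) { /* existing implementation */ }
}

// In the extension target, wherever the capture UI is created and the
// LockedCameraCaptureSession is available (exact scene/setup type omitted here):
func makeCameraViewController(session: LockedCameraCaptureSession) -> CameraViewController {
    let controller = CameraViewController()
    controller.openMainAppHandler = {
        Task {
            let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
            activity.userInfo = ["openSettings": true]   // hypothetical key
            try? await session.openApplication(for: activity)
        }
    }
    return controller
}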
I’m building a custom input field using UITextField. When the user taps to focus the field and the text is long, the entire text becomes selected by default. This is the same behavior you can see in iOS Safari’s search field or the Messages app search field.
What I want: when the field becomes first responder, the caret should be placed at the end of the text (latest word), without selecting all the text.
Here’s the code that builds my text field:
public func makeTextField() -> UITextField {
    let textField = UITextField()
    textField.autocorrectionType = .no
    textField.setContentCompressionResistancePriority(.required, for: .horizontal)
    textField.setContentCompressionResistancePriority(.required, for: .vertical)
    if #available(iOS 13.0, *) {
        textField.smartInsertDeleteType = .no
    }
    textField.smartQuotesType = .no
    textField.smartDashesType = .no
    textField.autocapitalizationType = .none
    textField.contentMode = .scaleToFill
    if let font = attributes[.font] as? UIFont {
        textField.font = font
    }
    if let color = attributes[.foregroundColor] as? UIColor {
        textField.textColor = color
    }
    // Truncate long text at the head
    let paragraphStyle = NSMutableParagraphStyle()
    paragraphStyle.lineBreakMode = .byTruncatingHead
    textField.defaultTextAttributes[.paragraphStyle] = paragraphStyle
    textField.delegate = self
    textField.backgroundColor = .clear
    textField.addTarget(self, action: #selector(textFieldDidChange(_:)), for: .editingChanged)
    return textField
}
Entire text is selected when focusing the field if the text is long.
What I’ve tried
Forcing the caret to the end in textFieldDidBeginEditing:
func textFieldDidBeginEditing(_ textField: UITextField) {
    let end = textField.endOfDocument
    textField.selectedTextRange = textField.textRange(from: end, to: end)
}
Doing the same asynchronously (next runloop) to avoid the system overriding selection:
func textFieldDidBeginEditing(_ textField: UITextField) {
    DispatchQueue.main.async {
        let end = textField.endOfDocument
        textField.selectedTextRange = textField.textRange(from: end, to: end)
    }
}
Despite these, the system still selects all text on focus when the string is long/truncated at the head.
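One more avenue to experiment with, with no guarantee it beats the system behavior: move the caret placement into a UITextField subclass so it runs after every becomeFirstResponder call, which fires later than textFieldDidBeginEditing in some flows. This is a sketch to try, not a confirmed fix.

import UIKit

// Hedged sketch: a subclass that re-places the caret after becoming first responder.
final class CaretAtEndTextField: UITextField {
    override func becomeFirstResponder() -> Bool {
        let became = super.becomeFirstResponder()
        if became {
            // Defer one runloop turn so this runs after UIKit's own selection pass.
            DispatchQueue.main.async { [weak self] in
                guard let self else { return }
                let end = self.endOfDocument
                self.selectedTextRange = self.textRange(from: end, to: end)
            }
        }
        return became
    }
}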
The following minimal snippet SEGFAULTS with SDK 26.0 and 26.1. It won't crash if I remove async from the enclosing function signature, but that's impractical in a real project.
import Metal
import MetalPerformanceShaders

let SEED = UInt64(0x0)
typealias T = Float16

/* Why run in an async context? Because of a global GPU object,
   an async makeMTLFunction,
   and an async makeMTLComputePipelineState.
   Nevertheless, the bug can be triggered without using the global.
@MainActor
let myGPU = MyGPU()
*/

@main
struct CMDLine {
    static func main() async {
        let ptr = UnsafeMutablePointer<T>.allocate(capacity: 0)
        async let future: Void = randomFillOnGPU(ptr, count: 0)
        print("Main thread is playing around")
        await future
        print("Successfully reached the end.")
    }

    static func randomFillOnGPU(_ buf: UnsafeMutablePointer<T>, count destbufcount: Int) async {
        // let (device, queue) = await (myGPU.device, myGPU.commandqueue)
        let myGPU = MyGPU()
        let (device, queue) = (myGPU.device, myGPU.commandqueue)
        // Init MTLBuffer, async let makeFunction, makeComputePipelineState, etc.
        let tempDataType = MPSDataType.uInt32
        let randfiller = MPSMatrixRandomMTGP32(device: device, destinationDataType: tempDataType, seed: Int(bitPattern: UInt(SEED)))
        print("randomFillOnGPU: successfully created MPSMatrixRandom.")
        // try await computePipelineState
        // ^ Crashes before this could return
        // Or in this minimal case, after randomFillOnGPU() returns
        // make encoder, set pso, dispatch, commit...
    }
}

actor MyGPU {
    let device: MTLDevice
    let commandqueue: MTLCommandQueue

    init() {
        guard let dev: MTLDevice = MPSGetPreferredDevice(.skipRemovable),
              let cq = dev.makeCommandQueue(),
              dev.supportsFamily(.apple6) || dev.supportsFamily(.mac2)
        else { print("Unable to get Metal Device! Exiting"); exit(EX_UNAVAILABLE) }
        print("Selected device: \(String(format: "%llX", dev.registryID))")
        self.device = dev
        self.commandqueue = cq
        print("myGPU: initialization complete.")
    }
}
See FB20916929. Apparently the Objective-C autorelease pool is releasing the wrong address during a context switch (across suspension points). I wonder why such an obvious case hasn't been caught before.
Colombia is not yet listed as a Tap to Pay user, but it is in the process of becoming so.
We are currently a group of developers at a company in Colombia working on a project to integrate Tap to Pay into our application. After reviewing Apple's documentation, my company is not certified to meet Apple's security requirements, PCI standards, or licensing requirements. However, the payment service provider we have contracted for this is in the process of obtaining the certifications, authorizations, and licenses that Apple specifies. Our team members and managers overseeing this Tap to Pay project have told us that we, as iOS developers, should integrate and use the Proximity Reader API, but we know that we, as developers, are not authorized by Apple to do so.
Is the payment service provider the only one who can make this possible, enabling its use with NFC and Proximity Reader?
I would like to know if the service provider will provide us with the SDK containing the Proximity Reader API for integration into the project, or if my company will have to implement the Proximity Reader API ourselves?
Topic:
App & System Services
SubTopic:
Tap to Pay on iPhone
Tags:
Swift Packages
Swift
SwiftUI
Tap to Pay on iPhone
I have a tab bar with 4 tabs. Since iOS 26, I'm facing an issue with the tab alignment: the text automatically shrinks, and the tab icon shifts upward for some tabs.
I am using the UITab API to create tabs on OS versions 18 and above, and UITabBar on OS versions below 18.
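For context, here is a hedged sketch of the kind of availability split described above; the titles, SF Symbol names, and identifiers are placeholders. It's the structure I would compare against when looking at why the icon and title metrics differ on iOS 26.

import UIKit

// Hedged sketch: build tabs with UITab on iOS 18+, fall back to tabBarItem below that.
func configureTabs(on tabBarController: UITabBarController,
                   viewControllers: [UIViewController]) {
    let titles = ["Home", "Search", "Alerts", "Profile"]          // placeholder titles
    let symbols = ["house", "magnifyingglass", "bell", "person"]  // placeholder symbols

    if #available(iOS 18.0, *) {
        tabBarController.tabs = viewControllers.prefix(titles.count).enumerated().map { index, vc in
            UITab(title: titles[index],
                  image: UIImage(systemName: symbols[index]),
                  identifier: "tab.\(index)") { _ in vc }
        }
    } else {
        for (index, vc) in viewControllers.enumerated() where index < titles.count {
            vc.tabBarItem = UITabBarItem(title: titles[index],
                                         image: UIImage(systemName: symbols[index]),
                                         tag: index)
        }
        tabBarController.viewControllers = viewControllers
    }
}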
Xcode SPM (Swift Package Manager) Error
I added the Apple App Store Server Swift Library to Xcode using Swift Package Manager.
Both the project and target are set to iOS 14 or higher.
However, when I build after adding the library, an error occurs with the library.
A message appears stating that the target is set to iOS 12.
I'm using Xcode 26.0.1.
Even after adding it to all my projects, the build continues with the same error.
I've tried building the library from version 1.0.0 to the latest version, but the same error persists.
Even after completely cleaning the project and running it, the same error persists.
Does anyone know how to fix this?
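For what it's worth, the minimum OS a package supports comes from the platforms entry in its Package.swift, and a package that doesn't declare one can fall back to a much older default minimum, which can produce exactly this kind of "iOS 12" mismatch message. A hedged illustration of what that declaration looks like (names and version numbers here are examples, not the library's actual values):

// swift-tools-version:5.9
// Hedged illustration of a package manifest; names and versions are examples only.
import PackageDescription

let package = Package(
    name: "ExampleServerLibraryClient",
    platforms: [
        .iOS(.v15),      // minimum iOS this package supports
        .macOS(.v12)
    ],
    dependencies: [
        // .package(url: "https://github.com/apple/app-store-server-library-swift.git",
        //          from: "1.0.0")
    ],
    targets: [
        .target(name: "ExampleServerLibraryClient")
    ]
)

If the error persists, comparing the resolved package's own Package.swift platforms entry against your target's deployment setting is a reasonable first check.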
Hello everyone,
I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly as long as captureMicrophone is false; in that case, I get a valid, playable .mp4 file.
However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off.
I've already added the "Privacy - Microphone Usage Description" (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem.
I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream. Here is how I'm configuring my SCStream:
ScreenRecorder.swift
// This is my main SCStreamConfiguration
private var streamConfiguration: SCStreamConfiguration {
    let streamConfig = SCStreamConfiguration()
    // ... other HDR/preset config ...

    // These are the problem properties
    streamConfig.capturesAudio = isAudioCaptureEnabled
    streamConfig.captureMicrophone = isMicCaptureEnabled // breaks it if true
    streamConfig.excludesCurrentProcessAudio = false
    streamConfig.showsCursor = false

    if let region = selectedRegion, let display = currentDisplay {
        // My region/frame logic (works fine)
        let regionWidth = Int(region.frame.width)
        let regionHeight = Int(region.frame.height)
        streamConfig.width = regionWidth * scaleFactor
        streamConfig.height = regionHeight * scaleFactor
        // ... (sourceRect logic) ...
    }

    streamConfig.pixelFormat = kCVPixelFormatType_32BGRA
    streamConfig.colorSpaceName = CGColorSpace.sRGB
    streamConfig.minimumFrameInterval = CMTime(value: 1, timescale: 60)
    return streamConfig
}
And here is how I'm setting up the SCRecordingOutput that writes the file:
ScreenRecorder.swift
private func initRecordingOutput(for region: ScreenPickerManager.SelectedRegion) throws {
    let screenRecordingOutputURL = try RecordingWorkspace.createScreenRecordingVideoFile(
        in: workspaceURL,
        sessionIndex: sessionIndex
    )
    let recordingConfiguration = SCRecordingOutputConfiguration()
    recordingConfiguration.outputURL = screenRecordingOutputURL
    recordingConfiguration.outputFileType = .mp4
    recordingConfiguration.videoCodecType = .hevc
    let recordingOutput = SCRecordingOutput(configuration: recordingConfiguration, delegate: self)
    self.recordingOutput = recordingOutput
}
Finally, my CaptureEngine adds these to the SCStream:
CaptureEngine.swift
class CaptureEngine: NSObject, @unchecked Sendable {
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    // ... (dispatch queues) ...

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)

            // Add outputs for raw buffers (not used for file recording)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)

            // Add the file recording output
            try stream?.addRecordingOutput(recordingOutput)

            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    // ... (stopCapture, etc.) ...
}
When captureMicrophone is false, I get a perfect .mp4 video that plays everywhere; however, when it's true, I get a corrupted video that doesn't play at all.
I'm downloading a fine-tuned model from HuggingFace which is then cached on my Mac when the app first starts. However, I wanted to test adding a progress bar to show the download progress. To test this I need to delete the cached model. From what I've seen online this is cached at
/Users/userName/.cache/huggingface/hub
However, if I delete the files from there using Terminal, the app still seems to be able to access the model.
Is the model cached somewhere else?
On my iPhone it seems deleting the app also deletes the cached model (app data) so that is useful.
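To pin down where the files actually live, a hedged sketch like this can print the candidate locations from inside the app; the huggingface path component is the conventional hub layout, but the exact location depends on the downloading library.

import Foundation

// Hedged sketch: print the places a downloaded model is likely to be cached.
func printLikelyModelCacheLocations() {
    let fileManager = FileManager.default

    // 1. The classic hub cache under the home directory (macOS only; note that a
    //    sandboxed app's "home" is its container, not /Users/<name>).
    let homeHub = fileManager.homeDirectoryForCurrentUser
        .appendingPathComponent(".cache/huggingface/hub")
    print("Home hub cache:", homeHub.path,
          "exists:", fileManager.fileExists(atPath: homeHub.path))

    // 2. The app's own Caches directory, which many Swift download layers use instead.
    if let caches = fileManager.urls(for: .cachesDirectory, in: .userDomainMask).first {
        print("App caches directory:", caches.path)
        if let contents = try? fileManager.contentsOfDirectory(atPath: caches.path) {
            print("Contents:", contents)
        }
    }

    // 3. Application Support, another common home for downloaded model files.
    if let support = fileManager.urls(for: .applicationSupportDirectory,
                                      in: .userDomainMask).first {
        print("Application Support:", support.path)
    }
}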
Hey everyone,
I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS.
So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
I assign my CustomVideoCompositor class to the videoComposition.
I create an AVPlayerItem with the composition and video composition.
The Problem:
Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second.
Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97).
I've logged inside my custom compositor during the export, and it's definitely being called 30 times per second, so it's generating the 30 frames. It seems like AVAssetExportSession is just dropping every other frame when it encodes the video.
My source videos are screen recordings which I recorded using ScreenCaptureKit itself with the minimum frame interval to be 60.
Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset :-
func exportVideo(to outputURL: URL) async throws {
    guard let composition = composition,
          let videoComposition = videoComposition else {
        throw VideoCompositionError.noValidVideos
    }

    try? FileManager.default.removeItem(at: outputURL)

    guard let exportSession = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality // Is this the problem?
    ) else {
        throw VideoCompositionError.trackCreationFailed
    }

    exportSession.outputFileType = .mp4
    exportSession.videoComposition = videoComposition // This has the 60fps setting

    try await exportSession.export(to: outputURL, as: .mp4)
}
I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor
My Question:
Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
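Independent of the cause, it helps to verify the exported file's actual frame rate programmatically rather than eyeballing playback. A small hedged check like this reads the video track's nominal frame rate from the exported URL:

import AVFoundation

// Hedged sketch: report the frame rate of an exported movie file.
func reportFrameRate(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        print("No video track found in \(url.lastPathComponent)")
        return
    }
    let nominalFrameRate = try await track.load(.nominalFrameRate)
    let minFrameDuration = try await track.load(.minFrameDuration)
    print("Nominal frame rate:", nominalFrameRate)
    print("Min frame duration (s):", CMTimeGetSeconds(minFrameDuration))
}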
In my code, I used the newly added AssetPackManager class from the Xcode 26 SDK. When compiling with Xcode 16, it reports the error "Cannot find 'AssetPackManager' in scope", and Swift cannot use __IPHONE_OS_VERSION_MAX_ALLOWED to guard it. How can I solve this problem?
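Since __IPHONE_OS_VERSION_MAX_ALLOWED isn't visible to Swift, the usual workaround is to gate on the compiler version that ships with the newer Xcode. A hedged sketch (assuming Xcode 26 ships Swift 6.2 and that AssetPackManager lives in Background Assets; both are worth confirming against your toolchain and docs):

#if compiler(>=6.2)
import BackgroundAssets   // assumption: AssetPackManager is part of this framework's iOS 26 SDK
#endif

func downloadAssetPacksIfSupported() {
    #if compiler(>=6.2)   // only seen by the newer toolchain (e.g. Xcode 26 / Swift 6.2)
    if #available(iOS 26.0, *) {
        // Safe to reference the new class here; older Xcode never compiles this branch.
        _ = AssetPackManager.self
        // ... real AssetPackManager usage goes here ...
    }
    #else
    // Built with an older Xcode/SDK: the symbol doesn't exist, so skip or fall back
    // to an On-Demand Resources / custom download path.
    #endif
}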
OK, so for some background: our app has a keyboard extension where we run a dictation service. Due to iOS limitations, this requires the user to press a button on the keyboard, which then brings them to our app to activate an audio session. Once the audio session has been activated, it takes the user back to the original app they came from to continue using the keyboard + dictation service.
The problem we're running into involves iOS 26.0 and the iMessages app. Whenever our app tries to switch back to the iMessages app using Deep Link (specifically the messages:// URL), the iMessages app opens up a new message compose sheet. This compose sheet replaces the view or message thread that the user was previously looking at which we don't want.
This behavior appears to be only happening in iOS 26 and not in any of the previous iOS versions (tested up to iOS 18.6). We know that it should be possible to bring the user back to the messages app without opening up this new compose sheet, because similar apps do the same thing and these apps have been verified to work on iOS 26.
We've tried also using the sms:// URL but that always opens a new message compose sheet regardless of whether or not it's iOS 26.0.