Hello,
I’ve uploaded a new build of my macOS app with the first two in-app purchases, but it was rejected under 2.1.0 Performance: App Completeness. After further investigation, it seems that the Apple review team is unable to fetch products. The following code:
private let productIDs = ["co.app.freetrial", "co.app.full"]
self.products = try await Product.products(for: productIDs)
is returning an empty array. (In the TestFlight build, it correctly returns the products.)
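For reference, here's a slightly more defensive version of the fetch I've been testing with; the logging is hypothetical, but it should at least show whether the call throws or genuinely returns zero products:

do {
    let products = try await Product.products(for: productIDs)
    if products.isEmpty {
        print("StoreKit returned zero products for \(productIDs)")
    }
    self.products = products
} catch {
    print("Product fetch failed: \(error)")
}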
For me, everything works as expected via Xcode and on a fresh machine using TestFlight.
Here’s what I’ve tried so far:
- The in-app purchases were submitted together with the first build of the binary.
- I confirmed that each in-app purchase is free of any yellow or red warning messages.
- I downloaded the app from TestFlight and confirmed that all in-app purchases are available.
- I updated the in-app purchase price in App Store Connect and verified that the new price is reflected in the app (to rule out any ID mismatches).
- I reviewed all agreements to ensure no signatures are missing. (A few sources online suggested that this could cause issues with in-app purchases for the review team.)
- I created a new build using a third-party certificate and a provisioning profile. (Older builds, before adding in-app purchases, were signed with a development certificate and no provisioning profile, yet they still made it to the App Store. I’m not sure how that was possible or whether it contributed to this issue.)
Despite these steps, the app continues to be rejected for the same reason. I’m struggling to understand how products are successfully fetched for testers via TestFlight while the review team repeatedly sees zero products.
Any guidance on how to resolve this would be greatly appreciated.
Thank you
In SwiftUI's List, on macOS, if I embed a TextField then the text field is presented as non-editable. If the user clicks on the text and waits a short period of time, the text field will become editable.
I'm aware this is generally the correct behaviour for macOS. However, is there a way in SwiftUI to suppress this behaviour such that the TextField is always presented as editable?
I want a scrollable List of editable text fields, much like how a Form is presented. The reason I'm not using a Form is that I want List's support for reordering via drag-and-drop (.onMove).
Use Case
A view that allows a user to compose a questionnaire. They are able to add and remove questions (rows) and each question is editable. They require drag-and-drop support so that they can reorder the questions.
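For concreteness, the structure I'm after is roughly this (a sketch; Question is a hypothetical model type):

struct Question: Identifiable {
    let id = UUID()
    var text: String
}

struct QuestionnaireView: View {
    @State private var questions = [Question(text: "First question"),
                                    Question(text: "Second question")]

    var body: some View {
        List {
            ForEach($questions) { $question in
                TextField("Question", text: $question.text)
            }
            .onMove { source, destination in
                questions.move(fromOffsets: source, toOffset: destination)
            }
        }
    }
}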
On macOS, it's not uncommon to present windows as sheets that can be resized. By setting the NSWindow's various frame autosave properties, you can restore the size of the sheet the next time it is presented.
When presenting a Sheet from within SwiftUI using the .sheet view modifier, how can I preserve and restore the sheet's frame size?
The closest I've been able to come is to put the SwiftUI view into a custom NSHostingController, wrap that in an NSViewControllerRepresentable, then override viewWillAppear and look for self.view.window, which is all a little awkward.
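A lighter variant of that bridge, reduced to an NSViewRepresentable, looks something like this (a sketch; the autosave name is my own, and I'm not certain a SwiftUI-presented sheet honors it):

struct SheetFrameAutosaver: NSViewRepresentable {
    let autosaveName: String

    func makeNSView(context: Context) -> NSView { NSView() }

    func updateNSView(_ nsView: NSView, context: Context) {
        // The window isn't available until the view is in a window hierarchy.
        DispatchQueue.main.async {
            _ = nsView.window?.setFrameAutosaveName(autosaveName)
        }
    }
}

// Dropped into the sheet's content, e.g. .background(SheetFrameAutosaver(autosaveName: "MySheet"))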
Is there a more idiomatic way to achieve this in "pure" SwiftUI?
Bear with me, please. Please make sure a highly skilled technical person reads and understands this.
I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo).
Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI.
First, take all OO Python code (example) on GitHub (example), then separate each function from each object into its own procedure (procedural logic) by making a logical procedural list of actions to perform only that function based on its entire dependency chain (i.e. all other objects it relies on). Relate all compiled functions using (for example) fuzzy matching on the name, or AI-based functional profiling to get multiple instances of each function.
Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately.
In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model.
Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase).
Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.
We have a Privileged Helper tool that we install with SMJobBless. I would like to debug it.
I've added WaitForDebugger in our helper tool launchd plist and it does wait.
But I can't attach to it via Debug -> Attach to Process in Xcode; the attach fails with this error:
Code: 3
Failure Reason: tried to attach to process as user 'myusername' and process is running as user 'root'
I set "Debug process as: root" in the scheme settings, but I'm not sure it does anything, since I'm not running that particular scheme at the moment of attach.
I also tried "Wait for the executable to be launched", without success; I had to create a new scheme for it, since the helper is built as part of the main app, so maybe I did something wrong there.
Am I doing something wrong, or is it not possible to attach to a root process?
I am aware that USB/HID devices can come and go, and that if you have a long-running application that's what you want to monitor.
But for a "one-shot" command-line tool, for example, I would like to enumerate the devices present on the system at a certain point in time, interact with a subset of them (an interaction that can fail, since a device may have been disconnected between enumeration and my creating the HIDDeviceClient), and then exit the application.
It seems that HIDDeviceManager only allows monitoring via an Async[Throwing]Stream, which provides the initial devices matching the query but then continues to deliver updates, and I have no idea when that initial list is complete.
I could sleep for a while, then cancel the stream and see what was received up to that point, but that seems like the wrong way to go about it if I just want to know "which devices are connected", so I can maybe list them in a "usage" or help screen.
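For comparison, the one-shot snapshot I want is straightforward with the older IOHIDManager API (sketch):

import IOKit.hid

// Snapshot the currently connected HID devices; no stream involved.
let manager = IOHIDManagerCreate(kCFAllocatorDefault, IOOptionBits(kIOHIDOptionsTypeNone))
IOHIDManagerSetDeviceMatching(manager, nil)   // nil matches every HID device
if let devices = IOHIDManagerCopyDevices(manager) as? Set<IOHIDDevice> {
    for device in devices {
        let name = IOHIDDeviceGetProperty(device, kIOHIDProductKey as CFString) as? String
        print(name ?? "unnamed device")
    }
}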
Am I missing something?
We are encountering a download issue in Safari 18.2 on macOS Sequoia 15.2 where file downloads initiated by our AngularJS application (such as Excel exports) are silently blocked.
There are no errors in the browser console, and the download does not occur.
Interestingly, after testing on Safari 18.3 with Sequoia 15.3, the downloads worked as expected.
However, the problem reappeared on Safari 18.4 with Sequoia 15.4.
We suspect that recent changes in Safari’s security or download handling may be preventing downloads triggered via asynchronous JavaScript (e.g., AJAX calls) that are not initiated directly by user interaction.
We would appreciate any insights, suggestions, or possible workarounds from the community. Looking forward to your guidance on this matter.
How do you atomically set a Picker's selection and contents on macOS such that you don't end up in a situation where the selection is not present within the Picker's content?
I presume Picker on macOS is implemented as an NSPopUpButton, and an NSPopUpButton doesn't really like the concept of "no selection". When presented with that, SwiftUI outputs:
Picker: the selection "nil" is invalid and does not have an associated tag, this will give undefined results.
Consider the following pseudo code:
struct ParentView: View {
    @State private var items: [Item]

    var body: some View {
        ChildView(items: items)
    }
}

struct ChildView: View {
    let items: [Item]

    @State private var selectedItem: Item?

    var body: some View {
        // Item is assumed Identifiable and Hashable; the tag is cast to Item?
        // so its type matches the optional selection.
        Picker("", selection: $selectedItem) {
            ForEach(items) { item in
                Text(item.name).tag(item as Item?)
            }
        }
    }
}
When items gets passed down from ParentView to ChildView, it's entirely possible that the current value in selectedItem represents an Item that is no longer in the items array.
You can "catch" that by using .onAppear, .task, .onChange and maybe some other modifiers, but not until after at least one render pass has happened, by which point an error has likely been reported because selectedItem is nil or not represented in the items array.
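For concreteness, the .onChange "catch" looks roughly like this (a sketch; it assumes Item is Equatable):

.onChange(of: items) { newItems in
    // If the current selection vanished from the new data, fall back to something valid.
    if let selected = selectedItem, !newItems.contains(selected) {
        selectedItem = newItems.first
    }
}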
Because selectedItem is private state, a value can't easily be passed down from the parent view, and even if it could, that just moves the problem one level higher up.
What is the correct way to handle this type of data flow in SwiftUI for macOS?
Hi there,
I'm trying to use SFAuthorizationPluginView in order to show some fields in the login screen, have the user click the arrow, then continue to show more fields as a second step of authentication. How can I accomplish this?
1. Register multiple SecurityAgentPlugins, each with its own mechanism and nib?
2. Somehow get macOS to call my SFAuthorizationPluginView::view() again and return a new view?
3. Manually remove the text boxes and put in new ones when the button is pressed?
I don't believe option 1 works; the second mechanism ended up calling the first mechanism's view's view().
Cheers,
-Ken
I have a simple document-based application for macOS.
struct ContentView: View {
    @Binding var document: TextDocument

    var body: some View {
        VStack {
            TextEditor(text: $document.text)
        }
        .onReceive(NotificationCenter.default.publisher(for: .notificationTextWillAppendSomeText)) { _ in
            // append text to document.text here
        }
    }
}

extension Notification.Name {
    static let notificationTextWillAppendSomeText = Notification.Name("TextWillAppendSomeText")
}
Suppose that my application currently has three tabs. If I call a menu command through
post(name:object:)
the call will affect all three of them. This Stack Overflow topic talks about it, too. So how could I tell which window should get the call and which shouldn't? Thanks.
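For what it's worth, one idea I'm considering (a sketch): let only the key window's view react, using the controlActiveState environment value:

struct ContentView: View {
    @Binding var document: TextDocument
    @Environment(\.controlActiveState) private var activeState

    var body: some View {
        TextEditor(text: $document.text)
            .onReceive(NotificationCenter.default.publisher(for: .notificationTextWillAppendSomeText)) { _ in
                guard activeState == .key else { return }   // ignore the call in background windows
                document.text.append("some text")
            }
    }
}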
I have a sample document-based macOS app. I understand that you can open a new window or a new tab with some text.
import SwiftUI
struct ContentView: View {
    @Binding var document: TexDocument
    @Environment(\.newDocument) var newDocument

    var body: some View {
        VStack(spacing: 0) {
            topView
        }
    }

    private var topView: some View {
        Button {
            newDocument(TexDocument(text: "A whole new world!"))
        } label: {
            Text("Open new window")
                .frame(width: 200)
        }
    }
}
Suppose that I have a path to a text file whose security-scoped bookmark can be resolved with the click of a button. I wonder whether you can open a new window or a new tab with the corresponding content. I have done that in Cocoa. I hope I can do it in SwiftUI as well. Thanks.
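What I have in mind is something like this (a sketch; it assumes the bookmark data was stored earlier and that newDocument comes from @Environment(\.newDocument)):

private func openBookmarkedFile(_ bookmarkData: Data) {
    var isStale = false
    guard let url = try? URL(resolvingBookmarkData: bookmarkData,
                             options: .withSecurityScope,
                             relativeTo: nil,
                             bookmarkDataIsStale: &isStale),
          url.startAccessingSecurityScopedResource() else { return }
    defer { url.stopAccessingSecurityScopedResource() }

    if let text = try? String(contentsOf: url, encoding: .utf8) {
        newDocument(TexDocument(text: text))   // opens a new window, or a tab per the user's setting
    }
}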
Hello!
I'm a developer working on a plugin for the Elgato Stream Deck, called GPU Metrics. The plugin currently only works on Windows, but I'd like to bring it to macOS. However, based on forum posts I've read (and Stack Overflow), there isn't a very clear path to querying GPU metrics like usage, temperature, used GPU memory, and power consumption. There are some tools out there that do similar things, but I wanted to see what Apple's engineering team would recommend for getting this data via a public API.
Requirements:
Access GPU utilization, temperature, memory usage, power usage
C/C++ based API for querying the metrics so I can expose the data to JavaScript via Node Addon
No need for compatibility with Intel-based Macs; Apple silicon is fine for now
Plugin GitHub
Thank you!
Noah
I'm currently integrating SwiftUI into an AppKit-based application and was curious whether the design pattern below is viable. In order to "bridge" between AppKit and SwiftUI, most of my SwiftUI "root" views have a ViewModel that is accessible to the SwiftUI view via @ObservedObject.
When a SwiftUI view needs to use an NSViewRepresentable, I'm finding the use of both a ViewModel and a Coordinator to be an unnecessary layer of indirection. In cases where it makes sense, I've just used the ViewModel as the Coordinator, and it all appears to be working OK, but I'm curious whether this is a reasonable design pattern or whether I'm overlooking something.
Consider the following pseudo code:
// 1. A normal ObservableObject acting as the ViewModel that also owns and manages an NSTableView.
@MainActor final class ViewModel: NSObject, ObservableObject, NSTableViewDataSource, NSTableViewDelegate {
    let scrollView = NSScrollView()
    let tableView = NSTableView()

    @Published var selectedTitle: String = ""

    override init() {
        super.init()
        // ViewModel manages tableView as its dataSource and delegate.
        tableView.dataSource = self
        tableView.delegate = self
    }

    func reload() {
        tableView.reloadData()
    }

    // Update view model properties.
    // Simpler than passing back up through a Coordinator.
    func tableViewSelectionDidChange(_ notification: Notification) {
        selectedTitle = tableView.selectedItem.title // (pseudo) derive from tableView.selectedRow
    }
}
// 2. A normal SwiftUI view, mostly driven by the ViewModel.
struct ContentView: View {
    @ObservedObject var model: ViewModel

    var body: some View {
        Text(model.selectedTitle)
        // No need to pass anything down other than the view model.
        MyTableView(model: model)
        Button("Reload") { model.reload() }
        Button("Delete") { model.deleteRow(...) } // (pseudo)
    }
}
// 3. A barebones NSViewRepresentable that just vends the required NSView. No other state is required as the ViewModel handles all interactions with the view.
struct MyTableView: NSViewRepresentable {
    // Can this even be an NSView?
    let model: ViewModel

    func makeNSView(context: Context) -> some NSView {
        return model.scrollView
    }

    func updateNSView(_ nsView: NSViewType, context: Context) {
        // Not needed; all updates are driven through the ViewModel.
    }
}
From what I can tell, the above is working as expected, but I'm curious whether there are situations where this could "break", particularly around the lifecycle of NSViewRepresentable.
I'd love to know if the overall pattern is "ok" from a SwiftUI perspective.
C++ code that compiled fine on Xcode 16.2 when targeting macOS 13.3 fails after upgrading to Xcode 16.3 with an error saying the minimum required target is macOS 13.4:
/Applications/Xcode-16.3.0.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.4.sdk/usr/include/c++/v1/__format/formatter_floating_point.h:74:30: error: 'to_chars' is unavailable: introduced in macOS 13.4
   74 |     to_chars_result __r = std::to_chars(__first, __last, __value, __fmt, __precision);
Here’s an example taken directly from https://en.cppreference.com/w/cpp/utility/format/format:
#include <format>
#include <iostream>
#include <string>
#include <string_view>

template<typename... Args>
std::string dyna_print(std::string_view rt_fmt_str, Args&&... args)
{
    return std::vformat(rt_fmt_str, std::make_format_args(args...));
}

int main()
{
    std::cout << std::format("Hello {}!\n", "world");

    std::string fmt;
    for (int i{}; i != 3; ++i)
    {
        fmt += "{} "; // constructs the formatting string
        std::cout << fmt << " : ";
        std::cout << dyna_print(fmt, "alpha", 'Z', 3.14, "unused");
        std::cout << '\n';
    }
}
It doesn’t make any sense to suddenly require targeting 13.4 for features that worked fine on 13.3. The Apple documentation for C++ feature support explicitly discusses 13.3. https://developer.apple.com/xcode/cpp/ (search for P0067R5 on the page)
I haven't tested it, but based on the standard library headers the minimum required iOS version has also been bumped - from 16.3 to 16.5.
Am I doing something wrong? Is there a known work-around?
Filed feedback: FB17081499
Hi, it looks like 15.4.x has broken softwareupdate --verbose --fetch-full-installer --full-installer-version for all versions (it still works in 15.3.1):
softwareupdate --verbose --list-full-installers
Finding available software
Software Update found the following full installers:
* Title: macOS Sequoia, Version: 15.4.1, Size: 15244333KiB, Build: 24E263, Deferred: NO
* Title: macOS Sequoia, Version: 15.4, Size: 15243957KiB, Build: 24E248, Deferred: NO
* Title: macOS Sequoia, Version: 15.3.2, Size: 14890483KiB, Build: 24D81, Deferred: NO
* Title: macOS Sequoia, Version: 15.3.1, Size: 14891477KiB, Build: 24D70, Deferred: NO
* Title: macOS Sequoia, Version: 15.3, Size: 14889290KiB, Build: 24D60, Deferred: NO
* Title: macOS Sequoia, Version: 15.2, Size: 14921025KiB, Build: 24C101, Deferred: NO
* Title: macOS Sonoma, Version: 14.7.5, Size: 13337289KiB, Build: 23H527, Deferred: NO
* Title: macOS Sonoma, Version: 14.7.4, Size: 13332546KiB, Build: 23H420, Deferred: NO
* Title: macOS Sonoma, Version: 14.7.3, Size: 13334287KiB, Build: 23H417, Deferred: NO
* Title: macOS Sonoma, Version: 14.7.2, Size: 13333067KiB, Build: 23H311, Deferred: NO
* Title: macOS Ventura, Version: 13.7.5, Size: 11916960KiB, Build: 22H527, Deferred: NO
* Title: macOS Ventura, Version: 13.7.4, Size: 11915317KiB, Build: 22H420, Deferred: NO
* Title: macOS Ventura, Version: 13.7.3, Size: 11915737KiB, Build: 22H417, Deferred: NO
* Title: macOS Ventura, Version: 13.7.2, Size: 11916651KiB, Build: 22H313, Deferred: NO
* Title: macOS Monterey, Version: 12.7.4, Size: 12117810KiB, Build: 21H1123, Deferred: NO
* Title: macOS Big Sur, Version: 11.7.10, Size: 12125478KiB, Build: 20G1427, Deferred: NO
* Title: macOS Catalina, Version: 10.15.7, Size: 8055650KiB, Build: 19H15, Deferred: NO
* Title: macOS Catalina, Version: 10.15.7, Size: 8055522KiB, Build: 19H2, Deferred: NO
* Title: macOS Catalina, Version: 10.15.6, Size: 8055450KiB, Build: 19G2021, Deferred: NO
softwareupdate --verbose --fetch-full-installer --full-installer-version 15.4
Scanning for 15.4 installer
Install failed with error: Update not found
https://discussions.apple.com/thread/256042015
I have an iOS app that relies on dynamic text size such that all fonts in the app respect the user's setting of Text Size in the iOS Settings app.
This app also runs on macOS via Mac Catalyst. But until macOS 14 Sonoma there was no Text Size setting in the macOS Settings app, and even as of Sonoma the Text Size setting isn't usable by third-party apps. Sequoia doesn't seem to change that.
As a work around, my Mac Catalyst app provides its own Text Size setting. I was able to make it work by providing my own UIApplication subclass and overriding preferredContentSizeCategory. Under macOS 12 to macOS 14, this workaround works just fine and all fonts in the app created with code such as UIFont.preferredFont(forTextStyle:) gives appropriately sized fonts based on the overridden content size category.
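That override looks roughly like this (a sketch; the defaults key is my own):

class TextSizeApplication: UIApplication {
    override var preferredContentSizeCategory: UIContentSizeCategory {
        // Fall back to the system value when the app setting is absent.
        guard let raw = UserDefaults.standard.string(forKey: "AppTextSize") else {
            return super.preferredContentSizeCategory
        }
        return UIContentSizeCategory(rawValue: raw)
    }
}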
However, this workaround stopped working with macOS 15 Sequoia. I've also tried code such as:
self.window.traitOverrides.preferredContentSizeCategory = myCustomSizeCategoryValue
and
self.window.maximumContentSizeCategory = myCustomSizeCategoryValue
self.window.minimumContentSizeCategory = myCustomSizeCategoryValue
in the scene delegate but that made no difference.
Is there any way to get code such as UIFont.preferredFont(forTextStyle:) to return an appropriately sized font based on some app provided content size category in a Mac Catalyst app running under macOS 15?
It sure would be nice if Mac Catalyst apps automatically responded to the macOS Text Size setting under Settings -> Accessibility -> Display -> Text Size just like a native iOS app.
I have a sample document-based application for macOS. According to this article (https://jujodi.medium.com/adding-a-new-tab-keyboard-shortcut-to-a-swiftui-macos-application-56b5f389d2e6), you can create a new tab programmatically. It works. Now, my question is whether you can open a tab with some data. Is that possible under the SwiftUI framework? I could do it in Cocoa. Hopefully, we can do it in SwiftUI as well. Muchos thankos.
import SwiftUI
@main
struct SomeApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: SomeDocument()) { file in
            ContentView(document: file.$document)
        }
    }
}
import SwiftUI
struct ContentView: View {
    @Binding var document: SomeDocument

    var body: some View {
        VStack {
            TextEditor(text: $document.text)
            Button {
                createNewTab()
            } label: {
                Text("New tab")
                    .frame(width: 64)
            }
        }
    }
}

extension ContentView {
    private func createNewTab() {
        if let currentWindow = NSApp.keyWindow,
           let windowController = currentWindow.windowController {
            windowController.newWindowForTab(nil)
            if let newWindow = NSApp.keyWindow,
               currentWindow != newWindow {
                currentWindow.addTabbedWindow(newWindow, ordered: .above)
            }
        }
    }
}
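One thing I plan to try (an untested sketch): create the document with the data first, then fold the resulting window into the current tab group. The async hop is an assumption, since newDocument may not create the window synchronously:

@Environment(\.newDocument) private var newDocument

private func createNewTab(with text: String) {
    let currentWindow = NSApp.keyWindow
    newDocument(SomeDocument(text: text))
    DispatchQueue.main.async {
        if let currentWindow,
           let newWindow = NSApp.keyWindow,
           newWindow != currentWindow {
            currentWindow.addTabbedWindow(newWindow, ordered: .above)
        }
    }
}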
I'm building a macOS + iOS SwiftUI app using Xcode 14.1 beta 3 on a Mac running macOS 13 beta 11. The app uses Core Data + CloudKit.
With development builds, CloudKit integration works on the Mac app and the iOS app. Existing records are fetched from iCloud, and new records are uploaded to iCloud. Everybody's happy.
With TestFlight builds, the iOS app has no problems. But CloudKit integration isn't working in the Mac app at all. No existing records are fetched, no new records are uploaded.
In the Console, I see this message:
error: CoreData+CloudKit: Failed to set up CloudKit integration for store: <NSSQLCore: 0x1324079e0> (URL: <local file url>)
Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.cloudd was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.cloudd was invalidated: failed at lookup with error 159 - Sandbox restriction.}
I thought it might be that I was missing the com.apple.security.network.client entitlement, but adding that didn't help.
Any suggestions what I might be missing? (It's my first sandboxed Mac app, so it might be really obvious to anyone but me.)
Has Objective-C been deprecated?
A functioning Multiplatform app, which includes use of Continuity Camera, works correctly capturing photos with AVCapturePhoto on an M1 Mac mini running Sequoia 15.5. However, that app (and a test app written just for Continuity Camera) crashes at the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) using Swift 6 (though I also tried Swift 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.
The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the CapturePhoto code is run that the crash occurs.
The relevant capture code is:
func capturePhoto() {
    let captureSettings = AVCapturePhotoSettings()
    captureSettings.flashMode = .auto
    photoOutput.maxPhotoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
    print("**** CameraManager: capturePhoto")
}
and the delegate callbacks are:
class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    nonisolated(unsafe) static let shared = PhotoDelegate()

    // MARK: - Delegate callbacks

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        didFinishProcessingPhoto photo: AVCapturePhoto,
        error: (any Error)?
    ) {
        print("**** CameraManager: didFinishProcessingPhoto")
        guard let pData = photo.fileDataRepresentation() else {
            print("**** photoOutput is empty")
            return
        }
        print("**** photoOutput data is \(pData.count) bytes")
    }

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
    ) {
        print("**** CameraManager: willBeginCaptureFor")
    }

    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        print("**** CameraManager: willCapturePhotoFor")
    }
}
The significant parts of the crash report are:
Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]
VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 100175000-10017f000 [ 40K] r-x/r-x SM=COW ...tinuityCamera
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff803aed552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff803afb6cd mach_msg2_internal + 78
2 libsystem_kernel.dylib 0x7ff803af4584 mach_msg_overwrite + 692
3 libsystem_kernel.dylib 0x7ff803aed83a mach_msg + 19
4 CoreFoundation 0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5 CoreFoundation 0x7ff803c06a10 __CFRunLoopRun + 1365
6 CoreFoundation 0x7ff803c05e51 CFRunLoopRunSpecific + 560
7 HIToolbox 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8 HIToolbox 0x7ff80d694d4e ReceiveNextEventCommon + 657
9 HIToolbox 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10 AppKit 0x7ff806ca59d8 _DPSNextEvent + 858
11 AppKit 0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12 AppKit 0x7ff806c96ef7 -[NSApplication run] + 586
13 AppKit 0x7ff806c6b111 NSApplicationMain + 817
14 SwiftUI 0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15 SwiftUI 0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16 SwiftUI 0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17 ContinuityCamera 0x10017b49e 0x100175000 + 25758
18 dyld 0x7ff8037d1418 start + 1896
Thread 1:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0 ??? 0x0 ???
1 AVFCapture 0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2 CoreMediaIO 0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3 libxpc.dylib 0x7ff803875b33 _xpc_connection_reply_callout + 36
4 libxpc.dylib 0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5 libdispatch.dylib 0x7ff80398b099 _dispatch_client_callout3 + 8
6 libdispatch.dylib 0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7 libdispatch.dylib 0x7ff803991088 _dispatch_lane_serial_drain + 393
8 libdispatch.dylib 0x7ff803991d6c _dispatch_lane_invoke + 417
9 libdispatch.dylib 0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10 libsystem_pthread.dylib 0x7ff803b28c55 _pthread_wqthread + 327
11 libsystem_pthread.dylib 0x7ff803b27bbf start_wqthread + 15
Of course, the MacBook Pro is an old device - but Continuity Camera works with the installed Photo Booth app, so it's possible.
Any thoughts on solving this situation would be appreciated.
Regards, Michaela