To establish a privileged helper daemon from a command-line app for actions requiring root privileges, I still use the old SMJobBless approach. But SMJobBless has been deprecated since macOS 10.13, and I finally want to update to the new way using SMAppService.
As I'm concerned with securing it against malicious exploits, do you have a recommended, up-to-date Objective-C implementation that establishes a privileged helper and verifies it is only used by my signed app?
I've seen the suggestion in the documentation to use SMAppService, but couldn't find a good implementation covering the security aspects. My old implementation, in brief, is as follows:
bool runJobBless () {
    // check if already installed
    NSFileManager* filemgr = [NSFileManager defaultManager];
    if ([filemgr fileExistsAtPath:@"/Library/PrivilegedHelperTools/com.company.Helper"] &&
        [filemgr fileExistsAtPath:@"/Library/LaunchDaemons/com.company.Helper.plist"])
    {
        // check helper version to match the client
        // ...
        return true;
    }
    // create authorization reference
    AuthorizationRef authRef;
    OSStatus status = AuthorizationCreate (NULL, kAuthorizationEmptyEnvironment, kAuthorizationFlagDefaults, &authRef);
    if (status != errAuthorizationSuccess) return false;
    // obtain rights to install privileged helper
    AuthorizationItem authItem = { kSMRightBlessPrivilegedHelper, 0, NULL, 0 };
    AuthorizationRights authRights = { 1, &authItem };
    AuthorizationFlags flags = kAuthorizationFlagDefaults | kAuthorizationFlagInteractionAllowed | kAuthorizationFlagPreAuthorize | kAuthorizationFlagExtendRights;
    status = AuthorizationCopyRights (authRef, &authRights, kAuthorizationEmptyEnvironment, flags, NULL);
    if (status != errAuthorizationSuccess) return false;
    // SMJobBless does it all: verify helper against app and vice-versa, place and load embedded launchd.plist in /Library/LaunchDaemons, place executable in /Library/PrivilegedHelperTools
    CFErrorRef cfError = NULL;
    if (SMJobBless (kSMDomainSystemLaunchd, (CFStringRef)@"com.company.Helper", authRef, &cfError)) {
        // check helper version to match the client
        // ...
        return true;
    } else {
        CFBridgingRelease (cfError);
        return false;
    }
}
void connectToHelper () {
    // connect to helper via XPC
    NSXPCConnection* c = [[NSXPCConnection alloc] initWithMachServiceName:@"com.company.Helper.mach" options:NSXPCConnectionPrivileged];
    c.remoteObjectInterface = [NSXPCInterface interfaceWithProtocol:@protocol (SilentInstallHelperProtocol)];
    [c resume];
    // call function on helper and wait for completion
    dispatch_semaphore_t semaphore = dispatch_semaphore_create (0);
    [[c remoteObjectProxy] callFunction:^() {
        dispatch_semaphore_signal (semaphore);
    }];
    dispatch_semaphore_wait (semaphore, dispatch_time (DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC));
    dispatch_release (semaphore);
    [c invalidate];
    [c release];
}
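For reference, here is a minimal, untested sketch of what the SMAppService-based replacement might look like on macOS 13+. It assumes the daemon's launchd plist is embedded at Contents/Library/LaunchDaemons/com.company.Helper.plist inside the app bundle and that its BundleProgram key points at the helper executable:
#import <ServiceManagement/ServiceManagement.h>

bool runAppService () {
    // SMAppService replaces SMJobBless: the daemon stays inside the app bundle,
    // nothing is copied to /Library/PrivilegedHelperTools by us.
    SMAppService* service = [SMAppService daemonServiceWithPlistName:@"com.company.Helper.plist"];
    if (service.status == SMAppServiceStatusEnabled) {
        // check helper version to match the client
        // ...
        return true;
    }
    NSError* error = nil;
    if (![service registerAndReturnError:&error]) {
        if (service.status == SMAppServiceStatusRequiresApproval) {
            // the user must approve the daemon in System Settings > Login Items
            [SMAppService openSystemSettingsLoginItems];
        }
        NSLog (@"SMAppService register failed: %@", error);
        return false;
    }
    return true;
}
On the security side, my understanding is that on macOS 13+ both ends of the XPC connection can pin the peer's code signature before the connection is used; the app can require the helper's identity, and the helper can do the same for the app in -listener:shouldAcceptNewConnection:. The requirement string below is only an example; substitute your own identifier and team ID:
// on the client, before [c resume]
[c setCodeSigningRequirement:
    @"anchor apple generic and identifier \"com.company.Helper\" and certificate leaf[subject.OU] = \"TEAMID1234\""];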
Posts under Objective-C tag
Hello,
I'm currently trying to make a collaborative app, but it only works in a RealityView; when I tried to use a CompositorLayer like below, the Personas disappeared.
ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}
Is there any potential solution to see Personas in a Metal view?
Thanks in advance!
I can compile this
#import <Cocoa/Cocoa.h>

@interface AppDelegate : NSObject <NSApplicationDelegate>
@property (strong) NSWindow *window;
@property (strong) NSSlider *slider;
@end

@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    // Window size
    NSRect frame = NSMakeRect(0, 0, 400, 300);
    NSUInteger style = NSWindowStyleMaskTitled |
                       NSWindowStyleMaskClosable |
                       NSWindowStyleMaskResizable;
    self.window = [[NSWindow alloc] initWithContentRect:frame
                                              styleMask:style
                                                backing:NSBackingStoreBuffered
                                                  defer:NO];
    [self.window setTitle:@"Centered Slider Example"];
    [self.window makeKeyAndOrderFront:nil];
    // Slider size
    CGFloat sliderWidth = 200;
    CGFloat sliderHeight = 32;
    CGFloat windowWidth = self.window.frame.size.width;
    CGFloat windowHeight = self.window.frame.size.height;
    CGFloat sliderX = (windowWidth - sliderWidth) / 2;
    CGFloat sliderY = (windowHeight - sliderHeight) / 2;
    self.slider = [[NSSlider alloc] initWithFrame:NSMakeRect(sliderX, sliderY, sliderWidth, sliderHeight)];
    [self.slider setMinValue:0];
    [self.slider setMaxValue:100];
    [self.slider setDoubleValue:50];
    [self.window.contentView addSubview:self.slider];
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        NSApplication *app = [NSApplication sharedApplication];
        AppDelegate *delegate = [[AppDelegate alloc] init];
        [app setDelegate:delegate];
        [app run];
    }
    return 0;
}
with
(base) johnzhou@Johns-MacBook-Pro liquidglasstest % clang -framework Foundation -framework AppKit testobjc.m
and get this neat liquid glass effect:
https://github.com/user-attachments/assets/4199493b-6011-4ad0-9c9f-25db8585e547
However if I use pyobjc to make an equivalent
import sys
from Cocoa import (
    NSApplication, NSApp, NSWindow, NSSlider, NSMakeRect,
    NSWindowStyleMaskTitled, NSWindowStyleMaskClosable,
    NSWindowStyleMaskResizable, NSBackingStoreBuffered,
    NSObject
)

class AppDelegate(NSObject):
    def applicationDidFinishLaunching_(self, notification):
        # Create the main window
        window_size = NSMakeRect(0, 0, 400, 300)
        style = NSWindowStyleMaskTitled | NSWindowStyleMaskClosable | NSWindowStyleMaskResizable
        self.window = NSWindow.alloc().initWithContentRect_styleMask_backing_defer_(
            window_size, style, NSBackingStoreBuffered, False
        )
        self.window.setTitle_("Centered Slider Example")
        self.window.makeKeyAndOrderFront_(None)
        # Slider size and positioning
        slider_width = 200
        slider_height = 32
        window_width = self.window.frame().size.width
        window_height = self.window.frame().size.height
        slider_x = (window_width - slider_width) / 2
        slider_y = (window_height - slider_height) / 2
        self.slider = NSSlider.alloc().initWithFrame_(NSMakeRect(slider_x, slider_y, slider_width, slider_height))
        self.slider.setMinValue_(0)
        self.slider.setMaxValue_(100)
        self.slider.setDoubleValue_(50)
        self.window.contentView().addSubview_(self.slider)

if __name__ == "__main__":
    app = NSApplication.sharedApplication()
    delegate = AppDelegate.alloc().init()
    app.setDelegate_(delegate)
    app.run()
I get a result shown at
https://github.com/user-attachments/assets/7da022bc-122b-491d-9e08-030dcb9337c3
which does not have the new liquid glass effect.
Why is this? Is it perhaps related to the requirement that you must compile with the latest Xcode, as indicated in the docs? Is the compiler doing some magic?
I keep getting the following error when trying to run Passkey sign in on macOS.
Told not to present authorization sheet: Error Domain=com.apple.AuthenticationServicesCore.AuthorizationError Code=1 "(null)"
ASAuthorizationController credential request failed with error: Error Domain=com.apple.AuthenticationServices.AuthorizationError Code=1004 "(null)"
This is the specific error.
Application with identifier a is not associated with domain b
I have configured the apple-app-site-association link and am using ?mode=developer.
Could there be any reason for this?
Topic: Privacy & Security (SubTopic: General)
Tags: macOS, Objective-C, Authentication Services, Passkeys in iCloud Keychain
Greetings,
With macOS 15 Sequoia, Apple changed key-value observing so that an observation left in place by a now-deallocated observer no longer causes an exception when the observed object triggers it.
While this is fundamentally a positive change, the problem is that the behaviour depends on the operating system version of the machine running the app, not on the OS or SDK version of the build machine.
Many of our customers use old operating system versions and old hardware, meaning they can't easily update. Since we need up-to-date Xcode versions to develop for newer systems, our developers tend to run rather new OS versions.
This means that we are increasingly facing bugs which only happen on customer systems but not on most developer (or QA) systems.
Currently we can still use earlier OS versions with Xcode to fix these bugs, but once the Xcode versions we use no longer support macOS 14 or below, this will become a major hurdle.
Are there known solutions to this issue?
We are currently attempting to swizzle observer adding and removal in order to fix the problem on old systems as well, but this has proven to be quite difficult and unstable. Since any weakly held property in the middle of an observation key path can cause crashes, one would have to split observations into multiple sub-observations, which is less simple than it sounds, due to custom implementations of addObserver (such as there seem to be in array controller proxies) and collection operators.
Thanks for any suggestions!
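One pattern that makes the behaviour identical on old and new systems, without swizzling, is to route each observation through a small token object that unregisters itself deterministically in dealloc, so an observation can never outlive its owner. A rough sketch, assuming ARC and that each observer holds its tokens strongly (the class name is hypothetical):
@interface KVOToken : NSObject
- (instancetype)initWithObject:(NSObject *)object
                       keyPath:(NSString *)keyPath
                       handler:(void (^)(NSDictionary *change))handler;
@end

@implementation KVOToken
{
    __weak NSObject* _object;          // the observed object
    NSString* _keyPath;
    void (^_handler)(NSDictionary *);
}

- (instancetype)initWithObject:(NSObject *)object keyPath:(NSString *)keyPath handler:(void (^)(NSDictionary *))handler
{
    if ((self = [super init])) {
        _object = object;
        _keyPath = [keyPath copy];
        _handler = [handler copy];
        [object addObserver:self forKeyPath:keyPath options:NSKeyValueObservingOptionNew context:NULL];
    }
    return self;
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if (_handler) _handler (change);
}

- (void)dealloc
{
    // Deterministic removal: the observation never survives the token,
    // so the outcome is the same on macOS 14 and macOS 15.
    [_object removeObserver:self forKeyPath:_keyPath];
}
@end
This doesn't address key paths that traverse weak properties, but it removes the class of bugs where an observer deallocates without unregistering.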
We're trying to implement a backup/restore data feature in our business productivity iPad app using UIDocumentPickerViewController and AppleArchive, but discovered odd behavior of [UIDocumentPickerViewController initForOpeningContentTypes: asCopy:YES] when reading large archive files from a USB drive.
We've duplicated this behavior with iPadOS 16.6.1 and 17.7 when building our app with Xcode 15.4 targeting minimum deployment of iPadOS 16. We haven't tested this with bleeding edge iPadOS 18.
Here's our Objective-C code which presents the picker:
NSArray* contentTypeArray = @[UTTypeAppleArchive];
UIDocumentPickerViewController* docPickerVC = [[UIDocumentPickerViewController alloc] initForOpeningContentTypes:contentTypeArray asCopy:YES];
docPickerVC.delegate = self;
docPickerVC.allowsMultipleSelection = NO;
docPickerVC.shouldShowFileExtensions = YES;
docPickerVC.modalPresentationStyle = UIModalPresentationPopover;
docPickerVC.popoverPresentationController.sourceView = self.view;
[self presentViewController:docPickerVC animated:YES completion:nil];
The UIDocumentPickerViewController remains visible until the selected external archive file has been copied from the USB drive to the app's local tmp sandbox. This may take several seconds due to the slow access speed of the USB drive. During this time the UIDocumentPickerViewController does NOT disable its tableview rows displaying files found on the USB drive. Even the most patient user will tap the desired filename a second (or third or fourth) time since the user's initial tap appears to have been ignored by UIDocumentPickerViewController, which lacks sufficient UI feedback showing it's busy copying the selected file.
When the user taps the file a second time, UIDocumentPickerViewController apparently begins to copy the archive file once again. The end result is a truncated copy of the selected file based on the time between taps. For instance, a 788 MB source archive may be copied as a 56 MB file. Here, the UIDocumentPickerDelegate receives a 56 MB file instead of the original 788 MB of data.
Not surprisingly, AppleArchive fails to decrypt the local copy of the archive because it's missing data. Instead of failing gracefully, AppleArchive crashes in AAArchiveStreamClose() (see forums post 765102 for details).
Does anyone know if there's a workaround for this strange behavior of UIDocumentPickerViewController?
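One workaround that sidesteps the in-picker copy entirely is to open the document without asCopy, let the picker dismiss immediately, and perform the copy yourself behind your own progress UI. A rough sketch, assuming the delegate method below lives in the presenting view controller and with error handling abbreviated:
NSArray* contentTypeArray = @[UTTypeAppleArchive];
UIDocumentPickerViewController* docPickerVC = [[UIDocumentPickerViewController alloc] initForOpeningContentTypes:contentTypeArray]; // no asCopy
docPickerVC.delegate = self;
[self presentViewController:docPickerVC animated:YES completion:nil];

- (void)documentPicker:(UIDocumentPickerViewController *)controller didPickDocumentsAtURLs:(NSArray<NSURL *> *)urls {
    NSURL* srcURL = urls.firstObject;
    if (![srcURL startAccessingSecurityScopedResource]) return;
    NSURL* dstURL = [[NSFileManager defaultManager].temporaryDirectory URLByAppendingPathComponent:srcURL.lastPathComponent];
    // Show our own progress indicator here, then copy (ideally off the main thread).
    NSFileCoordinator* coordinator = [[NSFileCoordinator alloc] init];
    NSError* coordinationError = nil;
    [coordinator coordinateReadingItemAtURL:srcURL options:NSFileCoordinatorReadingWithoutChanges error:&coordinationError byAccessor:^(NSURL* readURL) {
        NSError* copyError = nil;
        [[NSFileManager defaultManager] removeItemAtURL:dstURL error:nil];
        [[NSFileManager defaultManager] copyItemAtURL:readURL toURL:dstURL error:&copyError];
        // Compare the copied size against the source before handing it to AppleArchive.
    }];
    [srcURL stopAccessingSecurityScopedResource];
}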
I am working on an application to detect when an input audio device is being used. Basically, I want to know which application is using the microphone (built-in or external).
This app runs on macOS. For macOS versions starting with Sonoma I can use this code:
int getAudioProcessPID(AudioObjectID process)
{
    pid_t pid;
    if (@available(macOS 14.0, *)) {
        constexpr AudioObjectPropertyAddress prop {
            kAudioProcessPropertyPID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMain
        };
        UInt32 dataSize = sizeof(pid);
        OSStatus error = AudioObjectGetPropertyData(process, &prop, 0, nullptr, &dataSize, &pid);
        if (error != noErr) {
            return -1;
        }
    } else {
        // Pre sonoma code goes here
    }
    return pid;
}
which works.
However, kAudioProcessPropertyPID was added in macOS SDK 14.0.
Does anyone know how to achieve the same functionality on previous versions?
When the app launches, it assigns the correct string stored in NSUserDefaults to an instance variable (serverAddress) of type NSString.
Later, after the app has been suspended and then receives a VoIP push notification roughly 4 hours later (bringing it back to the background state), referencing serverAddress may show <NSMallocBlock: 0x106452d60>, even though it was never explicitly released. This does not occur every time.
The specific class definitions are as follows.
@implementation NewPAPEOPLEClient
{
    NSString* serverAddress;
    NSString* apiKey;
    dispatch_semaphore_t lock;
    NSMutableArray* errCallID;
}

#define PREF_STR(key) [[NSUserDefaults standardUserDefaults] stringForKey:(key)]
The global instance variable (serverAddress) is set with the following value when the application starts:
serverAddress = PREF_STR(@"APP_CONTACT_SERVICE_SERVER_ADDRESS")
Based on the above, please answer the following questions.
(1) If the instance variable's value becomes <NSMallocBlock: 0x106452d60>, does that mean the memory area has been released?
(2) Can the memory area of an instance variable be automatically released without being explicitly released?
Can an instance variable of type NSString that is not autoreleased be automatically released?
(3) Please tell me how to prevent the memory area from being automatically released. Will using retain prevent automatic release?
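For what it's worth, if this class is compiled without ARC, the symptom described above is the classic signature of a missing retain/copy: stringForKey: returns an object the caller does not own, so once the enclosing autorelease pool drains, the ivar points at recycled memory that may later look like an unrelated object such as an NSMallocBlock. A sketch of the usual MRC fix (only applicable if the file really is MRC; under ARC the assignment is retained automatically):
// take ownership at assignment
serverAddress = [PREF_STR(@"APP_CONTACT_SERVICE_SERVER_ADDRESS") copy];   // or retain

// and balance it when replacing the value, or in -dealloc
- (void)dealloc
{
    [serverAddress release];
    [apiKey release];
    [errCallID release];
    [super dealloc];
}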
Hi everyone,
I'm encountering a behaviour related to Secure Event Input on macOS and wanted to understand if it's expected or if there's a recommended approach to handle it.
Scenario:
App A enables secure input using EnableSecureEventInput().
While secure input is active, App A launches App B using NSWorkspace or similar.
App B launches successfully, but it does not receive foreground focus — it starts in the background.
The system retains focus on App A, seemingly to preserve the secure input session.
Observed Behavior:
From what I understand, macOS prevents app switching during Secure Event Input to avoid accidental or malicious focus stealing (e.g., to prevent UI spoofing during password entry). So:
Input focus remains locked to App A.
App B runs but cannot become frontmost unless the secure input session ends or App B is brought to the front by manual intervention or by running a terminal script.
This is consistent with system security behaviour, but it presents a challenge when App A needs to launch and hand off control to another app.
Questions:
Is this behaviour officially documented anywhere?
Is there a recommended pattern for safely transferring focus to another app while Secure Event Input is active?
Would calling DisableSecureEventInput() just before launching App B be the appropriate (and safe) solution? Or is there a better way to defer the transition?
Thanks in advance for any clarification or advice.
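For reference, a minimal sketch of the hand-off idea in the last question: end our own secure-input session (only if no password field is currently active) right before asking the system to launch and activate App B. The function and URL names below are placeholders:
#import <Carbon/Carbon.h>
#import <AppKit/AppKit.h>

static void launchAndActivateAppB(NSURL *appBURL)
{
    // Balance our earlier EnableSecureEventInput() so focus is no longer pinned to App A.
    if (IsSecureEventInputEnabled()) {
        DisableSecureEventInput();
    }

    NSWorkspaceOpenConfiguration *config = [NSWorkspaceOpenConfiguration configuration];
    config.activates = YES;   // request that App B become frontmost
    [[NSWorkspace sharedWorkspace] openApplicationAtURL:appBURL
                                          configuration:config
                                      completionHandler:^(NSRunningApplication *app, NSError *error) {
        // Re-enable secure input here if App A still needs it afterwards.
    }];
}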
I have a standard Mac project created using Xcode templates with Storyboard. I manually removed the window from the Storyboard file and use ViewController.h as the base class for other ViewControllers. I create new windows and ViewControllers programmatically, implementing the UI entirely through code.
However, I've discovered that on all macOS versions prior to macOS 26 beta, the application would trigger automatic termination due to App Nap under certain circumstances. The specific steps are:
Close the application window (the application doesn't exit immediately at this point)
Click on other windows or the desktop, causing the application to lose focus (though this description might not be entirely accurate - the app might have already lost focus when the window closed, but the top menu bar still shows the current application immediately after window closure)
The application automatically terminates
In previous versions, I worked around this issue by manually executing [[NSApplication sharedApplication] run]; to create an additional main event loop, keeping the application active (although I understand this isn't an ideal approach).
However, in macOS 26 beta, I've found that calling [[NSApplication sharedApplication] run]; causes all NSButton controls I created in the window to become unresponsive to click events, though they still respond to mouse enter events.
Through recent research, I've learned that the best practice for preventing App Nap automatic termination is to use [[NSProcessInfo processInfo] disableAutomaticTermination:@"User interaction required"]; to increment the automatic termination reference count.
My questions are:
Why did calling [[NSApplication sharedApplication] run]; work properly on macOS versions prior to 15.6.1?
Why does calling [[NSApplication sharedApplication] run]; in macOS 26 beta cause buttons to become unresponsive to click events?
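For completeness, a minimal sketch of the NSProcessInfo approach mentioned above (the reason strings are arbitrary, and each disable must be balanced by a matching enable):
// While a programmatically created window is open, opt out of automatic termination.
[[NSProcessInfo processInfo] disableAutomaticTermination:@"Programmatic window is open"];
// ... later, when the window closes ...
[[NSProcessInfo processInfo] enableAutomaticTermination:@"Programmatic window is open"];

// Alternatively, an activity token also covers App Nap and sudden termination:
id<NSObject> activityToken = [[NSProcessInfo processInfo] beginActivityWithOptions:NSActivityUserInitiated
                                                                            reason:@"User interaction required"];
// ... later ...
[[NSProcessInfo processInfo] endActivity:activityToken];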
Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;
if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
    return;
}
audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];
/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
    ags_audio_unit_manager_load_component(audio_unit_manager,
                                          (gpointer) av_component_arr[i]);
}
/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;
av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];
i_stop = [av_component_arr count];
for(i = 0; i < i_stop; i++){
    ags_audio_unit_manager_load_component(audio_unit_manager,
                                          (gpointer) av_component_arr[i]);
}
But this doesn't show me Audio Unit v2 plugins. Why?
regards, Joël
Hi,
I just started to develop audio unit hosting support in my application.
Offline rendering seems to work, except that I hear no output. But why?
I suspect something goes wrong with the player.
I connect to CoreAudio in a different location in the code.
Here are some error messages I faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine based AudioUnit host. Here I instantiate player and effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;
av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;
/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];
/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;
/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;
/* output node */
[[AVAudioOutputNode alloc] init];
/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];
[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];
ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                 format:av_format
                      maximumFrameCount:buffer_size
                                  error:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
    g_warning("enable manual rendering mode error - %d", [ns_error code]);
}
ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
    g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
    g_warning("error during audio engine start - %d", [ns_error code]);
}
[av_audio_sequencer prepareToPlay];
ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];
if(ns_error != NULL &&
   [ns_error code] != noErr){
    g_warning("error during audio sequencer start - %d", [ns_error code]);
}
[av_audio_player_node play];
while(is_running){
    /* pre sync */
    /* IO buffers */
    av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
    av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;
    /* fill input buffer */
    /* schedule av input buffer */
    frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;
    av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;
    AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];
    [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];
    /* render */
    ns_error = NULL;
    status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];
    if(ns_error != NULL &&
       [ns_error code] != noErr){
        g_warning("render offline error - %d", [ns_error code]);
    }
}
regards, Joël
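One note on the silence: in manual (offline) rendering mode the engine is deliberately disconnected from the audio hardware, so nothing is ever audible; the blocks returned by renderOffline: have to be written somewhere yourself, for example to a file. A rough sketch of that last step, reusing the variables from the post above (the output path is a placeholder):
/* create a file matching the engine's manual rendering format */
NSError *file_error = NULL;
AVAudioFile *output_file = [[AVAudioFile alloc] initForWriting:[NSURL fileURLWithPath:@"/tmp/render.caf"]
                                                      settings:[[audio_engine manualRenderingFormat] settings]
                                                         error:&file_error];

/* inside the render loop */
AVAudioEngineManualRenderingStatus render_status;

render_status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];

if(render_status == AVAudioEngineManualRenderingStatusSuccess){
    [output_file writeFromBuffer:av_output_buffer error:&file_error];
}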
The Apple documentation for [UIViewController willMoveToParentViewController:] states,
Called just before the view controller is added or removed from a container view controller.
Hence, the view controller being popped should appear at the top of the navStack when willMoveToParentViewController: is called.
In iPadOS 16 and 17 the view controller being popped appears at the top of the navStack when willMoveToParentViewController: is called.
In iPadOS 18 the view controller being popped has already been (incorrectly) removed from the navStack. We confirmed this bug using iPadOS 18.2 as well as 18.0.
Our app relies upon the contents of the navStack within willMoveToParentViewController: to track the user's location within a folder hierarchy.
Our app works smoothly on iPadOS 16 and 17. Conversely, our customers running iPadOS 18 have lost client data and corrupted their data folders as a result of this bug! These customers are angry -- not surprisingly, some have asked for refunds and submitted negative app reviews.
Why doesn't willMoveToParentViewController: provide the correct navStack in iPadOS 18.2?
Hi,
I am curious whether hyperthreading is enabled or disabled on my MacBook Pro M1 or M4. How do I figure that out?
I am using macOS 15.5.
Further, I develop a multi-threaded audio sequencer that creates threads per instrument. I use vector operations to increase performance.
I noticed that lowering the synchronization rate from 250 Hz to 60 Hz gives additional performance advantages.
How do I programmatically check whether hyperthreading is enabled or disabled, and how would I enable or disable it programmatically?
After some research I found sysctl() and nvram SMTDisable=%01.
https://support.apple.com/en-us/101870
Can anyone provide me an Objective C example?
regards, Joël
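A minimal sketch of the sysctl route follows. Note that Apple silicon chips such as the M1 and M4 do not implement SMT at all, so the two counts will normally be equal there, and the nvram SMTDisable setting from the linked article is a boot-time firmware option for Intel Macs, not something an app can toggle at runtime:
#import <Foundation/Foundation.h>
#include <sys/sysctl.h>

// SMT/hyperthreading is active when there are more logical than physical cores.
static BOOL isHyperThreadingActive(void)
{
    int physical = 0;
    int logical = 0;
    size_t size = sizeof(int);

    sysctlbyname("hw.physicalcpu", &physical, &size, NULL, 0);
    size = sizeof(int);
    sysctlbyname("hw.logicalcpu", &logical, &size, NULL, 0);

    return (physical > 0 && logical > physical);
}

int main(void)
{
    @autoreleasepool {
        NSLog(@"hyperthreading active: %@", isHyperThreadingActive() ? @"YES" : @"NO");
    }
    return 0;
}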
Noob Software has been working on a new programming language designed primarily for Mac development, with a syntax similar to PHP and some aspects taken from JavaScript. To demonstrate this language I've released the code for one of my apps, "Noob Music":
https://github.com/noobsoftware/NoobMusic3
I would like to hear if people are interested in using this programming language. I think there are many benefits to the high-level syntax and high-level thinking, and to not having to define data types and such.
The language also supports multithreading using the "async" keyword prefixed in front of the function keyword, like you would do in JavaScript, only with true multithreading instead of interleaved processing.
Pushing to an array is thread-safe, as is some other functionality, and it is recommended to use a new class instance within each async function.
I have relied heavily on webviews for UI, so there is a layout engine I have developed for native view elements in Cocoa, which at this point are primarily WebViews. The webviews can define a callback function to receive messages from JavaScript, so combining these methods you get a full-fledged "web" development feeling for Mac development, with HTML+CSS+JavaScript and NoobScript.
The Objective-C code using the kernel API ‘sysctlbyname’ for ‘kern.osproductversion’ returns 16.0 instead of 26.0 on macOS Tahoe.
sysctlbyname("kern.osproductversion", version, &size, NULL, 0)
The command ‘sysctl kern.osproductversion’ returns ‘kern.osproductversion: 26.0’ on same macOS Tahoe.
Note: The Objective-C code was built using Xcode 16.3.
My framework has private Objective-C API that is only used within the framework. It should not be exposed in the public interface (so it shouldn't be imported in the umbrella header).
To expose this API to Swift within the framework only, the documentation seems to indicate that it needs to be imported in the umbrella header?
Import Code Within a Framework Target
To use the Objective-C declarations in files in the same framework target as your Swift code, configure an umbrella header as follows:
1. Under Build Settings, in Packaging, make sure the Defines Module setting for the framework target is set to Yes.
2. In the umbrella header, import every Objective-C header you want to expose to Swift.
Swift sees every header you expose publicly in your umbrella header. The contents of the Objective-C files in that framework are automatically available from any Swift file within that framework target, with no import statements. Use classes and other declarations from your Objective-C code with the same Swift syntax you use for system classes.
I would imagine that there must be a way to do this?
Good day!
I have a long-term project ported all the way up from old Think C through many versions of Xcode. Its source files are encoded in "Western (Mac OS Roman)".
Some of my error messages have characters outside the straight ASCII character set (e.g. "å"). The editor correctly displays these, but I get plenty of Illegal Character warnings and the messages do not display properly.
I imagine there's a way to have separate files of localized text for internationalized applications, but I am the only end-user of this application, and it used to just plain work in earlier Xcode versions. Furthermore, there must be developers throughout Europe who use such characters in string literals, just typing in their native languages, straight off their keyboards.
I was thinking that there must be a Clang setting or something, but have been unable to find it, and an internet search turns up no solution except to cumbersomely escape each individual character. I can't imagine that a French programmer does that every time they want to type "è", "é", or "à"!
Any help? (Disclaimer: I'm an English speaker and only use such characters whimsically, but want to keep them for legacy's sake.)
Thanks....
p.s. using Xcode 15.3, and under Settings->Text Editing->Editing, "Western (Mac OS Roman)" is already selected as the default text encoding with "Convert existing files on save" checked.
When my app enters the background, I start a background task, and when expiration happens, I end the background task. The code looks like below:
backgroundTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (backgroundTask != UIBackgroundTaskInvalid) {
            [[UIApplication sharedApplication] endBackgroundTask:backgroundTask];
            backgroundTask = UIBackgroundTaskInvalid;
            [self cancel];
        }
    });
}];
When the breakpoint is triggered at the endBackgroundTask line, I also get the following log:
[BackgroundTask] Background task still not ended after expiration handlers were called: <UIBackgroundTaskInfo: 0x282d7ab40>: taskID = 36, taskName = Called by MyApp, from MyMethod, creationTime = 892832 (elapsed = 26). This app will likely be terminated by the system. Call UIApplication.endBackgroundTask(:) to avoid this.
The log doesn't appear every time, so why is that? Is there something wrong with my code?
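For comparison, here is a sketch that ends the task synchronously inside the expiration handler. The handler is already invoked on the main thread shortly before the deadline, so bouncing through dispatch_async can let the deadline pass before endBackgroundTask: actually runs, which is what that log complains about:
backgroundTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    // Called on the main thread just before time runs out: end the task right here.
    [self cancel];
    if (backgroundTask != UIBackgroundTaskInvalid) {
        [[UIApplication sharedApplication] endBackgroundTask:backgroundTask];
        backgroundTask = UIBackgroundTaskInvalid;
    }
}];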
Good day, ladies and gents.
I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.)
I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice.
Here's the code used to set up the AudioUnit:
-(NSString*) configureAU
{
AudioComponent component = NULL;
AudioComponentDescription description;
OSStatus err = noErr;
UInt32 param;
AURenderCallbackStruct callback;
if( audioUnit ) { AudioComponentInstanceDispose( audioUnit ); audioUnit = NULL; } // was CloseComponent
// Open the AudioOutputUnit
description.componentType = kAudioUnitType_Output;
description.componentSubType = kAudioUnitSubType_HALOutput;
description.componentManufacturer = kAudioUnitManufacturer_Apple;
description.componentFlags = 0;
description.componentFlagsMask = 0;
if( component = AudioComponentFindNext( NULL, &description ) )
{
err = AudioComponentInstanceNew( component, &audioUnit );
if( err != noErr ) { audioUnit = NULL; return [ NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err] ; }
}
// Configure the AudioOutputUnit:
// You must enable the Audio Unit (AUHAL) for input and output for the same device.
// When using AudioUnitSetProperty the 4th parameter in the method refers to an AudioUnitElement.
// When using an AudioOutputUnit for input the element will be '1' and the output element will be '0'.
param = 1; // Enable input on the AUHAL
err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) ); chkerr("Couldn't set first EnableIO prop (enable inpjt) (ID=%d)");
param = 0; // Disable output on the AUHAL
err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) ); chkerr("Couldn't set second EnableIO property on the audio unit (disable ootpjt) (ID=%d)");
param = sizeof(AudioDeviceID); // Select the default input device
AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
chkerr("Couldn't get default input device (ID=%d)");
// Set the current device to the default input unit
err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");
callback.inputProc = AudioInputProc; // Setup render callback, to be called when the AUHAL has input data
callback.inputProcRefCon = self;
err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
chkerr("Could not install render callback on our AudioUnit (ID=%d)");
param = sizeof(AudioStreamBasicDescription); // get hardware device format
err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
chkerr("Could not install render callback on our AudioUnit (ID=%d)");
audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 ); // Twiddle the format to our liking
actualOutputFormat.mChannelsPerFrame = audioChannels;
actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
actualOutputFormat.mFramesPerPacket = 1;
actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;
// Set the AudioOutputUnit output data format
err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription));
chkerr("Could not change the stream format of the output device (ID=%d)");
param = sizeof(UInt32); // Get the number of frames in the IO buffer(s)
err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
chkerr("Could not determine audio sample size (ID=%d)");
err = AudioUnitInitialize( audioUnit ); // Initialize the AU
chkerr("Could not initialize the AudioUnit (ID=%d)");
// Allocate our audio buffers
audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame size: audioSamples * actualOutputFormat.mBytesPerFrame];
if( audioBuffer == NULL ) { [ self cleanUp ]; return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err]; }
return nil;
}
(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.)
Thanks for your attention! ==Dave
[p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?]
{pps: of course, the code lines up prettier in a monospaced font!}
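On the output-capture question at the top: one route on recent macOS versions (13 and later) is ScreenCaptureKit's audio capture, which delivers the system's output audio as sample buffers and is separate from the AUHAL microphone path above. A very rough sketch, assuming the app has been granted screen/audio recording permission and that audioTapOutput is some object of yours conforming to SCStreamOutput:
#import <ScreenCaptureKit/ScreenCaptureKit.h>

[SCShareableContent getShareableContentWithCompletionHandler:^(SCShareableContent *content, NSError *error) {
    if (error != nil || content.displays.count == 0) return;

    SCContentFilter *filter = [[SCContentFilter alloc] initWithDisplay:content.displays.firstObject
                                                      excludingWindows:@[]];
    SCStreamConfiguration *config = [[SCStreamConfiguration alloc] init];
    config.capturesAudio = YES;                 // system output audio
    config.excludesCurrentProcessAudio = YES;   // don't record our own playback

    SCStream *stream = [[SCStream alloc] initWithFilter:filter configuration:config delegate:nil];
    NSError *outputError = nil;
    [stream addStreamOutput:audioTapOutput      // id<SCStreamOutput>, hypothetical object of yours
                       type:SCStreamOutputTypeAudio
         sampleHandlerQueue:dispatch_get_main_queue()
                      error:&outputError];
    [stream startCaptureWithCompletionHandler:^(NSError *startError) { /* handle errors */ }];
}];

// audioTapOutput then receives
//   - (void)stream:(SCStream *)s didOutputSampleBuffer:(CMSampleBufferRef)buffer ofType:(SCStreamOutputType)type
// with the captured system audio.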