So I was trying to use an NSArrayController to bind the contents of a property. First I tried using NSDictionary, and it worked great. Here's what I did:
@interface ViewController : NSViewController
@property IBOutlet NSArrayController *tableCities;
@end
...
@implementation ViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    NSString *filePath = @"/tmp/city_test.jpeg";
    NSDictionary *obj = @{@"image": [[NSImage alloc] initByReferencingFile:filePath],
                          @"name": @"NYC",
                          @"filePath": filePath};
    NSDictionary *obj2 = @{@"image": [[NSImage alloc] initByReferencingFile:filePath],
                           @"name": @"Chicago",
                           @"filePath": filePath};
    NSDictionary *obj3 = @{@"image": [[NSImage alloc] initByReferencingFile:filePath],
                           @"name": @"Little Rock",
                           @"filePath": filePath};
    [_tableCities addObjects:@[obj, obj2, obj3]];
}
@end
Now for an NSPopUpButton, binding the Controller Key to the array controller and the Model Key Path to "name" works perfectly, and the popup button shows the cities as I expected.
But now, instead of using an NSDictionary I wanted to use a custom class for these cities along with an NSMutableArray which holds the objects of this custom class. I'm having some trouble going about this.
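A minimal sketch of the custom-class direction (the class and property names here are placeholders of mine, not from any sample): Cocoa bindings read values through key-value coding, so a plain NSObject subclass with declared properties can stand in for the dictionaries, provided the property names match the Model Key Paths set in Interface Builder.

// Hypothetical City model; bindings access it through KVC just like a dictionary.
@interface City : NSObject
@property (nonatomic, copy) NSString *name;
@property (nonatomic, copy) NSString *filePath;
@property (nonatomic, strong) NSImage *image;
@end

@implementation City
@end

// In viewDidLoad, in place of the NSDictionary literals:
City *city = [[City alloc] init];
city.name = @"NYC";
city.filePath = filePath;
city.image = [[NSImage alloc] initByReferencingFile:filePath];
[_tableCities addObject:city];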
There are different microphones that can be connected via a 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same for all of them.
There is code that gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session; while the session runs, the microphone-in-use icon is displayed. The capture of the audio device (microphone) continues for a few seconds, then the session stops and the icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start an audio capture session, and the icon is displayed again. After a few seconds, access to the microphone stops, the audio capture session stops, and the icon disappears.
Next, we perform the same actions, but after the first session stops we pull the microphone plug out of the connector and insert it back before starting the second session. In this case the second access attempt begins and the running program returns no errors, but the microphone-in-use icon is not displayed, and this is the problem. After the program is quit and restarted, the icon is displayed again.
The missing icon is only the tip of the iceberg: after reconnecting the microphone, it is not possible to record sound from it until the program is restarted.
Is this normal behavior for the AVFoundation framework? Is it possible to make access to the microphone work correctly after it is reconnected, so that the usage indicator is displayed? What additional steps should the programmer take in this case? Is this behavior documented anywhere?
Below is the code to demonstrate the described behavior.
I am also attaching an example of the microphone usage indicator icon.
Computer: MacBook Pro 13-inch 2020, Intel Core i7, macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>
AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;
std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;
void receiveResponse()
{
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse()
{
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}
void requestPermissions()
{
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted)
    {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}
void timer(int timeSec)
{
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining)
    {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
bool updateAudioInput()
{
    [m_captureSession beginConfiguration];
    if (m_audioOutput)
    {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }
    if (m_audioInput)
    {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }
    AVCaptureDevice *audioInputDevice = [AVCaptureDevice deviceWithUniqueID:[NSString stringWithUTF8String:"BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice)
    {
        std::cout << "Error creating input audio device" << std::endl;
        return false;
    }
    // The error must start out nil; allocating an NSError up front would make
    // the check below fire even on success.
    NSError *error = nil;
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error)
    {
        const auto code = [error code];
        const auto domain = [error domain];
        const char *domainC = domain ? [domain UTF8String] : nullptr;
        std::cout << code << " " << (domainC ? domainC : "<no domain>") << std::endl;
    }
    if (m_audioInput && [m_captureSession canAddInput:m_audioInput])
    {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    }
    else
    {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }
    if (!m_audioOutput)
    {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput])
        {
            [m_captureSession addOutput:m_audioOutput];
        }
        else
        {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }
    [m_captureSession commitConfiguration];
    return true;
}
void start()
{
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult)
    {
        std::cout << "Error while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop()
{
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}
int main()
{
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
For a personal project, I have been writing a library to interface the CoreBluetooth API with the Go language. Because of what it does, that library is written in Objective-C with ARC disabled.
One function I had to write takes a memory buffer (allocated with malloc() on the Go side), creates an NSData object out of it to pass the data to the writeValue:forCharacteristic:type: CoreBluetooth method, and then releases the NSData as well as the original memory buffer:
void Write(CBPeripheral *p, CBCharacteristic *c, void *bytes, int len) {
    NSData *data = [NSData dataWithBytesNoCopy:bytes length:len freeWhenDone:true];
    [p writeValue:data forCharacteristic:c type:CBCharacteristicWriteWithoutResponse];
    [data release];
}
One thing I noticed is that the retainCount for data increases during the writeValue:forCharacteristic:type: API call. It is 1 before, and 2 after. This is surprising to me, because the documentation says "This method copies the data passed into the data parameter, and you can dispose of it after the method returns."
I suspect this results in a memory leak. Am I missing something here?
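For what it's worth, here is a small diagnostic sketch of the explanation that would make the numbers add up (my reasoning, not something the documentation confirms): under MRC, dataWithBytesNoCopy:length:freeWhenDone: returns an object the caller does not own, and an extra retain taken inside writeValue:forCharacteristic:type: is typically paired with an autorelease, so it balances out when the enclosing pool drains rather than right away. -retainCount is unreliable for exactly this reason.

void WriteChecked(CBPeripheral *p, CBCharacteristic *c, void *bytes, int len) {
    @autoreleasepool {
        // We do not own this object (no alloc/new/copy in the method name),
        // so the pool, not an explicit release, disposes of it.
        NSData *data = [NSData dataWithBytesNoCopy:bytes length:len freeWhenDone:true];
        NSLog(@"retainCount before write: %lu", (unsigned long)[data retainCount]);
        [p writeValue:data forCharacteristic:c type:CBCharacteristicWriteWithoutResponse];
        NSLog(@"retainCount after write: %lu", (unsigned long)[data retainCount]);
    } // any internal retain/autorelease pairs taken on `data` balance out here
}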
I have an NSMutableArray defined as a property in my ViewController class:
@interface ViewController ()
@property NSMutableArray *tableCities;
@end
When the "Add" button is clicked, a new city is added:
NSString *filePath = @"/tmp/city_test.jpeg";
NSDictionary *obj = @{@"image": [[NSImage alloc] initByReferencingFile:filePath],
                      @"name": @"testCity",
                      @"filePath": filePath};
[_tableCities addObject:obj];
Okay, this is fine; it adds a new element to the NSMutableArray. Now what I want is to somehow bind the "name" field of each city to my NSPopUpButton.
So in the storyboard I created an NSArrayController and tried to set its "Model Key Path" to my NSMutableArray as shown below:
Then I tried to bind the NSPopUpButton to the NSArrayController as follows:
But it doesn't work either: a city is added, but the NSPopUpButton isn't displaying anything. What gives?
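One detail worth checking, given how bindings observe arrays (a minimal sketch; the cityArrayController outlet name is my own assumption): mutating the NSMutableArray directly with addObject: does not emit KVO notifications, so the array controller never hears about the change. Going through mutableArrayValueForKey:, or adding through the array controller itself, keeps the binding informed.

// KVO-compliant insertion: this proxy array emits the change notifications
// that the NSArrayController (and the NSPopUpButton binding) relies on.
[[self mutableArrayValueForKey:@"tableCities"] addObject:obj];

// Alternatively, insert through the array controller outlet, if one is connected:
// [self.cityArrayController addObject:obj];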
In genealogy software, I have an NSView that displays a family tree, where each person is in a subview. Before Sequoia, the subviews drew in the order I added them to the parent NSView. For some reason, Sequoia calls drawRect: in reverse order.
For example, a tree with cells for son, father, and mother used to be drawn in that order, but Sequoia draws them as mother, father, and son.
For static cell placement it would not matter, but these trees dynamically adjust as users change data or expand and contract branches from the tree. Drawing them in the wrong order messes up all the logic and many trees are corrupted.
Has the new drawing order been implemented for some reason and can it be changed? Is it documented?
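For context on what a fix might look like, here is a minimal sketch of imposing an explicit subview order instead of relying on traversal order (my own idea, assuming ARC; sortSubviewsUsingFunction:context: is a documented NSView reordering hook, but I have not verified it cures this):

static NSComparisonResult compareCellOrder(NSView *a, NSView *b, void *context) {
    // desiredOrder is the array of cells in the order they were added.
    NSArray *desiredOrder = (__bridge NSArray *)context;
    NSUInteger ia = [desiredOrder indexOfObjectIdenticalTo:a];
    NSUInteger ib = [desiredOrder indexOfObjectIdenticalTo:b];
    if (ia < ib) return NSOrderedAscending;
    if (ia > ib) return NSOrderedDescending;
    return NSOrderedSame;
}

// After adding or rearranging cells:
[treeView sortSubviewsUsingFunction:compareCellOrder context:(__bridge void *)desiredOrder];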
underlying Objective-C module 'FirebaseSharedSwift' not found
aymodazhnyneylcscdggrsgjocui/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/FirebaseSharedSwift.build/Objects-normal/x86_64/FirebaseSharedSwift.private.swiftinterface:5:19: underlying Objective-C module 'FirebaseSharedSwift' not found
Command SwiftCompile failed with a nonzero exit code
Hello everyone,
I'm working on an iOS application using Objective-C and UITabBarController. My app has more than 5 tabs, so the additional tabs are placed under the "More" tab. However, I've encountered an issue specific to iOS 18 where the first item in the "More" tab does not show up properly. This issue does not occur in iOS 17 or earlier versions.
Here's my setup method:
- (void)mainTabbarSetUp {
    NSMutableArray *tabItemArray = [NSMutableArray array];
    UIViewController *viewController1, *viewController2, *viewController3, *viewController4, *viewController5, *viewController6, *viewController7;
    UINavigationController *navviewController1, *navviewController2, *navviewController3, *navviewController4, *navviewController5, *navviewController6, *navviewController7;

    viewController1 = [[UIViewController alloc] init];
    navviewController1 = [[UINavigationController alloc] initWithRootViewController:viewController1];
    navviewController1.tabBarItem.title = @"Watch List";
    navviewController1.tabBarItem.image = [UIImage imageNamed:@"tab_icn_watchlist"];
    [tabItemArray addObject:navviewController1];

    // Similarly adding other view controllers...

    viewController6 = [[UIViewController alloc] init];
    navviewController6 = [[UINavigationController alloc] initWithRootViewController:viewController6];
    navviewController6.tabBarItem.title = @"Cancelled";
    navviewController6.tabBarItem.image = [UIImage imageNamed:@"tab_icn_cancelled"];
    [tabItemArray addObject:navviewController6];

    self.mainTabBarController.viewControllers = tabItemArray;
}
What I've Tried:
Verified that each view controller is correctly initialized and assigned to a UINavigationController before being added to the tab array.
Logged the contents of the moreNavigationController to confirm that it contains the correct view controllers.
Tested by reducing the number of view controllers to less than 5, and the issue does not occur.
Ensured that all UINavigationControllers are configured consistently (e.g., translucency, bar style, etc.).
My view controller has this property:
@property NSMutableArray *tableCities;
Whenever I press a button, a new city object is added to this array; it consists of a dictionary of NSStrings containing the name and state of the city being added.
Now, I have an NSPopUpButton that I'd like to bind to the city name of all the cities in this NSMutableArray. How exactly can I achieve this with the Bindings Inspector?
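For reference, the same wiring can be written in code, which spells out what the Bindings Inspector needs to end up with (a sketch; the outlet names below are assumptions of mine): the popup's Content Values bound to the array controller's arrangedObjects.name, and the controller's Content Array bound to the view controller's tableCities key.

// Programmatic equivalent of the Bindings Inspector setup (sketch; assumes
// cityArrayController and cityPopUpButton outlets exist).
[self.cityArrayController bind:NSContentArrayBinding
                      toObject:self
                   withKeyPath:@"tableCities"
                       options:nil];
[self.cityPopUpButton bind:NSContentValuesBinding
                  toObject:self.cityArrayController
               withKeyPath:@"arrangedObjects.name"
                   options:nil];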
I kinda understand the MVC thing: there's an array as a model, a tableview as a control, and the interface as a control (right?)
So, like, I drag an array controller onto the form display thing, assign a delegate to it in the view and app module/header, add data to it, and the tableview updates after I add a function to it? In VS the events are in the event browser; do I just copy and paste the code from the tutorial?
How do I add controls and do all the advanced stuff? The documentation on developer.apple.com isn't that detailed, and most books are outdated or N/A since it's Objective-C.
Tutorials would be nice; some home-grown method would be cool too.
Thank you
For a Cocoa project I have this table view controller:
@interface TableViewController : NSObject <NSTableViewDataSource, NSTableViewDelegate>
@property (nonatomic, strong) NSMutableArray *numbers;
@property (nonatomic, strong) NSMutableArray *letters;
@end
#import "TableViewController.h"
@implementation TableViewController
- (NSMutableArray *)numbers
{
    if (!_numbers)
    {
        _numbers = [NSMutableArray arrayWithArray:@[@"1", @"2", @"3", @"4", @"5", @"6", @"7", @"8", @"9", @"10"]];
    }
    return _numbers;
}

- (NSMutableArray *)letters
{
    if (!_letters)
    {
        _letters = [NSMutableArray arrayWithArray:@[@"a", @"b", @"c", @"d", @"e", @"f", @"g", @"h", @"i", @"j"]];
    }
    return _letters;
}

// NSTableViewDataSource Protocol Method
- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView
{
    return self.numbers.count;
}

// NSTableViewDelegate Protocol Method
- (NSView *)tableView:(NSTableView *)tableView viewForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row
{
    NSString *identifier = tableColumn.identifier;
    NSTableCellView *cell = [tableView makeViewWithIdentifier:identifier owner:self];
    if ([identifier isEqualToString:@"numbers"])
    {
        cell.textField.stringValue = [self.numbers objectAtIndex:row];
    }
    else
    {
        cell.textField.stringValue = [self.letters objectAtIndex:row];
    }
    return cell;
}
In the storyboard I created an NSTableView and set up the connections; after running it I get what I expected:
Okay, cool, but I have a question:
Why is this working even though I haven't created an NSTableView as a property for my ViewController or for my TableViewController?
Also, there's another problem. Now I want to add a button to this window such that, every time I click it, a new item is added to this table (for now it can be anything).
So I created a button and a method for this and linked the action to the storyboard:
- (IBAction)addNewNumber:(id)sender
{
    [_numbers addObject:@"1234"];
    [_letters addObject:@"testing"];
}
Now, every time I click this button I can see that the method is indeed being called and that it IS adding a new member to these arrays. The thing is, the table is not being updated. What gives? Any advice on how I can fix this behavior?
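A minimal sketch of the usual fix (the tableView outlet here is an assumption of mine, not something shown above): the table doesn't observe the arrays, so after mutating them you have to ask it to re-query its data source.

// Assumes an IBOutlet to the table view has been connected in the storyboard.
- (IBAction)addNewNumber:(id)sender
{
    [_numbers addObject:@"1234"];
    [_letters addObject:@"testing"];
    [self.tableView reloadData]; // re-runs numberOfRowsInTableView: and the cell callbacks
}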
I understand we can use an MPSImageBatch as input to the [MPSNNGraph encodeBatchToCommandBuffer: ...] method.
That being said, all inputs to the MPSNNGraph need to be encapsulated in MPSImage(s).
Suppose I have a machine learning application that trains/infers on thousands of inputs, where each input has 4 feature channels, and Metal Performance Shaders is chosen as the primary AI backbone for real-time use.
Due to the nature of the encodeBatchToCommandBuffer method, I will have to first create an MTLTexture as a 2D texture array. The texture has a pixel width of 1, a height of 1, and a pixel format of RGBA32Float.
The general setup will be:
#define NumInputDims 4

MPSImageBatch *infBatch = @[];
const uint32_t totalFeatureSets = N;
// Each slice is 4 (RGBA) channels.
const uint32_t totalSlices = (totalFeatureSets * NumInputDims + 3) / 4;

MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA32Float
                                                       width:1
                                                      height:1
                                                   mipmapped:NO];
descriptor.textureType = MTLTextureType2DArray;
descriptor.arrayLength = totalSlices;
id<MTLTexture> texture = [mDevice newTextureWithDescriptor:descriptor];

// Bytes per row is `4 * sizeof(float)` since we're doing one pixel of RGBA32F.
[texture replaceRegion:MTLRegionMake3D(0, 0, 0, 1, 1, totalSlices)
           mipmapLevel:0
             withBytes:inputFeatureBuffers[0].data()
           bytesPerRow:4 * sizeof(float)];

MPSImage *infQueryImage = [[MPSImage alloc] initWithTexture:texture
                                            featureChannels:NumInputDims];
infBatch = [infBatch arrayByAddingObject:infQueryImage];
The training/inference will be:
MPSNNGraph *mInferenceGraph = /* some MPSNNGraph setup */;
MPSImageBatch *returnImage = [mInferenceGraph encodeBatchToCommandBuffer:commandBuffer
                                                            sourceImages:@[infBatch]
                                                            sourceStates:nil
                                                      intermediateImages:nil
                                                       destinationStates:nil];
// Commit and wait...
// Read the return image for the inferred result.
As you can see, the setup is really ad hoc - a lot of 1x1 pixels just for this sole purpose.
Is there any better way I can achieve the same result while still on Metal Performance Shaders? I guess a further question would be: can MPS handle general machine learning cases other than CNNs? I can see the APIs revolve around convolutional networks, both in the online documentation and in the header files.
Any response will be helpful, thank you.
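To make the alternative concrete, here is a hedged sketch (my own, based only on the public MPSMatrix API, and not verified against MPSNNGraph interop) of holding the feature vectors in an MPSMatrix instead of N 1x1 images:

// Sketch: totalFeatureSets feature vectors of NumInputDims floats in one
// MPSMatrix backed by a shared MTLBuffer. `device` and the flat float vector
// `features` are assumed to exist.
size_t rowBytes = NumInputDims * sizeof(float); // tightly packed rows
MPSMatrixDescriptor *desc =
    [MPSMatrixDescriptor matrixDescriptorWithRows:totalFeatureSets
                                          columns:NumInputDims
                                         rowBytes:rowBytes
                                         dataType:MPSDataTypeFloat32];
id<MTLBuffer> buffer = [device newBufferWithBytes:features.data()
                                           length:rowBytes * totalFeatureSets
                                          options:MTLResourceStorageModeShared];
MPSMatrix *inputMatrix = [[MPSMatrix alloc] initWithBuffer:buffer descriptor:desc];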
I have a Cocoa application and I'm trying to set up an NSImageView without using a storyboard (using a storyboard works just fine, tho).
Also, I'm kinda new to Cocoa, so any advice is appreciated.
Anyway, here's the deal: I created a class to encapsulate this image:
@interface MyView : NSView
{
@private
    NSImageView *imageView;
}
@end

@implementation MyView

- (id)initWithFrame:(NSRect)frameRect
{
    self = [super initWithFrame:frameRect];
    if (self)
    {
        NSRect rect = NSMakeRect(10, 10, 100, 200);
        imageView = [[NSImageView alloc] initWithFrame:rect];
        NSImage *image = [NSImage imageNamed:@"bob.jpeg"];
        [imageView setImageScaling:NSImageScaleNone];
        [imageView setImage:image];
        [self addSubview:imageView];
    }
    return self;
}

- (id)init
{
    return [self initWithFrame:NSMakeRect(10, 10, 100, 100)];
}

- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];
    // Drawing code here.
}

@end
Now, in the view controller file, in viewDidLoad, I tried to add this as a subview to get it to display my image:
- (void)viewDidLoad
{
    [super viewDidLoad];
    MyView *customView = [[MyView alloc] init];
    [self.view addSubview:customView];
}
This kinda works, except that it loads the image twice; I ended up with two images instead of just one like I intended. What gives? What am I doing wrong here?
We're trying to implement a backup/restore data feature in our business productivity iPad app using UIDocumentPickerViewController and AppleArchive, but discovered odd behavior of [UIDocumentPickerViewController initForOpeningContentTypes: asCopy:YES] when reading large archive files from a USB drive.
We've duplicated this behavior with iPadOS 16.6.1 and 17.7 when building our app with Xcode 15.4 targeting minimum deployment of iPadOS 16. We haven't tested this with bleeding edge iPadOS 18.
Here's our Objective-C code which presents the picker:
NSArray *contentTypeArray = @[UTTypeAppleArchive];
UIDocumentPickerViewController *docPickerVC =
    [[UIDocumentPickerViewController alloc] initForOpeningContentTypes:contentTypeArray asCopy:YES];
docPickerVC.delegate = self;
docPickerVC.allowsMultipleSelection = NO;
docPickerVC.shouldShowFileExtensions = YES;
docPickerVC.modalPresentationStyle = UIModalPresentationPopover;
docPickerVC.popoverPresentationController.sourceView = self.view;
[self presentViewController:docPickerVC animated:YES completion:nil];
The UIDocumentPickerViewController remains visible until the selected external archive file has been copied from the USB drive to the app's local tmp sandbox. This may take several seconds due to the slow access speed of the USB drive. During this time the UIDocumentPickerViewController does NOT disable its tableview rows displaying files found on the USB drive. Even the most patient user will tap the desired filename a second (or third or fourth) time since the user's initial tap appears to have been ignored by UIDocumentPickerViewController, which lacks sufficient UI feedback showing it's busy copying the selected file.
When the user taps the file a second time, UIDocumentPickerViewController apparently begins to copy the archive file once again. The end result is a truncated copy of the selected file based on the time between taps. For instance, a 788 MB source archive may be copied as a 56 MB file. Here, the UIDocumentPickerDelegate receives a 56 MB file instead of the original 788 MB of data.
Not surprisingly, AppleArchive fails to decrypt the local copy of the archive because it's missing data. Instead of failing gracefully, AppleArchive crashes in AAArchiveStreamClose() (see forums post 765102 for details).
Does anyone know if there's a workaround for this strange behavior of UIDocumentPickerViewController?
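One workaround we're evaluating, sketched below under the assumption that doing our own copy is acceptable (this is our own idea, not an Apple-documented fix): open the picker with asCopy:NO, receive the security-scoped URL immediately, dismiss the picker, and perform the slow copy ourselves behind our own progress UI so a second tap on the picker's rows is impossible.

// Sketch: with asCopy:NO the delegate gets a security-scoped URL right away,
// so the picker can be dismissed before the slow USB copy begins.
- (void)documentPicker:(UIDocumentPickerViewController *)controller
didPickDocumentsAtURLs:(NSArray<NSURL *> *)urls {
    NSURL *sourceURL = urls.firstObject;
    if (![sourceURL startAccessingSecurityScopedResource]) { return; }

    NSURL *destURL = [[NSURL fileURLWithPath:NSTemporaryDirectory()]
                      URLByAppendingPathComponent:sourceURL.lastPathComponent];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        // Our own progress UI would go here; the picker is already gone.
        NSError *copyError = nil;
        [[NSFileManager defaultManager] copyItemAtURL:sourceURL toURL:destURL error:&copyError];
        [sourceURL stopAccessingSecurityScopedResource];
        // ...then decrypt/unarchive destURL on success.
    });
}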
I am trying to set my UITextView as the first responder with [self.textView becomeFirstResponder] when my view controller's viewDidAppear is called.
But sometimes it causes a crash with the error: [__NSPlaceholderArray initWithObjects:count:]: attempt to insert nil object from objects[0]
All I did is just:
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self.textView becomeFirstResponder];
}
So can anyone tell me what happened and what to do? When I call [self.textView becomeFirstResponder], what will be inserted into the responders list? self.textView itself?
Thanks very much!
I have an object that is an NSRuleEditorDelegate for an NSRuleEditor whose nestingMode is NSRuleEditorNestingModeList. There are 8 different possible criteria. Each criterion is optional but at least 1 is required (ruleEditor.canRemoveAllRows = NO). Each criterion should only be added once. How can I limit adding a criterion for a row if it is already in the editor at a different row?
Thanks!
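To make the question concrete, this is the kind of building block I've been sketching (an unverified guess on my part): collect the criteria already present in other rows with criteriaForRow:, and have the delegate's child-supplying methods skip them when offering root-level criteria.

// Unverified sketch: the set of criteria already used by existing rows.
- (NSSet *)usedCriteriaInRuleEditor:(NSRuleEditor *)editor {
    NSMutableSet *used = [NSMutableSet set];
    for (NSInteger row = 0; row < editor.numberOfRows; row++) {
        // criteriaForRow: returns the criterion objects displayed in that row.
        [used addObjectsFromArray:[editor criteriaForRow:row]];
    }
    return used;
}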
I've implemented Push Provisioning, but am having trouble testing it.
When I try to add a payment pass with my activationData, encryptedPassData, and ephemeralPublicKey, I see the "Add Card to Apple Pay" screen, but then when I click "Add Card", I get a "Could Not Add Card" message.
When I inspect the error from didFinishAddingPaymentPass, it reads "The operation couldn’t be completed. (PKPassKitErrorDomain error 2.)".
Is this error PKPassKitError.Code.unsupportedVersionError? What does this error mean?
Additional context:
We use cordova-apple-wallet to generate certificates and add payment pass.
We have developed an iOS app in Objective-C and added integrations for the Shortcuts app, more specifically the "automation" part of the app.
Basically, the user opens the Shortcuts app, selects an automation, and when scanning an NFC tag can pick an action; our app shows 3 different actions. The user selects an action, the app opens, and the action is executed. That's all done in Obj-C and was working very well with no complaints until users started to update to iOS 18.
Now, when a user starts to add an automation, the only thing they can do is select "open app"; our app's actions are not showing anymore. Is there something new in iOS 18 we have to update in our app?
I know Obj-C is considered "old", but the time and cost of upgrading an existing app are not available at the moment.
Here's a code snippet of our shortcutIntent:
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.lw.grp1</string>
        <string>group.lw.grp2</string>
    </array>
</dict>
</plist>
Shortcut Info-plist
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSExtension</key>
    <dict>
        <key>NSExtensionAttributes</key>
        <dict>
            <key>IntentsRestrictedWhileLocked</key>
            <array/>
            <key>IntentsRestrictedWhileProtectedDataUnavailable</key>
            <array/>
            <key>IntentsSupported</key>
            <array>
                <string>LockIntent</string>
                <string>TrunkIntent</string>
                <string>UnlockIntent</string>
            </array>
        </dict>
        <key>NSExtensionPointIdentifier</key>
        <string>com.apple.intents-service</string>
        <key>NSExtensionPrincipalClass</key>
        <string>IntentHandler</string>
    </dict>
</dict>
</plist>
IntentHandler
- (id)init {
    self = [super init];
    self.started = false;
    self.handler = [[AppIntentHandler alloc] init];
    self.handler.userInfo = [[NSMutableDictionary alloc] init];
    return self;
}

- (id)handlerForIntent:(INIntent *)intent {
    // This is the default implementation. If you want different objects to handle different intents,
    // you can override this and return the handler you want for that particular intent.
    if (!self.started) {
        self.started = YES;
        if ([intent isKindOfClass:[LockIntent class]]) {
            NSLog(@"*intent lock selected*");
            self.handler.userInfo = [self assignUserInfo:@"lock"];
        } else if ([intent isKindOfClass:[UnlockIntent class]]) {
            NSLog(@"*intent unlock selected*");
            self.handler.userInfo = [self assignUserInfo:@"unlock"];
        } else if ([intent isKindOfClass:[TrunkIntent class]]) {
            NSLog(@"*intent trunk selected*");
            self.handler.userInfo = [self assignUserInfo:@"trunk"];
        }
    } else {
        self.started = NO;
    }
    return self.handler;
}
A custom class to handle each intent
@implementation AppIntentHandler

#pragma mark - response handlers

- (void)handleLock:(nonnull LockIntent *)intent completion:(nonnull void (^)(LockIntentResponse * _Nonnull))completion {
    LockIntentResponse *response = [[LockIntentResponse alloc] initWithCode:LockIntentResponseCodeContinueInApp userActivity:[self lockUserActivity]];
    completion(response);
}

- (void)handleUnlock:(nonnull UnlockIntent *)intent completion:(nonnull void (^)(UnlockIntentResponse * _Nonnull))completion {
    UnlockIntentResponse *response = [[UnlockIntentResponse alloc] initWithCode:UnlockIntentResponseCodeContinueInApp userActivity:[self unlockUserActivity]];
    completion(response);
}

- (void)handleTrunk:(nonnull TrunkIntent *)intent completion:(nonnull void (^)(TrunkIntentResponse * _Nonnull))completion {
    TrunkIntentResponse *response = [[TrunkIntentResponse alloc] initWithCode:TrunkIntentResponseCodeContinueInApp userActivity:[self trunkUserActivity]];
    completion(response);
}
What has changed, and what needs to be done to make our app's actions show up in the Shortcuts app again? Could anyone point me in the right direction: documentation, a blog post, or a code snippet?
Thanks in advance for taking the time to help!
In Xcode 16.0 and 16.1 beta 2, running on macOS 15.1 beta 4, Developer Documentation for Objective-C is (at least partially) missing in the Navigator. Many of the entries appear to be for Swift.
In Xcode 16.1 beta (i.e. beta 1), the documentation is normal.
I did not find any posts in this forum about this, which is surprising and makes me wonder if the issue is isolated to particular configurations.
In any event, I would appreciate any information anyone may have about this.
My project uses manual reference counting, and it crashes with UIAlertController when I touch the action button:
UIAlertController *alert = [[UIAlertController
                             alertControllerWithTitle:@"fsđs"
                             message:@"fsđs"
                             preferredStyle:UIAlertControllerStyleAlert
                             ] autorelease]; // note: under MRC, +alertControllerWithTitle:... already returns an autoreleased object
UIAlertAction *actionOk = [UIAlertAction actionWithTitle:@"Ok"
                                                   style:UIAlertActionStyleDefault
                                                 handler:nil];
[alert addAction:actionOk];
[self.window.rootViewController presentViewController:alert
                                             animated:YES
                                           completion:^{
                                           }];
Hello, we have created a function that expands our copy module to support HTML format. Everything inside that function works fine, but on iOS 17.4+ copying the HTML strike-through tag is not working (other HTML elements work fine). Looking at the logs, it seems the tags are getting stripped. Also, lists that have indents don't survive a paste; the indent is missing. Here is the code:
void copyToClipboard(NSString *htmlContent) {
    UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
    [pasteboard setValue:htmlContent forPasteboardType:@"public.html"];
}
Does anyone know a fix for this, or when it will be fixed? Will it be fixed in the next update?
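In case it helps narrow things down, here is a sketch of a workaround we've been considering (our own idea, not a confirmed fix): publish the content under multiple representations in a single pasteboard item, so a pasting app that mangles the HTML can still fall back to plain text.

// Sketch: one pasteboard item carrying both an HTML and a plain-text
// representation. `plainText` is assumed to be a stripped-down version of
// `htmlContent` produced elsewhere.
void copyToClipboardWithFallback(NSString *htmlContent, NSString *plainText) {
    UIPasteboard *pasteboard = [UIPasteboard generalPasteboard];
    pasteboard.items = @[@{
        @"public.html": htmlContent,
        @"public.utf8-plain-text": plainText,
    }];
}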