Objective-C


Objective-C is a programming language for writing iOS, iPadOS, and macOS apps.

Posts under Objective-C tag

284 results found
Post not yet marked as solved
168 Views

Is there any Swift API that tells whether fast user switching is enabled or not

Is there any Swift API that tells whether fast user switching is enabled or not? We want to do one thing when fast user switching is enabled and another when it isn't, so we're looking for an API that reports whether it's enabled.
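A hedged note on the question above: I'm not aware of a public Swift API that reports whether the fast user switching setting itself is turned on. What does exist on macOS is NSWorkspace's pair of session notifications, which fire when a switch actually happens, so reacting to the events (rather than the preference) can be sketched like this:

```swift
import AppKit

// Keep the returned tokens alive; block-based observation stops
// when the tokens are deallocated.
let center = NSWorkspace.shared.notificationCenter
let resignToken = center.addObserver(
    forName: NSWorkspace.sessionDidResignActiveNotification,
    object: nil, queue: .main) { _ in
    print("Another user's session took over the console")
}
let activeToken = center.addObserver(
    forName: NSWorkspace.sessionDidBecomeActiveNotification,
    object: nil, queue: .main) { _ in
    print("Our session became active again")
}
```

This only observes switch events; whether the preference can be queried directly is exactly what the post is asking, and I don't know of a supported way.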
Asked by ajaysta12. Last updated.
Post marked as solved
90 Views

UIAccessibilityPrefersCrossFadeTransitionsStatusDidChangeNotification does not get triggered

I'm trying to listen for changes to the Prefers Cross-Fade Transitions accessibility option using UIKit in Objective-C, but for some reason UIAccessibilityPrefersCrossFadeTransitionsStatusDidChangeNotification is not triggered when I toggle this option in the accessibility settings. I've tested both in the simulator and on a real device running iOS 15.2. According to the docs this should be fine: https://developer.apple.com/documentation/uikit/uiaccessibilitypreferscrossfadetransitionsstatusdidchangenotification?language=objc

Here is a snippet of the code I'm using:

- (void)prefersCrossFadeTransitionsStatusDidChange:(__unused NSNotification *)notification {
    NSLog(@"Prefers Cross-Fade Transitions changed");
}

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(prefersCrossFadeTransitionsStatusDidChange:)
                                             name:UIAccessibilityPrefersCrossFadeTransitionsStatusDidChangeNotification
                                           object:nil];

Weirdly enough, UIAccessibilityPrefersCrossFadeTransitions returns the correct value when called.
Asked. Last updated.
Post not yet marked as solved
107 Views

[Objective-C] Proper way of calling an overridden base interface method while using a category

Hi, for some of my use cases I'm extending an interface using a category. I am overriding some of the methods of the base interface, but want to call the base interface's method after I'm done with the workflow inside the category method; essentially, I want something similar to a "super" call. While looking into how to achieve this, I found something called method swizzling (https://nshipster.com/method-swizzling/, https://newrelic.com/blog/best-practices/right-way-to-swizzle), but this looks too 'hacky'. Is there a better way to achieve this?
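For context on why swizzling keeps coming up: a category that overrides a method replaces the original implementation outright, and there is no "super" to call because a category is not a subclass. Swizzling works because it keeps a handle on the original implementation. A minimal sketch of the exchange pattern, using a hypothetical Greeter class (not from the post), with all the caveats the linked articles describe:

```swift
import Foundation

class Greeter: NSObject {
    @objc dynamic func greet() -> String { "hello" }
}

extension Greeter {
    // After the exchange below, calling swizzled_greet() in here invokes the
    // *original* greet implementation: this is the "call super"-like step.
    @objc func swizzled_greet() -> String { "wrapped(" + swizzled_greet() + ")" }
}

let original = class_getInstanceMethod(Greeter.self, #selector(Greeter.greet))!
let replacement = class_getInstanceMethod(Greeter.self, #selector(Greeter.swizzled_greet))!
method_exchangeImplementations(original, replacement)

// Greeter().greet() now runs the wrapper body first, then the original body.
```

Where you control the instantiation site, plain subclassing with a real super call avoids these runtime gymnastics entirely.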
Asked. Last updated.
Post not yet marked as solved
29 Views

Drawing artefact when using drawInRect on NSImage created from bitmap

Hopefully someone can help me explain what is happening. In short: at certain scales, when drawing an NSImage created from a bitmap, there is a stripe of black pixels near the right border of the image. This happens when the image is drawn at about half the size of the actual image. The artefact only occurs for certain scales; for smaller and larger scales it is not there, which suggests that the problem is not with the bitmap image. For a specific scale the artefact occurs consistently: it appears every time the image is drawn, and also when a newly created image is drawn. The artefact occurs for differently sized images (at similar relative drawing scales). To further complicate things, the problem occurs in my (relatively large) application project. When I run the exact same code in a toy project, created to reproduce the problem, the artefact is not there. This toy project is built and tested on the same machine (a Mac Mini with macOS Monterey 12.0 using Xcode 12.4). I tried to keep the project settings similar (e.g. both projects target macOS 10.9), but obviously something is different that impacts how images are drawn.
The code with the problem is as follows:

- (NSImage *)createTestImageFromColor {
    NSSize size = NSMakeSize(640, 640);
    NSImage *image = [[[NSImage alloc] initWithSize: size] autorelease];
    [image lockFocus];
    [NSColor.blueColor drawSwatchInRect: NSMakeRect(0, 0, size.width, size.height)];
    [NSColor.whiteColor drawSwatchInRect: NSMakeRect(10, 10, size.width - 20, size.height - 20)];
    [image unlockFocus];
    return image;
}

- (NSImage *)createTestImageFromBitmap {
    int w = 640, h = 640;
    NSBitmapImageRep *bitmap;
    bitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
        pixelsWide: w pixelsHigh: h
        bitsPerSample: 8 samplesPerPixel: 3
        hasAlpha: NO isPlanar: NO
        colorSpaceName: NSDeviceRGBColorSpace
        bytesPerRow: 0 bitsPerPixel: 32];
    int bmw = (int)bitmap.bytesPerRow / sizeof(UInt32);
    for (int y = 0; y < h; ++y) {
        UInt32 *pos = (UInt32 *)bitmap.bitmapData + y * bmw;
        for (int x = 0; x < w; ++x) {
            *pos++ = 0xFF;
        }
    }
    NSImage *image = [[NSImage alloc] initWithSize: NSMakeSize(w, h)];
    [image addRepresentation: bitmap];
    [bitmap release];
    return [image autorelease];
}

- (void)drawScaledImages {
    if (testImage == nil) {
        testImage = [[self createTestImageFromColor] retain];
    }
    if (testImage2 == nil) {
        testImage2 = [[self createTestImageFromBitmap] retain];
    }
    [NSColor.whiteColor setFill];
    NSRectFill(self.bounds);
    [testImage drawInRect: NSMakeRect(50, 50, 320, 320)
                 fromRect: NSZeroRect
                operation: NSCompositeCopy
                 fraction: 1.0];
    [testImage2 drawInRect: NSMakeRect(50, 100, 320, 320)
                  fromRect: NSZeroRect
                 operation: NSCompositeCopy
                  fraction: 1.0];
    [testImage drawInRect: NSMakeRect(450, 50, 480, 480)
                 fromRect: NSZeroRect
                operation: NSCompositeCopy
                 fraction: 1.0];
    [testImage2 drawInRect: NSMakeRect(450, 100, 480, 480)
                  fromRect: NSZeroRect
                 operation: NSCompositeCopy
                  fraction: 1.0];
}

It draws two images. Both have the same size, but are created differently. The artefact only occurs for the image created from the bitmap (testImage2), and only when drawn at size 320x320. At size 480x480 the artefact is not there. The black pixels at the right of the left red square are the artefact. It may be difficult to reproduce the problem, as the same code in a minimal project works fine. So does anyone have any pointers on how to troubleshoot this? I cannot step into the drawInRect code, so I am unable to determine where the code paths diverge and what causes this. Could it be that my application is somehow linking against a different (older) version of the framework that does the drawing, a version that contains a bug with scaled image drawing? If so, how do I prevent that? Should anyone wish to see the project where the actual problem occurs, that's possible, as it's Open Source. The code is on the branch scaled-image-bug in the following git repository: https://git.code.sf.net/p/grandperspectiv/source
Asked by eriban. Last updated.
Post not yet marked as solved
36 Views

Getting OpenCV Objective-C++ errors while converting our Swift app into a framework

We are trying to convert our Swift app into a framework. In the app we use the OpenCV library. So far we have resolved many compiler errors; now we are stuck on the OpenCV Objective-C++ compiler error below.

/Users/***/Desktop/Projects/**Framework/**Framework/OpenCV/OpenCV Neon/cvneon.h:15:29: No type named 'Mat' in namespace 'cv'

Below is our framework's header:

#ifdef __cplusplus
#import "opencv2/opencv.hpp"
#import "exposure_compensate.hpp"
#endif

#import <Foundation/Foundation.h>

//! Project version number for NewFrameworkV36.
FOUNDATION_EXPORT double NewFrameworkV36VersionNumber;

//! Project version string for NewFrameworkV36.
FOUNDATION_EXPORT const unsigned char NewFrameworkV36VersionString[];

#import "RenderingModel.h"
#import "Helper.h"
#import "TJSpinner.h"

#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
Asked. Last updated.
Post marked as solved
44 Views

SecItemAdd One or more parameters passed to a function were not valid

I'm trying to add a public key to the Keychain, but I'm getting error -50. This is my code:

OSStatus error = noErr;
CFTypeRef persistPeer = NULL;
NSData *refTag = [[NSData alloc] initWithBytes:(const void *)[keyTag UTF8String] length:[keyTag length]];
NSMutableDictionary *keyAttr = [[NSMutableDictionary alloc] init];
[keyAttr setObject:(__bridge id)kSecClassKey forKey:(__bridge id)kSecClass];
[keyAttr setObject:(__bridge id)kSecAttrKeyTypeRSA forKey:(__bridge id)kSecAttrKeyType];
[keyAttr setObject:refTag forKey:(__bridge id)kSecAttrApplicationTag];
[keyAttr setObject:(__bridge id)kSecAttrKeyClassPublic forKey:(id)kSecAttrKeyClass];
error = SecItemDelete((CFDictionaryRef) keyAttr);
[keyAttr setObject:extractedKey forKey:(__bridge id)kSecValueData];
[keyAttr setObject:[NSNumber numberWithBool:YES] forKey:(__bridge id)kSecReturnPersistentRef];
[keyAttr setObject:(__bridge id)kSecAttrAccessible forKey:(__bridge id)kSecAttrAccessibleAfterFirstUnlock];
error = SecItemAdd((CFDictionaryRef) keyAttr, (CFTypeRef *)&persistPeer);

If I comment out the kSecAttrAccessible line, I don't get any errors and it works as expected. According to SecItem.h, the kSecClassKey class can have the kSecAttrAccessible attribute. Am I missing something? Is there a required attribute when using kSecAttrAccessible?
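One observation on the snippet above, offered as a likely explanation rather than a confirmed answer: the last setObject:forKey: call passes kSecAttrAccessible as the value and kSecAttrAccessibleAfterFirstUnlock as the key, the reverse of what the Keychain expects, and error -50 (errSecParam) is what an invalid parameter produces. A minimal Swift sketch of the intended attribute dictionary (keyTag and extractedKey are placeholders standing in for the post's variables):

```swift
import Foundation
import Security

// Placeholder inputs standing in for the post's variables.
let keyTag = "com.example.publickey"   // hypothetical application tag
let extractedKey = Data()              // the raw public key bytes

let keyAttr: [String: Any] = [
    kSecClass as String: kSecClassKey,
    kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
    kSecAttrApplicationTag as String: Data(keyTag.utf8),
    kSecAttrKeyClass as String: kSecAttrKeyClassPublic,
    kSecValueData as String: extractedKey,
    kSecReturnPersistentRef as String: true,
    // kSecAttrAccessible is the *key*; the accessibility constant is its *value*.
    kSecAttrAccessible as String: kSecAttrAccessibleAfterFirstUnlock,
]

var persistRef: CFTypeRef?
let status = SecItemAdd(keyAttr as CFDictionary, &persistRef)
print(status)
```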
Asked by AdonisD. Last updated.
Post not yet marked as solved
91 Views

How to deduce from NSMethodSignature that a struct argument is passed by pointer?

How to deduce from NSMethodSignature that a struct argument is passed by pointer? Specifically on ARM. For example, if I have:

@protocol TestProtocol <NSObject>
- (void)time:(CMTime)time;
- (void)rect:(CGRect)point;
@end

And then I do:

struct objc_method_description methodDescription1 = protocol_getMethodDescription(@protocol(TestProtocol), @selector(time:), YES, YES);
struct objc_method_description methodDescription2 = protocol_getMethodDescription(@protocol(TestProtocol), @selector(rect:), YES, YES);

NSMethodSignature *sig1 = [NSMethodSignature signatureWithObjCTypes:methodDescription1.types];
NSMethodSignature *sig2 = [NSMethodSignature signatureWithObjCTypes:methodDescription2.types];

const char *arg1 = [sig1 getArgumentTypeAtIndex:2];
const char *arg2 = [sig2 getArgumentTypeAtIndex:2];

NSLog(@"%s %s", methodDescription1.types, arg1);
NSLog(@"%s %s", methodDescription2.types, arg2);

The output is:

v40@0:8{?=qiIq}16 {?=qiIq}
v48@0:8{CGRect={CGPoint=dd}{CGSize=dd}}16 {CGRect={CGPoint=dd}{CGSize=dd}}

Both look similar; there is no indication that CMTime will actually be passed as a pointer. But when I print the debug descriptions:

NSLog(@"%@", [sig1 debugDescription]);
NSLog(@"%@", [sig2 debugDescription]);

the first prints:

argument 2:
    type encoding (^) '^{?=qiIq}'
    flags {isPointer}

while the second prints:

argument 2:
    type encoding ({) '{CGRect={CGPoint=dd}{CGSize=dd}}'
    flags {isStruct}

So this information is indeed stored in the method signature, but how do I retrieve it without parsing the debug description? Are there rules I can use to deduce it myself? I tried experimenting with different structs, but it is hard to spot a pattern.
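Not from the thread, but a possible rule of thumb: the arm64 calling convention (AAPCS64) passes a composite argument indirectly when it is larger than 16 bytes, unless it is a homogeneous floating-point aggregate of at most four elements. CGRect is four doubles, hence by value; CMTime is 24 bytes of mixed integer fields, hence by pointer, which matches the debugDescription output above. A hedged Swift sketch of that heuristic; the size estimate ignores padding and the field scan is deliberately simplistic, so it is only meant for flat encodings like these:

```swift
// Extract the field characters of a struct type encoding, skipping the names
// between '{' and '=' (e.g. "{CGRect={CGPoint=dd}{CGSize=dd}}" -> "dddd").
func fieldChars(_ encoding: String) -> [Character] {
    var result: [Character] = []
    var inName = false
    for ch in encoding {
        switch ch {
        case "{": inName = true
        case "=": inName = false
        case "}": break
        default: if !inName { result.append(ch) }
        }
    }
    return result
}

// Heuristic for arm64 (AAPCS64), an assumption rather than an official API:
// indirect (by pointer) when larger than 16 bytes and not a homogeneous
// floating-point aggregate of at most four 'f' or 'd' fields.
func isLikelyPassedByPointer(_ encoding: String) -> Bool {
    let fields = fieldChars(encoding)
    // Rough per-field sizes, ignoring alignment padding.
    let byteSize: [Character: Int] = ["c": 1, "C": 1, "B": 1, "s": 2, "S": 2,
                                      "i": 4, "I": 4, "f": 4,
                                      "l": 8, "L": 8, "q": 8, "Q": 8, "d": 8]
    let size = fields.reduce(0) { $0 + (byteSize[$1] ?? 8) }
    let isHFA = (1...4).contains(fields.count) &&
        (fields.allSatisfy { $0 == "d" } || fields.allSatisfy { $0 == "f" })
    return size > 16 && !isHFA
}

print(isLikelyPassedByPointer("{?=qiIq}"))                          // CMTime-like: expect true
print(isLikelyPassedByPointer("{CGRect={CGPoint=dd}{CGSize=dd}}"))  // CGRect-like: expect false
```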
Asked by artium. Last updated.
Post not yet marked as solved
82 Views

What is `classNamed(_:)` for?

There is such a method in Bundle:

func classNamed(_ className: String) -> AnyClass?

The description says it loads the Class object for className. It is, obviously, Objective-C-era stuff. I started with Objective-C but never used it, preferring NSClassFromString. Now I suddenly tested it in various applications, and I was surprised that it works neither in iOS apps nor in a playground:

import Foundation

class TalkingFruit {
  func greet() {
    print("Hello, playground")
  }
}

@objc class LagacyFruit: NSObject {
}

print(Bundle.main.classNamed("TalkingFruit") ?? "no class") // no class
print(Bundle.main.classNamed("LegacyFruit") ?? "no class") // no class
print(Bundle.main.classNamed("NSObject") ?? "no class either") // no class either

And now I have a question: does it even work? And how is it supposed to be used? A working use case example would be great.
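One likely factor, offered as an assumption rather than an answer from the thread: Swift class names are namespaced by module at runtime, so string lookups by bare name fail unless the name is module-qualified or pinned with @objc(Name). A sketch using NSClassFromString, with MyApp standing in for the real module name:

```swift
import Foundation

// Pin a stable Objective-C name so string lookups don't depend on the module.
@objc(LegacyFruit) class LegacyFruit: NSObject {}

// A plain Swift class gets a "Module.Class" runtime name, e.g. "MyApp.TalkingFruit".
class TalkingFruit {}

let a = NSClassFromString("LegacyFruit")        // found, thanks to @objc(LegacyFruit)
let b = NSClassFromString("TalkingFruit")       // nil: the name lacks the module prefix
let c = NSClassFromString("MyApp.TalkingFruit") // found if the module really is MyApp
```

Whether classNamed(_:) applies the same naming rules on top of its bundle lookup I can't say for certain, but trying the module-qualified spelling would rule this factor in or out.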
Asked by kelin. Last updated.
Post not yet marked as solved
159 Views

How to use AES-256-GCM in Objective-C

We used ECB mode before, but now we need to change to the AES-GCM algorithm to encrypt and decrypt messages and verify signatures. I know that Java has "AES/GCM/NoPadding" to achieve GCM. Does Apple provide a corresponding function library?
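Not stated in the thread, but for reference: Apple's CryptoKit framework (iOS 13+ / macOS 10.15+) provides AES-GCM directly. It is a Swift-only API, so an Objective-C code base would typically call it through a small Swift wrapper; as far as I know, the older CommonCrypto does not expose GCM publicly. A minimal sketch:

```swift
import CryptoKit
import Foundation

let key = SymmetricKey(size: .bits256)
let plaintext = Data("secret message".utf8)

// seal() generates a random nonce and returns nonce + ciphertext + auth tag.
let sealed = try! AES.GCM.seal(plaintext, using: key)

// open() verifies the authentication tag and throws if the data was tampered with.
let decrypted = try! AES.GCM.open(sealed, using: key)
assert(decrypted == plaintext)
```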
Asked by sz-oic. Last updated.
Post not yet marked as solved
72 Views

EXC_ARITHMETIC (code=EXC_I386_DIV, subcode=0x0) in ios15 xcode13

I have an mlmodel prediction function in the script. When building for iOS 15, the error shows up and the app crashes. The whole build works under iOS 14. The current Xcode version is 13.
Asked. Last updated.
Post not yet marked as solved
20k Views

What is the future of Objective-C?

Will Apple continue to support it, or will we wake up one day to find that Swift is the only viable language? It's a serious question. Careers depend on it. I don't accept the "No comment" approach that Apple usually takes. It's cruel. I'm willing to put the time into learning Swift if I have to. I'm not going to do it if I don't. I want to know.

Frank
Asked by flarosa. Last updated.
Post not yet marked as solved
435 Views

Xcode 13 or iOS 15 bug?

I use "LSApplicationQueriesSchemes" in the project's Info.plist, which contains 219 URL schemes. I use the URL schemes in this list to determine whether an app is installed on our users' phones, which worked fine before iOS 15. But recently, after submitting an app update built with Xcode 13, when I check whether an app is installed I get the prompt "-canOpenURL: failed for URL: "xxxx://" - error: "This app is not allowed to query for scheme xxxx"". I have added xxxx to LSApplicationQueriesSchemes, but I still get this error. I tested changing the position of entries and reducing the number of URL schemes in LSApplicationQueriesSchemes, and found that the first 35 or so entries work fine, while entries after that trigger this error. I don't know if this is a bug in Xcode 13 or a problem with iOS 15; it's still not right.
Asked by James2368. Last updated.
Post not yet marked as solved
239 Views

Displaying a PDF does not work correctly in iOS 15 using CoreGraphics

I have an application coded in Objective-C that uses CoreGraphics and CGPDFDocument; it's a PDF reader. With the release of iOS 15, I'm having problems with the rendering of certain pages in certain PDF files. The problem is not present with PDFKit. I have also downloaded the ZoomingPDFViewer example (https://developer.apple.com/library/archive/samplecode/ZoomingPDFViewer/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010281) from the official Apple documentation page, and I see that the same thing happens there.
Asked. Last updated.
Post not yet marked as solved
69 Views

Crash loading nib

I have a view with a xib created in 2014. No changes have been made, but lately users are having several crashes, and in the last exception backtrace the last thing the application was trying to do was load the xib. The crash log is attached: SingleApp2013 14-01-22, 08-23.crash. Customer care got in touch with some users to try to replicate the problem, but no information they provided was useful. I tried several times to make it crash myself but never succeeded. The code to load the nib is the following:

+ (instancetype)containerInstance {
    ConsoleContainer *consoleView = [[[NSBundle mainBundle] loadNibNamed:@"ConsoleContainer" owner:nil options:nil] objectAtIndex:0];
    return consoleView;
}

Thank you in advance for any ideas you can give me.
Asked. Last updated.
Post not yet marked as solved
167 Views

Can `MTLTexture` be used to store 5-D input tensor?

I'm trying to implement the PyTorch custom layer grid_sampler (https://pytorch.org/docs/1.9.1/generated/torch.nn.functional.grid_sample.html) on the GPU. Both of its inputs, input and grid, can be 5-D. My implementation of encodeToCommandBuffer, an MLCustomLayer protocol method, is shown below. In my attempts so far, the values of id<MTLTexture> input and id<MTLTexture> grid don't meet expectations. So I wonder: can MTLTexture be used to store a 5-D input tensor as an input to encodeToCommandBuffer? Or can anybody show me how to use MTLTexture correctly here? Thanks a lot!

- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
                       inputs:(NSArray<id<MTLTexture>> *)inputs
                      outputs:(NSArray<id<MTLTexture>> *)outputs
                        error:(NSError * _Nullable *)error {
    NSLog(@"Dispatching to GPU");
    NSLog(@"inputs count %lu", (unsigned long)inputs.count);
    NSLog(@"outputs count %lu", (unsigned long)outputs.count);
    id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoderWithDispatchType:MTLDispatchTypeSerial];
    assert(encoder != nil);

    id<MTLTexture> input = inputs[0];
    id<MTLTexture> grid = inputs[1];
    id<MTLTexture> output = outputs[0];
    NSLog(@"inputs shape %lu, %lu, %lu, %lu", (unsigned long)input.width, (unsigned long)input.height, (unsigned long)input.depth, (unsigned long)input.arrayLength);
    NSLog(@"grid shape %lu, %lu, %lu, %lu", (unsigned long)grid.width, (unsigned long)grid.height, (unsigned long)grid.depth, (unsigned long)grid.arrayLength);
    if (encoder) {
        [encoder setTexture:input atIndex:0];
        [encoder setTexture:grid atIndex:1];
        [encoder setTexture:output atIndex:2];

        NSUInteger wd = grid_sample_Pipeline.threadExecutionWidth;
        NSUInteger ht = grid_sample_Pipeline.maxTotalThreadsPerThreadgroup / wd;
        MTLSize threadsPerThreadgroup = MTLSizeMake(wd, ht, 1);
        MTLSize threadgroupsPerGrid = MTLSizeMake((input.width + wd - 1) / wd, (input.height + ht - 1) / ht, input.arrayLength);
        [encoder setComputePipelineState:grid_sample_Pipeline];
        [encoder dispatchThreadgroups:threadgroupsPerGrid threadsPerThreadgroup:threadsPerThreadgroup];
        [encoder endEncoding];
    } else {
        return NO;
    }
    *error = nil;
    return YES;
}
Asked by stx-000. Last updated.