Hi,
I develop a Mac application, initially on Catalina/Xcode 12, but I recently upgraded to Monterey/Xcode 13. I'm about to publish a new version: on Monterey everything works as expected, but when I tried the app on Sequoia, as a last step before uploading to the App Store, I ran into some weird security issues:
The main symptom is that it's no longer possible to save any file from the app using the Save panel, even though the User Selected File entitlement is set to Read/Write.
I've tried reinstalling different versions of the app, including the most recent one downloaded from TestFlight. But whatever the version, any attempt to save using the panel (e.g. to the Desktop) results in a warning saying that I don't have permission to save the file in that folder.
Moreover, when I run spctl -a -t exec -v /Applications/***.app in Terminal, it returns rejected, even when the application was installed via TestFlight.
An EtreCheck report says that my app is not signed, while codesign -dv /Applications/***.app reports a valid signature. I'm lost...
I suspect a Gatekeeper problem, but I cannot find any info on the web about how this system could be reset. I tried sudo spctl --reset-default, but it returns "This operation is no longer supported"...
I wonder whether these symptoms depend on how the app is archived, and so could propagate to my end users, or whether they are just due to a corrupted Sequoia install on my local machine. My feeling is that a signature problem should have been detected by archive validation, but how can we be sure?
Any idea would be greatly appreciated, thanks!
I'm working on an old iOS app that started as Objective-C + UIKit and has been migrated to Swift + SwiftUI. Its code is now mostly Swift + SwiftUI, but it still has some Objective-C and some UIKit view controllers.
One of the SwiftUI views uses fileImporter to open the Files app and select a file from the device. This worked well until iOS 18 was released. On iOS 18 the file picker doesn't launch correctly and is frozen in every simulator (the only real device with iOS 18 I could test on seemed to work correctly).
I managed to clone my project and leave it with the minimal amount of files to reproduce this error. This is the code:
AppDelegate.h
#import <UIKit/UIKit.h>
@interface AppDelegate : UIResponder <UIApplicationDelegate> {}
@property (strong, nonatomic) UIWindow *window;
@end
AppDelegate.m
#import "AppDelegate.h"
#import "MyApp-Swift.h"
@interface AppDelegate ()
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    self.window.backgroundColor = [UIColor whiteColor];
    [self.window makeKeyAndVisible];

    FirstViewBuilder *viewBuilder = [[FirstViewBuilder alloc] init];
    [viewBuilder show];
    return YES;
}
@end
FirstViewBuilder.swift
import SwiftUI
@objc class FirstViewBuilder: NSObject {
    private var view: UIHostingController<FirstView>

    @objc override init() {
        self.view = MyHostingController(rootView: FirstView())
    }

    @objc func show() {
        let app = UIApplication.shared.delegate as? AppDelegate
        let window = app?.window
        window?.backgroundColor = .white
        // Use navigationController or view directly depending on use
        window?.rootViewController = view
    }
}
FirstView.swift
import SwiftUI
struct FirstView: View {
    @State var hasToOpenFilesApp = false

    var body: some View {
        VStack(alignment: .leading, spacing: 0) {
            Button("Open Files app") {
                hasToOpenFilesApp = true
            }.fileImporter(isPresented: $hasToOpenFilesApp, allowedContentTypes: [.text]) { result in
                switch result {
                case .success(let url):
                    print(url.debugDescription)
                case .failure(let error):
                    print(error.localizedDescription)
                }
            }
        }
    }
}
And finally, MyHostingController
import SwiftUI
class MyHostingController<Content>: UIHostingController<Content> where Content: View {
    override init(rootView: Content) {
        super.init(rootView: rootView)
    }

    @objc required dynamic init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        navigationItem.hidesBackButton = true
    }
}
Launching this on an iPhone 13 Pro (18.2) simulator, I tap Open Files app; it takes about 2 seconds to open, and it opens full screen (not as a modal). Buttons at the top are behind the status bar and buttons at the bottom are behind the Home indicator. But it's worse than that: the user can't interact with this view at all, it's frozen.
I created a fresh SwiftUI project with just this single view and the fileImporter worked as expected, so I thought the problem was due to embedding the SwiftUI view inside the UIHostingController. So I made these modifications to the minimal project:
Remove the files AppDelegate, FirstViewBuilder and MyHostingController.
Create this SwiftUI App file
import SwiftUI
@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            FirstView()
        }
    }
}
And again, the same problem on iOS 18.
But if I launch this exact project on an iPhone 13 Pro (17.4) simulator and open the Files app (now it opens almost instantly), it works fine: it shows the file picker as a modal, as expected, and I can interact with it and select files.
The last thing I tried was removing LaunchScreen.xib from my project and the Launch screen interface file base name key from my Info.plist, but the problem keeps happening.
I guess it must be due to my (rather old) project configuration, but I have no more ideas about where to look.
Starting a fresh SwiftUI project and moving the old project into it could take me several weeks, so I've ruled that out for the moment.
Could I use another method to select files from SwiftUI views with iOS 18?
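One possibility, since the app already mixes UIKit and SwiftUI, is to bypass fileImporter and present UIDocumentPickerViewController from the UIKit side. A minimal sketch, assuming a hypothetical presentPicker method added to one of your own view controllers, which adopts UIDocumentPickerDelegate:
#import <UniformTypeIdentifiers/UniformTypeIdentifiers.h>

// Hypothetical helper on a view controller of your own; the controller must
// declare conformance to UIDocumentPickerDelegate.
- (void)presentPicker {
    UIDocumentPickerViewController *picker =
        [[UIDocumentPickerViewController alloc] initForOpeningContentTypes:@[UTTypeText]];
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)documentPicker:(UIDocumentPickerViewController *)controller
didPickDocumentsAtURLs:(NSArray<NSURL *> *)urls {
    NSLog(@"Picked: %@", urls.firstObject); // handle the selected file URL here
}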
I need to read data from the user. For convenience, the data will be in a property list, so it's easy to get a dictionary containing the property list data. But, since it's coming from outside, I need to validate that the data is in the required format, i.e. it has the right keys and the right sort of data for each key, e.g. <name> has a string, <keys> has an array of appropriate values.
Since this is part of a long-established product, and targets 10.13, I want to do this in Objective-C if possible. I've been working mostly with Swift in recent years, so I've forgotten a lot of what I used to know about Objective-C, I'm sure.
My first thought was to obtain the value for each key and check the class type with isa, but I see that's deprecated in macOS 13 with no replacement. I don't see another way to check the class.
I'm sure other people have solved the same problem, but my searches have not turned up any answers.
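For what it's worth, the supported way to check an object's class without touching isa directly is isKindOfClass: (or isMemberOfClass: for an exact match), and it is available well before 10.13. A hedged sketch of validating a dictionary along those lines, with the key names and element class standing in for the real schema:
// Validate that a property-list dictionary has the expected shape.
// The keys and classes below are placeholders for the real schema.
BOOL ValidatePlist(NSDictionary *plist) {
    NSString *name = plist[@"name"];
    if (![name isKindOfClass:[NSString class]]) {
        return NO; // key missing, or value is not a string (messaging nil returns NO)
    }
    NSArray *keys = plist[@"keys"];
    if (![keys isKindOfClass:[NSArray class]]) {
        return NO; // key missing, or value is not an array
    }
    for (id value in keys) {
        if (![value isKindOfClass:[NSNumber class]]) {
            return NO; // adjust the element class to whatever "appropriate" means here
        }
    }
    return YES;
}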
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.
What I Have Tried So Far
I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring. Here are some of my findings:
Google Cast SDK:
Google Cast SDK is primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real-time.
ReplayKit for Screen Capture:
RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.
RTSP/UDP Streaming:
Some third-party libraries support RTSP/UDP streaming for real-time screen sharing. Google TV does not natively support RTSP, making this approach difficult.
My Questions:
Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
Does Google TV support WebRTC or any low-latency streaming protocol that can be used from iOS?
Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?
I would appreciate any guidance, code examples, or references to relevant documentation.
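On the capture side at least, the ReplayKit piece mentioned above looks like this in Objective-C; what to do with each sample buffer (encode it and hand it to whatever transport Google TV accepts) is exactly the open question. A minimal sketch:
#import <ReplayKit/ReplayKit.h>

// Capture the app's own screen as CMSampleBuffers; the transport is up to you.
[[RPScreenRecorder sharedRecorder] startCaptureWithHandler:^(CMSampleBufferRef sampleBuffer,
                                                             RPSampleBufferType bufferType,
                                                             NSError *error) {
    if (bufferType == RPSampleBufferTypeVideo && error == nil) {
        // Encode the frame (e.g. with VideoToolbox) and pass it to the streaming layer.
    }
} completionHandler:^(NSError *error) {
    if (error) {
        NSLog(@"Capture failed to start: %@", error);
    }
}];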
Hello everyone,
There is one thing about Objective-C's memory management that confuses me: the lifetime of an object returned from a method whose name doesn't start with "alloc", "new", "copy", or "mutableCopy".
Take this as an example, when using NSBitmapImageRep's representationUsingType:properties: method, it returns an NSData object (reference: https://developer.apple.com/documentation/appkit/nsbitmapimagerep/representation(using:properties:)?language=objc).
While testing this out, the NSData seemed to be an owned object (it doesn't get released until the end of the program).
From what I understand, this may be an auto-released object which is released at the end of an autorelease pool block.
Could someone explain this in more detail? What if I want to release that NSData object before the end of the autorelease pool block? How can I know which object is autoreleased, borrowed, or owned?
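For reference, under ARC such a return value is indeed typically autoreleased (or handed back at +0), and the usual way to shorten its lifetime is an explicit inner pool. A minimal sketch, assuming an imageRep and an outputURL already in scope:
// Draining an inner pool releases autoreleased objects early, instead of at
// the end of the thread's outermost pool.
@autoreleasepool {
    NSData *pngData = [imageRep representationUsingType:NSBitmapImageFileTypePNG
                                             properties:@{}];
    [pngData writeToURL:outputURL atomically:YES];
} // pngData's last strong reference goes away here, so it can be freed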
Pardon my ignorance, but it's been about 13 years since I last used the Xcode editor and I don't know where to start entering code.
Could someone take pity on me and inform me where?
Thanks!
Hello. I am attempting to display the music inside of my app in Now Playing. I've tried a few different methods and keep running into unknown issues. I'm new to Objective-C and Apple development so I'm at a loss of how to continue.
Currently, I have an external call to viewDidLoad upon initialization. Then, when I'm ready to play the music, I call playMusic. I have it hardcoded to play an mp3 called "1". I believe I have all the signing set up as the music plays after I exit the app. However, there is nothing in Now Playing. There are no errors or issues that I can see while the app is running. This is the only file I have in Xcode relating to this feature.
Please let me know where I'm going wrong or if there is another object I need to use!
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController : UIViewController <AVAudioPlayerDelegate>
@property (nonatomic, strong) AVPlayer *player;
@property (nonatomic, strong) MPRemoteCommandCenter *commandCenter;
@property (nonatomic, strong) MPMusicPlayerController *controller;
@property (nonatomic, strong) MPNowPlayingSession *nowPlayingSession;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"viewDidLoad started.");
    [self setupAudioSession];
    [self initializePlayer];
    [self createNowPlayingSession];
    [self configureNowPlayingInfo];
    NSLog(@"viewDidLoad completed.");
}

- (void)setupAudioSession {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    if (![audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError]) {
        NSLog(@"Error setting category: %@", [setCategoryError localizedDescription]);
    } else {
        NSLog(@"Audio session category set.");
    }
    NSError *activationError = nil;
    if (![audioSession setActive:YES error:&activationError]) {
        NSLog(@"Error activating audio session: %@", [activationError localizedDescription]);
    } else {
        NSLog(@"Audio session activated.");
    }
}

- (void)initializePlayer {
    NSString *soundFilePath = [NSString stringWithFormat:@"%@/base/game/%@", [[NSBundle mainBundle] resourcePath], @"bgm/1.mp3"];
    if (!soundFilePath) {
        NSLog(@"Audio file not found.");
        return;
    }
    NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
    self.player = [AVPlayer playerWithURL:soundFileURL];
    NSLog(@"Player initialized with URL: %@", soundFileURL);
}

- (void)createNowPlayingSession {
    self.nowPlayingSession = [[MPNowPlayingSession alloc] initWithPlayers:@[self.player]];
    NSLog(@"Now Playing Session created with players: %@", self.nowPlayingSession.players);
}

- (void)configureNowPlayingInfo {
    MPNowPlayingInfoCenter *infoCenter = [MPNowPlayingInfoCenter defaultCenter];
    CMTime duration = self.player.currentItem.duration;
    Float64 durationSeconds = CMTimeGetSeconds(duration);
    CMTime currentTime = self.player.currentTime;
    Float64 currentTimeSeconds = CMTimeGetSeconds(currentTime);

    NSDictionary *nowPlayingInfo = @{
        MPMediaItemPropertyTitle: @"Example Title",
        MPMediaItemPropertyArtist: @"Example Artist",
        MPMediaItemPropertyPlaybackDuration: @(durationSeconds),
        MPNowPlayingInfoPropertyElapsedPlaybackTime: @(currentTimeSeconds),
        MPNowPlayingInfoPropertyPlaybackRate: @(self.player.rate)
    };
    infoCenter.nowPlayingInfo = nowPlayingInfo;
    NSLog(@"Now Playing info configured: %@", nowPlayingInfo);
}

- (void)playMusic {
    [self.player play];
    [self createNowPlayingSession];
    [self configureNowPlayingInfo];
}

- (void)pauseMusic {
    [self.player pause];
    [self configureNowPlayingInfo];
}

@end
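One thing the code above never does, and a common reason a Now Playing entry stays absent, is registering handlers for remote commands; the system generally only surfaces an app in Now Playing once it knows the app responds to them. A hedged sketch of what that might look like here (with MPNowPlayingSession you can also ask the session to become active):
// Register remote command handlers and request activation; without any
// handlers, iOS may never show the app in Now Playing.
MPRemoteCommandCenter *center = [MPRemoteCommandCenter sharedCommandCenter];
[center.playCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    [self playMusic];
    return MPRemoteCommandHandlerStatusSuccess;
}];
[center.pauseCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    [self pauseMusic];
    return MPRemoteCommandHandlerStatusSuccess;
}];
[self.nowPlayingSession becomeActiveIfPossibleWithCompletion:^(BOOL isActive) {
    NSLog(@"Now Playing session active: %d", isActive);
}];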
Hi,
I detect dark mode on macOS as follows:
NSAppearance *appearance = NSApp.mainWindow.effectiveAppearance;
NSString *interface_style = appearance.name;
NSAppearanceName basicAppearance = [appearance bestMatchFromAppearancesWithNames:@[
    NSAppearanceNameAqua,
    NSAppearanceNameDarkAqua
]];

if ([basicAppearance isEqualToString:NSAppearanceNameDarkAqua]) {
    theme = "Adwaita:dark";
    dark_mode = TRUE;
}

if ([interface_style isEqualToString:NSAppearanceNameDarkAqua]) {
    theme = "Adwaita:dark";
    dark_mode = TRUE;
} else if ([interface_style isEqualToString:NSAppearanceNameVibrantDark]) {
    theme = "Adwaita:dark";
    dark_mode = TRUE;
} else if ([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastAqua]) {
    theme = "HighContrast";
} else if ([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastDarkAqua]) {
    theme = "HighContrast:dark";
    dark_mode = TRUE;
} else if ([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastVibrantDark]) {
    theme = "HighContrast:dark";
    dark_mode = TRUE;
}
But this doesn't work when my window is in the background: as soon as the application window is put into the background, it loses dark mode. How can I fix this?
regards, Joël
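A hedged suggestion: a window's effectiveAppearance follows the window's state, while NSApp.effectiveAppearance (macOS 10.14+) reflects the app-wide appearance regardless of which window is key. A minimal sketch:
// Query the application-wide appearance instead of the main window's, so the
// result doesn't change when the window goes into the background.
NSAppearance *appearance = NSApp.effectiveAppearance;
NSAppearanceName match = [appearance bestMatchFromAppearancesWithNames:@[
    NSAppearanceNameAqua,
    NSAppearanceNameDarkAqua
]];
BOOL dark = [match isEqualToString:NSAppearanceNameDarkAqua];
// To react to changes, observe the "effectiveAppearance" key path on NSApp via KVO.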
Hi, we are developing a cross-platform library for creating desktop applications in C++: https://github.com/aseprite/laf
For this reason, in macOS, we cannot rely on the default NSApplication.run() event loop, so we decided to implement our custom event loop using the nextEventMatchingMask method.
Then, when a window is in fullscreen mode, for some reason the window stops receiving mouseMove events when the mouse pointer enters an area at the top of the window.
You can see this issue in action by trying the following example project:
https://github.com/martincapello/custom-event-loop-issue
This project just opens one window and uses a custom event loop, it displays the current mouse position at every mouseMove event received, and when the aforementioned area is entered it suddenly stops updating.
There is also a video showing how to reproduce it.
I was able to see that when the position stops updating, we still receive mouseMove events, but for a different window: a borderless window that is added to the NSApplication.windows collection when switching to fullscreen, and which seems to be taking the mouseMove events before they reach the main window.
Also, this issue doesn't happen when using the default NSApplication.run method, despite the borderless windows being added as well.
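For context, a custom loop of the kind described usually has the shape below; one hedged guess is that the difference from NSApplication.run lies in what surrounds sendEvent: (run-loop mode, updateWindows, and AppKit's own bookkeeping), which may affect how the fullscreen "reveal" window routes tracking events. A minimal sketch, assuming a running flag of your own:
// Skeleton of a custom AppKit event loop: dequeue each event and forward it
// through -sendEvent: so AppKit can do its own routing.
while (running) {
    @autoreleasepool {
        NSEvent *event = [NSApp nextEventMatchingMask:NSEventMaskAny
                                            untilDate:[NSDate distantFuture]
                                               inMode:NSDefaultRunLoopMode
                                              dequeue:YES];
        if (event != nil) {
            [NSApp sendEvent:event];
        }
        [NSApp updateWindows];
    }
}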
Consider this Swift struct:
public struct Example
{
    public func foo(callback: () -> Void)
    {
        ....
    }

    public func blah(i: Int)
    {
        ....
    }

    ....
}
Using Swift/C++ interop, I can create Example objects and call methods like blah. But I can't call foo because Swift/C++ interop doesn't currently support passing closures (right?).
On the other hand, Swift/objC does support passing objC blocks to Swift functions. But I can't use that here because Example is a Swift struct, not a class. So I could change it to a class, and update everything to work with reference rather than value semantics; but then I also have to change the objC++ code to create the object and call its methods using objC syntax. I'd like to avoid that.
Is there some hack that I can use to make this possible? I'm hoping that I can wrap a C++ std::function in some sort of opaque wrapper and pass that to Swift, or something.
Thanks for any suggestions!
How do I implement the same NavigationSplitView with a sidebar in AppKit?
Basically I have this code:
import SwiftUI
struct ContentView: View {
    var body: some View {
        NavigationSplitView {
            // Sidebar
            List {
                NavigationLink("Item 1", value: "Item 1 Details")
                NavigationLink("Item 2", value: "Item 2 Details")
                NavigationLink("Item 3", value: "Item 3 Details")
            }
            .navigationTitle("Items")
        } content: {
            // Main content (detail view for selected item)
            Text("Select an item to see details.")
                .padding()
        } detail: {
            // Detail view (for the selected item)
            Text("Select an item from the sidebar to view details.")
                .padding()
        }
    }
}

struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
and wanted to somehow convert it to AppKit. I tried to use an NSSplitViewController, but I still don't get that sidebar or the button to collapse it. How do I go about this?
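A hedged sketch of the AppKit equivalent: NSSplitViewItem has dedicated factory methods that give you the translucent, collapsible sidebar behavior, and NSSplitViewController provides the built-in toggleSidebar: action. Here sidebarVC, contentVC, and detailVC are assumed view controllers of your own:
// Build a three-column split view controller with a real sidebar item.
NSSplitViewController *splitVC = [[NSSplitViewController alloc] init];
[splitVC addSplitViewItem:[NSSplitViewItem sidebarWithViewController:sidebarVC]];
[splitVC addSplitViewItem:[NSSplitViewItem contentListWithViewController:contentVC]];
[splitVC addSplitViewItem:[NSSplitViewItem splitViewItemWithViewController:detailVC]];
window.contentViewController = splitVC;

// For the collapse button, include NSToolbarToggleSidebarItemIdentifier among your
// toolbar delegate's item identifiers; AppKit supplies a standard item that sends
// -toggleSidebar: down the responder chain.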
I found that when I put a webView on the screen and then remove it, several private UITableView properties, including firstResponderView, firstResponderIndexPath, and firstResponderViewType, change. These properties are hidden and I cannot modify them. firstResponderView strongly holds my cell, so the cell cannot receive tableView:didEndDisplayingCell:forRowAtIndexPath: when it slides off screen as expected. What should I do to stop firstResponderView from strongly holding my cell, or how can I get it released?
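A hedged, untested workaround idea: if the table's internal bookkeeping is tracking the web view as first responder, forcing the responder to resign before tearing the view down may let UIKit drop its strong reference:
// Clear first-responder state before removing the web view, so UITableView's
// internal firstResponderView reference can be released.
[self.view endEditing:YES];      // resigns whatever subview is first responder
[webView removeFromSuperview];   // then remove the web view as before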
I'm primarily an AppleScript writer and only a novice programmer, using ChatGPT to help me with the legwork. It has helped me write a functioning app that builds a menu structure based on the scripts I have in the Scripts directory used by the script menu, and then runs the AppleScripts. When I distribute the app to my desktop and run it, the scripts that access other apps, like InDesign, will cause those apps to launch but not actually do anything. I included the IDs for each app in the entitlements dictionary and have given the app Full Disk Access in System Settings, but it's not functioning as I'd expect.
I know there are apps like Alfred that allow you to run scripts from a keystroke, but I'm building this for others I work with so they can also access info about each script, what it does, and how to use it from the menu, as well as key commands to run them.
Not sure what else to say, but if this sounds like a simple fix to anyone, please let me know.
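For reference, sending Apple events from a sandboxed or hardened app usually requires both an entitlement and a usage description, and the consent involved is the Automation permission (System Settings > Privacy & Security > Automation), which is separate from Full Disk Access. A hedged sketch of the two pieces, with a placeholder description string:
<!-- In the app's .entitlements file: -->
<key>com.apple.security.automation.apple-events</key>
<true/>

<!-- In Info.plist, so macOS can show the Automation consent prompt: -->
<key>NSAppleEventsUsageDescription</key>
<string>This app runs AppleScripts that control other applications.</string>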
The Apple documentation for [UIViewController willMoveToParentViewController:] states,
Called just before the view controller is added or removed from a container view controller.
Hence, the view controller being popped should appear at the top of the navStack when willMoveToParentViewController: is called.
In iPadOS 16 and 17 the view controller being popped appears at the top of the navStack when willMoveToParentViewController: is called.
In iPadOS 18 the view controller being popped has already been (incorrectly) removed from the navStack. We confirmed this bug using iPadOS 18.2 as well as 18.0.
Our app relies upon the contents of the navStack within willMoveToParentViewController: to track the user's location within a folder hierarchy.
Our app works smoothly on iPadOS 16 and 17. By contrast, our customers running iPadOS 18 have lost client data and corrupted their data folders as a result of this bug! These customers are angry; not surprisingly, some have asked for refunds and submitted negative app reviews.
Why doesn't willMoveToParentViewController: provide the correct navStack in iPadOS 18.2?
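Not an answer to the "why", but a hedged mitigation sketch while the behavior differs across iOS versions: derive the stack from the UINavigationControllerDelegate callback instead of from inside willMoveToParentViewController:, since the delegate hands you the navigation controller whose viewControllers already reflect the pop consistently:
// Track the folder hierarchy from the navigation delegate instead of relying
// on the navStack inside willMoveToParentViewController:.
- (void)navigationController:(UINavigationController *)navigationController
      willShowViewController:(UIViewController *)viewController
                    animated:(BOOL)animated {
    NSArray<UIViewController *> *stack = navigationController.viewControllers;
    // Update the app's notion of the current folder from `stack` here.
}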
I'm working on a cross-platform AI app. It is a CMake project. The inference part should be built as a separate library on Windows and macOS. On macOS it should be built with Objective-C and Core ML.
Here are my steps, roughly:
Create an Xcode project for the Core ML inference and build it as a static library. The models are compiled to ".mlmodelc", and the code is compiled into a binary ".a" lib.
Create a CMake project for the app, and link the ".a" lib built by Xcode.
Run the app.
I initialize the Core ML model like this (just for demonstration):
#include "det.h" // the model header generated by xcode
auto url = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/%@", dir, @"det.mlmodelc"]];
auto model = [[det alloc] initWithContentsOfURL:url error:&error]; // no error
The url is valid, and the initialization doesn't report any error. However, when I try to do inference using code like this:
auto cvPixelBuffer = createCVPixelBuffer(960, 960); // util function
auto preds = [model predictionFromImage:cvPixelBuffer error:NULL];
The output preds is null, and I get these errors:
2024-12-10 14:52:37.678201+0800 望言OCR[50204:5615023] [e5rt] E5RT encountered unknown exception.
2024-12-10 14:52:37.678237+0800 望言OCR[50204:5615023] [coreml] E5RT: E5RT encountered an unknown exception. (11)
2024-12-10 14:52:37.870739+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870758+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870760+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870769+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870845+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870848+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870849+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870853+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870857+0800 望言OCR[50204:5615023] [common] start: ANEDeviceOpen() failed : ret=23 :
It seems that Core ML failed to open the ANE device. Does anything need to be done before using a Core ML model as a library in a CMake or other non-Xcode project?
By the way, code like the above works in a native Xcode app with Core ML (I tested this before). So I guess I missed some environment initialization in my non-Xcode project?
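One hedged workaround to try while investigating: tell Core ML not to use the Neural Engine at all via MLModelConfiguration, since the failure in the logs is specifically the ANE device open. Xcode-generated model classes typically expose an initializer that accepts a configuration:
// Restrict compute units so Core ML skips the ANE, which appears to be what
// fails to open when running outside a normal app bundle.
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndGPU; // CPU/GPU only, no Neural Engine
NSError *error = nil;
det *model = [[det alloc] initWithContentsOfURL:url configuration:config error:&error];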
I have an iOS application with a tab bar and a container view.
The container view changes according to the tab I have selected.
The issue I'm facing is that my table view shows all cells properly on older iPhones (iPhones with a home button), while on newer iPhones (iPhones with a home indicator) the table view doesn't scroll to the end and hides the bottom cell.
I have tried adding all the necessary constraints and content insets, but it still gives me the same issue.
I have different classes for the tab bar, the tab bar + container view, and all related view controllers based on the tab selected.
Please help me find a solution to this.
Thanks
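A hedged guess, given the home-button/home-indicator split: the container's bottom edge is probably pinned to the superview rather than to the safe area, so the last cells sit under the home indicator. A minimal sketch of pinning the table (or container) view to the safe-area layout guide:
// Pin the table view to the safe-area layout guide so the last cells clear
// the home indicator on devices without a home button.
tableView.translatesAutoresizingMaskIntoConstraints = NO;
UILayoutGuide *guide = self.view.safeAreaLayoutGuide;
[NSLayoutConstraint activateConstraints:@[
    [tableView.topAnchor constraintEqualToAnchor:guide.topAnchor],
    [tableView.leadingAnchor constraintEqualToAnchor:guide.leadingAnchor],
    [tableView.trailingAnchor constraintEqualToAnchor:guide.trailingAnchor],
    [tableView.bottomAnchor constraintEqualToAnchor:guide.bottomAnchor]
]];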
Hi everyone,
I’m working on building a passwordless login system on macOS using the NameAndPassword module. As part of the implementation, I’m verifying if the password provided by the user is correct before passing it to the macOS login window.
Here’s the code snippet I’m using for authentication:
// Create Authorization reference
AuthorizationRef authorization = NULL;

// Define Authorization items
AuthorizationItem items[2];
items[0].name = kAuthorizationEnvironmentPassword;
items[0].value = (void *)password;
items[0].valueLength = (password != NULL) ? strlen(password) : 0;
items[0].flags = 0;

items[1].name = kAuthorizationEnvironmentUsername;
items[1].value = (void *)userName;
items[1].valueLength = (userName != NULL) ? strlen(userName) : 0;
items[1].flags = 0;

// Prepare AuthorizationRights and AuthorizationEnvironment
AuthorizationRights rights = {2, items};
AuthorizationEnvironment environment = {2, items};

// Create the authorization reference
[Logger debug:@"Authorization creation start"];
OSStatus createStatus = AuthorizationCreate(NULL, &environment, kAuthorizationFlagDefaults, &authorization);
if (createStatus != errAuthorizationSuccess) {
    [Logger debug:@"Authorization creation failed"];
    return false;
}

// Set authorization flags (disable interaction)
AuthorizationFlags flags = kAuthorizationFlagDefaults | kAuthorizationFlagExtendRights;

// Attempt to copy rights
OSStatus status = AuthorizationCopyRights(authorization, &rights, &environment, flags, NULL);

// Free the authorization reference
if (authorization) {
    AuthorizationFree(authorization, kAuthorizationFlagDefaults);
}

// Log the result and return
if (status == errAuthorizationSuccess) {
    [Logger debug:@"Authentication passed"];
    return true;
} else {
    [Logger debug:@"Authentication failed"];
    return false;
}
}
This implementation works perfectly when the password is correct. However, if the password is incorrect, it tries to re-invoke the macOS login window, which is already open, even though I did not use the kAuthorizationFlagInteractionAllowed flag. This causes the process to get stuck and makes it impossible to proceed.
I’ve tried logging the flow to debug where things go wrong, but I haven’t been able to figure out how to stop the system from re-calling the login window.
Does anyone know how to prevent this looping behavior or gracefully handle an incorrect password in this scenario? I’d appreciate any advice or suggestions to resolve this issue.
Thanks in advance for your help!
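One hedged thing to look at in the snippet above: the same two environment items are also passed as the rights (AuthorizationRights rights = {2, items}), which asks authd to authorize rights literally named "password" and "username", and that may be what triggers the extra prompt. A sketch of naming a single right instead; the right name below is an assumption to be replaced by whatever rule the plugin really needs:
// Request one named right; "authenticate-session-owner" is a placeholder
// assumption - substitute the authorization right your flow actually checks.
AuthorizationItem rightItem = { "authenticate-session-owner", 0, NULL, 0 };
AuthorizationRights rights = { 1, &rightItem };
OSStatus status = AuthorizationCopyRights(authorization, &rights, &environment,
                                          kAuthorizationFlagExtendRights | kAuthorizationFlagDestroyRights,
                                          NULL);
// Without kAuthorizationFlagInteractionAllowed, a wrong password should come
// back as an error status here rather than re-prompting.
if (status != errAuthorizationSuccess) {
    // Treat as authentication failure; do not retry the UI.
}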
Hello,
I'm currently working on an authorization plugin for macOS. I have a custom UI implemented using SFAuthorizationPluginView (NameAndPassword), which prompts the user to input their password. The plugin is running in non-privileged mode, and I want to store the password securely in the system keychain.
However, I came across this article that states the system keychain can only be accessed in privileged mode. At the same time, I read that custom UIs, like mine, cannot be displayed in privileged mode.
This presents a dilemma:
In non-privileged mode: I can show my custom UI but can't access the system keychain.
In privileged mode: I can access the system keychain but can't display my custom UI.
Is there any workaround to achieve both? Can I securely store the password in the system keychain while still using my custom UI, or am I missing something here?
Any advice or suggestions are highly appreciated!
Thanks in advance!
I want to understand how NSMutableString manages memory if I later need more than the capacity I specified during initialization. Does it allocate a new chunk of memory and deallocate the old one, or did it already allocate more memory than I asked for in the first place? I just want to understand exactly how this calculation is done.
And if I initialize an NSMutableString from Swift, do the same expansion principles apply there?
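For reference, the capacity argument is documented as only a hint: the string can grow past it, and Foundation handles any reallocation internally (typically by moving the contents to a larger buffer, amortized like a dynamic array; the exact growth factor is an implementation detail). The same applies when the object is created from Swift, since it is the same Foundation class. A minimal sketch:
// The capacity passed at init is a hint, not a limit; appending past it just
// causes NSMutableString to grow its internal storage transparently.
NSMutableString *s = [[NSMutableString alloc] initWithCapacity:16];
for (NSUInteger i = 0; i < 1000; i++) {
    [s appendString:@"x"]; // exceeding the 16-character hint is fine
}
NSLog(@"length = %lu", (unsigned long)s.length);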