In iOS, a user can set focus on a UITextField, and tapping a key on the virtual keyboard updates the text in the text field. This user action causes the relevant UITextFieldDelegate methods to be invoked, i.e. the handlers associated with the user entering text in the text field.
I'm trying to simulate this user action programmatically, in such a way that all the handlers/listeners which would otherwise have been invoked as a result of the user typing in the text field also get invoked when I do it programmatically. I have a specific use case for this in my application.
Below is how I'm performing this simulation:
I manually update the associated text field's value (UITextField.text).
Then I invoke the delegate manually, as textField.delegate?.textField?(textField, shouldChangeCharactersIn: nsRange, replacementString: replacementString).
I wanted to know if this is the right way to do this. Is there something better available that can be used, such that the simulation has the same effect as the user performing the update?
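For reference, here is a minimal sketch of the simulation described above. Note that assigning UITextField.text directly does not fire the editing control events, so the sketch sends .editingChanged explicitly; simulateTyping is my own helper name, not a system API.

import UIKit

// Minimal sketch of the simulation described above.
func simulateTyping(_ replacementString: String, in textField: UITextField) {
    let currentText = textField.text ?? ""
    let nsRange = NSRange(location: currentText.count, length: 0) // append at the end

    // Ask the delegate first, as the system would for a real keystroke.
    let shouldChange = textField.delegate?.textField?(
        textField,
        shouldChangeCharactersIn: nsRange,
        replacementString: replacementString
    ) ?? true

    if shouldChange {
        textField.text = currentText + replacementString
        // Setting .text directly does not fire control events, so send
        // .editingChanged explicitly to notify target-action listeners.
        textField.sendActions(for: .editingChanged)
    }
}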
I want to build a self-guided tour in my application for my users. For this I need to programmatically simulate various user interaction events, like button clicks, key presses in a UITextField, or moving the cursor around in the text field. These are only a few examples; it could be any user interaction or other event.
I wanted to know Apple's recommendation on how these simulations should be performed. Does Apple offer something like creating an event that can be executed directly for simulations? Is there a library available for this purpose?
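For concreteness, here is a hedged sketch of the kind of simulation I mean, using only APIs I know exist (UIControl.sendActions(for:) and becomeFirstResponder()); tourButton and tourTextField are hypothetical views from my own UI:

import UIKit

// Hedged sketch: drive a control's registered targets/actions directly.
// tourButton and tourTextField are hypothetical views from the app's UI.
func runTourStep(tourButton: UIButton, tourTextField: UITextField) {
    // Simulate a tap: invokes every target/action registered for .touchUpInside.
    tourButton.sendActions(for: .touchUpInside)

    // Simulate focusing the text field: raises the keyboard on-device
    // and fires the editing-did-begin events.
    tourTextField.becomeFirstResponder()
}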
I understand two key concepts from desktop platforms:
Screen Mirroring – The same content is displayed on both the primary and external screens.
Screen Extension – The external display shows different content that complements what's on the main screen.
My question pertains to the second point: Is it possible to extend the display on iOS and iPadOS devices?
I'm referring to this Apple documentation, which explains how to extend content from an iOS/iPadOS device to an external display.
I tested this in a sample iOS Xcode project. In the iOS Simulator, I was able to detect an "external display" and present a separate UIWindow on it. However, when I tried the same on a real device (iPhone 15 connected to a MacBook Pro via cable), the external display connection was not detected.
I’d like to confirm whether screen extension is possible on a real iOS device. From my research, it appears that extension is only supported on iPadOS via Stage Manager, but I want to verify if there’s any way to achieve this on an iPhone. If so, are there any known apps that currently utilize extended display functionality on iOS?
If extension is not possible on iOS, why does the documentation mention iOS at all?
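For context, this is roughly how I detect the external display in my sample project (a sketch under the assumption of a scene-based lifecycle; the configuration name "External" is a placeholder that would have to exist in Info.plist):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     configurationForConnecting connectingSceneSession: UISceneSession,
                     options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        // On iOS 16+ a connected external screen arrives with this role.
        if connectingSceneSession.role == .windowExternalDisplayNonInteractive {
            return UISceneConfiguration(name: "External",
                                        sessionRole: connectingSceneSession.role)
        }
        return UISceneConfiguration(name: "Default",
                                    sessionRole: connectingSceneSession.role)
    }
}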
I am working on a SwiftUI project where I need to dynamically update the UI by adding or removing components based on some event. The challenge is handling complex UI structures efficiently while ensuring smooth animations and state management.
Example Scenario:
I have a screen displaying a list of items.
When a user taps an item, additional details (like a subview or expanded section) should appear dynamically.
If the user taps again, the additional content should disappear.
The UI should animate these changes smoothly without causing unnecessary re-renders.
My Current Approach:
I have tried using @State and if conditions to toggle views, like this:
struct ContentView: View {
    @State private var showDetails = false

    var body: some View {
        VStack {
            Button("Toggle Details") {
                showDetails.toggle()
            }
            if showDetails {
                Text("Additional Information")
                    .transition(.slide) // Using animation
            }
        }
        .animation(.easeInOut, value: showDetails)
    }
}
However, in complex UI scenarios where multiple components need to be shown/hidden dynamically, this approach is not maintainable and could cause performance issues. I need help with the questions below.
Questions:
State Management: Should I use @State, @Binding, or @ObservedObject for handling dynamic UI updates efficiently?
Best Practices: What are the best practices for structuring SwiftUI views to handle dynamic updates without excessive re-renders?
Performance Optimization: How can I prevent unnecessary recomputations when updating only specific UI sections?
Animations & Transitions: What is the best way to apply animations smoothly while toggling visibility of multiple components?
Advanced Approaches: Are there better techniques using @EnvironmentObject, ViewBuilder, or even GeometryReader for dynamically adjusting UI layouts?
Any insights, code examples, or resources would be greatly appreciated.
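For concreteness, here is a hedged sketch of the direction I have been considering: extracting the toggling section into its own small subview so that only that subview re-evaluates when the flag changes (ItemRow and DetailSection are made-up names for illustration):

import SwiftUI

// The detail section lives in its own view, so toggling showDetails
// re-evaluates DetailSection rather than the whole screen.
struct ItemRow: View {
    @State private var showDetails = false
    let title: String

    var body: some View {
        VStack(alignment: .leading) {
            Button(title) {
                withAnimation(.easeInOut) { showDetails.toggle() }
            }
            if showDetails {
                DetailSection(text: "Additional Information")
                    .transition(.slide)
            }
        }
    }
}

struct DetailSection: View {
    let text: String
    var body: some View {
        Text(text)
    }
}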
I have an Xcode project which contains both C++ and Swift code. In one of my use cases I am passing primitive-type variables from Swift to C++ by reference (the primitive types are listed here, as per the new C++-Swift interop documentation).
Swift code:
// primitive check code: Bool
var x: Bool = true
// When passing a variable as a reference, we need to explicitly use '&'
student.PassBoolAsReferenceType(&x) // interop call to C++ code
print(x)
C++ code:
void Student::PassBoolAsReferenceType(bool &pValue) noexcept
{
    std::cout << pValue << std::endl;
    pValue = false;
}
The above code fails during compilation with no clear error message: "Command SwiftCompile failed with a nonzero exit code".
However, all the other primitive types that I tested, like Int, Float, and Double, worked with the above code; only the Bool interop fails. Can someone explain why this is not possible for bool? I'm using the new interop introduced in Swift 5.9.
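One possible workaround I have been considering (an assumption on my part, not something confirmed by the interop documentation) is to change the C++ signature from bool & to bool *, e.g. void PassBoolAsPointerType(bool *pValue) noexcept. C++ pointers import into Swift as UnsafeMutablePointer<Bool>, so the Swift call site stays almost identical:

// Hypothetical call site, assuming the C++ side now takes bool*.
// Swift's inout-to-pointer conversion lets us keep passing '&x'.
var x: Bool = true
student.PassBoolAsPointerType(&x)
print(x) // false, if the C++ side writes through the pointer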
I have a UIKit application and it contains multiple UI components like UIWindow, UIView, UIButton, etc. I wanted to perform error handling for different OS calls in my application.
For example, when creating a UIImage using init(named:) initialiser, the documentation clearly states that if the UIImage object cannot be created then the initialiser returns nil value.
However, there are other UI components, like UIButton (or UIView), which are created using the init(frame:) initialiser, and for these the documentation does not mention any return value.
I wanted to know how to identify if the UIButton initialisation has failed.
How does Apple recommend we handle these APIs if they fail to create a button, for example in a case where creation fails due to insufficient memory?
Or does Apple guarantee that these APIs never fail? Is there some exception that is thrown? I would appreciate a somewhat detailed answer to these questions.
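For the one documented failable case above, the usual pattern I know of is a guard at the call site; the asset name "AppIcon" below is just a placeholder:

import UIKit

// UIImage(named:) is failable and returns nil when the asset is missing,
// so the failure is handled at the call site. UIButton(frame:) is not
// failable; it returns a non-optional instance, so there is no nil to check.
func makeIconButton() -> UIButton? {
    guard let icon = UIImage(named: "AppIcon") else { // hypothetical asset name
        print("Failed to load image asset")
        return nil
    }
    let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 44))
    button.setImage(icon, for: .normal)
    return button
}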
I was looking into error handling when rendering widgets (like UIButton, UIView, etc.) on the screen in iOS. I am painting the screen programmatically using Swift.
Consider a simple widget (say, UIButton) that we create using its initializer and then configure with methods like setTitle. These functions neither return any value upon success/failure, nor does the documentation mention any exceptions that would be raised upon failure.
https://developer.apple.com/documentation/uikit/uibutton/settitle(_:for:)
So how do we do error handling in these scenarios, in case the APIs fail for some reason, like a memory issue? There must be some failure scenarios for these APIs.
Hi, I need some help with an iOS application we are trying to make future-safe. We know that our app will require SwiftUI, so the app is made in that framework. However, we require some important elements that are available only in UIKit, so we've made a bridge that allows us to pass UIKit views to SwiftUI to display them. So most of the app actually has its UI made in UIKit.

We now need to use the Charts framework present in SwiftUI. We've used SwiftUI buttons in our UIKit code before by passing them through a UIHostingController, and we are currently considering doing the same for SwiftUI Charts. Just to recap: it's a SwiftUI iOS app that is mostly made in UIKit (through a bridge) but also has other SwiftUI elements injected into it.

What we want to know is: is this the best way to do this? Or is there a better way to have UIKit and SwiftUI work more comfortably with each other? The reason for such looping around is also that we are interoping our C++ code with Swift for this application, since we are building it for many other platforms and the business logic is in C++. Let me know if there are better ways to go about this!
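For reference, a minimal sketch of the hosting pattern we are describing, assuming iOS 16+ for Swift Charts; ChartView and DashboardViewController are hypothetical names from our codebase:

import Charts
import SwiftUI
import UIKit

// Hypothetical SwiftUI view wrapping a Swift Charts chart.
struct ChartView: View {
    var body: some View {
        Chart {
            BarMark(x: .value("Day", "Mon"), y: .value("Count", 3))
            BarMark(x: .value("Day", "Tue"), y: .value("Count", 5))
        }
    }
}

// UIKit side: embed the SwiftUI chart using standard child containment.
final class DashboardViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let hosting = UIHostingController(rootView: ChartView())
        addChild(hosting)
        hosting.view.frame = view.bounds
        hosting.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(hosting.view)
        hosting.didMove(toParent: self)
    }
}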
I have a UIKit application with some SwiftUI views (like a button widget, etc.) that are added as subviews using UIHostingController.
I wanted to understand the right way, per Apple's recommendation, to perform updates on these views, since UIKit and SwiftUI have different ways of operating.
In a pure SwiftUI application we use @State variables, and when they are modified the views are re-rendered. However, in a UIKit application we can directly modify a widget's properties, like color or font, on the object.
So my question is: should I get the hosting controller object for the SwiftUI view and then perform any update through that UIKit view? Is this the right way?
If not, what is the correct way? Can someone provide a detailed explanation?
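One pattern I have seen suggested (stating it as my understanding, not a confirmed Apple recommendation) is to drive the hosted view from an ObservableObject that the UIKit code mutates, so SwiftUI re-renders itself; ButtonModel and HostedButton are made-up names:

import SwiftUI
import UIKit

// Model shared between the UIKit layer and the hosted SwiftUI view.
final class ButtonModel: ObservableObject {
    @Published var title = "Initial"
}

struct HostedButton: View {
    @ObservedObject var model: ButtonModel
    var body: some View {
        Button(model.title) { }
    }
}

// The UIKit layer keeps a reference to the model, not to the view:
// let model = ButtonModel()
// let hosting = UIHostingController(rootView: HostedButton(model: model))
// ...later, from UIKit code:
// model.title = "Updated" // the hosted view re-renders automatically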
I am creating a UIKit application that contains SwiftUI views embedded using UIHostingController.
I have a particular approach for it, but it requires instantiating a SwiftUI view, creating a hosting controller object from it, and storing a reference to it, so that later, if I want to update the view, I can simply get the reference back and update the SwiftUI view through it.
I wanted to understand what Apple recommends here. Can we store a SwiftUI view instance? Does it cause any issues, or is it okay to do so?
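The approach I describe looks roughly like the sketch below; UIHostingController.rootView is a settable property, so the stored controller can be handed a fresh view value later (whether this is the recommended pattern is exactly my question):

import SwiftUI
import UIKit

final class Coordinator {
    // Storing the hosting controller, and with it the SwiftUI view value.
    let hosting = UIHostingController(rootView: Text("Before"))

    func refresh() {
        // rootView is settable; assigning it replaces the hosted view.
        hosting.rootView = Text("After")
    }
}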
I have created a progress indicator to simulate a progressing download task in the dock icon. I can see the progress bar appearing in the dock icon, but it is not getting updated when I invoke the updateProgress() method. Ideally it should have updated, and I'm not able to figure out the reason.
I have created the same NSProgressIndicator on an NSWindow and, with the same code, updating the progress bar works there. Is there anything I'm missing here? Below is the code I'm using:
class AppDelegate: NSObject, NSApplicationDelegate {
    var progressIndicator: NSProgressIndicator!
    let dockTile = NSApp.dockTile

    func applicationWillFinishLaunching(_ notification: Notification) {
        // Step 1: Create a progress bar (NSProgressIndicator)
        progressIndicator = NSProgressIndicator(frame: NSRect(x: 10, y: 10, width: 100, height: 20))
        progressIndicator.isIndeterminate = false
        progressIndicator.minValue = 0.0
        progressIndicator.maxValue = 100.0
        progressIndicator.doubleValue = 0.0
        progressIndicator.style = .bar
        dockTile.contentView = progressIndicator
        dockTile.display()

        // Update the progress bar for demonstration
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            self.updateProgress(50)
        }
    }

    func updateProgress(_ value: Double) {
        progressIndicator.doubleValue = value
        NSApp.dockTile.display()
    }
}
I have created an NSView inside an NSWindow. I'm trying to identify when the view gets clicked by the user. For this I'm using NSClickGestureRecognizer, but the registered method is not getting invoked. I have tried adding this to other widgets, like a button, but it does not work there either. Am I missing something?
class SelectionList: NSObject, NSTextFieldDelegate {
    let containerView = NSView()

    func createSelectionList(pWindow: NSWindow) {
        // created container View
        ...
        let clickRecognizer = NSClickGestureRecognizer()
        clickRecognizer.target = self
        clickRecognizer.buttonMask = 0x2 // right button
        clickRecognizer.numberOfClicksRequired = 1
        clickRecognizer.action = #selector(clickGestured)
        containerView.addGestureRecognizer(clickRecognizer)
    }

    @objc
    func clickGestured() {
        print("clicked")
    }
}
I want to create a bundled macOS application that can run in the background. This application should also be capable of running in a non-GUI environment.
How should I create the application, with the only condition being that it is bundled and can be launched in multiple ways, like double-clicking the app bundle or launching it as a daemon via the Unix executable?
I have a bundled macOS application. This is a non-interactive application where I'm performing some task on a worker thread while the main thread waits for this task to be completed. Sometimes this task can be time consuming.
I have observed that when I run the application via the bundle (double-click or the open command), the OS marks my application as not responding (this is evident as the app icon toggles in the dock and then shows "not responding").
However, if I run the Unix executable inside the bundle, the app runs and I do not see the not-responding status anywhere.
I wanted to understand if this is happening because my main thread is in a waiting state. If yes, what could I do to resolve it, given that my application logic demands the main thread wait for the worker thread to complete its task? Is there some way to use an event loop, like GCD?
Note: I cannot use the AppKit delegate event loop because my application will be run in a non-GUI context.
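A sketch of the GCD direction I am asking about, assuming the work can be kicked off at startup from main.swift; performLongRunningTask is a stand-in for my real task:

import Dispatch
import Foundation

// Stand-in for the real time-consuming work.
func performLongRunningTask() {
    Thread.sleep(forTimeInterval: 5)
}

// Run the work on a background queue instead of blocking the main thread.
DispatchQueue.global(qos: .userInitiated).async {
    performLongRunningTask()
    DispatchQueue.main.async {
        exit(0) // leave the process once the work is done
    }
}

// Park the main thread in GCD's main-queue event loop: it services
// main-queue blocks and never returns, so the process stays responsive.
dispatchMain()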
I have a .app file that I want to run as a daemon. There are two ways of running a .app as a daemon/agent in macOS:
Using the .app file: I can specify this in the daemon plist as:
<key>ProgramArguments</key>
<array>
    <string>/usr/bin/open</string>
    <string>/Applications/myApp.app</string>
</array>
Using the Unix executable within the .app file:
<key>ProgramArguments</key>
<array>
    <string>/Applications/myApp.app/Contents/MacOS/MyApp</string>
</array>
Basically, I wanted to know Apple's recommendation on how we should create the daemon plist.
For point 2, is it appropriate to use the Unix executable within the bundle? Will it cause any issues in the running application?
It would be helpful if there is some Apple documentation to support this.