I want to be able to dynamically update the phrase dictionary in an AppShortcut. However, whenever I abstract the phrases, the shortcut fails to display. That is, I am trying to do:
```swift
static var phrases: [AppShortcutPhrase<MyIntent>] = ["\(.applicationName) hello world"]

AppShortcut(
    intent: MyIntent(),
    phrases: phrases,
    shortTitle: "hello world",
    systemImageName: ""
)
```
However, the following works:
```swift
AppShortcut(
    intent: MyIntent(),
    phrases: "\(.applicationName) hello world",
    shortTitle: "hello world",
    systemImageName: ""
)
```
So, what gives?
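For context, my understanding (an assumption from observed behavior, not something the docs state outright) is that Xcode extracts the phrases statically at build time, so hoisting them into a variable hides them from that extraction. The fully literal version can still carry multiple phrases, as long as each one is written out inline:

```swift
import AppIntents

struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: MyIntent(),
            phrases: [
                // Literals only: the build step scans this declaration
                // to register phrases with Siri ahead of time.
                "\(.applicationName) hello world",
                "hello world in \(.applicationName)"
            ],
            shortTitle: "hello world",
            systemImageName: "hand.wave"
        )
    }
}
```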
I am working on a React Native application where I want to modify the native text selection menu (the menu that appears when you long-press on text). Specifically, I want to add a custom option alongside the default ones like Copy, Look Up, Translate, Search Web, and Share.
Is there a way to modify the native text selection menu inside a WebView on iOS?
How can I add a custom menu option to the default text selection menu while keeping all the default options intact?
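On the native side, one approach that might work (a sketch for iOS 16+, untested here; the "Highlight" action and the JavaScript snippet are placeholders) is to subclass WKWebView and amend the edit menu via UIMenuBuilder, keeping the system items intact:

```swift
import UIKit
import WebKit

final class CustomMenuWebView: WKWebView {
    override func buildMenu(with builder: UIMenuBuilder) {
        super.buildMenu(with: builder) // keep Copy, Look Up, Translate, etc.
        let custom = UIAction(title: "Highlight") { [weak self] _ in
            // Placeholder handler: read the current selection via JavaScript.
            self?.evaluateJavaScript("window.getSelection().toString()") { result, _ in
                print("selected:", result as? String ?? "")
            }
        }
        builder.insertSibling(UIMenu(options: .displayInline, children: [custom]),
                              afterMenu: .standardEdit)
    }
}
```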
We were using the QuickLook delegate methods below to get the modified PDF file URL after sketching, but multi-line text is not laid out properly on the PDF and part of the text is missing. At the same time, the other PencilKit tools are working as expected.
```swift
func previewController(_ controller: QLPreviewController, didSaveEditedCopyOf previewItem: QLPreviewItem, at modifiedContentsURL: URL)
func previewController(_ controller: QLPreviewController, didUpdateContentsOf previewItem: any QLPreviewItem)
```
We tested all of this code on iOS 18.2.
Please let us know whether the URL of the edited PDF can be retrieved in any way without the text being corrupted.
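For reference, the delegate wiring we are describing looks roughly like this (a simplified sketch; PreviewDelegate and the destination path are placeholders):

```swift
import QuickLook

final class PreviewDelegate: NSObject, QLPreviewControllerDelegate {
    // .createCopy routes the edited file to didSaveEditedCopyOf;
    // .updateContents edits the original and fires didUpdateContentsOf.
    func previewController(_ controller: QLPreviewController,
                           editingModeFor previewItem: QLPreviewItem) -> QLPreviewItemEditingMode {
        .createCopy
    }

    func previewController(_ controller: QLPreviewController,
                           didSaveEditedCopyOf previewItem: QLPreviewItem,
                           at modifiedContentsURL: URL) {
        // Copy the edited PDF out of the temporary location before it is cleaned up.
        let destination = FileManager.default.temporaryDirectory.appendingPathComponent("edited.pdf")
        try? FileManager.default.copyItem(at: modifiedContentsURL, to: destination)
    }
}
```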
I am implementing a new Intents UI Extension and am noticing that the viewWillDisappear, viewDidDisappear, and deinit methods are not being called on my UIViewController that implements INUIHostedViewControlling, when pressing the "Done" button and dismissing the UIViewController.
This causes the memory for the UI Extension to slowly increase each time I re-run the UI Extension until it reaches the 120MB limit and crashes.
Any ideas as to what's going on here and how to solve this issue?
Worth noting: while the memory does continuously increase on iOS versions before 17, only on iOS 17 and later does the 120MB memory limit kick in and crash the extension.
After uploading the app to App Store Connect, Apple automatically generated a Default App Clip Link. However, the App Clip card only opens successfully if the main app is already installed on the device. If the main app is not installed, the App Clip card displays an image and the message "App Clip Unavailable"
What could cause this behavior, and how do I ensure the App Clip works without requiring the main app to be installed?
With "Requires full screen" Split View and Slide Over are disabled but the line on the bottom of the screen remains.
How can that line removed as when a video is displayed full screen?
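One partial measure, assuming the goal is the video-style fade-out rather than true removal, is the prefersHomeIndicatorAutoHidden override (sketch; PlayerViewController is a placeholder name):

```swift
import UIKit

final class PlayerViewController: UIViewController {
    // Lets the home indicator fade out after a period of inactivity,
    // the same behavior full-screen video playback gets. It cannot be
    // removed permanently; it reappears on touch.
    override var prefersHomeIndicatorAutoHidden: Bool { true }
}
```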
Hi.
I want to know which window currently gets hardware keyboard events (such as shortcut keys) on iPad.
Until iPadOS 15.0, I used UIApplication.shared.keyWindow (deprecated since iPadOS 13.0) together with didBecomeKeyNotification/didResignKeyNotification.
But as of iPadOS 15.0, the key window is managed by UIScene, not by UIApplication.
Each scene of my app always has just one window.
For my purpose, checking the deprecated UIApplication.shared.keyWindow is still effective, but didBecomeKeyNotification and didResignKeyNotification don't work, because they only fire for changes within a single scene.
So my questions are,
1. What is the new alternative to UIApplication.shared.keyWindow?
I know that a hack like

```swift
UIApplication.shared.connectedScenes.compactMap { $0 as? UIWindowScene }.first?.windows.filter { $0.isKeyWindow }.first
```

does not work, since the order of connectedScenes is unrelated to which scene gets hardware keyboard events.
2. What are the new alternatives to didBecomeKeyNotification/didResignKeyNotification that work across scenes?
The second question is the more crucial one, because for the first I can still use the deprecated UIApplication.shared.keyWindow.
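(For completeness, the non-deprecated shape of that lookup on iOS 15+ is the per-scene keyWindow; a sketch follows, with the same caveat that scene order says nothing about hardware keyboard routing.)

```swift
import UIKit

func firstKeyWindow() -> UIWindow? {
    UIApplication.shared.connectedScenes
        .compactMap { $0 as? UIWindowScene }
        .first { $0.activationState == .foregroundActive }?
        .keyWindow // UIWindowScene.keyWindow, iOS 15+
}
```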
Thanks.
By setting the PKCanvasView background color to blue, I can tell that a PKCanvasView is created normally for each PDFPage, but it does not respond to touch. Specifically, whether input comes from a finger or Apple Pencil, all touches on the page are handled by the PDFView (zoom, scroll, and so on), and the PKCanvasView cannot draw. How can I solve this?
```swift
import UIKit
import SwiftUI
import PDFKit
import PencilKit

class PDFAnnotatableViewController: UIViewController, PDFViewDelegate {
    private let pdfView = PDFView()
    private var pdfDocument: PDFDocument?
    let file: FileItem
    private var userSettings: UserSettings
    @Binding var selectedPage: Int
    @Binding var currentMode: Mode
    @Binding var latestPdfChatResponse: LatestPDFChatResponse
    @State private var pdfPageCoordinator = PDFPageCoordinator()
    @ObservedObject var userMessage: ChatMessage

    init(file: FileItem,
         userSettings: UserSettings,
         drawDataList: Binding<[DrawDataItem]>,
         selectedPage: Binding<Int>,
         currentMode: Binding<Mode>,
         latestPdfChatResponse: Binding<LatestPDFChatResponse>,
         userMessage: ChatMessage) {
        self.file = file
        self.userSettings = userSettings
        self._selectedPage = selectedPage
        self._currentMode = currentMode
        self._latestPdfChatResponse = latestPdfChatResponse
        self.userMessage = userMessage
        super.init(nibName: nil, bundle: nil)
        DispatchQueue.global(qos: .userInitiated).async {
            if let document = PDFDocument(url: file.pdfLocalUrl) {
                DispatchQueue.main.async {
                    self.pdfDocument = document
                    self.pdfView.document = document
                    self.goToPage(selectedPage: selectedPage.wrappedValue - 1)
                }
            }
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        setupPDFView()
    }

    private func setupPDFView() {
        pdfView.delegate = self
        pdfView.autoScales = true
        pdfView.displayMode = .singlePage
        pdfView.displayDirection = .vertical
        pdfView.backgroundColor = .white
        pdfView.usePageViewController(true)
        pdfView.displaysPageBreaks = false
        pdfView.displaysAsBook = false
        pdfView.minScaleFactor = 0.8
        pdfView.maxScaleFactor = 3.5
        pdfView.pageOverlayViewProvider = pdfPageCoordinator
        if let document = pdfDocument {
            pdfView.document = document
            goToPage(selectedPage: selectedPage)
        }
        pdfView.frame = view.bounds
        pdfView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(pdfView)
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handlePageChange),
            name: .PDFViewPageChanged,
            object: pdfView
        )
    }

    // Dealing with page turning
    @objc private func handlePageChange(notification: Notification) {
        guard let currentPage = pdfView.currentPage, let document = pdfView.document else { return }
        let currentPageIndex = document.index(for: currentPage)
        if currentPageIndex != selectedPage - 1 {
            DispatchQueue.main.async {
                self.selectedPage = currentPageIndex + 1
            }
        }
    }

    func goToPage(selectedPage: Int) {
        guard let document = pdfView.document else { return }
        if let page = document.page(at: selectedPage) {
            pdfView.go(to: page)
        }
    }

    // Switch function
    func togglecurrentMode(currentMode: Mode) {
        DispatchQueue.main.async {
            if self.currentMode == .none {
                self.pdfView.usePageViewController(true)
                self.pdfView.isUserInteractionEnabled = true
            } else if self.currentMode == .annotation {
                if let page = self.pdfView.currentPage {
                    if let canvasView = self.pdfPageCoordinator.getCanvasView(forPage: page) {
                        canvasView.isUserInteractionEnabled = true
                        canvasView.tool = PKInkingTool(.pen, color: .red, width: 20)
                        canvasView.drawingPolicy = .anyInput
                        canvasView.setNeedsDisplay()
                    }
                }
            }
        }
    }
}

class MyPDFPage: PDFPage {
    var drawing: PKDrawing?

    func setDrawing(_ drawing: PKDrawing) {
        self.drawing = drawing
    }

    func getDrawing() -> PKDrawing? {
        return self.drawing
    }
}

class PDFPageCoordinator: NSObject, PDFPageOverlayViewProvider {
    var pageToViewMapping = [PDFPage: PKCanvasView]()

    func pdfView(_ view: PDFView, overlayViewFor page: PDFPage) -> UIView? {
        var resultView: PKCanvasView? = nil
        if let overlayView = pageToViewMapping[page] {
            resultView = overlayView
        } else {
            let canvasView = PKCanvasView(frame: view.bounds)
            canvasView.drawingPolicy = .anyInput
            canvasView.tool = PKInkingTool(.pen, color: .systemYellow, width: 20)
            canvasView.backgroundColor = .blue
            pageToViewMapping[page] = canvasView
            resultView = canvasView
        }
        if let page = page as? MyPDFPage, let drawing = page.drawing {
            resultView?.drawing = drawing
        }
        return resultView
    }

    func pdfView(_ pdfView: PDFView, willEndDisplayingOverlayView overlayView: UIView, for page: PDFPage) {
        guard let overlayView = overlayView as? PKCanvasView, let page = page as? MyPDFPage else { return }
        page.drawing = overlayView.drawing
        pageToViewMapping.removeValue(forKey: page)
    }

    func savePDFDocument(_ pdfDocument: PDFDocument) -> Data {
        for i in 0..<pdfDocument.pageCount {
            if let page = pdfDocument.page(at: i) as? MyPDFPage, let drawing = page.drawing {
                let newAnnotation = PDFAnnotation(bounds: drawing.bounds, forType: .stamp, withProperties: nil)
                let codedData = try! NSKeyedArchiver.archivedData(withRootObject: drawing, requiringSecureCoding: true)
                newAnnotation.setValue(codedData, forAnnotationKey: PDFAnnotationKey(rawValue: "drawingData"))
                page.addAnnotation(newAnnotation)
            }
        }
        let options = [PDFDocumentWriteOption.burnInAnnotationsOption: true]
        if let resultData = pdfDocument.dataRepresentation(options: options) {
            return resultData
        }
        return Data()
    }

    func getCanvasView(forPage page: PDFPage) -> PKCanvasView? {
        return pageToViewMapping[page]
    }
}
```
Is there an error in my code? How can I make the PKCanvasView draw normally?
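One knob that may be relevant here, offered as an unverified assumption: iOS 16.4 added a markup mode to PDFView, which is supposed to route touches to page overlay views instead of the view's own pan/zoom gesture recognizers.

```swift
// Unverified: with isInMarkupMode false (the default), PDFView's own
// gesture recognizers may swallow touches before the overlay sees them.
pdfView.isInMarkupMode = true
```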
After the user clicks the Open button in the AppClip card, the AppClip launches, but the card keeps appearing whenever the user clicks Open. It doesn’t disappear until the user clicks the Close button on the card.
This issue started appearing two months ago and only occurs on iOS 17.6 and 17.7.
[demo video](https://drive.google.com/file/d/1vJ-7E5JSdO_xoVkDYxBJDkj8sm-dvxv1/view?usp=share_link)
We have an iOS app and we are using the same app for Mac Catalyst.
On iOS we are able to detect when the user takes a screenshot using UIApplicationUserDidTakeScreenshotNotification, but in Mac Catalyst it is not working.
Per the docs, UIApplicationUserDidTakeScreenshotNotification and UIScreenCapturedDidChangeNotification are both supported on Mac Catalyst 13.1+, but I am not getting screenshot notifications from either.
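A minimal observer sketch matching how we register on iOS (the print is a stand-in for our handling):

```swift
import UIKit

NotificationCenter.default.addObserver(
    forName: UIApplication.userDidTakeScreenshotNotification,
    object: nil,
    queue: .main
) { _ in
    print("screenshot taken") // never fires in the Mac Catalyst build
}
```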
Hi
I am successfully drawing TextKit 2-managed NSAttributedStrings into an NSBitmapImageRep, but enumerating the text layout fragments gives me bogus background drawing.
This is the core drawing code; it's pretty simple. I manage the flipped property myself since NSTextLayoutManager assumes a flipped coordinate system.
```swift
if let context = NSGraphicsContext(bitmapImageRep: self.textImageRep!) {
    NSGraphicsContext.current = context

    let rect = NSRect(origin: .zero, size: self.outputSize)
    NSColor.clear.set()
    rect.fill()

    // Flip the context
    context.cgContext.saveGState()
    context.cgContext.translateBy(x: 0, y: self.outputSize.height)
    context.cgContext.scaleBy(x: 1.0, y: -1.0)

    let textOrigin = CGPoint(x: 0.0, y: 0.0)
    let titleRect = CGRect(origin: textOrigin, size: self.themeTextContainer.size)
    NSColor.orange.withAlphaComponent(1).set()
    titleRect.fill()

    self.layoutManager.enumerateTextLayoutFragments(from: nil, using: { textLayoutFragment in
        // Get the fragment's rendering bounds
        let fragmentBounds = textLayoutFragment.layoutFragmentFrame
        print("fragmentBounds: \(fragmentBounds)")
        // Render the fragment into the context
        textLayoutFragment.draw(at: fragmentBounds.origin, in: context.cgContext)
        return true
    })

    context.cgContext.restoreGState()
}
NSGraphicsContext.restoreGraphicsState()
```
I have a mutable string with various paragraph styles, which I add to the layout manager / text storage like so:
```swift
let titleParagraphStyle = NSMutableParagraphStyle()
titleParagraphStyle.alignment = .center
titleParagraphStyle.lineBreakMode = .byWordWrapping
titleParagraphStyle.lineBreakStrategy = .standard

var range = NSMakeRange(0, self.attributedProgrammingBlockTitle.length)
self.attributedProgrammingBlockTitle.addAttribute(.foregroundColor, value: NSColor(red: 243.0/255.0, green: 97.0/255.0, blue: 97.0/255.0, alpha: 1.0), range: range)
self.attributedProgrammingBlockTitle.addAttribute(.backgroundColor, value: NSColor.cyan, range: range)
self.attributedProgrammingBlockTitle.addAttribute(.font, value: NSFont.systemFont(ofSize: 64), range: range)
self.attributedProgrammingBlockTitle.addAttribute(.paragraphStyle, value: titleParagraphStyle, range: range)

range = NSMakeRange(0, self.attributedThemeTitle.length)
self.attributedThemeTitle.addAttribute(.foregroundColor, value: NSColor.white, range: range)
self.attributedThemeTitle.addAttribute(.backgroundColor, value: NSColor.purple, range: range)
self.attributedThemeTitle.addAttribute(.font, value: NSFont.systemFont(ofSize: 48), range: range)
self.attributedThemeTitle.addAttribute(.paragraphStyle, value: NSParagraphStyle.default, range: range)

range = NSMakeRange(0, self.attributedText.length)
self.attributedText.addAttribute(.foregroundColor, value: NSColor.white, range: range)
self.attributedText.addAttribute(.backgroundColor, value: NSColor.yellow, range: range)
self.attributedText.addAttribute(.font, value: NSFont.systemFont(ofSize: 36), range: range)
self.attributedText.addAttribute(.paragraphStyle, value: NSParagraphStyle.default, range: range)

let allText = NSMutableAttributedString()
allText.append(self.attributedProgrammingBlockTitle)
allText.append(NSAttributedString(string: "\n\r"))
allText.append(self.attributedThemeTitle)
allText.append(NSAttributedString(string: "\n\r"))
allText.append(self.attributedText)

self.textStorage.textStorage?.beginEditing()
self.textStorage.textStorage?.setAttributedString(allText)
self.textStorage.textStorage?.endEditing()
self.layoutManager.ensureLayout(for: self.layoutManager.documentRange)
```
However, I get incorrect drawing for the background color attributes: the background's origin is at zero, and it is not correctly aligned with the text at all.
How can I get correct rendering of backgrounds from TextKit 2?
Here is an image of my output:
One of my clients is interested in developing a system similar to BrowserStack for internal team usage. Could you please guide me on how to approach the development of this system?
Specifically, the project requires:
Full iPhone screen recording.
Capturing and executing click events on the iPhone.
Do I need to obtain permission from Apple for these functionalities?
We are trying to implement Live Caller ID Lookup. There is a lack of documentation, and we would like to understand a few blockers that we have.
Based on this documentation page: https://developer.apple.com/documentation/identitylookup/setting-up-the-http-endpoints-for-live-caller-id-lookup:
What exactly should these endpoints accept and return? There is no detailed description of the request/response schema for any endpoint.
What are the usage scenarios of the "/key" method? Could we get a more detailed explanation of Evaluation Key usage, as it is not clear from the available documentation?
Based on this step by step implementation description: https://github.com/apple/live-caller-id-lookup-example/blob/main/Sources/PIRService/PIRService.docc/Authentication.md:
The 6th step: "Authentication server returns the list of public keys (potentially through a proxy)". What is the endpoint path and response format expected by the iOS system? Does this step refer to calling the '/config' endpoint?
The 8th step: "Authentication server verifies the User Token and returns the public key that is associated with the User Tier. ..." What is the endpoint path and request/response format expected by the iOS system?
The 9th step: "The system constructs a Privacy Pass token request using the specific public key. The token request is sent along with the User Token to the authentication server". What is the request/response format expected by the iOS system?
The 11th step: "When a PIR request is made, the system attached an unused Privacy Pass token to the request. The PIR node can use the public key to verify that the token is valid and that assures that the request is authorized". What is the request/response format expected by the iOS system?
On executing refreshPIRParameters from the app, I get this error:

```swift
LiveCallerIDLookupManager.shared.refreshPIRParameters(forExtensionWithIdentifier: LiveCallerIdExtensionName)
```

Unable to query status due to errors: server error (Access DeniedAccess DeniedYou don't have permission to access "http://www.example.com/config" on this server.Reference #18.cdde00d4.1738074526.3a38870https://errors.edgesuite.net/18.cdde00d4.1738074526.3a38870)

Why does it try to reach example.com/config instead of our serviceURL or tokenIssuerURL?
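For completeness, the place we supply those URLs follows the shape from the example repo (a sketch with placeholder values; if the system were reading a different or default context, that might explain the example.com fetch):

```swift
import Foundation
import IdentityLookup

// Placeholder values; the real endpoints go here.
let context = LiveCallerIDLookupExtensionContext(
    serviceURL: URL(string: "https://pir.ourdomain.example")!,
    tokenIssuerURL: URL(string: "https://issuer.ourdomain.example")!,
    userTierToken: Data(base64Encoded: "AAAA")!
)
```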
Any help would be appreciated.
Hi all!
Based on documentation: https://developer.apple.com/documentation/carplay/cplistitem/handler
If you need to perform asynchronous tasks, dispatch them to a background queue and call the completion closure or completionBlock when they complete.
In the normal case, it works perfectly. But if it takes "too long", for example 10 seconds (streaming with retries, app business logic), calling the completionBlock inside the handler doesn't do anything.
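For reference, the pattern in question looks roughly like this (a sketch; startStreamingWithRetries stands in for the app business logic that can take ~10 seconds):

```swift
import CarPlay

func startStreamingWithRetries() {
    Thread.sleep(forTimeInterval: 10) // stand-in for slow business logic
}

let item = CPListItem(text: "Radio station", detailText: nil)
item.handler = { _, completion in
    DispatchQueue.global(qos: .userInitiated).async {
        startStreamingWithRetries() // long-running work off the main queue
        DispatchQueue.main.async {
            completion() // appears to be ignored when it arrives this late
        }
    }
}
```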
Is there a timeout on the completionBlock?
Thanks!
According to the MVVM design pattern, one of my views depends on many properties in my model. Can I use something like @Published var model = MyModel()? Will there be a large performance loss? Will the UI be refreshed when unrelated properties in the model are modified? What is the best practice in this case?
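For concreteness, here is the pattern in question (a sketch; MyModel is a stand-in). With ObservableObject, a write to any @Published property fires objectWillChange, so every observing view re-evaluates its body even for fields it never reads; body re-evaluation is usually cheap, and the iOS 17 @Observable macro instead tracks which properties a view actually accesses:

```swift
import SwiftUI

struct MyModel {
    var title = ""
    var count = 0
}

final class MyViewModel: ObservableObject {
    // Any write to `model` (even a field the view never reads) fires
    // objectWillChange, so every observing view re-evaluates its body.
    @Published var model = MyModel()
}

struct TitleView: View {
    @StateObject private var viewModel = MyViewModel()
    var body: some View {
        Text(viewModel.model.title) // re-evaluated when `count` changes too
    }
}
```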
I am working on an endpoint DLP solution project, and I want to monitor paste operations performed by the user. When the user pastes via keyboard shortcuts, I can monitor the key events using an event callback. But if the user pastes through the GUI, how can I monitor that?
I filed FB16332997 about the VERY high snowfall estimates I'm seeing in WeatherKit and iOS Weather. I initially thought something was wrong with my weather app but I verified the numbers with the iOS Weather app and another third party weather app.
For Atlanta last week it was saying 7.5" when it ended up being 2" (which I can live with).
Two days ago it reported there could be 16" of snow in northern Florida. That's impossible!
This morning it was reporting that Niceville could have 6-7" of snow, which would be significantly more than highest amount in recorded history for Florida (where snow is extremely rare).
It almost makes me wonder whether the liquid precipitation value is actually the real snowfall amount, and that value is then incorrectly being converted to snowfall on top of it.
Hello,
Please guide me: is there a way to show a simple toast telling the user that an action has been performed? When a user taps a button, an API returns a status, based on which I need to show an appropriate message as a toast. Is this possible in CarPlay? If not, why not? Please suggest an alternative.
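In case it helps frame the question: CarPlay templates don't include a toast/HUD, so the closest approximation I can think of (a sketch, not an endorsed pattern) is briefly presenting a CPAlertTemplate and dismissing it:

```swift
import CarPlay

// interfaceController comes from the CPTemplateApplicationScene delegate;
// the message text is a placeholder.
func showToast(_ message: String, on interfaceController: CPInterfaceController) {
    let alert = CPAlertTemplate(titleVariants: [message], actions: [])
    interfaceController.presentTemplate(alert, animated: true)
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        interfaceController.dismissTemplate(animated: true)
    }
}
```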
Awaiting your response
Thanks a lot!!
Problem Description
We are developing an app for iOS and iPadOS that involves extensive custom drawing of paths, shapes, text, etc. To improve drawing and rendering speed, we use CARenderer to generate cached images (CGImage) on a background thread. We adopted this approach based on this StackOverflow post: https://stackoverflow.com/a/75497329/9202699.
However, we are experiencing frequent crashes in our production environment that we can hardly reproduce in our development environment. Despite months of debugging and seeking support from DTS and the Apple Feedback platform, we have not been able to fully resolve this issue. Our recent crash reports indicate that the crashes occur when calling CATransaction.commit().
We suspect that CATransaction may not be functioning properly outside the main thread. However, based on feedback from the Apple Feedback platform, we were advised to use CATransaction.begin() and CATransaction.commit() on a background thread.
If anyone has any insights, we would greatly appreciate it.
Code Sample
The line CATransaction.commit() is causing the crash: [EXC_BREAKPOINT: com.apple.root.****-qos.cooperative]
```swift
private let transactionLock = NSLock() // to ensure one transaction at a time
private let device = MTLCreateSystemDefaultDevice()!

@inline(never)
static func drawOnCGImageWithCARenderer(
    layerRect: CGRect,
    itemsToDraw: [ItemsToDraw]
) -> CGImage? {
    // We have encapsulated everything related to CALayer and its
    // associated creations and manipulations within CATransaction
    // as suggested by engineers from Apple Feedback Portal.
    transactionLock.lock()
    CATransaction.begin()

    // Create the root layer.
    let layer = CALayer()
    layer.bounds = layerRect
    layer.masksToBounds = true

    // Add one sublayer for each item to draw.
    itemsToDraw.forEach { item in
        // We have thousands or hundreds of thousands of drawing items to add.
        // Each drawing item may produce a CALayer, CAShapeLayer or CATextLayer.
        // This is also why we want to utilise CARenderer to leverage GPU rendering.
        layer.addSublayer(
            item.createCALayerOrCATextLayerOrCAShapeLayer()
        )
    }

    // Create MTLTexture and CARenderer.
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm,
        width: Int(layer.frame.size.width),
        height: Int(layer.frame.size.height),
        mipmapped: false
    )
    textureDescriptor.usage = [MTLTextureUsage.shaderRead, .shaderWrite, .renderTarget]
    let texture = device.makeTexture(descriptor: textureDescriptor)!
    let renderer = CARenderer(mtlTexture: texture)
    renderer.bounds = layer.frame
    renderer.layer = layer

    /* ********************************************************* */
    // From our crash report, this is where the crash happens.
    CATransaction.commit()
    /* ********************************************************* */

    transactionLock.unlock()

    // Render the layers onto the MTLTexture using CARenderer.
    renderer.beginFrame(atTime: 0, timeStamp: nil)
    renderer.render()
    renderer.endFrame()

    // Wrap the MTLTexture in a CIImage.
    guard
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
        let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace]) else {
        return nil
    }

    // Convert CIImage to CGImage.
    let context = CIContext()
    return context.createCGImage(ciImage, from: ciImage.extent)
}
```
In TextKit 1, I can override drawBackground(forGlyphRange:at:) in NSLayoutManager to do custom background drawing. However, I'm not sure what the designated way of doing background drawing is in TextKit 2.
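For reference, the TextKit 1 hook I mean is this (minimal sketch):

```swift
import UIKit

final class HighlightLayoutManager: NSLayoutManager {
    override func drawBackground(forGlyphRange glyphsToShow: NSRange, at origin: CGPoint) {
        super.drawBackground(forGlyphRange: glyphsToShow, at: origin)
        // Custom background drawing (rounded highlights, etc.) went here.
    }
}
```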
One thing I've tried is doing custom drawing in my own CALayer used from the configureRenderingSurface delegate callback, but I'm unsure whether we are supposed to use this API, and whether doing so steals the textViewportLayoutController.delegate away from _UITextLayoutcanvasView.