I would like to visualize a point cloud taken from a lidar. Assuming I can get the XYZ values of every point (of which there may be hundreds or thousands), what is the most efficient way for me to create a point cloud using this information?
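One common approach for a cloud of this size is SceneKit's point primitive; below is a minimal sketch (the function name and point-size values are illustrative assumptions, and for very large clouds a Metal vertex buffer drawn as point primitives scales better):

import SceneKit

func makePointCloudNode(points: [SCNVector3]) -> SCNNode {
    // One geometry source holding every XYZ vertex.
    let vertexSource = SCNGeometrySource(vertices: points)

    // One index per point, drawn with the .point primitive type.
    let indices = (0..<points.count).map { UInt32($0) }
    let indexData = indices.withUnsafeBufferPointer { Data(buffer: $0) }
    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .point,
                                     primitiveCount: points.count,
                                     bytesPerIndex: MemoryLayout<UInt32>.size)
    element.pointSize = 2
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 4

    let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
    geometry.firstMaterial?.lightingModel = .constant
    return SCNNode(geometry: geometry)
}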
Hi everyone! I'm building a tvOS 18.5 app using SwiftUI in Xcode 16.4, and I'm having trouble reliably detecting button clicks from the 2nd generation Siri Remote.
.onMoveCommand(perform: handleMoveCommand)

func handleMoveCommand(_ direction: MoveCommandDirection) {
    switch direction {
    case .right: nextQuote()
    case .left: previousQuote()
    default: break
    }
}
Swiping left or right on the remote's touch surface works as expected; the callback fires every time.
However, when I press the physical left/right buttons (outside the touch surface), the input is unreliable. Sometimes it registers, other times I need to press multiple times or nothing happens at all.
I’ve confirmed the view is .focusable(true) and bound with @FocusState.
Is this a known limitation or bug with .onMoveCommand on recent tvOS versions? Or is there a more robust way to handle physical Siri Remote button presses?
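If .onMoveCommand keeps misbehaving, one hedged alternative is reading the remote's directional input through the GameController framework, which reports the Siri Remote as a micro gamepad independently of SwiftUI focus. A sketch (thresholds and handler names are illustrative, and the click-pad behavior of the 2nd generation remote may need tuning):

import GameController

// Keep the returned observer token alive for as long as you want to observe.
func observeSiriRemote(onLeft: @escaping () -> Void,
                       onRight: @escaping () -> Void) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                                  object: nil, queue: .main) { notification in
        guard let controller = notification.object as? GCController,
              let pad = controller.microGamepad else { return }
        pad.reportsAbsoluteDpadValues = true
        pad.dpad.valueChangedHandler = { _, x, _ in
            if x > 0.5 { onRight() }
            else if x < -0.5 { onLeft() }
        }
    }
}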
I have really enjoyed looking through the code and videos related to Metal 4. Currently, my interest is to update a ReSTIR project and take advantage of more robust ways to refit acceleration structures and more powerful ways to access resources.
I am working in Swift and have encountered a couple of puzzles:
What is the 'accepted' way to create an MTL4BufferRange to store indices and vertices?
How do I properly rewrite Swift code to build and compact an acceleration structure?
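For reference, this is roughly the pre-Metal 4 build-and-compact flow that a project like this would be migrating from (a hedged sketch using MTLAccelerationStructureCommandEncoder; the Metal 4 equivalents in the current beta may differ, and the blocking waitUntilCompleted is only for brevity):

import Metal

func buildAndCompact(device: MTLDevice, queue: MTLCommandQueue,
                     descriptor: MTLAccelerationStructureDescriptor) -> MTLAccelerationStructure? {
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    guard let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize),
          let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize, options: .storageModePrivate),
          let sizeBuffer = device.makeBuffer(length: MemoryLayout<UInt32>.size, options: .storageModeShared)
    else { return nil }

    // Pass 1: build the structure and write out its compacted size.
    guard let buildCommands = queue.makeCommandBuffer(),
          let buildEncoder = buildCommands.makeAccelerationStructureCommandEncoder() else { return nil }
    buildEncoder.build(accelerationStructure: accel, descriptor: descriptor,
                       scratchBuffer: scratch, scratchBufferOffset: 0)
    buildEncoder.writeCompactedSize(accelerationStructure: accel, buffer: sizeBuffer, offset: 0)
    buildEncoder.endEncoding()
    buildCommands.commit()
    buildCommands.waitUntilCompleted()

    // Pass 2: copy into a right-sized structure.
    let compactedSize = Int(sizeBuffer.contents().load(as: UInt32.self))
    guard let compacted = device.makeAccelerationStructure(size: compactedSize),
          let copyCommands = queue.makeCommandBuffer(),
          let copyEncoder = copyCommands.makeAccelerationStructureCommandEncoder() else { return nil }
    copyEncoder.copyAndCompact(sourceAccelerationStructure: accel, destinationAccelerationStructure: compacted)
    copyEncoder.endEncoding()
    copyCommands.commit()
    return compacted
}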
I do realize that this is all in beta and will happily look through code samples this fall. If other guidance is available earlier, that would be fabulous!
Thank you
After installing my notarized 3rd-party app in a Tahoe VM, its embedded Automator actions cannot be configured in Automator while defining a workflow: after adding the actions (with 3rd-party extensions enabled), their views / UI elements do not respond to any mouse event.
When enabling "show this action when running", the options can be changed during execution of the workflow. Needless to say, adjusting these action settings in Automator has worked for years, on macOS 12 - 15 and before.
Reported via Feedback Assistant (FB19015185).
Can anybody confirm this issue with Automator actions?
I am using Apple's original Lightning Digital AV Adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable.
I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:).
The callback is being called regularly, but with increasing delay relative to the CADisplayLink.timestamp, so the next time the callback is called, I have less and less time to draw the next frame (see the snippet below).
Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely.
Changing the external display's resolution (UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same).
When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable, it does not drift and it is very close to 0.0166667, as expected.
Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle ...or is the problem somewhere else?
Can the drifting be stopped?
func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
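For completeness, the display link is created roughly like this (a sketch living in the same object that owns onDisplayLinkUpdate; the target method must be exposed to Objective-C, e.g. declared @objc on an NSObject subclass, for the selector to resolve, and the index and frame-rate values are illustrative):

import UIKit

func startExternalDisplayLink(externalDisplayIdx: Int) -> CADisplayLink? {
    guard UIScreen.screens.indices.contains(externalDisplayIdx) else { return nil }
    let screen = UIScreen.screens[externalDisplayIdx]
    guard let link = screen.displayLink(withTarget: self,
                                        selector: #selector(onDisplayLinkUpdate(displayLink:))) else { return nil }
    link.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
    link.add(to: .main, forMode: .common)
    return link
}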
When preparing SwiftUI code that uses List with .listStyle(.plain) for iOS 26, the by-default sticky section headers combined with the new translucent top bars often cause unpleasantly overlapping text.
Two questions here:
Is there a modifier to make section headers non-sticky?
This would be helpful for cases where the translucent bar is a good fit and the section titles don't need to be sticky/pinned.
I found .listStyle(.grouped) can be an alternative in some cases, but this adds a gray background / additional padding to the section titles.
Is there a way to get a blurry material behind the section headers when they are sticking to the top bar?
This would be good for cases where the section header is important content-wise (like in the two-column example above, or for a data list categorized using sections that should always be visible as a point of reference).
I found that the scroll edge effects, i.e. .scrollEdgeEffectStyle(.hard, for: .top), do the trick for the top bar but don't affect attached sticky section headers (maybe they should?). I also played around with .toolbarBackground(...), but this didn't do anything useful for a nav bar in my experiments.
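For anyone wanting to reproduce the behavior, this is roughly the setup in question (a minimal sketch; section and row content are placeholders, and the modifier placement mirrors the experiments described above):

import SwiftUI

struct SectionedList: View {
    var body: some View {
        NavigationStack {
            List {
                ForEach(0..<5) { section in
                    Section("Section \(section)") {
                        ForEach(0..<10) { row in
                            Text("Row \(row)")
                        }
                    }
                }
            }
            .listStyle(.plain) // plain style pins section headers by default
            .scrollEdgeEffectStyle(.hard, for: .top) // hardens the top edge, but pinned headers are unaffected
            .navigationTitle("Example")
        }
    }
}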
Topic: UI Frameworks / SubTopic: SwiftUI
With Let's Encrypt having completely dropped support for OCSP recently [1], I wanted to ask if macOS has a means of keeping up to date with their CRLs and if so, roughly how often this occurs?
I first observed an issue where a revoked-certificate test site, "revoked.badssl.com" (cert signed by Let's Encrypt), was not getting blocked on any browser, when a revocation policy was set up using the SecPolicyCreateRevocation API, in tandem with the kSecRevocationUseAnyAvailableMethod and kSecRevocationPreferCRL flags.
After further investigation, I noticed that even on a fresh install of macOS, Safari does not block this test website, while Chrome and Firefox (usually) do, due to its revoked certificate. Chrome and Firefox both have their own means of dealing with CRLs, while I assume Safari uses the system Keychain and APIs.
I checked cert info for the site here [2]. It was issued on 2025-07-01 20:00 and revoked an hour later.
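For reference, the revocation policy setup described above looks roughly like this (a hedged sketch; the server trust would typically come from a URLSession authentication challenge, and the function name is illustrative):

import Security

func evaluateWithCRLPreferred(_ serverTrust: SecTrust, host: String) -> Bool {
    let sslPolicy = SecPolicyCreateSSL(true, host as CFString)
    let flags = CFOptionFlags(kSecRevocationUseAnyAvailableMethod | kSecRevocationPreferCRL)
    guard let revocationPolicy = SecPolicyCreateRevocation(flags) else { return false }

    SecTrustSetPolicies(serverTrust, [sslPolicy, revocationPolicy] as CFArray)
    SecTrustSetNetworkFetchAllowed(serverTrust, true) // allow revocation fetches during evaluation

    var error: CFError?
    return SecTrustEvaluateWithError(serverTrust, &error)
}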
[1] https://letsencrypt.org/2024/12/05/ending-ocsp/
[2] https://www.ssllabs.com/ssltest/analyze.html?d=revoked.badssl.com
Our app includes a 3D feature. To reduce the app's memory footprint in the background, we tear down the 3D content after the app enters the background. However, the teardown itself causes a problem: when it runs in the background, the app briefly triggers a background GPU rendering error with the following warning code:
OGPUMetalError: Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU Work from background) (00000006: kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
When this warning occurs, will the system tighten the app's background permissions, for example by limiting or shortening how long the app can stay alive in the background? And will Background App Refresh stop working as a result, causing Bluetooth iBeacon activation to fail?
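A minimal sketch (not from the original post) of gating GPU submissions on the app lifecycle, so teardown work is skipped once the app is already in the background; class and method names are illustrative:

import UIKit
import Metal

final class BackgroundAwareSubmitter {
    private var isInBackground = false
    private var observers: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        observers.append(center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                            object: nil, queue: .main) { [weak self] _ in
            self?.isInBackground = true
        })
        observers.append(center.addObserver(forName: UIApplication.willEnterForegroundNotification,
                                            object: nil, queue: .main) { [weak self] _ in
            self?.isInBackground = false
        })
    }

    // Encodes and commits GPU work only while the app is in the foreground;
    // GPU-side cleanup is deferred to the next foreground pass instead of
    // being committed from the background.
    func submit(on queue: MTLCommandQueue, encode: (MTLCommandBuffer) -> Void) {
        guard !isInBackground, let commandBuffer = queue.makeCommandBuffer() else { return }
        encode(commandBuffer)
        commandBuffer.commit()
    }
}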
Topic: Graphics & Games / SubTopic: Metal
I'm building a visionOS app where users can place canvases into the 3D environment. These canvases are RealityKit entities that render web content using a WKWebView. The web view is regularly snapshotted, and its image is applied as a texture on the canvas surface.
When the user taps on a canvas, a WindowGroup is opened that displays the same shared WKWebView instance. This works great: both the canvas in 3D and the WindowGroup reflect the same live web content.
The canvas owns the WKWebView and keeps a strong reference to it.
Problem:
Once the user closes the WindowGroup, the 3D canvas stops receiving snapshot updates. The snapshot loop task is still running (verified via logs), but WKWebView.takeSnapshot(...) never returns — the continuation hangs forever. No error is thrown, no image is returned.
If the user taps the canvas again (reopening the window), snapshots resume and everything works again.
@MainActor
private func getSnapshot() async -> UIImage? {
    guard !webView.isLoading else { return nil }

    let config = WKSnapshotConfiguration()
    config.rect = webView.bounds
    config.afterScreenUpdates = true

    return await withCheckedContinuation { continuation in
        webView.takeSnapshot(with: config) { image, _ in
            continuation.resume(returning: image)
        }
    }
}
What I’ve already verified:
The WKWebView is still in memory.
The snapshot loop (Task) is still running; it just gets stuck inside takeSnapshot(...). A sketch of the loop follows this list.
I tried keeping the WKWebView inside a hidden UIWindow (with .alpha = 0.01, .windowLevel = .alert + 1, and isHidden = false).
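The loop itself looks roughly like this (a sketch; snapshotTask and applyTexture are hypothetical stand-ins for the task property and the code that writes the image into the canvas entity's texture):

@MainActor
func startSnapshotLoop(every interval: Duration = .milliseconds(250)) {
    snapshotTask = Task {
        while !Task.isCancelled {
            // After the WindowGroup is closed, this await never resumes because
            // takeSnapshot(...) stops calling its completion handler.
            if let image = await getSnapshot() {
                applyTexture(image)
            }
            try? await Task.sleep(for: interval)
        }
    }
}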
Suspicion
It seems that once a WKWebView is passed into SwiftUI and rendered (e.g., via UIViewRepresentable), SwiftUI takes full control over its lifecycle: when the window is closed, SwiftUI tells WebKit that the view is gone, and WebKit reacts by stopping the WKWebView, even though it still exists in memory and is held by a strong reference in the canvas entity.
Right now, this feels like a one-way path:
Starting from the canvas, tapping it to open the WindowGroup works fine.
But once the WindowGroup is closed, the WebView freezes, and snapshots stop until the view is reopened.
Questions
Is there any way under visionOS to:
Keep a WKWebView rendering after its SwiftUI view (WindowGroup) is dismissed?
Prevent WebKit from suspending or freezing the view?
Embed the view in a persistent system-rendered context to keep snapshotting functional?
For context, here's the relevant SwiftUI setup:

struct MyWebView: View {
    var body: some View {
        if let webView = WebViewManager.shared.getWebView() {
            WebViewContainer(webView: webView)
        }
    }
}

struct WebViewContainer: UIViewRepresentable {
    let webView: WKWebView

    func makeUIView(context: Context) -> UIView {
        return webView
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}
Background: For iOS, I have built a custom video record app that records video at the highest possible FPS (supposed to be around 240 FPS but more like ~200 in practice). When I save this video to the user's camera roll, I notice this dynamic playback speed bar. This bar will speed up or slow down the video.
My question: How can I save my video such that this playback speed bar is constantly slow or plays at real time speed?
For reference, I have included the playback speed bar that I am talking about in the screenshot; you can find it in the Photos app when you record slow-motion video.
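As a starting point, inspecting the recorded file's frame timing before saving can at least confirm what Photos is being handed (a hedged diagnostic sketch; fileURL is the movie produced by the custom recorder and the function name is illustrative):

import AVFoundation

func logFrameTiming(of fileURL: URL) async throws {
    let asset = AVURLAsset(url: fileURL)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    let nominalFPS = try await track.load(.nominalFrameRate)
    let minFrameDuration = try await track.load(.minFrameDuration)
    print("nominal FPS: \(nominalFPS), min frame duration: \(CMTimeGetSeconds(minFrameDuration)) s")
}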
Topic: Media Technologies / SubTopic: Video
App Store Connect rejects the build with the following error:
Please correct the following issues and upload a new binary to App Store Connect. ITMS-90058: This bundle is invalid - The value for key CFBundleVersion [iphoneos] in the Info.plist file must be a period-separated list of at most three non-negative integers. Please find more information about CFBundleVersion at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleversion
But we double-checked at the app level; everywhere it appears, CFBundleVersion and the version strings are in the correct format.
We tried uploading via both Xcode and Transporter; both report success, but the build does not show up in TestFlight, and we receive the above error message via email.
data: {
    "attributes": {
        "cfBundleShortVersionString": "3.25.9",
        "cfBundleVersion": "25",
        "p
We also validated against the delivery log, and it shows the correct versioning as well.
When creating an icon using Icon Composer, I can't upload a build to TestFlight / App Store Connect.
Running on device from Xcode works fine, but as soon as I archive and upload to App Store Connect, I get an error saying the icon contains an alpha channel.
Hello, I'd like to change my username. Is there a way I can do that? The one I have was made by accident, and I would prefer not to create an entirely new Apple account.
Topic: Developer Tools & Services / SubTopic: Developer Forums
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text.
However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.
//
// TVView.swift
// Mistery
//
// Created by Joje on 31/07/25.
//
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation
final class TVView: UIView {
    // text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // static video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()
    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale
        // set the parameters
        filter.setDefaults()
        // center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)
        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH A POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }
    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV Image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV Text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV Button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
Since upgrading to Xcode 26 beta 4 and using the iOS 26 simulator to test our app, we've stopped being able to receive device tokens for the simulator from the development APNs environment.
The APNs environment is able to return device metadata (e.g. model, type, manufacturer), but no device tokens are present. When running the same app in the iOS 18.5 simulator, we are able to register the device against the same APNs environment and receive a valid device token.
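For reference, the registration path being tested is the standard one (a sketch; the delegate wiring, e.g. via @UIApplicationDelegateAdaptor in a SwiftUI app, is assumed):

import UIKit

final class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        application.registerForRemoteNotifications()
        return true
    }

    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        // Fires with a valid token on the iOS 18.5 simulator, but not on the iOS 26 simulator.
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        print("APNs device token: \(token)")
    }

    func application(_ application: UIApplication,
                     didFailToRegisterForRemoteNotificationsWithError error: Error) {
        print("APNs registration failed: \(error)")
    }
}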
I am creating a shooting game which uses Game Center for multiplayer and I want to make the Matchmaker view look different. How can I do that?
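If the stock matchmaker UI cannot be restyled enough, one hedged alternative is to run matchmaking programmatically with GKMatchmaker and present a fully custom interface; a minimal sketch (player counts and names are illustrative):

import GameKit

func findMatchProgrammatically(minPlayers: Int = 2, maxPlayers: Int = 4,
                               completion: @escaping (GKMatch?) -> Void) {
    let request = GKMatchRequest()
    request.minPlayers = minPlayers
    request.maxPlayers = maxPlayers

    GKMatchmaker.shared().findMatch(for: request) { match, error in
        if let error {
            print("Matchmaking failed: \(error)")
            completion(nil)
            return
        }
        completion(match) // drive your own lobby / loading UI from here
    }
}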
Dear Sirs,
I’ve written a virtual audio driver based on AudioDriverKit that runs as a dext in my macOS app. Sometimes, when waking from a sleep state, the recording side of my driver extension seems to hang and I don’t see any calls to my io_operation callback. A recording app such as a DAW then seems to hang when trying to start a recording. This doesn’t happen after short sleep states or after a fresh restart of my MacBook.
I already opened a case in Feedback Assistant on the 5th of May (FB17503622), which also includes a sysdiagnose and a ktrace, but I haven't received any feedback so far. Meanwhile some of our customers are getting angry, and I'd like to know if there's anything I can do to fix this problem on my side.
We're not sure whether this worked in previous macOS versions; we don't think we observed it before 15.3.1, but since 15.3.1 we have definitely seen this problem.
Best regards,
Johannes
I was very excited to see the addition of push notifications for widgets. However, upon further inspection, the way it is implemented seems too limiting for real-life apps.
I have an app for time tracking with my own backend. The app syncs with my backend in the main executable (main target). My widgets are more lightweight as they only access data in the shared app container, but they don't perform sync with the server directly to avoid race conditions with the main app.
I was under the impression that the general direction of the platform is to do most things in the main app target (App Intents also work that way for the most part), so the fact that the WidgetPushHandler just calls the widget's method to reload the timeline is very unfortunate. Ideally, the main app would also be woken up to perform the sync with the server; once that's done, I'd update the widget's timeline, which would then just read data from the shared app container.
So, my questions are:
What is the recommended way of updating the widgets when this push notification arrives in the case that the main app target needs to perform the sync first?
Is there any way to detect that the method
func timeline(for configuration: InteractiveTrackingWidgetConfigurationAppIntent, in context: Context)
was called as a result of the push notification being received?
Can I somehow schedule a background task from the widget's reloadTimeline() function?
How can I get the push token later, in case I don't save it right away the first time the WidgetPushHandler's pushTokenDidChange() is called?
Thank you for your work on this and hopefully for your answers.
FB19356256
I've been experimenting with Liquid Glass quite a bit and watched all the WWDC videos. I'm trying to create a glassy segmented picker, like the one used in Camera:
However, it seems that no matter what I do, there's no way to recreate a truly clear (passthrough) bubble that just warps the light underneath around the edges. Both Glass.regular and Glass.clear seem to add a blur that cannot be avoided, which runs counter to what clear ought to mean.
Here are my results:
I've used SwiftUI for my experiment but I went through the UIKit APIs and there doesn't seem to be anything that suggests full transparency.
Here is my test SwiftUI code:
struct GlassPicker: View {
    @State private var selected: Int?

    var body: some View {
        ScrollView([.horizontal], showsIndicators: false) {
            HStack(spacing: 0) {
                ForEach(0..<20) { i in
                    Text("Row \(i)")
                        .id(i)
                        .padding()
                }
            }
            .scrollTargetLayout()
        }
        .contentMargins(.horizontal, 161)
        .scrollTargetBehavior(.viewAligned)
        .scrollPosition(id: $selected, anchor: .center)
        .background(.foreground.opacity(0.2))
        .clipShape(.capsule)
        .overlay {
            DefaultGlassEffectShape()
                .fill(.clear) // Removes a semi-transparent foreground fill
                .frame(width: 110, height: 50)
                .glassEffect(.clear)
        }
    }
}
Is there any way to achieve the above result or does Apple not trust us devs with more granular control over these liquid glass elements?
Hello,
We are trying to add Tap to Pay to our app, which is built with React Native through Expo.
When running the command eas build --profile development --platform ios --local I get this error:
But when I open the project in Xcode, I can see that the Tap to Pay capability is enabled:
And when I look at my provisioning profiles in the Apple Developer portal, I can see that the profile named in the error above does have the Tap to Pay entitlement, contrary to what the error says:
I don't understand what's happening; can somebody help me?
I'm here if you need more information.
P.S.: Not sure if it's relevant, but the way we set things up with Expo, we have a different bundle identifier and app name depending on which environment we are building for (development/preview/production).