With Swift being brought to new places, is anyone working on interoperability with PHP? I'd love to replace much of my PHP and JavaScript web code with Swift (and ideally SwiftUI for UI design). Are there any projects or people working in this space?
Swift
Swift is a powerful and intuitive programming language for Apple platforms and beyond.
Posts under Swift tag
I'm very displeased to have found out the hard way that
vDSP.FFT<T> where T : vDSP_FourierTransformable
and its associated vDSP_FourierTransformFunctions,
only does real-complex conversion!!!
This is horribly misleading - the only hint that it calls vDSP_fft_zrop() (split-complex real-complex out-of-place) under the hood, not vDSP_fft_zop()[1], is "A 1D single- and double-precision fast Fourier transform." - instead of "a single- and double-precision complex fast Fourier transform". Holy ******* ****.
Just look at how people mis-call this routine. Poor guy, realizing he had to pass log2n+1 lest only the first half of the array be transformed, without understanding why [2].
And poor me, having taken days to investigate why a simple Swift overlay vDSP.FFT.transform may execute at 1.4x the speed of vDSP_fft_zopt (out-of-place with temporary buffer)! [3]
[1]: or better, vDSP_fft_zopt with the temp buffer alloc/dealloc taken care of for us
[2]: for real-complex conversion, take a real signal of length 16. log2n is 4, no problem, but then the real and imaginary vectors are both of length 8. Also, vDSP_fft only works on integer powers of 2, so he had to choose the next integer power of 2 (i.e. 1<<(log2n-1)) instead of the plain length for his internal arrays.
[3]: you guessed it. fft_zrop(log2n-1, ...) vs. fft_zop(log2n, ...). Half the problem size lol.
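To make footnote [2] concrete, here is a minimal sketch of the classic split-complex packing using the plain C vDSP calls (all names and values below are illustrative, not taken from the linked threads):
import Accelerate

// A length-16 real signal is interleaved into two length-8 half-buffers
// before the real-to-complex FFT.
let signal: [Float] = (0..<16).map { Float($0) }   // length-16 real signal
let log2n = vDSP_Length(4)                         // 2^4 = 16 real samples
let halfN = signal.count / 2                       // 8 = split-complex half length

var real = [Float](repeating: 0, count: halfN)
var imag = [Float](repeating: 0, count: halfN)

real.withUnsafeMutableBufferPointer { realPtr in
    imag.withUnsafeMutableBufferPointer { imagPtr in
        var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
        // Interleave the 16 reals into 8 "complex" pairs: even samples -> realp, odd -> imagp.
        signal.withUnsafeBufferPointer { sigPtr in
            sigPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: halfN) {
                vDSP_ctoz($0, 2, &split, 1, vDSP_Length(halfN))
            }
        }
        // The real-to-complex FFT takes the full log2n (4) but operates on the
        // half-length split buffers -- the size mismatch the footnote warns about.
        if let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) {
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
            vDSP_destroy_fftsetup(setup)
        }
    }
}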
Now we have vDSP.DiscreteFourierTransform, which wraps vDSP_DFT routines and "calls FFT routines when the size allows it", and works too for interleaved complexes. Just go all DFT, right?
if let setup_f = try? vDSP.DiscreteFourierTransform(previous: nil, count: 8, direction: .forward, transformType: .complexComplex, ofType: DSPComplex.self) {
    // Done forward transformation
    // and scaled the results with 1/N
    // How about going reverse?
    if let setup_r = try? vDSP.DiscreteFourierTransform(previous: setup_f, count: 8, direction: .inverse, transformType: .complexComplex, ofType: DSPComplex.self) {
        // Error: cannot convert vDSP.DiscreteFourierTransform<DSPComplex> to vDSP.DiscreteFourierTransform<Float>
        // lolz
    }
}
This API appeared in macOS 12. Three years later, nobody has noticed. I'm at a loss for words.
The project builds successfully, but I am unable to archive it in Xcode 26.
error: "Command SwiftCompile failed with a nonzero exit code"
I'm running into a seemingly unsolvable compile problem with the new Xcode 26 and Swift 6.2.
Here's the issue.
I've got this code that was working before:
NSAnimationContext.runAnimationGroup({ (context) -> Void in
    context.duration = animated ? 0.5 : 0
    clipView.animator().setBoundsOrigin(p)
}, completionHandler: {
    self.endIgnoreFrameChangeEvents()
})
It's very simple. The clipView is a scrollView.contentView, "animated" is a Bool, and p is an NSPoint.
It captures those things, scrolls the clip view (animating if needed) to the point, and then calls a method in self to signal that the animation has completed.
I'm getting this error:
Call to main actor-isolated instance method 'endIgnoreFrameChangeEvents()' in a synchronous nonisolated context
So, I don't understand why so many of my callbacks are getting this error now, when they worked before, but it is easy to solve. There's also an async variation of runAnimationGroup. So let's use that instead:
Task {
    await NSAnimationContext.runAnimationGroup({ (context) -> Void in
        context.duration = animated ? 0.5 : 0
        clipView.animator().setBoundsOrigin(p)
    })
    self.endIgnoreFrameChangeEvents()
}
So, when I do this, I get a new error. Now it doesn't like the first closure, which it was perfectly happy with before.
Here's the error:
Sending value of non-Sendable type '(NSAnimationContext) -> Void' risks causing data races
Here are the various overloaded definitions of runAnimationGroup:
open class func runAnimationGroup(_ changes: (NSAnimationContext) -> Void, completionHandler: (@Sendable () -> Void)? = nil)
@available(macOS 10.7, *)
open class func runAnimationGroup(_ changes: (NSAnimationContext) -> Void) async
@available(macOS 10.12, *)
open class func runAnimationGroup(_ changes: (NSAnimationContext) -> Void)
The middle one is the one that I'm trying to use. The closure in this overload isn't marked Sendable. But let's try making it Sendable now to appease the compiler, since that seems to be what the error is asking for:
Task {
    await NSAnimationContext.runAnimationGroup({ @Sendable (context) -> Void in
        context.duration = animated ? 0.5 : 0
        clipView.animator().setBoundsOrigin(p)
    })
    self.endIgnoreFrameChangeEvents()
}
So now I get errors in the closure itself. There are 2 errors, only one of which is easy to get rid of.
Call to main actor-isolated instance method 'animator()' in a synchronous nonisolated context
Call to main actor-isolated instance method 'setBoundsOrigin' in a synchronous nonisolated context
So I can get rid of that first error by calling clipView.animator() outside of the closure and capturing the animator. But the second error, calling setBoundsOrigin(p) - I can't move that outside of the closure, because that is the thing I am animating! Further, any property you're going to be animating in runAnimationGroup is going to be isolated to the main actor.
So now my code looks like this, and I'm stuck with this last error I can't eliminate:
let animator = clipView.animator()
Task {
    await NSAnimationContext.runAnimationGroup({ @Sendable (context) -> Void in
        context.duration = animated ? 0.5 : 0
        animator.setBoundsOrigin(p)
    })
    self.endIgnoreFrameChangeEvents()
}
Call to main actor-isolated instance method 'setBoundsOrigin' in a synchronous nonisolated context
There's something that I am not understanding here about how closures are now being treated. This whole thing is running synchronously on the main thread anyway, isn't it? It's being called from a MainActor context in one of my NSViews. I would expect the closure in runAnimationGroup to need to be isolated to the main actor anyway, since any animatable property is going to be marked MainActor.
How do I accomplish what I am trying to do here?
One last note: there were some new settings introduced at WWDC that supposedly make this stuff simpler - "Approachable Concurrency". In this example, I didn't have that turned on. Turning it on and setting the default isolation to MainActor does not seem to have solved this problem.
(All it does is cause hundreds of new concurrency errors in other parts of my code that weren't there before!)
This is the last new error in my code (without those settings), but I can't see any way around this one. It's basically the same error as the others I was getting (in the callback closures), except with those I could eliminate the closures by changing APIs.
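For reference, a hedged sketch of one possible direction: keep the completion-handler overload and re-enter main-actor isolation explicitly inside the @Sendable completion block. This assumes AppKit invokes the completion handler on the main thread; MainActor.assumeIsolated checks that at runtime and traps otherwise.
// Hedged sketch, not an official recommendation: the completion handler must be
// @Sendable and nonisolated, so hop back onto the main actor explicitly before
// touching main-actor-isolated state.
NSAnimationContext.runAnimationGroup({ context in
    context.duration = animated ? 0.5 : 0
    clipView.animator().setBoundsOrigin(p)
}, completionHandler: {
    MainActor.assumeIsolated {
        self.endIgnoreFrameChangeEvents()
    }
})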
When I try to use an entity created in Core Data, I get: 'PlayerData' is ambiguous for type lookup in this context
I have the following function
private func SetupLocaleObserver() {
    NotificationCenter.default.addObserver(
        forName: NSLocale.currentLocaleDidChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        print("Locale changed to: \(Locale.current.identifier)")
    }
}
I call this function inside the viewDidLoad() method of my view controller. The expectation was that whenever I change the system or app-specific language preference, the locale changes, and that change triggers my closure, which should print "Locale changed to: " on the console.
However, the app gets terminated with a SIGKILL whenever I change the language from Settings. Sometimes my closure runs, but most of the time it does not - maybe the app dies even before the closure is executed.
So, the question is: what is the use of this particular notification if the corresponding closure isn't guaranteed to be executed before the app dies? Or am I using it the wrong way?
In WWDC25 video 284: Build a UIKit app with the new design, there is mention of a cornerConfiguration property on UIVisualEffectView. But this property isn't documented, and Xcode 26 isn't aware of any such property.
I'm trying to replicate the results of that video in the section titled Custom Elements, starting at the 19:15 point. There are a lot of missing details and typos in the code associated with that video.
My attempts with UIGlassEffect and UIVisualEffectView do not result in any capsule shapes. I just get rectangles with no rounded corners at all.
As an experiment, I am trying to recreate the capsule with the layers/location buttons in the iOS 26 version of the Maps app.
I put the following code in a view controller's viewDidLoad method
let imgCfgLayer = UIImage.SymbolConfiguration(hierarchicalColor: .systemGray)
let imgLayer = UIImage(systemName: "square.2.layers.3d.fill", withConfiguration: imgCfgLayer)
var cfgLayer = UIButton.Configuration.plain()
cfgLayer.image = imgLayer
let btnLayer = UIButton(configuration: cfgLayer, primaryAction: UIAction(handler: { _ in
    print("layer")
}))

var cfgLoc = UIButton.Configuration.plain()
let imgLoc = UIImage(systemName: "location")
cfgLoc.image = imgLoc
let btnLoc = UIButton(configuration: cfgLoc, primaryAction: UIAction(handler: { _ in
    print("location")
}))

let bgEffect = UIGlassEffect()
bgEffect.isInteractive = true
let bg = UIVisualEffectView(effect: bgEffect)
bg.contentView.addSubview(btnLayer)
bg.contentView.addSubview(btnLoc)
view.addSubview(bg)

btnLayer.translatesAutoresizingMaskIntoConstraints = false
btnLoc.translatesAutoresizingMaskIntoConstraints = false
bg.translatesAutoresizingMaskIntoConstraints = false

NSLayoutConstraint.activate([
    btnLayer.leadingAnchor.constraint(equalTo: bg.contentView.leadingAnchor),
    btnLayer.trailingAnchor.constraint(equalTo: bg.contentView.trailingAnchor),
    btnLayer.topAnchor.constraint(equalTo: bg.contentView.topAnchor),
    btnLoc.centerXAnchor.constraint(equalTo: bg.contentView.centerXAnchor),
    btnLoc.topAnchor.constraint(equalTo: btnLayer.bottomAnchor, constant: 15),
    btnLoc.bottomAnchor.constraint(equalTo: bg.contentView.bottomAnchor),
    bg.centerXAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerXAnchor),
    bg.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 40),
])
The result is pretty close other than the complete lack of capsule shape.
What changes would be needed to get the capsule shape? Is this even the proper approach?
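For comparison, a hedged fallback (not the cornerConfiguration API mentioned in the session video) is to clip the effect view to a capsule manually once its size is known. It assumes bg is promoted to a property, here called glassView, so it is reachable from viewDidLayoutSubviews:
// Hedged fallback sketch: round the effect view to a capsule after layout.
// `glassView` is assumed to be a property holding the UIVisualEffectView
// created in viewDidLoad (the `bg` local above).
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // For a vertical capsule the radius is half of the shorter (horizontal) side.
    glassView.layer.cornerRadius = glassView.bounds.width / 2
    glassView.layer.cornerCurve = .continuous
    glassView.clipsToBounds = true
}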
Hi,
I’m trying to use the new InlineArray type, but noticed that it is unfortunately only available on macOS 26 and not on macOS 15 and earlier. As this is quite an essential type, I was wondering whether this is intended or will change in later betas. Not having it available on older Darwin platforms would severely limit its usage in the coming years.
Thanks!
Last night my iPhone game crashed while running in debug mode on my iPhone. I just plugged it into my Mac, and was able to find the ips file. The stack trace shows the function in my app where it crashed, and then a couple of frames in libswiftCore.dylib before an assertion failure.
My question is - I've got absolutely no idea what the assertion failure actually was, all I have is...
0 libswiftCore.dylib 0x1921412a0 closure #1 in closure #1 in closure #1 in _assertionFailure(_:_:file:line:flags:) + 228
1 libswiftCore.dylib 0x192141178 closure #1 in closure #1 in _assertionFailure(_:_:file:line:flags:) + 327
2 libswiftCore.dylib 0x192140b4c _assertionFailure(_:_:file:line:flags:) + 183
3 MyGame.debug.dylib 0x104e52818 SentryBrain.takeTurn(actor:) + 1240
...
How do I figure out what the assertion failure was that triggered the crash? And how do I figure out what line of code in takeTurn(...) triggered the failing assertion?
The Swift syntax compilation reported an error, as follows.
How should I make it compatible?
Hi all,
I’m running into a persistent build issue with my Swift project ORSOFINAL after migrating from Xcode stable to Xcode-beta.app (June 2025 version).
⸻
💥 Errors displayed:
1. C99 was enabled in PCH file but is currently disabled
2.
module file .../ModuleCache.noindex/SwiftShims-AXUM98L131W4...pcm
cannot be loaded due to a configuration mismatch with the current compilation
3. missing required module 'SwiftShims'
⸻
🛠 What I’ve already tried:
• xcode-select -s /Applications/Xcode-beta.app/Contents/Developer
• Deleted ~/Library/Developer/Xcode/DerivedData, ModuleCache.noindex, Archives, and Products
• Ran sudo xcodebuild -runFirstLaunch
• Clean Build Folder in Xcode-beta
• Verified Command Line Tools setting points to Xcode-beta
⸻
❓Looking for guidance on:
• Whether this is a known bug in Xcode-beta
• If SwiftShims/PCM conflicts are expected between versions
• Best practices to safely migrate from Xcode stable to beta for Swift-based projects
Any advice is much appreciated.
Thanks,
Mathéo
I found that the aggregated device correctly obtains input channels in the standard microphone mode. However, in voice isolation mode, it only retrieves channels from the first sub-device in the aggregated device's list. If I want to properly obtain channel information in voice isolation mode, how should I do it?
Just testing an existing app with Xcode 26.
I notice that the content of UIBarButtonItem (either text or image) disappears when tapped (and reappears on release).
Those are custom, bordered buttons.
Attribute inspector:
Buttons in Xcode:
When selected in Xcode, we see a rectangle inside the rounded rect of iOS 26
In simulator:
When tapped in simulator:
I have edited code from
backButton.setTitleTextAttributes([
    .font: boldFont,
    .foregroundColor: UIColor.systemBlue,
], for: .normal)
to
backButton.setTitleTextAttributes([
    .font: boldFont,
    .foregroundColor: UIColor.systemBlue,
], for: [.normal, .focused, .selected, .highlighted])
to no avail.
What am I missing?
This is a hybrid app built with JavaScript (Vue) + Capacitor. It is a reader app and has been authorized by Apple to use the External Link Account Entitlement, allowing users to manage their subscriptions outside of the app.
I have implemented the External Link Account API. When I click on "Gerenciar Assinatura em...", I use the External Link Account API to check if the modal is available (using ExternalLinkAccount.canOpen()). I always get "false".
my plugin in swift:
my app:
I believe this is because I am in a development environment. My project is configured correctly in the following files: Info.plist and App.entitlements. I also have the authorization in my profile, visible in Xcode. I have attached screenshots for validation. The question is: should the External Link Account API work in a test environment? I am testing the build in Xcode on a physical iPhone running iOS 18.
file info.plist:
file App.entitlements:
xcode with authorization in my profile:
If you could let me know if I am doing something wrong, I would greatly appreciate it.
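For reference, a minimal sketch of the StoreKit calls involved, stripped of any Capacitor plumbing (the wrapper function name is illustrative); note that canOpen is an async property, so the check has to be awaited:
import StoreKit

// Hedged sketch, outside of any Capacitor plugin code: check availability and
// present the external link account sheet. Requires the
// com.apple.developer.storekit.external-link.account entitlement to be granted.
func presentExternalLinkAccountIfPossible() async {
    let canOpen = await ExternalLinkAccount.canOpen
    print("ExternalLinkAccount.canOpen = \(canOpen)")
    guard canOpen else { return }
    do {
        try await ExternalLinkAccount.open()
    } catch {
        print("ExternalLinkAccount.open() failed: \(error)")
    }
}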
The other day I was playing with iBeacon and found out that CLBeaconIdentityConstraint will be deprecated after iOS 18.5. So I've written code with BeaconIdentityCondition in reference to this Apple sample project.
import Foundation
import CoreLocation

let monitorName = "BeaconMonitor"

@MainActor
public class BeaconViewModel: ObservableObject {
    private let manager: CLLocationManager
    static let shared = BeaconViewModel()
    public var monitor: CLMonitor?
    @Published var UIRows: [String: [CLMonitor.Event]] = [:]

    init() {
        self.manager = CLLocationManager()
        self.manager.requestWhenInUseAuthorization()
    }

    func startMonitoringConditions() {
        Task {
            print("Set up monitor")
            monitor = await CLMonitor(monitorName)
            await monitor!.add(getBeaconIdentityCondition(), identifier: "TestBeacon")
            for identifier in await monitor!.identifiers {
                guard let lastEvent = await monitor!.record(for: identifier)?.lastEvent else { continue }
                UIRows[identifier] = [lastEvent]
            }
            for try await event in await monitor!.events {
                guard let lastEvent = await monitor!.record(for: event.identifier)?.lastEvent else { continue }
                if event.state == lastEvent.state {
                    continue
                }
                UIRows[event.identifier] = [event]
                UIRows[event.identifier]?.append(lastEvent)
            }
        }
    }

    func updateRecords() async {
        UIRows = [:]
        for identifier in await monitor?.identifiers ?? [] {
            guard let lastEvent = await monitor!.record(for: identifier)?.lastEvent else { continue }
            UIRows[identifier] = [lastEvent]
        }
    }

    func getBeaconIdentityCondition() -> CLMonitor.BeaconIdentityCondition {
        CLMonitor.BeaconIdentityCondition(uuid: UUID(uuidString: "abc")!, major: 123, minor: 789)
    }
}
It works, except that my sample app can take as long as 90 seconds to see event changes. You would get an instant update with the old approach (CLBeacon and CLBeaconIdentityConstraint). Is there anything I can do to see changes faster? Thanks.
I'm currently working on implementing a character limit for Korean text input using UITextField, but I've encountered two key issues.
1. How can I determine if Korean input is complete?
I understand that markedTextRange represents provisional (composing) text during multistage text input systems (such as Korean, Japanese, Chinese).
While testing with Korean input, I expected markedTextRange to reflect the composing state.
However, it seems that markedTextRange remains nil throughout the composition process.
2. Problems limiting character count for Korean input
I’ve tried two methods to enforce a character limit. Both lead to incorrect behavior due to how Korean characters are composed.
Method 1 – Before replacement:
func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
    guard let text = textField.text else { return true }
    return text.count <= 5
}
This checks the text length before applying the replacementString.
The issue is that when the user enters a character that is meant to combine with the previous one to form a composed character, the input should result in a single, combined character.
However, because the character limit check is based on the state before the replacement is applied, the second character does not get composed as expected.
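(For reference, a hedged sketch of checking the prospective text instead of the pre-edit text; limitCount is assumed to be defined elsewhere. This addresses the before-the-replacement problem, although it still counts the syllable that is currently being composed.)
func textField(_ textField: UITextField,
               shouldChangeCharactersIn range: NSRange,
               replacementString string: String) -> Bool {
    let current = textField.text ?? ""
    guard let r = Range(range, in: current) else { return true }
    // Build the text as it would look after the proposed change, then count
    // user-perceived characters (composed Hangul syllables), not UTF-16 units.
    let prospective = current.replacingCharacters(in: r, with: string)
    return prospective.count <= limitCount
}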
Method 2 – After change:
textField.addTarget(self, action: #selector(editingChanged), for: .editingChanged)

@objc private func editingChanged(_ sender: UITextField) {
    guard var text = sender.text else { return }
    if text.count > limitCount {
        text.removeLast()
        sender.text = text
    }
}
This removes the last character if the count exceeds the limit after the change.
But when a user keeps typing past the limit, the last character is overwritten by new input.
I suspect this happens because the .editingChanged event occurs before the multistage input is finalized,
and the final composed character is applied after that event.
My understanding of the input flow:
Standard input:
shouldChangeCharactersIn is called
replacementString is applied
.editingChanged is triggered
With multistage input (Korean, etc.):
shouldChangeCharactersIn is called
replacementString is applied
.editingChanged is triggered
Final composed character is inserted (after all the above)
Conclusion
Because both approaches lead to incorrect character count behavior with Korean input,
I believe I need a new strategy.
Is there an officially recommended way to handle multistage input properly with UITextField in this context?
Any advice or clarification would be greatly appreciated.
MacOS 15.5(24F74)
Xcode 16.4 (16F6)
In reference to this webpage, I'm turning my iPad into an iBeacon device.
class BeaconViewModel: NSObject, ObservableObject, CBPeripheralManagerDelegate {
    private var peripheralManager: CBPeripheralManager?
    private var beaconRegion: CLBeaconRegion?
    private var beaconIdentityConstraint: CLBeaconIdentityConstraint?
    //private var beaconCondition: CLBeaconIdentityCondition?
    override init() {
        super.init()
        if let uuid = UUID(uuidString: "abc") {
            beaconIdentityConstraint = CLBeaconIdentityConstraint(uuid: uuid, major: 123, minor: 456)
            beaconRegion = CLBeaconRegion(beaconIdentityConstraint: beaconIdentityConstraint!, identifier: "com.example.myDeviceRegion")
            peripheralManager = CBPeripheralManager(delegate: self, queue: nil, options: nil)
        }
    }
    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        switch peripheral.state {
        case .poweredOn:
            startAdvertise()
        case .poweredOff:
            peripheralManager?.stopAdvertising()
        default:
            break
        }
    }
    func startAdvertise() {
        guard let beaconRegion = beaconRegion else { return }
        let peripheralData = beaconRegion.peripheralData(withMeasuredPower: nil)
        peripheralManager?.startAdvertising(((peripheralData as NSDictionary) as! [String: Any]))
    }
    func stopAdvertise() {
        peripheralManager?.stopAdvertising()
    }
}
In line 10, I'm using CLBeaconIdentityConstraint to constrain the beacon. Xcode says that this class is deprecated and suggests that we use CLBeaconIdentityCondition. But if I try to use it, Xcode says
Cannot find type 'CLBeaconIdentityCondition' in scope
I've just updated Xcode to 16.4 and I still get the same error. So how do we use CLBeaconIdentityCondition to constrain the beacon? My macOS version is Sequoia 15.5. Thanks.
Hi, I am making an AI-powered app that makes API requests to the OpenAI API. However, for security, I set up a Vercel backend that handles the API calls securely, while my frontend makes a call to my Vercel-hosted HTTPS endpoint. Interestingly, whenever I try to make that call on my device, an iPhone, I get this error:
Task <91AE4DE0-2845-4348-89B4-D3DD1CF51B65>.<10> finished with error [-1003] Error Domain=NSURLErrorDomain Code=-1003 "A server with the specified hostname could not be found." UserInfo={_kCFStreamErrorCodeKey=-72000, NSUnderlyingError=0x1435783f0 {Error Domain=kCFErrorDomainCFNetwork Code=-1003 "(null)" UserInfo={_kCFStreamErrorDomainKey=10, _kCFStreamErrorCodeKey=-72000, _NSURLErrorNWResolutionReportKey=Resolved 0 endpoints in 3ms using unknown from query, _NSURLErrorNWPathKey=satisfied (Path is satisfied), interface: pdp_ip0[lte], ipv4, ipv6, dns, expensive, uses cell}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <91AE4DE0-2845-4348-89B4-D3DD1CF51B65>.<10>, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalDataTask <91AE4DE0-2845-4348-89B4-D3DD1CF51B65>.<10>"
), NSLocalizedDescription=A server with the specified hostname could not be found., NSErrorFailingURLStringKey=https://[my endpoint], NSErrorFailingURLKey=https://[my endpoint], _kCFStreamErrorDomainKey=10}
I'm completely stuck, because when I make HTTPS requests directly to other APIs, like OpenAI's endpoint, without the proxy, the server is found completely fine. Running my endpoint from the terminal with curl also works as intended, since I see API key usage. But for some reason, in my project, it does not work. I've looked through almost every post I could find online, but all of the solutions are outdated and unhelpful.
I'm willing to schedule a call, meeting, whatever to resolve this issue and get help more in depth as well.
Problem Description
I'm encountering an issue with SCNTechnique where the clearColor setting is being ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set. The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0
Problem Details
In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling:
"passes": [
"box1_pass": [
"draw": "DRAW_SCENE",
"includeCategoryMask": 1,
"colorStates": [
"clear": true,
"clearColor": "0 0 0 0" // Expecting transparent black
],
"depthStates": [
"clear": true,
"enableWrite": true
],
"outputs": [
"depth": "box1_depth",
"color": "box1_color"
],
],
"box2_pass": [
"draw": "DRAW_SCENE",
"includeCategoryMask": 2,
"colorStates": [
"clear": true,
"clearColor": "0 0 0 0" // Also expecting transparent black
],
"depthStates": [
"clear": false,
"enableWrite": false
],
"outputs": [
"depth": "box1_depth", // Sharing the same depth buffer
"color": "box2_color",
],
],
"final_quad": [
"draw": "DRAW_QUAD",
"metalVertexShader": "myVertexShader",
"metalFragmentShader": "myFragmentShader",
"inputs": [
"box1_color": "box1_color",
"box2_color": "box2_color",
],
"outputs": [
"color": "COLOR"
]
]
]
And the Metal shader used to display box1_color and box2_color split side by side:
fragment half4 myFragmentShader(VertexOut in [[stage_in]],
                                texture2d<half, access::sample> box1_color [[texture(0)]],
                                texture2d<half, access::sample> box2_color [[texture(1)]]) {
    // The sampler was omitted from the original snippet; a plain linear sampler is assumed.
    constexpr sampler s(filter::linear);
    half4 color1 = box1_color.sample(s, in.texcoord);
    half4 color2 = box2_color.sample(s, in.texcoord);
    if (in.texcoord.x < 0.5) {
        return color1;
    }
    return color2;
};
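For completeness, this is how such a dictionary is typically attached to the view; a hedged sketch where the plist name and scnView are assumptions:
// Hedged sketch: load the technique definition from a plist (name assumed) and
// attach it to the SCNView so the passes above are used for rendering.
if let url = Bundle.main.url(forResource: "technique", withExtension: "plist"),
   let dict = NSDictionary(contentsOf: url) as? [String: Any],
   let technique = SCNTechnique(dictionary: dict) {
    scnView.technique = technique
}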
Expected Behavior
Both passes should clear their color targets to transparent black (0, 0, 0, 0)
The depth buffer should be shared between passes for proper occlusion
Actual Behavior
Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image)
This happens even when I explicitly set clearColor: "0 0 0 0" for both passes
Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes
What I've Tried
Setting different clearColor values - all are ignored when sharing depth buffer
Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue
Creating a separate pass to capture the background - the background still appears in the other passes
Various combinations of clear flags and render orders
Environment
iOS/macOS, running with "My Mac (Designed for iPad)"
Xcode 16.2
Question
Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing?
The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings.
Any insights or workarounds would be greatly appreciated. Thank you!