Posts
@State memory leak Xcode 16.2
Hi,

A class initialized as the initial value of an `@State` property is not released until the whole view disappears. Every subsequent instance deinitializes properly. Am I missing something, or is this a known issue?

```swift
struct ContentView: View {
    // 1 - init first SimpleClass instance
    @State var simpleClass: SimpleClass? = SimpleClass(name: "First")

    var body: some View {
        VStack {
            Text("Hello, world!")
        }
        .task {
            try? await Task.sleep(for: .seconds(2))
            // 2 - init second SimpleClass instance and set as new @State value
            // "First" should deinit here
            simpleClass = SimpleClass(name: "Second")
            // 3 - "Second" deinits just fine
            simpleClass = nil
        }
    }
}

class SimpleClass {
    let name: String

    init(name: String) {
        print("init: \(name)")
        self.name = name
    }

    deinit {
        print("deinit: \(name)")
    }
}
```

Output:

```
init: First
init: Second
deinit: Second
```

Thanks
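The workaround I'm considering (a sketch, based on my assumption that the leak comes from SwiftUI keeping its own copy of the stored initial value) is to declare the property as `nil` and create the first instance lazily, after the view appears:

```swift
import SwiftUI

struct ContentView: View {
    // No initial instance at declaration, so SwiftUI has nothing to retain
    @State var simpleClass: SimpleClass? = nil

    var body: some View {
        VStack {
            Text("Hello, world!")
        }
        .task {
            // 1 - create "First" lazily instead of at property initialization
            simpleClass = SimpleClass(name: "First")
            try? await Task.sleep(for: .seconds(2))
            // 2 - with this setup, replacing the value deinits "First" as expected
            simpleClass = SimpleClass(name: "Second")
            // 3 - "Second" still deinits fine
            simpleClass = nil
        }
    }
}
```

I'd still like to know whether the original pattern is supposed to work, though.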
Replies: 0 · Boosts: 0 · Views: 179 · Activity: 1w
MTLRenderCommandEncoder in ARView postProcess
Hi,

I need to render thousands of simple shapes in our AR experience. I'm using ARView to render 3D models, handle raycasting, lighting, etc., and I'm looking for the correct way to "inject" Metal code into ARView. I implemented my rendering code inside ARView's renderCallbacks.postProcess:

```swift
let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
blitEncoder?.endEncoding()

let renderDescriptor = MTLRenderPassDescriptor()
...

let commandEncoder = context.commandBuffer.makeRenderCommandEncoder(descriptor: renderDescriptor)!
...

commandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: count)
commandEncoder.endEncoding()
```

This solution seems to work for me, but sometimes strange things happen: the camera image rendered by the blit encoder starts to jitter, or the FPS drops to 20, and sometimes it just works fine.

Question 1: Is my solution the correct way to draw simple shapes in ARView?

Question 2: What can cause those problems during rendering?

Thanks :)
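For context, this is roughly how I fill in the elided parts. The load/store actions and the `pipelineState`/`vertexBuffer` setup are my own guesses (built elsewhere in my code, not shown), and I suspect the load action in particular as a possible cause of the jitter:

```swift
// Sketch of the elided render pass setup (my assumptions, not confirmed-correct usage)
let renderDescriptor = MTLRenderPassDescriptor()
// Draw on top of the camera image already blitted into the target texture,
// so the existing contents must be loaded, not cleared
renderDescriptor.colorAttachments[0].texture = context.targetColorTexture
renderDescriptor.colorAttachments[0].loadAction = .load
renderDescriptor.colorAttachments[0].storeAction = .store

let commandEncoder = context.commandBuffer.makeRenderCommandEncoder(descriptor: renderDescriptor)!
commandEncoder.setRenderPipelineState(pipelineState) // hypothetical: built once at startup
commandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0) // hypothetical: shape vertices
commandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: count)
commandEncoder.endEncoding()
```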
Replies: 1 · Boosts: 0 · Views: 725 · Activity: May ’23
Custom RealityKit occlusion based on Depth map
Hi,

I'm trying to determine whether a point in 3D space is covered by other objects, like a human hand or a wall. I do not want to use raycasting, so my idea is to calculate two things:

1. The distance between the iPad camera and this point.
2. The position of this 3D point projected onto the 2D arView, and then the depth information from the depthMap at that point.

If the depth is smaller than the distance to the point, I can assume the point is covered by something. My code works well when the iPad faces the 3D point straight on, but when I rotate the iPad a little, calculation 2 (based on depth) becomes inaccurate. It looks like calculations 1 and 2 use two different points on the iPad as the reference (camera position), but I could not find any logic in it.

This is my code:

```swift
let viewSize = arView.bounds.size
let frame = arView.session.currentFrame!

// Transform to translate between arView and depth map coordinates
let displayTransform = frame.displayTransform(for: arView.interfaceOrientation, viewportSize: viewSize)

guard let depthPixelBuffer = frame.sceneDepth?.depthMap else { return }
let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)
let depthWidthFloat = CGFloat(depthWidth)
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthHeightFloat = CGFloat(depthHeight)

// Point in 3D space (our point, red square on images)
let object3Dposition = self.position

// Calculate distance between camera and point in 3D space
// This always works well
let distanceToObject = distance(object3Dposition, arView.cameraTransform.translation)

// 2D point on arView projected from the 3D position (where this point is visible on arView)
guard let pointOnArView = arView.project(object3Dposition) else { return }

// Normalize the 2D point (0-1)
let pointOnArViewNormalized = CGPoint(x: pointOnArView.x / viewSize.width, y: pointOnArView.y / viewSize.height)

// Transform from arView coordinates to depth map coordinates
let pointOnDepthMapNormalized = CGPointApplyAffineTransform(pointOnArViewNormalized, displayTransform.inverted())

// Point on the depth map (from normalized coordinates to real coordinates)
let pointOnDepthMap = CGPoint(x: pointOnDepthMapNormalized.x * depthWidthFloat, y: pointOnDepthMapNormalized.y * depthHeightFloat)

guard pointOnDepthMap.x >= 0 && pointOnDepthMap.y >= 0 && pointOnDepthMap.x < depthWidthFloat && pointOnDepthMap.y < depthHeightFloat else {
    // Point not visible, outside of screen
    isVisibleByCamera = false
    return
}

// Read depth from the buffer
let depth: Float32
CVPixelBufferLockBaseAddress(depthPixelBuffer, .readOnly)
let floatBuffer = unsafeBitCast(
    CVPixelBufferGetBaseAddress(depthPixelBuffer),
    to: UnsafeMutablePointer<Float32>.self
)

// Get depth at 'pointOnDepthMap' (convert from x/y coordinates to a buffer index)
let depthBufferIndex = depthWidth * Int(pointOnDepthMap.y) + Int(pointOnDepthMap.x)

// This depth is incorrect when the iPad is rotated
depth = floatBuffer[depthBufferIndex]

CVPixelBufferUnlockBaseAddress(depthPixelBuffer, .readOnly)

if distanceToObject > depth + 0.05 {
    isVisibleByCamera = false
} else {
    isVisibleByCamera = true
}
```

Thank you :)
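One thing I'm experimenting with (not verified, just my assumption that the depth map is aligned with the captured camera image rather than with the view) is to project the point with the camera's own intrinsics instead of going through `arView.project`:

```swift
// Assumption: sceneDepth is aligned with the captured image, whose native
// orientation is .landscapeRight, so project straight into that space
// instead of view space + displayTransform
let depthMapSize = CGSize(width: depthWidth, height: depthHeight)
let pointInCameraImage = frame.camera.projectPoint(
    object3Dposition,
    orientation: .landscapeRight,
    viewportSize: depthMapSize
)
let depthBufferIndex = depthWidth * Int(pointInCameraImage.y) + Int(pointInCameraImage.x)
```

I'd appreciate confirmation on which coordinate space the depth map actually lives in.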
Replies: 2 · Boosts: 1 · Views: 1.7k · Activity: Feb ’23
How to enable purple warnings for SwiftUI in old Xcode project
Hi,

When writing SwiftUI code in a fresh Xcode project created with "Interface: SwiftUI", I can see really useful SwiftUI runtime warnings like:

```
[SwiftUI] Publishing changes from within view updates is not allowed, this will cause undefined behavior.
```

But I'm also developing an application that was created with "Interface: Storyboard", and I would like to enable those warnings there too. How can I do it?

Thanks :)
Replies: 1 · Boosts: 1 · Views: 727 · Activity: Dec ’22
Certificate chain in SecIdentity vs sec_identity_t
Hi,

I want to connect to our MQTT broker, which validates the client certificate. The client needs to provide its own certificate together with the chain, because the broker does not know the intermediate certificates. The library I am using has an API that takes the client certificate as a SecIdentity, and uses it like so:

```swift
public var clientIdentity: SecIdentity?
...
let secIdentity = sec_identity_create(clientIdentity)
sec_protocol_options_set_local_identity(options.securityProtocolOptions, secIdentity)
```

As far as I know, SecIdentity contains the leaf certificate and private key; there is no room for a certificate chain. I have edited the library's API to use sec_identity_t directly and provide it this way:

```swift
let secIdentity = sec_identity_create_with_certificates(clientIdentity, certs as CFArray)
```

With that change everything works: the broker receives the client certificate and the chain. So, is there a way to provide a certificate chain with SecIdentity, or can only sec_identity_t handle it?

Thanks :)
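For completeness, this is roughly how I build the chain array I pass in. The helper and the idea of loading the intermediates as DER data are mine (a sketch of my setup, not part of the library):

```swift
import Security

// Hypothetical helper: wrap a leaf identity together with DER-encoded
// intermediate certificates into a sec_identity_t carrying the full chain
func makeIdentityWithChain(identity: SecIdentity, intermediateDERs: [Data]) -> sec_identity_t? {
    let certs: [SecCertificate] = intermediateDERs.compactMap {
        SecCertificateCreateWithData(nil, $0 as CFData)
    }
    return sec_identity_create_with_certificates(identity, certs as CFArray)
}
```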
Replies: 2 · Boosts: 0 · Views: 827 · Activity: Sep ’22
Getting specific error when validating certificate, SecTrustEvaluateWithError
Hi,

I'm trying to validate a certificate that I know has two problems:

1. Too long an expiration date
2. Hostname mismatch

I'm OK with those problems, so when they occur I want to accept the certificate as valid. When using:

```swift
success = SecTrustEvaluateWithError(trust, &error)
let dictionary = SecTrustCopyResult(trust)
```

this dictionary contains the validation failure reasons under the key "TrustResultDetails":

- key: SSLHostname, value: 0
- key: ValidityPeriodMaximums, value: 0

but none of those keys are documented, so they don't seem suitable for production code. My question is: how can I find out the validation failure reasons, or how can I change the validation method so it passes when those problems occur?

Thanks :)
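The closest workaround I've found so far for the hostname part (a sketch, and I'm not sure it's the intended approach) is to re-evaluate with an SSL policy that has no hostname, instead of parsing the undocumented result keys:

```swift
import Security

// Sketch: swap in an SSL policy with a nil hostname so evaluation no longer
// fails on the mismatch. Note this does not address the validity-period failure.
func evaluateIgnoringHostname(_ trust: SecTrust) -> Bool {
    let policy = SecPolicyCreateSSL(true, nil) // nil hostname skips the host check
    SecTrustSetPolicies(trust, policy)
    var error: CFError?
    return SecTrustEvaluateWithError(trust, &error)
}
```

I'd still like a supported way to distinguish the individual failure reasons.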
Replies: 4 · Boosts: 0 · Views: 906 · Activity: Jun ’22