SceneKit


Create 3D games and add 3D content to apps using SceneKit's high-level scene descriptions.

SceneKit Documentation

Posts under SceneKit tag

84 Posts
Post not yet marked as solved
2 Replies
112 Views
Dear all, I'm new to coding, for maybe the fifth time :), and I hope I find the right words. Right now I'm prototyping a 3D game with SceneKit for iOS devices. At the moment, the prototype of the MainScene.scn (the first game scene) is ready. The 3D objects are located in the asset catalog, and the game behavior with object movements, gyro data, 3D text and so on is coded in GameViewController.swift. Now I'm at a point where I think I can do many things wrong, at the cost of days, weeks or months of work to restructure my app afterwards. So I want to understand what's the „sexiest“ way to structure my project: scenes, views, controllers, etc. Simplified, my game will have a user interface to store the player's name, change gameplay settings and so on, and it will have multiple 3D scenes where the game takes place. At first I thought it would be a good idea to arrange multiple scenes, all controlled by the one GameViewController, so as not to duplicate recurring methods. But as I thought about it, I saw the GameViewController file growing bigger and bigger, and I feared losing focus more and more with every scene I added. The thought of a separate controller for each scene (or a few scenes) is not that „****“ either, because a change to a recurring method might have to be repeated in every controller. I then thought of instancing the GameViewController, but in no case did I have the feeling „that’s the way to go“. Long story short: how would you arrange a game project like this? Thank you in advance, Ray
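For reference, here is a minimal structural sketch of one common approach, not a definitive answer. All names (GameSceneController, MainSceneController) are illustrative and not from the post: one view controller owns the SCNView, recurring logic lives in a protocol extension, and each scene gets a small controller of its own.

import SceneKit
import UIKit

// Per-scene behaviour lives in small controller types that share a protocol.
protocol GameSceneController {
    var scene: SCNScene { get }
    func setUp(in view: SCNView)
    func update(atTime time: TimeInterval)
}

extension GameSceneController {
    // Recurring logic written once, available to every scene controller.
    func setUp(in view: SCNView) {
        view.scene = scene
        view.isPlaying = true
    }
}

final class MainSceneController: GameSceneController {
    let scene = SCNScene(named: "MainScene.scn")!   // assumes the scene file from the post
    func update(atTime time: TimeInterval) { /* main-scene-specific logic */ }
}

final class GameViewController: UIViewController {
    @IBOutlet weak var scnView: SCNView!
    private var current: GameSceneController = MainSceneController()

    func present(_ controller: GameSceneController) {
        current = controller
        current.setUp(in: scnView)
    }
}

The view controller stays small because it only swaps scene controllers; adding a scene means adding one new GameSceneController type rather than growing GameViewController.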
Posted by raycord. Last updated.
Post not yet marked as solved
4 Replies
697 Views
Hi there - Where would a dev go these days to get an initial understanding of SceneKit? The WWDC videos linked in various places seem to be gone?! For example, the SceneKit page at developer.apple.com features a session videos link that comes up without any results: https://developer.apple.com/scenekit/ Any advice..? Cheers, Jan
Posted by JayMcBee. Last updated.
Post not yet marked as solved
0 Replies
153 Views
Hi, I want to place an object in 3D world space without using hit testing or plane detection, in iOS Swift code. Please suggest the best method. Right now I take the camera center matrix and use simd_mul to place the object; it works, but the object gets placed at the centre of the mobile screen. I want to select an x and y position on the 2D screen and place the object there. I tried using the unprojectPoint function to get the AR scene world coordinate of the point I touch on the mobile screen. I get the x, y, z values, but they are very close to the values from the camera center matrix. When I replace the camera center matrix values with the unprojectPoint values, I don't see a difference in the location of the placed object. The code below always places the object at the centre of the screen at the specified depth, but I need to place the object at a user-specified (x, y) position of the screen at a given depth, i.e. convert from the 2D pixel coordinate system of the renderer to the 3D world coordinate system of the scene.

/* Create a transform with a translation of 0.2 meters in front of the camera. */
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul(view.session.currentFrame.camera.transform, translation)

Reference: https://developer.apple.com/documentation/arkit/arskview/providing_2d_virtual_content_with_spritekit

The code I used to replace the camera center matrix with the unprojectPoint result is:

let vpWithZ = SCNVector3(x: 100.0, y: 100.0, z: -1.0)
let worldPoint = sceneView.unprojectPoint(vpWithZ)
var translation = matrix_identity_float4x4
translation.columns.3.z = Float(Depth)
var translation2 = sceneView.session.currentFrame!.camera.transform
translation2.columns.3.x = worldPoint.x
translation2.columns.3.y = worldPoint.y
translation2.columns.3.z = worldPoint.z
let new_transform = simd_mul(translation2, translation)

/* Add whatever object you want in your project. */
let sphere = SCNSphere(radius: 0.03)
let objectNode = SCNNode(geometry: sphere)
objectNode.position = SCNVector3(x: transform.columns.3.x, y: transform.columns.3.y, z: transform.columns.3.z)

The image below shows an outline of my idea.
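A minimal sketch of the usual unprojection approach, assuming sceneView is the ARSCNView and point is the touch location in view points (the helper name is mine, not from the post): project a reference point at the desired distance to learn its normalized screen depth, then unproject the touched screen location at that same depth.

import ARKit
import SceneKit

// Place an object at a touched screen point, `distance` metres in front of the camera.
func worldPosition(forScreenPoint point: CGPoint, distance: Float, in sceneView: ARSCNView) -> SCNVector3? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    // A reference point `distance` metres straight in front of the camera, in world space.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance
    let inFront = simd_mul(frame.camera.transform, translation)
    // Project it to learn the normalized screen depth for that distance...
    let screenZ = sceneView.projectPoint(SCNVector3(inFront.columns.3.x, inFront.columns.3.y, inFront.columns.3.z)).z
    // ...then unproject the touched point at that same depth.
    return sceneView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), screenZ))
}

The returned SCNVector3 can then be assigned to objectNode.position directly, without going through the camera-centred transform.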
Posted. Last updated.
Post not yet marked as solved
1 Reply
194 Views
Hello, I am currently working on a project where I am creating a bookstore visualization with racks and shelves (full immersive view). I have an array of names, each representing a USDZ object that is present in my working directory. Here's the enum I am trying to iterate over:

enum AssetName: String, Codable, Hashable, CaseIterable {
    case book1 = "B1"
    case book2 = "B2"
    case book3 = "B3"
    case book4 = "B4"
}

and the code I wrote for adding objects:

import SwiftUI
import RealityKit

struct LocalAssetRealityView: View {
    let assetName: AssetName

    var body: some View {
        RealityView { content in
            if let asset = try? await ModelEntity(named: assetName.rawValue) {
                content.add(asset)
            }
        }
    }
}

Now, when I try to add multiple objects on a button click, I get the error: Unable to present another Immersive Space when one is already requested or connected. Please suggest any solutions. Also, please suggest whether the positions of the objects can be set programmatically as well.
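A minimal sketch of one way to approach this, assuming the AssetName enum above and that the USDZ files ship in the app bundle: load and place every asset inside a single RealityView within one ImmersiveSpace, so a second immersive space is never requested. The view name and positions below are illustrative.

import SwiftUI
import RealityKit

struct BookshelfRealityView: View {
    var body: some View {
        RealityView { content in
            for (index, asset) in AssetName.allCases.enumerated() {
                if let entity = try? await ModelEntity(named: asset.rawValue) {
                    // Lay the books out along the x axis, 0.3 m apart, 1 m in front of the user.
                    entity.position = SIMD3<Float>(Float(index) * 0.3, 1.0, -1.0)
                    content.add(entity)
                }
            }
        }
    }
}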
Posted by Code2aum. Last updated.
Post not yet marked as solved
0 Replies
228 Views
I'm trying to add dynamic shadows by adding a directional light to the scene. I implemented a POC based on the latest documentation. Basically, the way shadows are rendered in RealityKit is by adding a ModelEntity to an AnchorEntity with a target of type planes. The result is that the shadows I get flicker terribly. I'd add that SceneKit has many more shadow-related properties that let you tweak the look and feel of the shadows, and it's not hard to get a decent shadow there. I'm wondering whether accurate dynamic shadows are possible in RealityKit and, if not, whether there's a plan to fix this in the next RealityKit version.
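For reference, a minimal sketch of the kind of setup described (RealityKit 2 on iOS; intensity, distances and placement are illustrative). It won't cure flickering by itself, but maximumDistance and depthBias are among the few shadow knobs RealityKit currently exposes, in contrast to SceneKit's richer set.

import RealityKit

// Directional light that casts shadows onto entities anchored to a detected plane.
let sun = DirectionalLight()
sun.light.intensity = 5_000
sun.shadow = DirectionalLightComponent.Shadow(maximumDistance: 5, depthBias: 1)
sun.look(at: .zero, from: [1, 2, 1], relativeTo: nil)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(sun)
anchor.addChild(modelEntity)   // the ModelEntity from the post
arView.scene.addAnchor(anchor)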
Posted by nativ18. Last updated.
Post not yet marked as solved
1 Reply
274 Views
I am running a modified RoomPlan app in my test environment, and I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit, because it is related to ARSCNView. Who controls that, and what gets processed through it? I notice that I get a lot of session interruptions from "Sensor Failure" when I am doing world tracking, and the first one happens almost immediately. When the room capture delegates fire up, I start getting images delivered via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when a callback comes through the delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is only okay if you get your timing right. It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins, and the highly fragmented documentation in the developer library doesn't clearly delineate which data is delivered through which delegate. Can someone give me some guidance here? Are there sources of CLEAR documentation of what is delivered via which delegate for the various interfaces?
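One hedged sketch of how sessions are usually told apart in a shared delegate: compare the incoming session's identity (===) against the sessions you already hold references to. This assumes you keep the ARSCNView and RoomCaptureView around; sceneView and roomCaptureView below are illustrative names.

import ARKit
import RoomPlan

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if session === sceneView.session {
        // This frame came from the ARSCNView-owned (SceneKit) session.
    } else if session === roomCaptureView.captureSession.arSession {
        // This frame came from the RoomPlan capture session.
    }
}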
Posted by mfstanton. Last updated.
Post not yet marked as solved
1 Reply
198 Views
Hi, I'm an experienced developer on Apple platforms (having worked on iOS/tvOS projects for more than 10 years now). However, I've only worked on applications, or on simple games which didn't require more than UIKit or SwiftUI. Now I'd like to start a new project, recreating an old game on tvOS with 3D graphics. This is not a project I plan to release, only a simple personal challenge. I'm torn between starting this project with SceneKit or with Unity. On one hand, I love Apple frameworks and tools, so I guess I could progress easily with SceneKit. Also, I don't know Unity very well, and even for a simple project there are several restrictions on free plans (no custom splash screen, etc.). On the other hand, I've read several threads (i.e. this one) suggesting that SceneKit isn't going anywhere, and clearly recommending Unity because its documentation is much better and the game would be more easily portable to other platforms. Also, if I'm going to learn something new, maybe I would learn more with Unity (a completely different platform, editor and language) than I would with SceneKit and Swift. What's your opinion about this? Thanks!
Posted. Last updated.
Post not yet marked as solved
0 Replies
200 Views
I'm working on a project in Xcode where I need to use a 3D model with multiple morph targets (shape keys in Blender) for animations. The model, specifically the Wolf3D_Head node, contains dozens of morph targets which are crucial for my project. Here's what I've done so far: I verified the morph targets in Blender (I can see all the morph targets correctly when opening both the original .glb file and the converted .dae file in Blender). Given that Xcode does not support .glb file format directly, I converted the model to .dae format, aiming to use it in my Xcode project. After importing the .dae file into Xcode, I noticed that Xcode does not show any morph targets for the Wolf3D_Head node or any other node in the model. I've already attempted using tools like ColladaMorphAdjuster and another version by JakeHoldom to adjust the .dae file, hoping Xcode would recognize the morph targets, but it didn't resolve the issue. My question is: How can I make Xcode recognize and display the morph targets present in the .dae file exported from Blender? Is there a specific process or tool that I need to use to ensure Xcode properly imports all the morph target information from a .dae file? Tools tried: https://github.com/JonAllee/ColladaMorphAdjuster, https://github.com/JakeHoldom/ColladaMorphAdjuster Thanks in advance!
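A small diagnostic sketch that may help narrow this down, independent of the Xcode scene editor: load the converted file with SceneKit at runtime and report which nodes still carry an SCNMorpher, i.e. whether any morph targets survived the .dae import at all. The file name below is illustrative.

import SceneKit

if let scene = SCNScene(named: "Wolf3D_Head.dae") {
    scene.rootNode.enumerateHierarchy { node, _ in
        if let morpher = node.morpher {
            print("\(node.name ?? "unnamed") has \(morpher.targets.count) morph targets")
        }
    }
}

If nothing prints, the morph data is being dropped during import rather than merely hidden by the editor UI.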
Posted by PilsenUK. Last updated.
Post not yet marked as solved
0 Replies
182 Views
I am working on a game that involves piloting a ship through a long tunnel. I want the tunnel to always be the same size as the screen, so that the four edges of the screen represent the "tunnel". I'm using an SCNShapeNode to create each of the four "walls", but I can't seem to figure out how to keep them pinned to the edges of the screen. I've tried converting the screen size to a location in the scene, but most often it doesn't work at all, or at best only gets close. Any ideas on how to make this work regardless of device would be greatly appreciated. Thanks.
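A minimal sketch of one way to do the conversion, assuming the walls sit at a fixed world z in front of a perspective camera (the helper name is mine): project a point at that depth to learn its normalized screen depth, then unproject the four view corners at the same depth. Re-run it whenever the view resizes or the camera moves.

import SceneKit
import UIKit

// World-space positions of the view's four corners at world depth z.
func worldCorners(of scnView: SCNView, atWorldZ z: Float) -> [SCNVector3] {
    // Project any point at the target depth to learn its normalized screen depth.
    let screenZ = scnView.projectPoint(SCNVector3(0, 0, z)).z
    let size = scnView.bounds.size
    let corners = [CGPoint(x: 0, y: 0),
                   CGPoint(x: size.width, y: 0),
                   CGPoint(x: size.width, y: size.height),
                   CGPoint(x: 0, y: size.height)]
    return corners.map {
        scnView.unprojectPoint(SCNVector3(Float($0.x), Float($0.y), screenZ))
    }
}

The wall geometry can then be rebuilt (or scaled) so each wall spans between two adjacent corner points.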
Posted. Last updated.
Post not yet marked as solved
0 Replies
204 Views
I have created an app with SceneKit where I have a series of walls (lines made with an SCNShapeNode) and a ball (an SCNNode using an SCNSphere). The ball is supposed to continue at a constant speed regardless of where it goes, just changing direction whenever it hits a wall. However, the ball slows down and comes to a stop after a short while. If I start it close to a wall, it bounces off the wall in the correct direction, but then still slows down to a stop shortly afterwards. Here is how the balls are created:

let sphere = SCNSphere(radius: 1.0)
let sphereNode1 = SCNNode(geometry: sphere)
let theBody1 = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(geometry: sphere))
sphereNode1.worldPosition = SCNVector3(0, 20, 0)
sphereNode1.physicsBody = theBody1
//sphereNode1.physicsBody!.velocity = SCNVector3(0.0, 2.0, 0.0)
sphereNode1.physicsBody!.applyForce(SCNVector3(0.0, 2.0, 0.0), asImpulse: true)
sphereNode1.physicsBody!.physicsShape = SCNPhysicsShape(geometry: sphere)
sphereNode1.physicsBody!.isAffectedByGravity = false
sphereNode1.physicsBody!.restitution = 1.0
sphereNode1.geometry?.materials = [sphereMaterial]

I tried both giving the ball a velocity and applying a force to it; both resulted in the behaviour described above. I can't seem to find anything in the documentation that suggests a way to resolve this. I actually did the same thing in the SpriteKit environment and everything worked fine. Hopefully someone can tell me what I'm missing. Thanks, Michael
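For what it's worth, the usual cause of this is the default damping on SCNPhysicsBody: linearly and angularly, bodies lose speed over time even without gravity or contacts (both damping values default to 0.1). A minimal sketch of the change, using the names from the snippet above:

theBody1.damping = 0.0          // no linear drag (default is 0.1)
theBody1.angularDamping = 0.0   // no rotational drag (default is 0.1)
theBody1.friction = 0.0         // avoid losing energy on wall contacts
theBody1.restitution = 1.0      // perfectly elastic bounces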
Posted. Last updated.
Post not yet marked as solved
2 Replies
332 Views
In the code below I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using a SceneKit SCNGeometry. This lets me stretch face mesh vertices as my requirements demand. The problem I am facing is as follows: I am trying to apply a lipstick texture, which is an SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using my CustomFaceGeometry. When I apply a lipstick texture like the image attached below, the full face gets colored or modified; I want only the part of the face where the texture's alpha is greater than 0 to be affected, and I don't want the rest of the face to be modified. Can you give me a detailed solution using code?

// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}

class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .point, primitiveCount: vertices.count, bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }

    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }

    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }

    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])
        let ll_ratio_y = Float(0.969999)
        vertices[290] = SCNVector3(x: vertices[290].x, y: vertices[290].y * ll_ratio_y, z: vertices[290].z)
        vertices[274] = SCNVector3(x: vertices[274].x, y: vertices[274].y * ll_ratio_y, z: vertices[274].z)
        vertices[265] = SCNVector3(x: vertices[265].x, y: vertices[265].y * ll_ratio_y, z: vertices[265].z)
        vertices[700] = SCNVector3(x: vertices[700].x, y: vertices[700].y * ll_ratio_y, z: vertices[700].z)
        vertices[730] = SCNVector3(x: vertices[730].x, y: vertices[730].y * ll_ratio_y, z: vertices[730].z)
        vertices[25] = SCNVector3(x: vertices[25].x, y: vertices[25].y * ll_ratio_y, z: vertices[25].z)
        vertices[709] = SCNVector3(x: vertices[709].x, y: vertices[709].y * ll_ratio_y, z: vertices[709].z)
        vertices[725] = SCNVector3(x: vertices[725].x, y: vertices[725].y * ll_ratio_y, z: vertices[725].z)
        vertices[710] = SCNVector3(x: vertices[710].x, y: vertices[710].y * ll_ratio_y, z: vertices[710].z)
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData, semantic: .vertex, vectorCount: vertices.count, usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .triangles, primitiveCount: indices.count / 3, bytesPerIndex: MemoryLayout<UInt16>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
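One likely missing piece, offered as a hedged sketch rather than a verified fix: the custom geometry above has no texture coordinates, so the lipstick image has nothing to map against and ends up covering the whole face. ARFaceGeometry exposes textureCoordinates; adding them as a second geometry source lets the PNG's alpha mask everything but the lips.

// Inside createCustomSCNGeometry(from:), alongside the vertex source (sketch):
let uvs = faceAnchor.geometry.textureCoordinates
let uvData = Data(bytes: uvs, count: uvs.count * MemoryLayout<vector_float2>.stride)
let uvSource = SCNGeometrySource(data: uvData, semantic: .texcoord, vectorCount: uvs.count, usesFloatComponents: true, componentsPerVector: 2, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<vector_float2>.stride)
let geometry = SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

// Material whose PNG alpha masks everything but the lips.
let lipstickMaterial = SCNMaterial()
lipstickMaterial.diffuse.contents = UIImage(named: "Face.scnassets/lip_arks_y7.png")
lipstickMaterial.blendMode = .alpha
lipstickMaterial.isDoubleSided = true
geometry.firstMaterial = lipstickMaterial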
Posted by akash-ar. Last updated.
Post not yet marked as solved
1 Reply
326 Views
Hi, My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes. Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes. This seems rather limiting. Is there a way either of these components can be utilized? I could build a different "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open. Thank you
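Until presentations are allowed there, one hedged workaround sketch: a fixed palette of plain buttons avoids the popover presentation that ColorPicker needs, so it can live inside the volumetric window next to the model. The type name and palette below are illustrative.

import SwiftUI

struct InlineColorPalette: View {
    @Binding var selection: Color
    private let palette: [Color] = [.red, .orange, .yellow, .green, .blue, .purple, .white]

    var body: some View {
        HStack {
            ForEach(palette, id: \.self) { color in
                Button {
                    selection = color
                } label: {
                    Circle()
                        .fill(color)
                        .frame(width: 28, height: 28)
                        .overlay(Circle().strokeBorder(.white, lineWidth: selection == color ? 2 : 0))
                }
                .buttonStyle(.plain)
            }
        }
    }
}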
Posted by zer0x. Last updated.
Post not yet marked as solved
0 Replies
254 Views
I experience an issue with SceneKit that is driving me crazy ;( I get severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary? (i.e. run with MTL_DEBUG_LAYER=1 for TestFlight or App Store builds.) The hangs happen on Catalyst, but also on iOS if I use lightingEnvironment...
Posted by Gil. Last updated.
Post not yet marked as solved
0 Replies
203 Views
An SCNNode is created and used for either an SCNView or an SKView. SceneKit and SpriteKit are using default values. The SCNView has an SCNScene with the SCNNode as its root node content. The SKView has an SKScene with an SK3DNode that has an SCNScene with the same SCNNode as its root node content. There is no other code changing or adding values. Why are the colors in the SCNView less vibrant than the colors in the SKView? Is there a default to change to make them equivalent, or another value to add? I have tried changing the default SCNMaterial, but only succeeded in making the image black or dark. Any help is appreciated.
Posted by philk1701. Last updated.
Post not yet marked as solved
0 Replies
399 Views
I'm porting a SceneKit app to RealityKit, eventually offering an AR experience there. I noticed that when I run it on my iPhone 15 Pro and iPad Pro with the 120Hz screen, the framerate seems to be limited to 60fps. Is there a way to increase the target framerate to 120, like I can with SceneKit? I'm setting up my arView like so:

@IBOutlet private var arView: ARView! {
    didSet {
        arView.cameraMode = .nonAR
        arView.debugOptions = [.showStatistics]
    }
}
Posted. Last updated.
Post not yet marked as solved
0 Replies
220 Views
I have a human-like rigged 3D model in a DAE file. I want to programmatically build a scene with several instances of this model in different poses. I can extract the SCNSkinner and skeleton chain from the DAE file without problem. I have discovered that to get different poses, I need to clone the skeleton chain, clone the SCNSkinner as well, and then modify the skeletons' positions. Works fine. This is done this way:

// Read the skinner from the DAE file
let skinnerNode = daeScene.rootNode.childNode(withName: "toto-base", recursively: true)! // skinner
let skeletonNode1 = skinnerNode.skinner!.skeleton!

// Adding the skinner node as a child of the skeleton node makes it easier to
// 1) clone the whole thing
// 2) add the whole thing to a scene
skeletonNode1.addChildNode(skinnerNode)

// Clone the first instance to get a second instance
var skeletonNode2 = skeletonNode1.clone()

// Position and move the first instance
skeletonNode1.position.x = -3
let skeletonNode1_rightLeg = skeletonNode1.childNode(withName: "RightLeg", recursively: true)!
skeletonNode1_rightLeg.eulerAngles.x = 0.6
scene.rootNode.addChildNode(skeletonNode1)

// Position and move the second instance
skeletonNode2.position.x = 3
let skeletonNode2_leftLeg = skeletonNode2.childNode(withName: "LeftLeg", recursively: true)!
skeletonNode2_leftLeg.eulerAngles.z = 1.3
scene.rootNode.addChildNode(skeletonNode2)

It seems the boneWeights and boneIndices sources are duplicated for each skinner, so if I have, let's say, 100 instances, I eat a huge amount of memory for something that is constant. Is there any way to avoid the duplication of the boneWeights and boneIndices?
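One idea worth trying, offered as a sketch rather than a verified fix: instead of cloning the skinner, build each instance's SCNSkinner with the designated initializer and pass the same boneWeights and boneIndices SCNGeometrySource objects every time, so the heavy per-vertex data is referenced rather than copied. Whether SceneKit keeps sharing them internally is something to confirm with the memory graph debugger.

import SceneKit

// A skinner for a cloned skeleton that reuses the original's per-vertex sources.
func makeSkinner(sharing original: SCNSkinner, bones: [SCNNode]) -> SCNSkinner {
    SCNSkinner(baseGeometry: original.baseGeometry,
               bones: bones,                                        // the cloned skeleton's bone nodes
               boneInverseBindTransforms: original.boneInverseBindTransforms,
               boneWeights: original.boneWeights,                   // shared SCNGeometrySource
               boneIndices: original.boneIndices)                   // shared SCNGeometrySource
}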
Posted. Last updated.
Post not yet marked as solved
1 Reply
263 Views
Hello fellow developers, here is something that I don't fully grasp: 1/ I have a fake SceneKit scene with two nodes, each carrying a light. 2/ I have a small widget to explore those lights and tweak some parameters -> in the small widget I can't update a toggle item when a new light is selected, while the other params are updated! Here is a short sample that illustrates what I am trying to resolve:

import SwiftUI
import SceneKit

class ShortScene {
    var scene = SCNScene()
    var lightNodes: [SCNNode] {
        get { scene.rootNode.childNodes(passingTest: { current, stop in current.light != nil }) }
    }

    init() {
        let light1 = SCNLight()
        light1.castsShadow = false
        light1.type = .omni
        light1.intensity = 100
        let nodelight1 = SCNNode()
        nodelight1.light = light1
        nodelight1.name = "nodeLight1"
        scene.rootNode.addChildNode(nodelight1)

        let light2 = SCNLight()
        light2.castsShadow = false
        light2.type = .ambient
        light2.intensity = 300
        let nodelight2 = SCNNode()
        nodelight2.light = light2
        nodelight2.name = "nodeLight2"
        scene.rootNode.addChildNode(nodelight2)
    }
}

extension SCNLight: ObservableObject {}
extension SCNNode: ObservableObject {}

struct LightViewEx: View {
    @ObservedObject var lightParam: SCNLight
    @ObservedObject var lightNode: SCNNode
    var bindCol: Binding<Color>
    @State var castShadows: Bool

    init(_ _lightNode: SCNNode) {
        if let _light = _lightNode.light {
            lightParam = _light
            lightNode = _lightNode
            bindCol = Binding<Color>(
                get: {
                    if let _lightcol = _lightNode.light!.color as! NSColor? { return Color(_lightcol) }
                    else { return Color.red }
                },
                set: { newCol in _lightNode.light!.color = NSColor(newCol) }
            )
            castShadows = _lightNode.light!.castsShadow
            print("For \(lightNode.name!) : CShadows \(castShadows)")
        } else {
            fatalError("No Light attached to Node")
        }
    }

    var body: some View {
        VStack(alignment: .leading) {
            Text("Light Params")
            Picker("Type", selection: $lightParam.type) {
                Text("IES").tag(SCNLight.LightType.IES)
                Text("Ambient").tag(SCNLight.LightType.ambient)
                Text("Directionnal").tag(SCNLight.LightType.directional)
                Text("Omni").tag(SCNLight.LightType.omni)
                Text("Probe").tag(SCNLight.LightType.probe)
                Text("Spot").tag(SCNLight.LightType.spot)
                Text("Area").tag(SCNLight.LightType.area)
            }
            ColorPicker("Light Color", selection: bindCol)
            Text("Intensity")
            TextField("Intensity", value: $lightParam.intensity, formatter: NumberFormatter())
            Divider()
            // Toggle("shadows", isOn: $lightParam.castsShadow).onChange(of: lightParam.castsShadow, { lightParam.castsShadow.toggle() })
            Toggle("CastShadows", isOn: $castShadows)
                .onChange(of: castShadows) { lightParam.castsShadow = castShadows; print("castsShadows changed to \(castShadows)") }
        }
    }
}

struct sceneView: View {
    @State var _lightIdx: Int = 0
    @State var shortScene = ShortScene()

    var body: some View {
        VStack(alignment: .leading) {
            if shortScene.lightNodes.isEmpty == false {
                Picker("Lights", selection: $_lightIdx) {
                    ForEach(0..<shortScene.lightNodes.count, id: \.self) { index in
                        Text(shortScene.lightNodes[index].name ?? "NoName").tag(index)
                    }
                }
                GridRow(alignment: .top) {
                    LightViewEx(shortScene.lightNodes[_lightIdx])
                }
            }
        }
    }
}

struct testUIView: View {
    var body: some View {
        sceneView()
    }
}

#Preview {
    testUIView()
}

Something is obviously not right! Anyone has some idea?
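A hedged sketch of one likely explanation and fix: castShadows is @State seeded in init(), and @State keeps its previous value when SwiftUI reuses the same view position for a newly selected light, so every other control (which reads the light directly) updates while the Toggle does not. Driving the Toggle from the light itself, rather than from local @State, sidesteps that:

// Replace the @State-backed toggle with a Binding into the selected light.
Toggle("CastShadows", isOn: Binding(
    get: { lightParam.castsShadow },
    set: { lightParam.castsShadow = $0 }
))

An alternative is to keep the @State but force a fresh view identity when the selection changes, e.g. LightViewEx(shortScene.lightNodes[_lightIdx]).id(_lightIdx).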
Posted by Pom73. Last updated.