Post not yet marked as solved
Hello! I have been using this piece of code to create a custom plane geometry using the SCNGeometryPrimitiveType option ".polygon":

extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        // For .polygon, the first index is the number of vertices in the polygon
        var indices: [Int32] = [Int32(vertices.count)]
        var index: Int32 = 0
        for _ in vertices {
            indices.append(index)
            index += 1
        }
        let vertexSource = SCNGeometrySource(vertices: vertices)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .polygon,
                                         primitiveCount: 1,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blue
        material.isDoubleSided = true
        geometry.firstMaterial = material
        return geometry
    }
}

This works by passing in vertex coordinates in an order that represents the outline of the desired plane. As an example, I might make an array of vertices representing a rectangle as follows: [lowerLeft, upperLeft, upperRight, lowerRight].

This method seems to work well for simpler shapes, but I sometimes get an error, whose cause I haven't been able to find, when using more complex shapes or vertex coordinates that are randomly scattered in a plane (e.g. when the method receives an array whose vertex order does not outline a shape; in the rectangle case it could look like this: [lowerLeft, upperRight, lowerRight, upperLeft]). The error seems more likely to occur as the number of vertices increases.

I'm using this method to allow the user of my app to "paint" an outline of the desired plane, and as I can't control how the user chooses to do so, I want this method to handle those cases as well.

This is the error printed after calling this method:

-[MTLDebugDevice validateNewBufferArgs:options:]:467: failed assertion `Cannot create buffer of zero length.'
(lldb)

And this is what appears in the debug navigator:

libsystem_kernel.dylib`__pthread_kill:
0x219cad0c4 <+0>: mov x16, #0x148
0x219cad0c8 <+4>: svc #0x80
-> 0x219cad0cc <+8>: b.lo 0x219cad0e4 ; <+32>
0x219cad0d0 <+12>: stp x29, x30, [sp, #-0x10]!
0x219cad0d4 <+16>: mov x29, sp
0x219cad0d8 <+20>: bl 0x219ca25d4 ; cerror_nocancel
0x219cad0dc <+24>: mov sp, x29
0x219cad0e0 <+28>: ldp x29, x30, [sp], #0x10
0x219cad0e4 <+32>: ret

where line 4 has the error: com.apple.scenekit.scnview-renderer (16): signal SIGABRT

Any help or explanation for this error would be greatly appreciated!
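The zero-length-buffer assertion suggests the geometry is at some point built from an empty index array. As a sketch of a defensive approach (this is an assumption about the cause, not a confirmed fix), you could reject inputs with fewer than three vertices and, for scattered input, sort the points by angle around their centroid so they trace a simple outline before calling polygonPlane(vertices:). The helper name orderedOutline and the assumption that the points lie roughly in the XZ plane are mine:

```swift
import SceneKit

// Hypothetical helper: returns nil for degenerate input (which would
// otherwise produce a zero-length Metal buffer), and orders scattered
// points by angle around their centroid so they outline a simple,
// star-shaped polygon. Assumes the points lie roughly in the XZ plane.
func orderedOutline(from vertices: [SCNVector3]) -> [SCNVector3]? {
    guard vertices.count >= 3 else { return nil }
    let cx = vertices.map { $0.x }.reduce(0, +) / Float(vertices.count)
    let cz = vertices.map { $0.z }.reduce(0, +) / Float(vertices.count)
    return vertices.sorted {
        atan2($0.z - cz, $0.x - cx) < atan2($1.z - cz, $1.x - cx)
    }
}
```

An angular sort only guarantees a non-self-intersecting outline for star-shaped point sets; SceneKit's .polygon tessellation is also known to behave badly with self-intersecting outlines, which may be why scrambled vertex orders fail more often as the vertex count grows.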
Summary:
I am using the Vision framework, in conjunction with AVFoundation, to detect facial landmarks of each face in the camera feed (by way of the VNDetectFaceLandmarksRequest). From here, I am taking the found observations and unprojecting each point to a SceneKit View (SCNView), then using those points as the vertices to draw a custom geometry that is textured with a material over each found face.
Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions. In general, this task is functioning as expected, but only when my device is using the front camera in landscape right orientation. When I rotate my device, or switch to the rear camera, the unprojected points do not properly align with the found face as they do in landscape right/front camera.
Problem:
When testing this code, the mesh appears properly (that is, appears affixed to a user's face), but again, only when using the front camera in landscape right. While the code runs as expected (that is, generating the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases.
My belief is that this issue stems either from how I convert the face's bounding box (using VNImageRectForNormalizedRect, which I calculate with the width/height of my SCNView rather than of my pixel buffer, which is typically much larger), or from my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether it is needed here. All modifications I have tried so far result in the same issue.
Sample of Vision Request Setup:
// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                    orientation: exifOrientation,
                                    options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]
    // Draw faces
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}
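Since the mesh only lines up for the front camera in landscape right, one thing worth double-checking is the EXIF orientation fed to the VNImageRequestHandler: front-camera buffers are mirrored, so the mapping differs per camera as well as per device orientation. The exifOrientationForCurrentDeviceOrientation() helper isn't shown here, so the following is only a hedged sketch of a mapping commonly used with Vision:

```swift
import UIKit
import ImageIO

// Assumed implementation of the orientation helper; verify against your
// own. Front-camera buffers are mirrored, hence the *Mirrored cases.
func exifOrientation(for deviceOrientation: UIDeviceOrientation,
                     usingFrontCamera: Bool) -> CGImagePropertyOrientation {
    switch deviceOrientation {
    case .portrait:           return usingFrontCamera ? .leftMirrored  : .right
    case .portraitUpsideDown: return usingFrontCamera ? .rightMirrored : .left
    case .landscapeLeft:      return usingFrontCamera ? .downMirrored  : .up
    case .landscapeRight:     return usingFrontCamera ? .upMirrored    : .down
    default:                  return usingFrontCamera ? .leftMirrored  : .right
    }
}
```

If the orientation is right, the remaining suspect is the bounding-box conversion: VNImageRectForNormalizedRect must be given the dimensions of the space you unproject in, and those must agree with the orientation handed to Vision.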
Sample of SCNView Setup:
// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])
// Setup scene
let scene = SCNScene()
scnView.scene = scene
// Setup camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)
// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)
Sample of "face processing"
func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNodes = [SCNNode]()
    // The origin point
    let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)
    // Iterate through each found face
    for observation in observations {
        // Setup an SCNNode for the face
        let face = SCNNode()
        // Setup the found bounds
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))
        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)
            // Add all points as vertices
            var vertices = [SCNVector3]()
            // Verify we have points
            if let allPoints = landmarks.allPoints {
                // Iterate through each point
                for point in allPoints.normalizedPoints {
                    // Apply the transform to convert each point to the face's bounding box range
                    let normalizedPoint = point.applying(affineTransform)
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = sceneView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }
            // Setup indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...
            // Setup texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...
            // Setup texture image
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                return CGPoint(x: x, y: y)
            }
            // Setup sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)
            // Setup elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)
            // Setup geometry
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage
            // Setup node
            let customFace = SCNNode(geometry: geometry)
            sceneView.scene?.rootNode.addChildNode(customFace)
            // Append the face to the face nodes array
            faceNodes.append(face)
        }
    }
    // Iterate the face nodes and append to the scene
    for node in faceNodes {
        sceneView.scene?.rootNode.addChildNode(node)
    }
}
Does anyone have a working example on how to play OGG files with swift?
I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift and used it to parse an OGG file successfully. Then I had to use Objective-C++ to fill the PCM buffer, because that part seems to only be available in C++, and it hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and crashes.
I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac).
I also tried using Cricket Audio framework below.
https://github.com/sjmerel/ck
It has a swift example and it can play their proprietary soundbank format but it is also supposed to play OGG and it just doesn't do anything when trying to play OGG as you can see in the posted issue
https://github.com/sjmerel/ck/issues/3
Right now I believe every player that can play OGGs on mac is written in Objective-C or C++.
Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
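For what it's worth, here is a rough sketch of how the vorbisfile C API can be driven from Swift without an Objective-C++ layer: decode into an AVAudioPCMBuffer off the main thread, then schedule it on an AVAudioPlayerNode. This assumes libvorbisfile is linked and its headers are exposed through a bridging header; it is untested scaffolding, not a drop-in player:

```swift
import AVFoundation

// Decode a whole OGG file into a 16-bit interleaved PCM buffer.
// ov_fopen / ov_info / ov_pcm_total / ov_read are the stock vorbisfile
// calls; run this off the main thread for large files.
func decodeOgg(atPath path: String) -> AVAudioPCMBuffer? {
    var vf = OggVorbis_File()
    guard ov_fopen(path, &vf) == 0 else { return nil }
    defer { ov_clear(&vf) }

    guard let info = ov_info(&vf, -1)?.pointee,
          let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                     sampleRate: Double(info.rate),
                                     channels: AVAudioChannelCount(info.channels),
                                     interleaved: true) else { return nil }

    let frames = AVAudioFrameCount(ov_pcm_total(&vf, -1))
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else { return nil }

    let bytesPerFrame = Int(info.channels) * MemoryLayout<Int16>.size
    let capacity = Int(frames) * bytesPerFrame
    let dst = UnsafeMutableRawPointer(pcm.int16ChannelData![0])
    var filled = 0
    var bitstream: Int32 = 0
    while filled < capacity {
        // 0 = little endian, 2 = 16-bit words, 1 = signed samples
        let n = ov_read(&vf, dst.advanced(by: filled).assumingMemoryBound(to: CChar.self),
                        Int32(min(4096, capacity - filled)), 0, 2, 1, &bitstream)
        if n <= 0 { break } // 0 = end of stream, negative = stream error
        filled += Int(n)
    }
    pcm.frameLength = AVAudioFrameCount(filled / bytesPerFrame)
    return pcm
}
```

The several-minute hang described above sounds like the decode happening synchronously on the main thread; chunked ov_read calls as above are cheap, and since vorbisfile is plain C it is callable from Swift directly, with no Objective-C++ bridge.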
I'm not sure which combination of iOS/Xcode/macOS is causing this issue, but all of a sudden, when I try to run our SceneKit app with the "Scheme -> Diagnostics -> Metal -> API Validation" setting turned off, the scene won't render and the console fills with the following errors:
Execution of the command buffer was aborted due to an error during execution. Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)
[SceneKit] Error: Main command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=9 "Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)"
)
If you run the app outside of Xcode it's fine; enabling the "API Validation" option also stops the issue.
One of my schemes has had this option disabled since the project began and never had an issue before. Just throwing this out there in case someone else has spent hours of their life trying to figure out why this is not working for them.
Also, you can simply create a new SceneKit project and turn that diagnostic option off, and the app won't render anything.
I am attempting to build an AR app using Storyboard and SceneKit. When I ran an existing app I had already used, it built but nothing would happen. I thought this behavior was odd, so I decided to start from scratch on a new project. I started with the default AR project for Storyboard and SceneKit, and upon running it immediately fails with a nil-unwrapping error on the scene, even though the scene file is obviously there. I am also given four build-time warnings:
Could not find bundle inside /Library/Developer/CommandLineTools
failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]
Conversion failed, will simply copy input to output.
Copy failed file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn -> file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn error:Error Domain=NSCocoaErrorDomain Code=516 "“ship.scn” couldn’t be copied to “art.scnassets” because an item with the same name already exists." UserInfo={NSSourceFilePathErrorKey=/Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn, NSUserStringVariant=(
I am currently unsure how to fix these errors. They appear to come from the command line tools, because the same issue is present after moving the device support files back to a stable version of Xcode. Is anyone else having these issues?
I need to create a 2D floor plan, with dimensions included, from the 3D result generated by RoomPlan. Is there a way to create it in a reasonably straightforward manner, or will it require elaborate manual coding?
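RoomPlan doesn't produce a 2D plan for you, so some coding is unavoidable, but it needn't be elaborate: each wall in the CapturedRoom result carries a transform and dimensions, so you can project every wall onto the floor plane and draw the resulting line segments with Core Graphics or SwiftUI, annotating each with wall.dimensions.x as its length. A minimal sketch (the helper name is mine):

```swift
import RoomPlan
import simd

// Project each wall of a CapturedRoom onto the XZ (floor) plane as a
// 2D line segment. The wall's local X axis runs along its width, and
// column 3 of its transform holds its world-space center.
func wallSegments(in room: CapturedRoom) -> [(start: SIMD2<Float>, end: SIMD2<Float>)] {
    room.walls.map { wall in
        let t = wall.transform
        let center = SIMD2(t.columns.3.x, t.columns.3.z)
        let halfAlong = SIMD2(t.columns.0.x, t.columns.0.z) * (wall.dimensions.x / 2)
        return (center - halfAlong, center + halfAlong)
    }
}
```

The same idea extends to room.doors, room.windows, and room.objects if you want openings and furniture footprints on the plan.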
SceneKit has started filling my console with this log message:
"Pass FloorPass is not linked to the rendering graph and will be ignored check it's input/output"
It feels like I'm the only one on the planet using SceneKit, but if anyone can guess at what is happening, or the reason for this, I'd be thankful.
Dear Apple Team and everyone who has experience with MapKit.
I am building an app where I need to hide some of the map's 3D models and replace them with my own custom 3D meshes using SceneKit.
Until now I was using Mapbox, which allows you to access the raw mesh data to reconstruct all of the map's 3D geometry.
Is something like this possible with MapKit?
Use cases
Say you navigate to Kennedy Space Center Launch Complex 39 and there is no 3D model of the actual building. I would like to be able to hide the simple massing and replace it with my own model.
In the 3D satellite view, some areas have detailed meshes, say London's The Queen's Walk. I would like to flatten a specific area so I can place my 3D model on top of the satellite 3D view to illustrate a new structure or building.
Last one: is it possible to change existing buildings' colours? I know transparency is possible.
Thank you
@apple
The Xcode template "iOS Game" (Swift, SceneKit) fails to run with the error shown. It might be related to scntool. Any idea how to fix this?
I am using Xcode 14.3 and I get an issue that says "Could not find bundle inside /Library/Developer/CommandLineTools" under scntool. I searched this up and tried reinstalling CommandLineTools, reinstalling Xcode, and resetting Xcode.
Here are screenshots of the issue: https://docs.google.com/document/d/1H6HsoZoJhISMo-5cXG-kN0hat6jftcujL1iYK2TRkeA/edit
And here is the code: https://github.com/EnderRobber101/Xcode
Is there a solution? Please let me know. Thank you for reading.
Is there a way to mimic this functionality found in UIKit in SwiftUI?
Long story short, I am creating an interactive SceneKit view (currently in UIKit) that anchors 2D UIViews over nodes, for navigation and labelling.
I would like to migrate over to SwiftUI, but I am having difficulties mimicking this functionality.
let subViews = self.view.subviews.compactMap { $0 as? UIButton }
if let view = subViews.first(where: { $0.currentTitle == label }) {
    view.center = self.getScreenPoint(renderer: renderer, node: node)
}
Can anyone help me with this?
Thanks!
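In SwiftUI, the usual pattern is to keep the projected screen point in observable state (updated from your SCNSceneRendererDelegate callback, just as in the UIKit code) and drive a .position modifier with it. A hedged sketch; the type and property names here are illustrative, not an established API:

```swift
import SwiftUI
import SceneKit

// State published from the renderer delegate: label text -> screen point.
final class OverlayModel: ObservableObject {
    @Published var points: [String: CGPoint] = [:]
}

// Draws one SwiftUI label per tracked node, centered on its projected
// screen position (the SwiftUI analogue of setting UIView.center).
struct NodeLabels: View {
    @ObservedObject var model: OverlayModel

    var body: some View {
        ZStack {
            ForEach(Array(model.points.keys), id: \.self) { label in
                Button(label) { /* navigation action */ }
                    .position(model.points[label] ?? .zero)
            }
        }
    }
}
```

Overlay this view on your SceneView (e.g. in a ZStack), and update model.points on the main thread from renderer(_:didRenderScene:atTime:) using the same projectPoint logic as your getScreenPoint.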
I am using the SCNScene.write method to export scene data to DAE (COLLADA) format. The file is exported, but it's not a COLLADA XML file; it's a binary file. On further investigation I found it is an Apple binary plist file, and after renaming its extension to .plist I can view the geometry data as a standard plist.
This file opens in Xcode but not in any other DAE viewer.
I am on Xcode 14.3 and iOS 16.
Hi, my app displays a video as a texture on a SceneKit SCNNode. I've just found that some videos look different in iOS 16.4 than in previous versions: they look more pale than they should. I looked up some documents and set SCNDisableLinearSpaceRendering to true in the Info.plist, and then they look exactly as they should, but the problem is that other videos, which already looked fine, now turned different. In any case it seems related to linear versus gamma color spaces, going by this answer (https://developer.apple.com/forums/thread/710643).
The problematic videos clearly have some different color space setting. I am not really an expert in this field; where should I start digging?
Or how can I just make iOS 16.4 behave the same as previous versions? It worked well for all the videos then. What was actually updated?
Hi everybody,
I am an Engineering Student and at the University we have to create a little
AR-App.
Now, in Xcode, I want to set up image tracking and show my 3D object above the tracked image. I followed this video: "https://www.youtube.com/watch?v=VmPHE8M2GZI" until minute 39:18. After that, it doesn't work.
The simulator detects the image and shows a light grey plane above it, even if I move around. But the 3D model doesn't show up.
I imported the ns.obj file into art.scnassets
Converted it to a SceneKit .scn file
Changed the texture "diffuse" to green
I tried to scale it, but still no result
I also tried with a 3D object downloaded from the internet
Long story short... it doesn't work. Does anyone know what the problem could be?
Thank you very much.
Greetings, Rosario
PS:
I use the Xcode Version 14.3.
That's my code in the ViewController.swift file:
import SwiftUI
import RealityKit
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    var nsNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.autoenablesDefaultLighting = true
        let nsScene = SCNScene(named: "art.scnassets/ns.scn")
        nsNode = nsScene?.rootNode
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARImageTrackingConfiguration()
        if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: Bundle.main) {
            configuration.trackingImages = trackingImages
            configuration.maximumNumberOfTrackedImages = 2
        }
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            let size = imageAnchor.referenceImage.physicalSize
            let plane = SCNPlane(width: size.width, height: size.height)
            plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.5)
            plane.cornerRadius = 0.005
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
            if let shapeNode = nsNode {
                node.addChildNode(shapeNode)
            }
        }
        return node
    }
}
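One hedged guess at why the plane shows but the model doesn't: nsScene?.rootNode is the root of the whole .scn file rather than the model node itself, and a freshly converted OBJ is often far larger than the few centimetres of a tracked image, so the model may be present but enormous and out of frame. Something along these lines may help (the node name "ns" is an assumption; check the actual name in Xcode's scene graph editor):

```swift
import SceneKit

// In viewDidLoad: grab the model's own node, shrink it to image-tracking
// scale, and lift it slightly so it isn't z-fighting with the plane.
if let nsScene = SCNScene(named: "art.scnassets/ns.scn"),
   let model = nsScene.rootNode.childNode(withName: "ns", recursively: true) {
    model.scale = SCNVector3(0.001, 0.001, 0.001) // tune to your model's units
    model.position = SCNVector3(0, 0.01, 0)       // just above the tracked image
    nsNode = model
}
```

Also note that a node can only have one parent: with maximumNumberOfTrackedImages = 2, adding the same nsNode to a second anchor's node silently reparents it away from the first, so clone it per anchor if you expect multiple tracked images.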
I can't post a video, I don't think. But in the screenshots, I'm slightly making a circle with the phone, and the green lines will disappear and then reappear. Those green lines are drawn via .addChildNode().
We're using RoomPlan to detect cabinets (.storage), and then we outline the cabinets with SCNNodes. We have other methods of capturing cabinets that don't use RoomPlan, and the lines for those cabinets do not wink in and out.
Perhaps there is a bug with visibility culling?
We're quite sure the nodes are not disappearing because of any .hide() call on our part.
Perhaps object detection from RoomPlan running in the background is interfering?
Will visionOS support SceneKit?
I am creating an AR app that uses ARKit and SceneKit.
Will visionOS support SceneKit AR applications, or will I have to rewrite my app using RealityKit?
Thanks for answering my question.
: - )
I'm wondering if it's possible to use SceneKit or GameplayKit to handle physics and so on, while having SwiftUI create the 3D content?
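This combination does work: SwiftUI provides SceneView for hosting an SCNScene, so SceneKit continues to run the physics simulation while SwiftUI owns the view hierarchy. A minimal sketch:

```swift
import SwiftUI
import SceneKit

// A ball with a dynamic physics body falls onto a static floor; SceneKit
// simulates the physics, SwiftUI merely hosts the view.
struct PhysicsDemo: View {
    let scene: SCNScene = {
        let scene = SCNScene()

        let ball = SCNNode(geometry: SCNSphere(radius: 0.5))
        ball.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
        ball.position = SCNVector3(0, 5, 0)
        scene.rootNode.addChildNode(ball)

        let floor = SCNNode(geometry: SCNFloor())
        floor.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        scene.rootNode.addChildNode(floor)

        return scene
    }()

    var body: some View {
        SceneView(scene: scene,
                  options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}
```

GameplayKit (entities, state machines, pathfinding) is view-agnostic, so it composes with this the same way it does in UIKit. Note that the 3D content itself is still described with SceneKit nodes; SwiftUI only hosts the view.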
Hello
I am trying to understand character movement with physics. To do this, I imported Max from the fox2 sample file provided by Apple. I applied a .static physics body to him, and I have a floor with static physics plus static blocks to test collisions. Everything works very well, except that Max hovers above the ground.
He doesn't touch my ground. I couldn't understand why until I displayed the physics shapes via the debug options. With that I can see that Max does not touch the ground because his automatically generated shape sits below him, and it is that shape which touches the ground.
So I would like to know why the shape is shifted downwards and how to correct this problem.
I did some tests, and the problem seems to come from physicsBody?.mass. If I remove the mass, the shape is correct, but when I move my character he passes through the walls; when I put the mass back, he is properly stopped by the static boxes...
Does anyone have an idea of how to correct this problem?
This is my simplified code:
import SceneKit
import PlaygroundSupport
// create a scene view with an empty scene
var sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
var scene = SCNScene()
sceneView.scene = scene
// start a live preview of that view
PlaygroundPage.current.liveView = sceneView
// default lighting
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.debugOptions = [.showPhysicsShapes]
// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)
scene.rootNode.addChildNode(cameraNode)
// Make floor node
let floorNode = SCNNode()
let floor = SCNFloor()
floor.reflectivity = 0.25
floorNode.geometry = floor
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floorNode)
//add character
guard let fichier = SCNScene(named: "max.scn") else { fatalError("failed to load Max.scn") }
guard let character = fichier.rootNode.childNode(withName: "Max_rootNode", recursively: true) else { fatalError("Failed to find Max_rootNode") }
scene.rootNode.addChildNode(character)
character.position = SCNVector3(0, 0, 0)
character.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
character.physicsBody?.mass = 5
Thank you!
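Two hedged observations, since the full setup isn't shown. First, mass has no effect on a .static body (static bodies are immovable by definition), and moving a static body by setting its position is exactly what lets it pass through walls; a character you move yourself is normally .kinematic. Second, rather than relying on the automatically derived shape, which for a skinned character often ends up offset, you can supply an explicit capsule positioned from the node's bounding box:

```swift
import SceneKit

// Build an explicit capsule shape from the character's bounding box and
// translate it so it is centered on the mesh rather than the node origin.
let (minBox, maxBox) = character.boundingBox
let height = CGFloat(maxBox.y - minBox.y)
let radius = CGFloat(max(maxBox.x - minBox.x, maxBox.z - minBox.z)) / 2
let capsule = SCNPhysicsShape(geometry: SCNCapsule(capRadius: radius, height: height),
                              options: nil)
let center = SCNMatrix4MakeTranslation(0, (minBox.y + maxBox.y) / 2, 0)
let shape = SCNPhysicsShape(shapes: [capsule],
                            transforms: [NSValue(scnMatrix4: center)])
// .kinematic: driven by your code, but still collides with static bodies.
character.physicsBody = SCNPhysicsBody(type: .kinematic, shape: shape)
```

With an explicit shape the mass setting becomes irrelevant, which matches the observation that the shape only misbehaves once mass is set.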
Hello, I'm trying to understand physics by placing objects, but I have a positioning problem. I create a floor with static physics and then a static block as well. I position both elements at position 0, but the block sinks halfway into the ground.
Why?
This is my playground:
import SceneKit
import PlaygroundSupport
// create a scene view with an empty scene
var sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
var scene = SCNScene()
sceneView.scene = scene
// start a live preview of that view
PlaygroundPage.current.liveView = sceneView
// default lighting
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.debugOptions = [.showPhysicsShapes]
// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)
scene.rootNode.addChildNode(cameraNode)
// Make floor node
let floorNode = SCNNode()
let floor = SCNFloor()
floor.reflectivity = 0.25
floorNode.geometry = floor
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floorNode)
// Add box
let cube = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
cube.geometry?.firstMaterial?.diffuse.contents = UIColor.red
cube.position = SCNVector3(0, 0, 0)
cube.physicsBody = SCNPhysicsBody(type: .static, shape:SCNPhysicsShape(geometry: cube.geometry!, options: nil))
scene.rootNode.addChildNode(cube)
Thanks!
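The behaviour in the playground above is expected: a node's position is the center of its geometry, so the 1-unit cube placed at y == 0 extends from -0.5 to +0.5 and half of it sits below the floor plane at y == 0. Raising it by half its height makes it rest on the floor:

```swift
// The cube is 1 unit tall and position refers to its center,
// so lift it by half its height to sit on the floor at y == 0.
cube.position = SCNVector3(0, 0.5, 0)
```

Alternatively, you can shift the geometry's origin to its base with cube.pivot = SCNMatrix4MakeTranslation(0, -0.5, 0), so that a position of y == 0 means "standing on the floor".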