Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Metal default library not found
I used the Metal API to build a third-party static library producing a ".a" lib. I added my ***.metal file to the library project's target, then ran an example project that depends on this Metal static lib. It fails at this line:

    id<MTLLibrary> _flibrary = [_mDevice newDefaultLibrary]; // failed

with this assertion:

    /BuildRoot/Library/Caches/com.apple.xbs/Sources/Metal/Metal-56.7/Framework/MTLLibrary.mm:1842: failed assertion `Metal default library not found'

But when I add my ***.metal file directly to the example project, it works. My question is: I do not want to expose my ***.metal file to the example project. How can I hide the Metal file? Please show me, thanks.
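A minimal sketch of one possible approach (an assumption, not a confirmed fix): precompile the shaders into a .metallib shipped alongside the static library, then load it explicitly instead of relying on newDefaultLibrary, which only searches the app's main bundle. The resource name "MyShaders" is hypothetical.

    import Metal

    func loadShaderLibrary(device: MTLDevice) throws -> MTLLibrary {
        // The .metallib is assumed to be bundled with the library's resources.
        guard let url = Bundle.main.url(forResource: "MyShaders", withExtension: "metallib") else {
            throw NSError(domain: "ShaderLoading", code: 1, userInfo: nil)
        }
        // makeLibrary(URL:) loads a precompiled library from disk.
        return try device.makeLibrary(URL: url)
    }

The .metallib can be produced from the .metal sources with the command-line tools (xcrun metal, then xcrun metallib) in a build phase, so the .metal files themselves never ship with the example project.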
2 replies · 0 boosts · 7.1k views · Jul ’16
Sorting of isometric SKTileMapNode incorrect?
I'm using SKTileMapNode programmatically. The code is C# (Xamarin.iOS) but should be readable by any Swift/ObjC developer. The problem is that the sorting of tiles in isometric projection seems to be incorrect, and I cannot see why. To test, the map has 1 row and 5 columns. See the screenshot:

https://dl.dropboxusercontent.com/u/111612273/SVhjD.png

The tiles at 0|0 and 2|0 are in front of the others. The pyramid-styled tile at 4|0, however, is drawn correctly in front of the one at 3|0. I'm using two simple tiles: the first has a resolution of 133x83px and the second is 132x131px:

https://dl.dropboxusercontent.com/u/111612273/bcIvP.png
https://dl.dropboxusercontent.com/u/111612273/wzCZV.png

This is what it looks like in Tiled and what I am trying to reproduce: https://dl.dropboxusercontent.com/u/111612273/66G6p.png

The tile map is set up and added to the scene using the following code:

    var tileDef1 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_014"));
    var tileDef2 = new SKTileDefinition (SKTexture.FromImageNamed ("landscapeTiles_036"));
    var tileGroup1 = new SKTileGroup (tileDef1);
    var tileGroup2 = new SKTileGroup (tileDef2);
    var tileSet = new SKTileSet (new [] { tileGroup1, tileGroup2 }, SKTileSetType.Isometric);
    var tileMap = SKTileMapNode.Create (tileSet, 5, 2, new CGSize (128, 64));
    tileMap.Position = new CGPoint (0, 0);
    tileMap.SetTileGroup (tileGroup1, 0, 0);
    tileMap.SetTileGroup (tileGroup2, 1, 0);
    tileMap.SetTileGroup (tileGroup1, 2, 0);
    tileMap.SetTileGroup (tileGroup2, 3, 0);
    tileMap.SetTileGroup (tileGroup2, 4, 0);
    tileMap.AnchorPoint = new CGPoint (0, 0);
    Add (tileMap);

The tile size used to initialize the tile map (128|64) looks strange to me. It really hasn't anything to do with the texture size of the tiles, and unlike everywhere else in SpriteKit it doesn't seem to use "points". However, this is the size at which Tiled gives me nicely aligned tiles, and SpriteKit also looks right, aside from the sorting issue. What am I doing wrong, or where is my thinking wrong?
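A hedged workaround sketch (an assumption, not a confirmed answer): SKTileMapNode draws its tiles as a single node at one zPosition, so tall isometric tiles that overhang their neighbors sometimes need per-tile ordering. A common approach is to place such tiles as separate SKSpriteNodes whose zPosition is derived from their map position, for example (in Swift; the formula is illustrative):

    import SpriteKit

    // Tiles nearer the viewer (lower rows in isometric view) get a higher
    // zPosition so they draw on top of tiles behind them.
    func zPosition(forColumn column: Int, row: Int, numberOfRows: Int) -> CGFloat {
        return CGFloat((numberOfRows - row) + column)
    }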
7 replies · 0 boosts · 1.8k views · Sep ’16
Creating a custom polygon plane SCNGeometry error
Hello! I have been using this piece of code to create a custom plane geometry with the SCNGeometryPrimitiveType option ".polygon":

    extension SCNGeometry {
        static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
            var indices: [Int32] = [Int32(vertices.count)]
            var index: Int32 = 0
            for _ in vertices {
                indices.append(index)
                index += 1
            }
            let vertexSource = SCNGeometrySource(vertices: vertices)
            let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
            let element = SCNGeometryElement(data: indexData,
                                             primitiveType: .polygon,
                                             primitiveCount: 1,
                                             bytesPerIndex: MemoryLayout<Int32>.size)
            let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
            let material = SCNMaterial()
            material.diffuse.contents = UIColor.blue
            material.isDoubleSided = true
            geometry.firstMaterial = material
            return geometry
        }
    }

This works by passing in vertex coordinates in an order that outlines the desired plane. For example, I might build an array representing a rectangle as follows: [lowerLeft, upperLeft, upperRight, lowerRight].

This method works well for simpler shapes, but I sometimes get an error I haven't been able to trace when using more complex shapes, or vertex coordinates scattered randomly in a plane (e.g. when the method receives an array whose vertex order does not outline a shape; in the rectangle case that might be [lowerLeft, upperRight, lowerRight, upperLeft]). The error seems more likely to occur as the number of vertices increases. I'm using this method to let the user of my app "paint" the outline of the desired plane, and since I can't control how the user does that, I want the method to handle those cases as well.

This is the error printed when the method is called:

    -[MTLDebugDevice validateNewBufferArgs:options:]:467: failed assertion `Cannot create buffer of zero length.'
    (lldb)

And this is what appears in the debug navigator:

    libsystem_kernel.dylib`__pthread_kill:
        0x219cad0c4 <+0>:  mov    x16, #0x148
        0x219cad0c8 <+4>:  svc    #0x80
    ->  0x219cad0cc <+8>:  b.lo   0x219cad0e4  ; <+32>
        0x219cad0d0 <+12>: stp    x29, x30, [sp, #-0x10]!
        0x219cad0d4 <+16>: mov    x29, sp
        0x219cad0d8 <+20>: bl     0x219ca25d4  ; cerror_nocancel
        0x219cad0dc <+24>: mov    sp, x29
        0x219cad0e0 <+28>: ldp    x29, x30, [sp], #0x10
        0x219cad0e4 <+32>: ret

where line 4 shows the error: com.apple.scenekit.scnview-renderer (16): signal SIGABRT. Any help or explanation of this error would be greatly appreciated!
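One hedged guess at a mitigation (an assumption based on the zero-length-buffer assertion, not a verified cause): degenerate input, such as fewer than three points or duplicate consecutive points, can leave the tessellator with nothing to emit, so validating the outline before building the geometry may avoid the crash.

    extension SCNGeometry {
        // Hypothetical wrapper: reject inputs that cannot form a polygon.
        static func validatedPolygonPlane(vertices: [SCNVector3]) -> SCNGeometry? {
            var cleaned: [SCNVector3] = []
            for v in vertices {
                // Drop consecutive duplicates, which produce zero-area edges.
                if let last = cleaned.last, last.x == v.x, last.y == v.y, last.z == v.z { continue }
                cleaned.append(v)
            }
            guard cleaned.count >= 3 else { return nil }
            return polygonPlane(vertices: cleaned)
        }
    }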
3 replies · 0 boosts · 1.5k views · Jun ’19
RealityKit add gestures and animations to Entity/ModelEntity
I have a problem: I want to load a usdz file, place it, and animate it in RealityKit. It should also be possible to apply the standard gestures, as done in

    arView.installGestures(for: entity)

As far as I know, I have two choices for loading an Entity, for example Apple's "toy_biplane.usdz" (which also contains a skeletal animation):

    let entity = try! Entity.load(named: "toy_biplane")

or

    let entity = try! Entity.loadModel(named: "toy_biplane")

With the first one, the usdz is loaded as an Entity, not a ModelEntity. I can play its animation with

    entity.playAnimation(entity.availableAnimations[0].repeat())

but I can't use

    arView.installGestures(for: entity)

because the entity does not conform to HasCollision. I tried subclassing Entity and conforming to HasCollision and HasModel. It compiles, but even if I call generateCollisionShapes(recursive: true), the gestures don't work.

So I tried the loadModel approach, which returns a ModelEntity. There arView.installGestures works exactly as expected. But when I play the animation, the airplane just rotates around my camera very strangely. I also tried loading the assets asynchronously, with no success.

After a lot of debugging I found that the Entity from the first approach contains many children from the usdz. Each of them is part of the skeleton and has its own animation. Not so with the ModelEntity: its children property is an empty set, so the animation (e.g. the rotation of the propeller) is not applied to the skeletal element it belongs to, but to the combined overall skeleton, causing a rotation of the whole plane, which is not what I want. What am I doing wrong, or is this unintended behaviour in RealityKit? Thanks.
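A hedged sketch of a workaround sometimes suggested for this combination (assumptions: an existing arView and anchor; the collision box size is arbitrary): keep Entity.load(named:) so the skeleton hierarchy survives, then parent the loaded entity under a ModelEntity that carries the CollisionComponent the gesture installer requires.

    import RealityKit

    func placeAnimatedBiplane(in arView: ARView, at anchor: AnchorEntity) throws {
        let entity = try Entity.load(named: "toy_biplane")

        // A ModelEntity conforms to HasCollision, so gestures can attach to it.
        let gestureHost = ModelEntity()
        gestureHost.collision = CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])])
        gestureHost.addChild(entity)

        anchor.addChild(gestureHost)
        arView.scene.addAnchor(anchor)
        arView.installGestures(for: gestureHost)

        // The skeletal animation still plays on the loaded hierarchy.
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation.repeat())
        }
    }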
12 replies · 2 boosts · 8.7k views · Jul ’19
Reality Composer - Buttons don't work after exporting .reality file
Hello everyone, I've hit a problem with buttons and tap triggers in AR Quick Look. If I place a button object and another object over the button (in my case a sphere, or an imported usdz object) and export the project from Reality Composer as a .reality file, the button loses its interactivity. It works in Reality Composer's play mode (in the example video attached, the sphere starts moving when you tap the button), but nothing happens if I export the project and test it in AR Quick Look. Here is a small example of this problem (with the attached .rcproject, the .reality file, and two videos of testing the scene in Reality Composer play mode and in AR Quick Look):

https://drive.google.com/file/d/1eQa-pCEihRVtgP7jJUlpfhG5PjKZulJB/view?usp=sharing

Do you have any ideas how to fix this problem?
2 replies · 1 boost · 877 views · Feb ’20
Swift Package with Metal
Hi, I've got a Swift framework with a bunch of Metal files. Currently, users of the Swift package have to manually include a separately provided Metal library in their bundle. First question: is there a way to make a Metal library target in a Swift package and just include the .metal files (without a binary asset)? Second question: if not, Swift 5.3 has resource support; how would you recommend bundling a Metal library in a Swift package?
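A hedged sketch (assuming Swift tools 5.3 or later; the target name "MyMetalKit" is hypothetical): SwiftPM can compile .metal files placed in a target's sources into a default.metallib inside the package's resource bundle, which the package can then load through Bundle.module.

    // swift-tools-version:5.3
    import PackageDescription

    let package = Package(
        name: "MyMetalKit",
        products: [
            .library(name: "MyMetalKit", targets: ["MyMetalKit"])
        ],
        targets: [
            // .metal files in Sources/MyMetalKit are compiled into the
            // package's resource bundle automatically.
            .target(name: "MyMetalKit")
        ]
    )

At runtime, from code inside the package:

    import Metal

    func makePackageLibrary(device: MTLDevice) throws -> MTLLibrary {
        // Loads the default.metallib built from the package's .metal sources.
        return try device.makeDefaultLibrary(bundle: .module)
    }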
9 replies · 1 boost · 6.1k views · Jun ’20
Subclassing / Modifying the built-in gesture recognizers
Hello, RealityKit offers an awesome interface for installing gestures for the common interactions with a virtual object in 3D space. One of them is the EntityTranslationGestureRecognizer, which moves the 3D object through the 3D space. Checking the documentation, I found the velocity(in:) method, which I'd like to modify to limit the speed at which an object can be moved through the 3D space.

https://developer.apple.com/documentation/realitykit/entitytranslationgesturerecognizer/3255581-velocity

I haven't found a straightforward way to subclass and install this gesture recognizer yet. Am I missing something? Best, Lennart
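A hedged alternative sketch (an assumption, not a documented pattern): rather than subclassing, you can add your own target/action to the recognizers that installGestures(_:for:) returns and inspect the velocity from there. The speed limit and the isEnabled toggle (which cancels an in-flight gesture) are illustrative.

    import RealityKit
    import UIKit
    import simd

    final class GestureSpeedLimiter: NSObject {
        func install(on arView: ARView, for entity: ModelEntity) {
            // installGestures returns the recognizers it created.
            let recognizers = arView.installGestures(.translation, for: entity)
            for recognizer in recognizers {
                recognizer.addTarget(self, action: #selector(handle(_:)))
            }
        }

        @objc private func handle(_ sender: UIGestureRecognizer) {
            guard let recognizer = sender as? EntityTranslationGestureRecognizer else { return }
            let velocity = recognizer.velocity(in: nil) // world-space velocity
            if length(velocity) > 1.0 {                 // hypothetical 1 m/s cap
                recognizer.isEnabled = false            // cancels the active gesture...
                recognizer.isEnabled = true             // ...and re-arms it for the next touch
            }
        }
    }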
2 replies · 1 boost · 566 views · Jun ’20
Metal Perspective Matrix issue
New to Metal, following Janie Clayton's book. I ran into a problem creating a proper perspective projection matrix. I'm hoping someone with matrix experience will see this and the issue will jump out.

The matrix structures, in Swift:

    struct Matrix4x4 {
        var X: SIMD4<Float> = .init(repeating: 0)
        var Y: SIMD4<Float> = .init(repeating: 0)
        var Z: SIMD4<Float> = .init(repeating: 0)
        var W: SIMD4<Float> = .init(repeating: 0)
    }

and in Metal:

    float4x4 projectionMatrix;

The Swift code that generates the projection matrix:

    static func perspectiveProjection(_ aspect: Float32, fieldOfView: Float32, near: Float32, far: Float32) -> Matrix4x4 {
        var mat = Matrix4x4()
        let zRange = far - near
        let fovRadians = fieldOfView * Float32(Double.pi / 180.0)
        let yScale = 1 / tan(fovRadians * 0.5)
        let xScale = yScale / aspect
        let zScale = -(far + near) / zRange
        let wzScale = -2 * far * near / zRange
        mat.X.x = xScale
        mat.Y.y = yScale
        mat.Z.z = zScale
        mat.Z.w = -1
        mat.W.z = wzScale
        mat.W.w = 0
        return mat
    }

How the shader applies the projection matrix:

    outVertex.position = uniforms.projectionMatrix * props.modelMatrix * vert[vid].position;

The result here is just the clear color. The issue seems to be with wzScale: hard-coding it to zero and mat.W.w to 1.0 at least lets me see my scene, skewed. Playing around with those values, it seems like the objects are crushed and pushed through the camera, ending up behind it. I'm basically dead in the water here, typing word for word what is in the book. It's pretty darned frustrating; I'm just learning my way around matrices.
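A hedged comparison sketch (not the book's code): the same matrix built with simd, which makes the column layout explicit. One thing worth checking under these symptoms is the depth-range convention: Metal clips z to [0, 1], while the zScale/wzScale pair above is the OpenGL-style [-1, 1] form, so a matrix copied from a GL-oriented text can push geometry behind the camera.

    import Foundation
    import simd

    // OpenGL-style [-1, 1] depth version, matching the post's math.
    func perspectiveProjectionGL(aspect: Float, fovDegrees: Float, near: Float, far: Float) -> float4x4 {
        let fov = fovDegrees * .pi / 180
        let yScale = 1 / tan(fov * 0.5)
        let xScale = yScale / aspect
        let zRange = far - near
        return float4x4(columns: (
            SIMD4<Float>(xScale, 0, 0, 0),
            SIMD4<Float>(0, yScale, 0, 0),
            SIMD4<Float>(0, 0, -(far + near) / zRange, -1),
            SIMD4<Float>(0, 0, -2 * far * near / zRange, 0)
        ))
    }

    // Metal-style [0, 1] depth version for comparison: near maps to 0, far to 1.
    func perspectiveProjectionMetal(aspect: Float, fovDegrees: Float, near: Float, far: Float) -> float4x4 {
        let fov = fovDegrees * .pi / 180
        let yScale = 1 / tan(fov * 0.5)
        let xScale = yScale / aspect
        let zRange = far - near
        return float4x4(columns: (
            SIMD4<Float>(xScale, 0, 0, 0),
            SIMD4<Float>(0, yScale, 0, 0),
            SIMD4<Float>(0, 0, -far / zRange, -1),
            SIMD4<Float>(0, 0, -far * near / zRange, 0)
        ))
    }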
2 replies · 0 boosts · 1.4k views · Oct ’20
Properly projecting points with different orientations and camera positions?
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect the facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the resulting observations and unproject each point into a SceneKit view (SCNView), then use those points as the vertices of a custom geometry, textured with a material, drawn over each found face. Effectively, I am trying to recreate how ARFaceTrackingConfiguration functions.

In general, this works as expected, but only when my device uses the front camera in landscape-right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face the way they do in landscape right/front camera.

Problem: When testing this code, the mesh appears properly (that is, affixed to the user's face), but again, only with the front camera in landscape right. While the code runs as expected (generating the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases. My belief is that this issue stems from my conversion of the face's bounding box (using VNImageRectForNormalizedRect, which I compute with the width/height of my SCNView, not my pixel buffer, which is typically much larger), though every modification I have tried produces the same issue. Beyond that, I also suspect it could be an issue with my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that is needed here.

Sample of the Vision request setup:

    // Set up Vision request options
    var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

    // Set up camera intrinsics
    let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
    if cameraIntrinsicData != nil {
        requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
    }

    // Set EXIF orientation
    let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

    // Set up the Vision request handler
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestHandlerOptions)

    // Set up the completion handler
    let completion: VNRequestCompletionHandler = { request, error in
        let observations = request.results as! [VNFaceObservation]
        // Draw faces
        DispatchQueue.main.async {
            drawFaceGeometry(observations: observations)
        }
    }

    // Set up the image request
    let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

    // Handle the request
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }

Sample of the SCNView setup:

    // Set up the SCNView
    let scnView = SCNView()
    scnView.translatesAutoresizingMaskIntoConstraints = false
    self.view.addSubview(scnView)
    scnView.showsStatistics = true
    NSLayoutConstraint.activate([
        scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
        scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
        scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
        scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
    ])

    // Set up the scene
    let scene = SCNScene()
    scnView.scene = scene

    // Set up the camera
    let cameraNode = SCNNode()
    let camera = SCNCamera()
    cameraNode.camera = camera
    scnView.scene?.rootNode.addChildNode(cameraNode)
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

    // Set up the light
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light?.type = SCNLight.LightType.ambient
    ambientLightNode.light?.color = UIColor.darkGray
    scnView.scene?.rootNode.addChildNode(ambientLightNode)

Sample of the "face processing":

    func drawFaceGeometry(observations: [VNFaceObservation]) {
        // An array of face nodes, one SCNNode for each detected face
        var faceNode = [SCNNode]()

        // The origin point
        let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)

        // Iterate through each found face
        for observation in observations {
            // Set up an SCNNode for the face
            let face = SCNNode()

            // Set up the found bounds
            let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))

            // Verify we have landmarks
            if let landmarks = observation.landmarks {
                // Landmarks are relative to and normalized within the face bounds
                let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                    .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

                // Add all points as vertices
                var vertices = [SCNVector3]()

                // Verify we have points
                if let allPoints = landmarks.allPoints {
                    // Iterate through each point
                    for (index, point) in allPoints.normalizedPoints.enumerated() {
                        // Apply the transform to convert each point to the face's bounding box range
                        _ = index
                        let normalizedPoint = point.applying(affineTransform)
                        let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                        let unprojected = sceneView.unprojectPoint(projected)
                        vertices.append(unprojected)
                    }
                }

                // Set up indices
                var indices = [UInt16]()
                // Add indices
                // ... Removed for brevity ...

                // Set up texture coordinates
                var coordinates = [CGPoint]()
                // Add texture coordinates
                // ... Removed for brevity ...

                // Set up the texture image
                let imageWidth = 2048.0
                let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                    let x = coord.x / CGFloat(imageWidth)
                    let y = coord.y / CGFloat(imageWidth)
                    let textureCoord = CGPoint(x: x, y: y)
                    return textureCoord
                }

                // Set up sources
                let sources = SCNGeometrySource(vertices: vertices)
                let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

                // Set up elements
                let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

                // Set up the geometry
                let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
                geometry.firstMaterial?.diffuse.contents = textureImage

                // Set up the node
                let customFace = SCNNode(geometry: geometry)
                sceneView.scene?.rootNode.addChildNode(customFace)

                // Append the face to the face nodes array
                faceNode.append(face)
            }
        }

        // Iterate the face nodes and append to the scene
        for node in faceNode {
            sceneView.scene?.rootNode.addChildNode(node)
        }
    }
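One hedged diagnostic sketch (an assumption about the likely cause, not a confirmed fix): the landmarks are normalized against the pixel buffer as interpreted through the EXIF orientation handed to Vision, so mapping them straight into view coordinates only lines up when the buffer and view happen to agree (here, front camera in landscape right). Re-orienting the normalized points for the current orientation before applying the bounding-box transform is one thing to try; only two cases are shown, and the mapping per case is illustrative.

    import CoreGraphics
    import ImageIO

    // Illustrative helper: express a Vision-normalized point in the view's
    // upright orientation. Each remaining case is a flip and/or x-y swap.
    func reorient(_ p: CGPoint, for orientation: CGImagePropertyOrientation) -> CGPoint {
        switch orientation {
        case .up:         return p                            // buffer already upright
        case .upMirrored: return CGPoint(x: 1 - p.x, y: p.y)  // mirrored front camera
        default:          return p                            // remaining cases omitted
        }
    }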
3 replies · 0 boosts · 1.7k views · Oct ’20
How to play Vorbis/OGG files with swift?
Does anyone have a working example of how to play OGG files with Swift? I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift, and then used it to parse an OGG file successfully. Then I had to use Objective-C++ to fill the PCM, because this method seems to be available only in C++; that part hangs my app for a good 40 seconds to several minutes depending on the audio file, then it plays for about 2 seconds and crashes. I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac). I also tried the Cricket Audio framework below:

https://github.com/sjmerel/ck

It has a Swift example, and it can play its proprietary soundbank format. It is also supposed to play OGG, but it just does nothing when trying to play OGG, as you can see in the posted issue: https://github.com/sjmerel/ck/issues/3

Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++. Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
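A hedged sketch of the playback half (assumption: you already have interleaved Float32 PCM decoded from libvorbis; the decode itself should run off the main thread so it doesn't block the UI): feed the samples into an AVAudioPlayerNode, which keeps everything after the decode in Swift.

    import AVFoundation

    // Schedules decoded PCM on an AVAudioEngine. The caller keeps the engine alive.
    func play(pcm: [Float], sampleRate: Double, channels: AVAudioChannelCount) throws -> AVAudioEngine {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        // Standard format is non-interleaved Float32.
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: channels)!
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)

        let frames = AVAudioFrameCount(pcm.count) / channels
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
        buffer.frameLength = frames
        // Deinterleave into the buffer's per-channel pointers.
        for ch in 0..<Int(channels) {
            for frame in 0..<Int(frames) {
                buffer.floatChannelData![ch][frame] = pcm[frame * Int(channels) + ch]
            }
        }

        try engine.start()
        player.scheduleBuffer(buffer, completionHandler: nil)
        player.play()
        return engine
    }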
1 reply · 0 boosts · 2.6k views · Dec ’20
My game crashes after launch
Hello, I have created a game (my first) using Unreal Engine 4. I distributed it on Android and it works well, but I have a problem with the iOS version: the game crashes after launch. It shows only the splash screen for a very short period of time. I have a crash log from one of the test devices:

CrashLog - https://developer.apple.com/forums/content/attachment/c739bc91-2a0c-4fde-89c0-52f7550d9fbe

I also tried building my game using Xcode (I read somewhere that this could help with the crash), but the build failed. I have no idea what causes the crash, and no solution I've found has worked in my case. Please help me interpret the crash log.
6 replies · 0 boosts · 3.8k views · Mar ’21
Game Center authentication workflow
Prior to iOS 14.5, I adopted the following workflow for my Game Center-enabled game: at launch, if there is no authenticated GKLocalPlayer, store the view controller passed to the completion handler and display it to the user later (i.e., not immediately after the app finishes launching). I am using the following code for the authentication process:

    GKLocalPlayer.local.authenticateHandler = { viewController, error in
        if viewController != nil {
            // Store the viewController and present it at a later time
        } else if GKLocalPlayer.local.isAuthenticated {
            // Successfully authenticated
        } else {
            // Player could not be authenticated
        }
    }

Since iOS 14.5, the callback to the authenticate handler seems to have changed: the login prompt is displayed automatically after app launch if there is no authenticated player. I checked, and the completion handler is not even called in this case before the login prompt is presented. If there is already an authenticated player on the device, the handler is called as expected. How can I prevent the login prompt from being displayed automatically by the system? Is there a new workflow for authenticating a player from iOS 14.5 onwards? Thanks.
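A hedged sketch of one mitigation to try (an assumption, not a documented workaround): since the system may present its own UI once the handler is installed, defer assigning authenticateHandler until the point in your flow where login UI is acceptable, rather than in didFinishLaunching.

    import GameKit
    import UIKit

    final class GameCenterAuth {
        // Call this when your app is ready to show Game Center UI.
        func authenticate(presentingFrom presenter: UIViewController) {
            GKLocalPlayer.local.authenticateHandler = { viewController, error in
                if GKLocalPlayer.local.isAuthenticated {
                    // Ready for Game Center features
                } else if let viewController = viewController {
                    presenter.present(viewController, animated: true)
                } else if let error = error {
                    print("Game Center unavailable: \(error)")
                }
            }
        }
    }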
9 replies · 2 boosts · 3.2k views · Aug ’21
GPU ray tracing requirements for HelloPhotogrammetry command line app
Hi, I am trying to build and run the HelloPhotogrammetry app associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message:

    A GPU with supportsRaytracing is required

I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
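A quick hedged check (this queries the same MTLDevice property the error message names) to see what each local GPU actually reports:

    import Metal

    // Prints each GPU and whether it supports Metal ray tracing.
    for device in MTLCopyAllDevices() {
        print(device.name, "supportsRaytracing:", device.supportsRaytracing)
    }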
11 replies · 0 boosts · 2.8k views · Sep ’21
How to use GCVirtualController directly with SKScene?
GCVirtualController isn't displaying when used with an SKScene class. The virtual controllers appear, but then they seem to be obscured by the SKScene itself!? The documentation says that calling connect() will display the virtual controllers, but I seem to be missing how to add the controllers to the SKScene.

    class GameScene: SKScene {
        private var _virtualController: Any?

        @available(iOS 15.0, *)
        public var virtualController: GCVirtualController? {
            get { return self._virtualController as? GCVirtualController }
            set { self._virtualController = newValue }
        }

        override func didMove(to view: SKView) {
            let background = SKSpriteNode(imageNamed: ".jpg")
            background.zPosition = -1
            addChild(background)

            let virtualConfig = GCVirtualController.Configuration()
            virtualConfig.elements = [GCInputLeftThumbstick, GCInputRightThumbstick, GCInputButtonA, GCInputButtonB]
            virtualController = GCVirtualController(configuration: virtualConfig)
            virtualController?.connect()
        }
    }

I've also tried adding the virtual controllers in the UIViewController, but this doesn't work either.
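A hedged sketch of one thing to try (assumptions: a hosting GameViewController exists, and the timing of connect() is the issue): create and connect the controller from the hosting view controller once the view hierarchy is on screen, instead of inside didMove(to:).

    import UIKit
    import GameController

    class GameViewController: UIViewController {
        private var virtualController: GCVirtualController?

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            let config = GCVirtualController.Configuration()
            config.elements = [GCInputLeftThumbstick, GCInputButtonA]
            let controller = GCVirtualController(configuration: config)
            controller.connect()  // presents the on-screen controls above the SKView
            virtualController = controller
        }
    }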
4 replies · 0 boosts · 1.5k views · Oct ’21
Are OpenGL and OpenCL supported on Apple silicon?
OpenGL is deprecated for new development, and Apple's docs no longer even acknowledge the existence of OpenCL. However, many legacy macOS apps contain modules written against those APIs, and they have been building with Xcode and running on Intel Macs just fine. I need to know whether they are also tacitly supported on M1 Macs, and if so, whether there is a known kill date.
5 replies · 0 boosts · 9.0k views · Nov ’21
Metal Core Image passing sampler arguments
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:

    extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
        float2 coord1 = t1.coord();
        float2 coord2 = t2.coord();
        float4 innerRect = t2.extent();

        float minX = innerRect.x + time * innerRect.z;
        float minY = innerRect.y + time * innerRect.w;
        float cropWidth = (1 - time) * innerRect.w;
        float cropHeight = (1 - time) * innerRect.z;

        float4 s1 = t1.sample(coord1);
        float4 s2 = t2.sample(coord2);
        if (coord1.x > minX && coord1.x < minX + cropWidth &&
            coord1.y > minY && coord1.y <= minY + cropHeight) {
            return s1;
        } else {
            return s2;
        }
    }

And it crashes on initialization:

    class CIWipeRenderer: CIFilter {
        var backgroundImage: CIImage?
        var foregroundImage: CIImage?
        var inputTime: Float = 0.0

        static var kernel: CIColorKernel = { () -> CIColorKernel in
            let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
            let data = try! Data(contentsOf: url)
            return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
        }()

        override var outputImage: CIImage? {
            guard let backgroundImage = backgroundImage else { return nil }
            guard let foregroundImage = foregroundImage else { return nil }
            return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
        }
    }

It crashes on the try line with the following error:

    Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

    extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time) {
        return mix(s1, s2, time);
    }
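A hedged sketch of one direction to try (an assumption based on the kernel signatures, not a confirmed diagnosis): kernels whose arguments are coreimage::sampler are general kernels rather than color kernels, so constructing a CIKernel and applying it with an ROI callback may avoid the nil error at initialization. The 1:1 ROI callback is an assumption that each output pixel reads the matching input region.

    import CoreImage

    // Same metallib, but loaded as a general CIKernel.
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    let kernel = try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)

    func render(background: CIImage, foreground: CIImage, time: Float) -> CIImage? {
        return kernel.apply(extent: background.extent,
                            roiCallback: { _, rect in rect },  // samplers read 1:1 from the inputs
                            arguments: [background, foreground, time])
    }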
1 reply · 0 boosts · 1.2k views · Nov ’21