Posts

Post not yet marked as solved
4 Replies
1.9k Views
I am trying to find out whether reading OpenEXR files is supported on iOS. Here is what I found:

1. The docs barely mention OpenEXR, and where they do, they say it's only supported on recent macOS.
2. Yet the official WWDC 2017 samples for ARKit simply load .exr images with UIImage(named: "image.exr"). This is found in the following samples:
- Handling 3D Interaction and UI Controls in Augmented Reality
- Interactive Content with ARKit
- Audio in ARKit
- Placing Objects [old, I think it's removed now]

Can you point me to anything about the state of OpenEXR support on iOS and on macOS?
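One way to probe this at runtime, rather than relying on the docs, is to ask Image I/O which image types it can decode on the current OS. This is a sketch; the exact OpenEXR UTI string is an assumption on my part:

```swift
import ImageIO

// List every image type Image I/O can decode on this OS.
// If OpenEXR is supported, its UTI ("com.ilm.openexr-image" is my
// assumption for the identifier) should appear in this list.
let types = CGImageSourceCopyTypeIdentifiers() as? [String] ?? []
print(types)
print(types.contains("com.ilm.openexr-image"))
```

Running this on both an iOS device and a Mac would show whether the decoder is present on each platform.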
Posted by hyperknot. Last updated.
Post marked as solved
5 Replies
18k Views
I'm trying to read the contents of a file on the filesystem in a macOS Swift app (Xcode 9 / Swift 4). I'm using the following snippet:

```swift
let path = "/my/path/string.txt"
let s = try! String(contentsOfFile: path)
print(s)
```

My problem is the following:
1. This works in a Playground.
2. This works when I use the Command Line Tool macOS app template.
3. This terminates in a permission error when I use the Cocoa App macOS app template.

The permission error is the following:

Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=NSCocoaErrorDomain Code=257 "The file "data.txt" couldn't be opened because you don't have permission to view it." UserInfo={NSFilePath=/my/path/data.txt, NSUnderlyingError=0x60c0000449b0 {Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"}}

I guess it's related to sandboxing, but I found no information about it.

1. How can I read from the filesystem in a sandboxed app? There are so many GUI apps which need an Open File dialog; it cannot be a realistic restriction that sandboxed apps cannot read files from outside the sandbox.
2. Alternatively, how can I switch off sandboxing in Build Settings?
3. Finally, I tried to compare the project.pbxproj files between the default Cocoa App and Command Line Tool templates and didn't see any meaningful difference, like something about security or the sandbox. If not there, where are those settings stored?
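For reference, the usual sandboxed route to files outside the container is to let the user pick them: the user's selection in an open panel grants the app access to that URL. A minimal sketch of that pattern:

```swift
import AppKit

// In a sandboxed app, the user's choice in NSOpenPanel grants
// read access to the selected file outside the container.
let panel = NSOpenPanel()
panel.canChooseFiles = true
panel.canChooseDirectories = false
if panel.runModal() == .OK, let url = panel.url {
    let s = try? String(contentsOf: url, encoding: .utf8)
    print(s ?? "could not read file")
}
```

Hard-coded absolute paths, by contrast, stay inaccessible unless the sandbox entitlement is removed from the target's capabilities.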
Post not yet marked as solved
3 Replies
1.3k Views
I believe native reading of the OpenEXR format is (at least officially on macOS) supported on both recent macOS and iOS versions: https://forums.developer.apple.com/thread/97119

I'd like to load an OpenEXR image into a Metal texture, probably via MTKTextureLoader.newTexture(). My problem is that Xcode doesn't recognise OpenEXR files as texture assets, but as data assets. This means I cannot use MTKTextureLoader.newTexture(name: textureName, ...).

What cross-platform (recent macOS / iOS) options are there to read an image from a data asset? Since .newTexture supports CGImage, I'd guess the natural way would be to load into a CGImage, but I don't quite understand how. Or should I simply make a URL out of the data asset's file and try to load that one?
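The CGImage route could be sketched like this, assuming Image I/O can decode the bytes in the data asset (the function name and error handling are my own):

```swift
import MetalKit
import ImageIO

// Sketch: decode the data asset's bytes with Image I/O, then hand the
// resulting CGImage to MTKTextureLoader. Assumes the asset contains a
// format Image I/O can decode on this OS.
func textureFromDataAsset(named name: String, device: MTLDevice) -> MTLTexture? {
    guard let asset = NSDataAsset(name: NSDataAsset.Name(name)),
          let source = CGImageSourceCreateWithData(asset.data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil)
    else { return nil }
    let loader = MTKTextureLoader(device: device)
    return try? loader.newTexture(cgImage: cgImage, options: [:])
}
```

This stays cross-platform because NSDataAsset, Image I/O, and MetalKit are all available on both recent macOS and iOS.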
Post not yet marked as solved
1 Reply
484 Views
I'm trying to convert a CGImage to an MTLTexture, and for this I'm using this code:

```swift
let width = cgimage.width
let height = cgimage.height
let channels = cgimage.bitsPerPixel / cgimage.bitsPerComponent
let colorSpace = cgimage.colorSpace!
let bitsPerComponent = cgimage.bitsPerComponent
let bytesPerComponent = cgimage.bitsPerComponent / 8
let bytesPerPixel = cgimage.bitsPerPixel / 8
let bytesPerRow = width * bytesPerPixel
let options = CGImageAlphaInfo.premultipliedLast.rawValue

var pixelValues = [UInt8](repeating: 0, count: width * height * channels)
let contextRef = CGContext(data: &pixelValues,
                           width: width,
                           height: height,
                           bitsPerComponent: bitsPerComponent,
                           bytesPerRow: bytesPerRow,
                           space: colorSpace,
                           bitmapInfo: options)
contextRef?.draw(cgimage, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height)))
```

My problem is that CGContext is extremely finicky: it just returns nil without any explanation if some of its obscure options are not perfectly set, for example in bitmapInfo. I've seen some tutorials online in Objective-C using CGContextRef, which provided errors with nice, detailed explanations. In Swift, however, it's just a nil, without any warning or anything. It's extremely hard to develop anything like this; I'm pretty much guessing in the dark. For example, I'm trying to figure out why this function works with UInt8 and UInt16 but not with Float, and I don't even know where to start debugging.
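For the Float case specifically, a hedged guess based on the supported pixel format tables in the Quartz documentation: float components need the floatComponents flag in bitmapInfo and 32 bits per component, not just the alpha info. A sketch of that combination (the flag set is my assumption, not a confirmed fix):

```swift
import CoreGraphics

// For 32-bit float components, CGBitmapInfo apparently needs the
// floatComponents flag in addition to the alpha info.
let floatInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue
    | CGBitmapInfo.floatComponents.rawValue

var floatPixels = [Float](repeating: 0, count: 4 * 4 * 4)  // 4x4 RGBA
let ctx = CGContext(data: &floatPixels,
                    width: 4, height: 4,
                    bitsPerComponent: 32,  // bits per Float component
                    bytesPerRow: 4 * 4 * MemoryLayout<Float>.size,
                    space: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: floatInfo)
print(ctx != nil)
```

The failing creation does log a "CGBitmapContextCreate: unsupported parameter combination" line to the console, which at least lists the parameters it rejected.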
Post not yet marked as solved
1 Reply
2.5k Views
MTKTextureLoader.newTexture fails to load 16-bit CGImages.

```swift
let path = "test.scnassets/test16.png"
let image = UIImage(named: path)!
let textureLoader = MTKTextureLoader(device: defaultDevice)
let texture = try! textureLoader.newTexture(cgImage: image.cgImage!, options: [:])
```

This results in the error:

Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}

My question is: if not like this, then how can I create a texture manually, using a proper 16-bit pixel format?
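The manual route would be a texture descriptor with a 16-bit-per-channel format plus a raw upload via replace(region:). A sketch, with the pixel data source left as an assumption:

```swift
import Metal

// Sketch: create a .rgba16Unorm texture by hand and upload raw
// 16-bit pixels. Where the pixels come from (e.g. a CGContext draw)
// is up to the caller.
func makeRGBA16Texture(device: MTLDevice,
                       pixels: [UInt16],  // width * height * 4 values
                       width: Int, height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Unorm, width: width, height: height,
        mipmapped: false)
    guard let texture = device.makeTexture(descriptor: desc) else { return nil }
    let bytesPerRow = width * 4 * MemoryLayout<UInt16>.size
    pixels.withUnsafeBytes { buf in
        texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                        mipmapLevel: 0,
                        withBytes: buf.baseAddress!,
                        bytesPerRow: bytesPerRow)
    }
    return texture
}
```
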
Post not yet marked as solved
0 Replies
352 Views
I'd like to get some information about an MTLPixelFormat value, which I get from MTLTexture.pixelFormat. However, when I print it, it just prints "MTLPixelFormat", and debugPrint just prints "__C.MTLPixelFormat". How can I find out which pixel format this actually is?
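Since the C-imported enum has no useful description, one workaround is to match the value (or print its rawValue and look it up in the headers). A sketch covering a few cases:

```swift
import Metal

// The imported enum prints nothing useful, but it can be switched on,
// and its rawValue matched against the constants in Metal's headers.
func describe(_ format: MTLPixelFormat) -> String {
    switch format {
    case .rgba8Unorm:  return "rgba8Unorm"
    case .bgra8Unorm:  return "bgra8Unorm"
    case .rgba16Float: return "rgba16Float"
    case .rgba32Float: return "rgba32Float"
    default:           return "rawValue \(format.rawValue)"
    }
}
print(describe(.bgra8Unorm))
```
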
Post not yet marked as solved
6 Replies
1.2k Views
In the Metal 2 Optimization and Debugging session at 11:37 we can see that not just the input, but the output of a vertex shader is being debugged: https://developer.apple.com/videos/play/wwdc2017/607/

This seems like the most magical thing ever, but I cannot replicate it. What does this behaviour depend on? I'm on High Sierra with a Haswell CPU / integrated graphics. Does this feature require using argument buffers? Does it work on my Haswell computer (MacBook Pro (Retina, 15-inch, Late 2013)), or on an iPhone SE?
Post not yet marked as solved
1 Reply
541 Views
Hi, I've read through the line rendering discussions both here and in the following resources:

https://mattdesl.svbtle.com/drawing-lines-is-hard
http://codeflow.org/entries/2012/aug/05/webgl-rendering-of-solid-trails/
http://discourse.libcinder.org/t/smooth-efficient-perfect-curves/925

Based on these, I've implemented my screen-space shader for 3D line rendering, which works nicely (with possible gaps at joins for now). However, I'm stuck on understanding how to calculate the fragment's distance from the centre line in the fragment shader. If I visualise in.position in my fragment shader, all I get is a constant value for all pixels. If I visualise the additional in.midPoint, I get a nicely interpolated, changing value. Why is my in.position constant in my fragment shader, and how would you calculate a distance from the centre line?

```metal
struct LineVertexOut {
    float4 position [[position]];
    float4 midPoint;
    float4 color;
};

vertex LineVertexOut lineVertexShader(device LineLine* lines [[buffer(0)]],
                                      constant Uniforms& uniforms [[buffer(2)]],
                                      uint vertexId [[vertex_id]],
                                      uint instanceId [[instance_id]]) {
    float thickness = 2;

    LineLine line = lines[instanceId];
    LinePoint startPoint = line.start;
    LinePoint endPoint = line.end;

    float4 startProjected = uniforms.MVP * float4(startPoint.position, 1);
    float4 endProjected = uniforms.MVP * float4(endPoint.position, 1);

    float2 startScreen = startProjected.xy / startProjected.w;
    float2 endScreen = endProjected.xy / endProjected.w;

    float2 v = normalize(endScreen - startScreen);
    float2 normal = normalize(float2(-v.y, v.x));
    normal *= thickness / 2;
    normal.x /= uniforms.aspectRatio;

    LineVertexOut out;
    if (vertexId == 0) {
        out.position = startProjected + float4(normal, 0, 1);
        out.midPoint = startProjected;
        out.color = startPoint.color;
    }
    if (vertexId == 1) {
        out.position = startProjected + float4(-normal, 0, 1);
        out.midPoint = startProjected;
        out.color = startPoint.color;
    }
    if (vertexId == 2) {
        out.position = endProjected + float4(normal, 0, 1);
        out.midPoint = endProjected;
        out.color = endPoint.color;
    }
    if (vertexId == 3) {
        out.position = endProjected + float4(-normal, 0, 1);
        out.midPoint = endProjected;
        out.color = endPoint.color;
    }
    return out;
}

fragment float4 lineFragmentShader(LineVertexOut in [[stage_in]]) {
    // float2 color = in.position.xy;  // position not changing
    float2 color = in.midPoint.xy;     // position changing
    color = fract(color);
    return float4(color, 0, 1);
}
```

Here is how it looks: https://i.imgur.com/4zwvyVh.png
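One detail from the Metal Shading Language specification seems relevant here: in the fragment stage, the member with the [[position]] attribute holds window-relative coordinates, not the interpolated clip-space value that the vertex shader wrote. So one sketch of a workaround, carrying the clip-space position in a separate, ordinarily interpolated member (the names are my assumptions):

```metal
struct LineVertexOut {
    float4 position [[position]];
    float4 clipPosition;   // duplicate of position, interpolated normally
    float4 midPoint;
    float4 color;
};

fragment float4 lineFragmentShader(LineVertexOut in [[stage_in]]) {
    // Perspective-divide the interpolated copy to get NDC, then compare
    // against the projected centre line to estimate the distance.
    float2 ndc = in.clipPosition.xy / in.clipPosition.w;
    float2 mid = in.midPoint.xy / in.midPoint.w;
    float d = length(ndc - mid);
    return float4(fract(float2(d)), 0, 1);
}
```

The vertex shader would set out.clipPosition to the same value as out.position in each branch.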
Post marked as solved
2 Replies
1.5k Views
I have an app created from Xcode 9 / New Project / Cross Platform Game App, with macOS and iOS targets. I wrote the following simple function to load text from a Data Asset:

```swift
func dataAssetAsString(_ name: String) -> String? {
    if let asset = NSDataAsset(name: NSDataAsset.Name(name)) {
        return String(data: asset.data, encoding: .utf8)
    }
    return nil
}
```

I'm puzzled by the following things:
1. This function only works if I import either UIKit or AppKit.
2. But somehow importing MetalKit instead of UIKit or AppKit also makes it work. This really puzzles me; it should have nothing to do with MetalKit.

Most answers I found for code sharing between iOS and macOS suggest some kind of conditional import at the beginning of the file / shimming. I went through the slides of WWDC 2014 Session 233: Sharing code between iOS and OS X, and it explicitly says we should not do shimming in Swift. On the other hand, I don't understand what we should do instead. The mentioned, very complicated process of creating and compiling shared frameworks for a 5-line function cannot be the right direction.

1. In 2018 (Swift 4, Xcode 9, iOS 11, macOS 10.13), what would be the recommended way to add such trivial cross-platform code to an app?
2. What is the magic of MetalKit which makes it work without shimming? Does it shim internally?
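For reference, the conditional-import pattern usually suggested looks like the sketch below. Whether it counts as the "shimming" the WWDC session discourages is exactly the open question; NSDataAsset is declared in UIKit on iOS and AppKit on macOS, which is why at least one of them must be visible:

```swift
#if os(iOS)
import UIKit
#elseif os(macOS)
import AppKit
#endif

// One conditional import makes the same function compile on both
// platforms, since NSDataAsset lives in UIKit (iOS) / AppKit (macOS).
func dataAssetAsString(_ name: String) -> String? {
    guard let asset = NSDataAsset(name: NSDataAsset.Name(name)) else { return nil }
    return String(data: asset.data, encoding: .utf8)
}
```

A plausible (unconfirmed) explanation for the MetalKit observation is that importing a framework also re-exports that framework's own dependencies, which on each platform include the respective UI framework.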
Post marked as solved
4 Replies
1.3k Views
What is the connection between:

- Using [[stage_in]] in a Metal shader
- Using MTLVertexDescriptor
- Using MTKMesh

For example:

1. Is it possible to use [[stage_in]] without using MTLVertexDescriptor?
2. Is it possible to use MTLVertexDescriptor without using MTKMesh, but with an array of a custom struct-based data structure, such as struct Vertex {...}, Array<Vertex>?
3. Is it possible to use MTKMesh without using MTLVertexDescriptor, for example using the same struct-based data structure?

I didn't find this information on the internet, and the Metal Shading Language Specification doesn't even include the words "descriptor" or "mesh".
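For context, the pairing the questions revolve around: each [[attribute(n)]] in a shader's [[stage_in]] struct is matched by an entry in an MTLVertexDescriptor attached to the pipeline descriptor, regardless of where the vertex buffer's bytes came from. A sketch of the Swift side (the layout, formats, and indices are assumptions for illustration):

```swift
import Metal

// Describes a vertex with float3 position at [[attribute(0)]] and
// float2 texcoord at [[attribute(1)]], packed into one buffer.
// MTKMesh is one producer of such buffers, but any MTLBuffer works.
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.attributes[1].format = .float2
vertexDescriptor.attributes[1].offset = MemoryLayout<Float>.size * 3
vertexDescriptor.attributes[1].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<Float>.size * 5
// Assigned to MTLRenderPipelineDescriptor.vertexDescriptor before
// building the pipeline state.
```
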
Post marked as solved
7 Replies
1.8k Views
The Metal Best Practices Guide states that:

"The setVertexBytes:length:atIndex: method is the best option for binding a very small amount (less than 4 KB) of dynamic buffer data to a vertex function."

I believe this means that in the case of simple scenes, instead of storing uniforms in a manually managed dynamic buffer, it's best to simply update the model/view/projection matrices without using any buffer at all, by using setVertexBytes and setFragmentBytes.

My question is: in this case, as there is no dynamic buffer at all (only static vertex data), what are we calling triple buffering? Is it simply that we are left with a semaphore with value: 3? Moreover, what if I remove the semaphore entirely? The app seems to work fine, but what is actually happening then? Am I getting better latency or worse, compared to a semaphore with value: 3? Since the render loop is limited to 60 FPS and the frame time is about 1.5 ms for the CPU (in the case of a simple example), some command has to take the place of the blocking semaphore, right? Is Metal going into double buffering in this case (the GPU displays one frame while the CPU encodes the next)?
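For reference, the no-buffer binding discussed above looks like this (a sketch; the Uniforms struct, the matrices, the encoder, and the buffer index are assumptions from the surrounding template code):

```swift
import Metal

// With setVertexBytes/setFragmentBytes the driver copies the (<4 KB)
// uniforms into its own storage at encode time, so no manually
// managed per-frame ring buffer is needed for them.
var uniforms = Uniforms(projectionMatrix: projectionMatrix,
                        modelViewMatrix: modelViewMatrix)
renderEncoder.setVertexBytes(&uniforms,
                             length: MemoryLayout<Uniforms>.stride,
                             index: 2)
renderEncoder.setFragmentBytes(&uniforms,
                               length: MemoryLayout<Uniforms>.stride,
                               index: 2)
```

Because the copy happens at encode time, the CPU is free to overwrite the local uniforms immediately, which is what removes the need for the manually triple-buffered uniform storage.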
Post marked as solved
1 Reply
447 Views
I'm trying to add multiple objects to the scene, starting from Xcode 9's Metal Game App template. I've modified the template as little as possible to render multiple objects, based on this answer: https://stackoverflow.com/a/37424817/518169

My problem is that the second object doesn't appear, and I cannot debug this issue. Adding print lines shows the buffer is updated, but I'm not sure it's in the same state on the GPU as well. Why is the second object not drawn? Is it something about how I'm handling the buffers, or is it something in the draw pipeline (like clearing the screen twice, for example)?

```swift
_ = inFlightSemaphore.wait(timeout: DispatchTime.distantFuture)

if let commandBuffer = commandQueue.makeCommandBuffer() {
    let semaphore = inFlightSemaphore
    commandBuffer.addCompletedHandler { (_) -> Swift.Void in
        semaphore.signal()
    }

    updateDynamicBufferState()
    scene.updateGameState()
    uniforms[0].projectionMatrix = scene.projectionMatrix

    if let renderPassDescriptor = view.currentRenderPassDescriptor,
       let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
        renderEncoder.label = "Primary Render Encoder"

        for node in scene.nodes { // ------ FOR LOOP START
            uniforms[0].modelViewMatrix = node.modelViewMatrix

            renderEncoder.pushDebugGroup("Draw Node: \(node.name)")
            renderEncoder.setCullMode(node.cullMode)
            renderEncoder.setFrontFacing(node.frontFacing)
            renderEncoder.setRenderPipelineState(node.pipelineState)
            renderEncoder.setDepthStencilState(depthState)
            renderEncoder.setVertexBuffer(dynamicUniformBuffer, offset: uniformBufferOffset, index: 2)
            renderEncoder.setFragmentBuffer(dynamicUniformBuffer, offset: uniformBufferOffset, index: 2)
            node.render(renderEncoder: renderEncoder) // setVertexBuffer, setFragmentTexture, drawIndexedPrimitives
            renderEncoder.popDebugGroup()
        } // ------ FOR LOOP END

        renderEncoder.endEncoding()

        if let drawable = view.currentDrawable {
            commandBuffer.present(drawable)
        }
    }
    commandBuffer.commit()
}
```
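One plausible cause, offered as an assumption rather than a confirmed diagnosis: every loop iteration writes uniforms[0] at the same uniformBufferOffset, and the GPU reads the buffer only when the command buffer executes, so both draws end up seeing the last node's modelViewMatrix. Giving each node its own slice of the dynamic buffer could be sketched like this (alignedUniformsSize as in the template; pointer handling is mine):

```swift
// Hypothetical per-node sub-allocation: advance the offset within the
// frame's region of the dynamic buffer so each draw call reads its
// own copy of the uniforms.
for (i, node) in scene.nodes.enumerated() {
    let nodeOffset = uniformBufferOffset + i * alignedUniformsSize
    let ptr = dynamicUniformBuffer.contents()
        .advanced(by: nodeOffset)
        .assumingMemoryBound(to: Uniforms.self)
    ptr[0].projectionMatrix = scene.projectionMatrix
    ptr[0].modelViewMatrix = node.modelViewMatrix

    renderEncoder.setVertexBuffer(dynamicUniformBuffer, offset: nodeOffset, index: 2)
    renderEncoder.setFragmentBuffer(dynamicUniformBuffer, offset: nodeOffset, index: 2)
    node.render(renderEncoder: renderEncoder)
}
```

The buffer would also need to be allocated with room for nodes.count slots per in-flight frame for this to be safe.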