Hello, I am a machine learning engineer. I recently needed to run PyTorch's grid_sample operation on an iPhone, so I used coremltools to convert PyTorch grid_sample to the MIL resample op, which is officially supported. But when running on the phone, it falls back to the CPU instead of the GPU or ANE (Xcode connected to the phone, running the official performance benchmark). I would like to ask why there is no efficient GPU implementation.
What I was hoping for is around 2 ms, but it takes 8 ms on the CPU.
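For context, a bilinear grid_sample is just a normalized-coordinate bilinear lookup per output pixel. Here is a minimal pure-Python sketch of what it computes for a single-channel image (assuming align_corners=True semantics and zero padding; this is illustrative, not the PyTorch or MIL implementation):

```python
import math

def grid_sample_bilinear(img, grid):
    """Sample a 2D single-channel image (list of rows) at normalized grid
    coordinates in [-1, 1], mimicking bilinear grid_sample with
    align_corners=True and zero padding outside the image."""
    H, W = len(img), len(img[0])

    def pix(x, y):
        # Zero padding outside the image bounds.
        if 0 <= x < W and 0 <= y < H:
            return img[y][x]
        return 0.0

    out = []
    for row in grid:
        out_row = []
        for gx, gy in row:
            # Map normalized [-1, 1] coordinates to pixel space.
            x = (gx + 1) * (W - 1) / 2
            y = (gy + 1) * (H - 1) / 2
            x0, y0 = math.floor(x), math.floor(y)
            dx, dy = x - x0, y - y0
            # Blend the four neighboring texels.
            val = (pix(x0,     y0)     * (1 - dx) * (1 - dy)
                 + pix(x0 + 1, y0)     * dx       * (1 - dy)
                 + pix(x0,     y0 + 1) * (1 - dx) * dy
                 + pix(x0 + 1, y0 + 1) * dx       * dy)
            out_row.append(val)
        out.append(out_row)
    return out

# An identity grid returns the image unchanged.
img = [[1.0, 2.0], [3.0, 4.0]]
identity = [[(-1.0, -1.0), (1.0, -1.0)], [(-1.0, 1.0), (1.0, 1.0)]]
print(grid_sample_bilinear(img, identity))  # -> [[1.0, 2.0], [3.0, 4.0]]
```

Per output pixel this is one bilinear fetch, which is why a GPU texture-sampling implementation should be far faster than the CPU path.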
I am currently using Core Image to process YCbCr422/420 10-bit pixel buffers, but it lacks performance at high frame rates, so I decided to switch to Metal. But with Metal I am getting even worse performance. I am loading both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:
let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}
And here is the shader code:
vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]])
{
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}
Is there anything fundamentally wrong in the code that makes it slow?
I'm using DrawableQueue to create textures that I apply to my ShaderGraphMaterial texture. My Metal renderer is using a range of alpha values as a test.
My objects displayed with the DrawableQueue texture are working as expected, but the alpha component is not working.
Is this an issue with my DrawableQueue descriptor? My ShaderGraphMaterial? A missing setting on my scene objects? Or some limitation in visionOS?
DrawableQueue descriptor
let descriptor = await TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: textureResource!.width,
    height: textureResource!.height,
    usage: [.renderTarget, .shaderRead, .shaderWrite], // Usage should match how the texture will be used
    //usage: [.renderTarget],
    mipmapsMode: .none // No mipmaps needed for the text texture
)
let queue = try await TextureResource.DrawableQueue(descriptor)
queue.allowsNextDrawableTimeout = true
await textureResource!.replace(withDrawables: queue)
Draw frame:
guard
    let drawable = try? drawableQueue!.nextDrawable(),
    let commandBuffer = commandQueue?.makeCommandBuffer()//,
    //let renderPipelineState = renderPipelineState
else {
    return
}

let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = clearColor
/*renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(
    red: clearColor.red,
    green: clearColor.green,
    blue: clearColor.blue,
    alpha: 0.5)*/
renderPassDescriptor.renderTargetHeight = drawable.texture.height
renderPassDescriptor.renderTargetWidth = drawable.texture.width

guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
    return
}
renderEncoder.pushDebugGroup("DrawNextFrameWithColor")
//renderEncoder.setRenderPipelineState(renderPipelineState)
// Since we are only clearing the drawable to a solid color, there is no need
// to set a pipeline state or draw primitives.
renderEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
drawable.present()
Hi there:
On iOS 17 devices, access to .reality files hosted on my server shows the infamous "Object requires a newer version of iOS." message. The same page works flawlessly when accessing the asset from iOS 16 and below.
Please check out a repro by accessing this URL: https://qlar.vortice3d.com/
Any help with this?
Thanks for your time.
Really excited after getting some experience with the MPS backend for torch. But when I try to install Horovod, it fails due to problems related to C++17. Need help.
Hi,
When I try to train ResNet-50 with tensorflow-metal, I found the L2 regularizer makes each epoch take almost 4x as long (~220 ms instead of 60 ms). I'm on an M1 Max 16" MBP. It seems like regularization shouldn't add that much time; is there anything I can do to make it faster?
Here's some sample code that reproduces the issue:
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, ZeroPadding2D,\
    Flatten, BatchNormalization, AveragePooling2D, Dense, Activation, Add
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Model
from tensorflow.keras import activations
import random
import numpy as np
random.seed(1234)
np.random.seed(1234)
tf.random.set_seed(1234)
batch_size = 64
(train_im, train_lab), (test_im, test_lab) = tf.keras.datasets.cifar10.load_data()
train_im, test_im = train_im/255.0 , test_im/255.0
train_lab_categorical = tf.keras.utils.to_categorical(
    train_lab, num_classes=10, dtype='uint8')
train_DataGen = tf.keras.preprocessing.image.ImageDataGenerator()
train_set_data = train_DataGen.flow(train_im, train_lab, batch_size=batch_size, shuffle=False)
# Change this to l2 for it to train much slower
regularizer = None # l2(0.001)
def res_identity(x, filters):
    x_skip = x
    f1, f2 = filters
    x = Conv2D(f1, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)
    x = Conv2D(f1, kernel_size=(3, 3), strides=(1, 1), padding='same', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)
    x = Conv2D(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)
    return x
def res_conv(x, s, filters):
    x_skip = x
    f1, f2 = filters
    x = Conv2D(f1, kernel_size=(1, 1), strides=(s, s), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)
    x = Conv2D(f1, kernel_size=(3, 3), strides=(1, 1), padding='same', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x = Activation(activations.relu)(x)
    x = Conv2D(f2, kernel_size=(1, 1), strides=(1, 1), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x)
    x = BatchNormalization()(x)
    x_skip = Conv2D(f2, kernel_size=(1, 1), strides=(s, s), padding='valid', use_bias=False, kernel_regularizer=regularizer)(x_skip)
    x_skip = BatchNormalization()(x_skip)
    x = Add()([x, x_skip])
    x = Activation(activations.relu)(x)
    return x
input = Input(shape=(train_im.shape[1], train_im.shape[2], train_im.shape[3]), batch_size=batch_size)
x = ZeroPadding2D(padding=(3, 3))(input)
x = Conv2D(64, kernel_size=(7, 7), strides=(2, 2), use_bias=False)(x)
x = BatchNormalization()(x)
x = Activation(activations.relu)(x)
x = MaxPooling2D((3, 3), strides=(2, 2))(x)
x = res_conv(x, s=1, filters=(64, 256))
x = res_identity(x, filters=(64, 256))
x = res_identity(x, filters=(64, 256))
x = res_conv(x, s=2, filters=(128, 512))
x = res_identity(x, filters=(128, 512))
x = res_identity(x, filters=(128, 512))
x = res_identity(x, filters=(128, 512))
x = res_conv(x, s=2, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_identity(x, filters=(256, 1024))
x = res_conv(x, s=2, filters=(512, 2048))
x = res_identity(x, filters=(512, 2048))
x = res_identity(x, filters=(512, 2048))
x = AveragePooling2D((2, 2), padding='same')(x)
x = Flatten()(x)
x = Dense(10, activation='softmax', kernel_initializer='he_normal')(x)
model = Model(inputs=input, outputs=x, name='Resnet50')
opt = tf.keras.optimizers.legacy.SGD(learning_rate = 0.01)
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE), optimizer=opt)
model.fit(x=train_im, y=train_lab_categorical, batch_size=batch_size, epochs=150, steps_per_epoch=train_im.shape[0]/batch_size)
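For context on why the slowdown seems disproportionate: mathematically, all an l2 kernel_regularizer adds is a per-layer penalty lam * sum(w^2) to the loss and 2 * lam * w to each weight's gradient. A plain-Python sketch of that math (illustrative only, not TensorFlow internals):

```python
def l2_penalty(weights, lam=0.001):
    """Penalty added to the loss: lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

def l2_grad(weights, lam=0.001):
    """Gradient contribution per weight: d/dw (lam * w^2) = 2 * lam * w."""
    return [2 * lam * w for w in weights]

# For weights [1, -2, 3]: penalty is lam * (1 + 4 + 9) = 0.014.
assert abs(l2_penalty([1.0, -2.0, 3.0]) - 0.014) < 1e-12
```

That is one multiply-add per weight per step, which is a tiny fraction of the convolution work, so a 4x epoch-time increase suggests an inefficiency in how the regularization ops are dispatched rather than the arithmetic itself.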
I have the following struct
struct NormalData { packed_float3 N[1]; };
declared as a buffer
device NormalData& _163 [[buffer(1)]]
I want to do an atomic float operation on a component of the vector via
atomic_fetch_add_explicit((device atomic_float*)&_163.N[0][0u], _206.x, memory_order_relaxed);
However, I get the error
error: address of vector element requested
The spec is a little vague on this: it says addresses of vector swizzles are illegal, but [0u] isn't a swizzle IMO.
What is correct, and is there a way to apply an atomic operation to a vector component?
Translated Report (Full Report Below)
Version: 1.0.0 (2.0)
Code Type: X86-64 (Translated)
Parent Process: launchd [1]
User ID: 948009654
Date/Time: 2023-11-02 19:47:33.1522 +0800
OS Version: macOS 12.1 (21C52)
Report Version: 12
Anonymous UUID: 815896E6-939E-002C-08C6-C903A4B87DF4
Sleep/Wake UUID: F06CECA0-3643-4423-A6F4-1163217FF863
Time Awake Since Boot: 100000 seconds
Time Since Wake: 92675 seconds
System Integrity Protection: enabled
Crashed Thread: 0 CrBrowserMain Dispatch queue: com.apple.main-thread
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Application Specific Information:
Assertion failed: (mach_vm_map(mach_task_self(), &address, size, 0, VM_FLAGS_ANYWHERE | VM_MAKE_TAG(VM_MEMORY_COREGRAPHICS_BACKINGSTORES), port, 0, false, prot, prot, VM_INHERIT_SHARE) == KERN_SUCCESS), function backing_map, file CGSBackingStore.c, line 192.
Kernel Triage:
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
VM - Compressor failed a blocking pager_get
Thread 0 Crashed:: CrBrowserMain Dispatch queue: com.apple.main-thread
0 <translation info unavailable> 0x108107a20 ???
1 libsystem_kernel.dylib 0x7ff8023cd5e2 __sigreturn + 10
2 ??? 0x7fc103a4f190 ???
3 libsystem_c.dylib 0x7ff80234dd10 abort + 123
4 libsystem_c.dylib 0x7ff80234d0be __assert_rtn + 314
5 SkyLight 0x7ff8075129de backing_map + 550
6 SkyLight 0x7ff8072c82ad lock_window_backing + 557
7 SkyLight 0x7ff807369f41 SLSDeviceLock + 54
8 CoreGraphics 0x7ff8076e6550 ripd_Lock + 56
9 CoreGraphics 0x7ff807678772 RIPLayerBltShape + 490
10 CoreGraphics 0x7ff8076769c7 ripc_Render + 328
11 CoreGraphics 0x7ff8076737d4 ripc_DrawRects + 482
12 CoreGraphics 0x7ff807673565 CGContextFillRects + 145
13 CoreGraphics 0x7ff8076734c4 CGContextFillRect + 117
14 CoreGraphics 0x7ff807672fe8 CGContextClearRect + 52
15 HIToolbox 0x7ff80b6176e0 HIMenuBarView::DrawOnce(CGRect, CGRect, bool, HIMenuBarTextAppearance, CGContext*) + 110
16 HIToolbox 0x7ff80b617640 HIMenuBarView::DrawIntoWindow(unsigned int*, CGRect, double, CGRect, bool, HIMenuBarTextAppearance, CGContext*) + 410
17 HIToolbox 0x7ff80b53c146 HIMenuBarView::DrawSelf(short, __HIShape const*, CGContext*) + 280
18 HIToolbox 0x7ff80b53bd56 HIMenuBarView::DrawingDelegateHandler(OpaqueEventHandlerCallRef*, OpaqueEventRef*, void*) + 262
19 HIToolbox 0x7ff80b520d1d DispatchEventToHandlers(EventTargetRec*, OpaqueEventRef*, HandlerCallRec*) + 1391
20 HIToolbox 0x7ff80b52014e SendEventToEventTargetInternal(OpaqueEventRef*, OpaqueEventTargetRef*, HandlerCallRec*) + 333
21 HIToolbox 0x7ff80b51ffef SendEventToEventTargetWithOptions + 45
22 HIToolbox 0x7ff80b53b8d3 HIView::SendDraw(short, OpaqueGrafPtr*, __HIShape const*, CGContext*) + 325
23 HIToolbox 0x7ff80b53b399 HIView::RecursiveDrawComposited(__HIShape const*, __HIShape const*, unsigned int, HIView*, CGContext*, unsigned char, double) + 571
24 HIToolbox 0x7ff80b53b56d HIView::RecursiveDrawComposited(__HIShape const*, __HIShape const*, unsigned int, HIView*, CGContext*, unsigned char, double) + 1039
25 HIToolbox 0x7ff80b53add8 HIView::DrawComposited(short, OpaqueGrafPtr*, __HIShape const*, unsigned int, HIView*, CGContext*) + 832
26 HIToolbox 0x7ff80b53aa89 HIView::Render(unsigned int, CGContext*) + 51
27 HIToolbox 0x7ff80b5521a9 FlushWindowObject(WindowData*, void**, unsigned char) + 772
28 HIToolbox 0x7ff80b551c2f FlushAllBuffers(__CFRunLoopObserver*, unsigned long, void*) + 317
29 CoreFoundation 0x7ff8024c6f98 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
30 CoreFoundation 0x7ff8024c6e34 __CFRunLoopDoObservers + 543
31 CoreFoundation 0x7ff8024c5830 CFRunLoopRunSpecific + 446
32 HIToolbox 0x7ff80b5474f1 RunCurrentEventLoopInMode + 292
33 HIToolbox 0x7ff80b547118 ReceiveNextEventCommon + 284
34 HIToolbox 0x7ff80b546fe5 _BlockUntilNextEventMatchingListInModeWithFilter + 70
35 AppKit 0x7ff804e1bb4c _DPSNextEvent + 886
36 AppKit 0x7ff804e1a1b8 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1411
37 AppKit 0x7ff804e0c5a9 -[NSApplication run] + 586
38 libqcocoa.dylib 0x11402762f QCocoaEventDispatcher::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 2495
39 QtCore 0x11ace2acf QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 431
40 QtCore 0x11ace7042 QCoreApplication::exec() + 130
Does anyone know why this happens and how to fix it?
I have imported two Metal files and defined two stitchable Metal Core Image kernels, one of them being a CIColorKernel and the other a CIKernel. As outlined in the WWDC video, I need to add the flag -framework CoreImage to Other Metal Linker Flags. Unfortunately, Xcode 15 puts double quotes around it and generates the error metal: error: unknown argument: '-framework CoreImage'. So I built without this flag, and it works for the first kernel that was added. The other kernel is never added to metal.defaultlib and fails to load. How do I get it working?
class SobelEdgeFilterHDR: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default",
                                  withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "sobelEdgeFilterHDR", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        return SobelEdgeFilterHDR.kernel.apply(extent: inputImage.extent, roiCallback: { (index, rect) in
            return rect
        }, arguments: [inputImage])
    }
}
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:
class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default",
                                  withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        return FilterTwo.kernel.apply(extent: inputImage.extent, roiCallback: { (index, rect) in
            return rect
        }, arguments: [inputImage])
    }
}
Here is the Metal code:
#include <metal_stdlib>
#include <CoreImage/CoreImage.h> // for coreimage::sampler and coreimage::destination
using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}
And here is the usage:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}
And the output:
StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …•∆}
Kernels []
reflect Function 'secondFilter' does not exist.
I am trying to run a .NET application, Tooll3, with the Game Porting Toolkit on an M2 Max, and I keep getting the following error:
018c:err:virtual:virtual_setup_exception stack overflow 2816 bytes in thread 018c addr 0x6811c52b stack 0x20500 (0x20000-0x21000-0x1a0000)
Is there any way to get a better stack trace to narrow down where the problem could lie?
I have tried to use iOS apps like iSH and a-Shell to build a Linux-like environment to run PyTorch (training a small network directly, without using Swift, Metal, and such). However, PyTorch cannot be installed in them. Is there a way for an iPhone app (like Termux on Android) to train (instead of just run inference with) a deep network in PyTorch directly?
I understand that, by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means the color values received (or sampled from a sampler) in the Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting
let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]
do we receive the color values as they exist in the input texture (which may have gamma correction already applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
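For reference, the "manual conversion" in question is the standard sRGB transfer function. Here is a plain-Python sketch of the decode/encode formulas (this is just the standard sRGB math, not Core Image API):

```python
def srgb_to_linear(c):
    """Decode a gamma-encoded sRGB component (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light component (0..1) back to sRGB gamma."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# The round trip is (numerically) the identity.
v = 0.5
print(abs(linear_to_srgb(srgb_to_linear(v)) - v) < 1e-9)  # -> True
```

If the working color space really is disabled and the texture holds gamma-encoded values, something equivalent to srgb_to_linear would need to run per component in the kernel before doing linear-light math.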
CGImageRef __nullable CGImageCreate(size_t width, size_t height,
size_t bitsPerComponent, size_t bitsPerPixel, size_t bytesPerRow,
CGColorSpaceRef cg_nullable space, CGBitmapInfo bitmapInfo,
CGDataProviderRef cg_nullable provider,
const CGFloat * __nullable decode, bool shouldInterpolate,
CGColorRenderingIntent intent)
This function returns NULL when kCGImageAlphaNone is passed for the bitmap info, with the error message "verify_image_parameters: invalid image alphaInfo: kCGImageAlphaNone. It should be kCGImageAlphaNoneSkipLast".
This issue happens only when installing on iOS 17 from Xcode 15 (Swift 5).
Is it possible to fix this problem without having to change the bitmap info, as that can affect other parts of our image processing?
Referring to the Apple accessibility plugin for Unity: https://github.com/apple/unityplugins.
I have implemented Apple accessibility in my app and it works mostly well. However, why are button clicks (with VoiceOver) triggered by a triple tap instead of a double tap, unlike applications developed using Xcode?
Is this normal? Is there a way to change it to a double tap?
Thank you
I've started using Swift Charts and since then get random crashes with the error: Thread 467: hit program assert
The console outputs the following at the time of the crash:
-[MTLDebugRenderCommandEncoder setVertexBufferOffset:atIndex:]:1758: failed assertion Set Vertex Buffer Offset Validation
index(0) must have an existing buffer.
I'm not using Metal directly, but it seems like this is related to Swift Charts.
I cannot work out the source of the issue from the stack trace, and the debugger shows the crash in libsystem_kernel.dylib, so it does not tie back to my code.
I'm looking for ideas about where to start trying to find the source of the issue.
0 libsystem_kernel.dylib 0x9764 __pthread_kill + 8
1 libsystem_pthread.dylib 0x6c28 (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
2 libsystem_c.dylib 0x76ae8 abort + 180
3 libsystem_c.dylib 0x75e44 __assert_rtn + 270
4 Metal 0x1426c4 MTLReportFailure.cold.1 + 46
5 Metal 0x11f22c MTLReportFailure + 464
6 Metal 0x11552c _MTLMessageContextEnd + 876
7 MetalTools 0x95350 -[MTLDebugRenderCommandEncoder setVertexBufferOffset:atIndex:] + 272
8 RenderBox 0xa5e18 RB::RenderQueue::encode(RB::RenderQueue::EncoderState&) + 1804
9 RenderBox 0x7d5fc RB::RenderFrame::encode(RB::RenderFrame::EncoderData&, RB::RenderQueue&) + 432
10 RenderBox 0x7d928 RB::RenderFrame::flush_pass(RB::RenderPass&, bool)::$_4::__invoke(void*) + 48
11 libdispatch.dylib 0x4400 (Missing UUID 9897030f75d3374b8787322d3d72e096)
12 libdispatch.dylib 0xba88 (Missing UUID 9897030f75d3374b8787322d3d72e096)
13 libdispatch.dylib 0xc5f8 (Missing UUID 9897030f75d3374b8787322d3d72e096)
14 libdispatch.dylib 0x17244 (Missing UUID 9897030f75d3374b8787322d3d72e096)
15 libsystem_pthread.dylib 0x3074 (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
16 libsystem_pthread.dylib 0x1d94 (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
Hi all,
I had a quick query to see if this would be considered gambling under the guidelines:
I charge a monthly subscription fee to unlock this feature of my app ($4.99/month).
Each subscriber gets 400 units of our app's internal currency.
Users can use the app to bet on sporting events. They can bet only on the outcome of an event, and they will be provided with live odds through an API.
Based on how well they do, the users with the most internal currency at the end of the month will win prizes.
Would this process be considered gambling and require licenses on our end?
Thank you for your help!
Kacey gave a great discussion of USD fundamentals using a chess set as an example. Is there a link to the complete set of .usda files used in the example?
tag: [wwdc2022-10129]
[AGXA11Device originalObject]: unrecognized selector sent to instance 0x116858c00
Metal _MTLCreateGLMetalDevice + 76
Does anyone know?
A circle can be exactly represented as a rational quadratic B-spline curve.
In the curve ray-tracing example, a simd_float3 type was used for the control points.
A rational curve, however, requires an additional homogeneous control-point coordinate w.
Is this supported?
The math is straightforward, and it would open great opportunities to efficiently render solid primitives like the torus and tube.
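To illustrate the exactness claim: a quarter circle is a rational quadratic Bezier segment with weights (1, w, 1) and middle weight w = cos(45 deg) = sqrt(2)/2. A quick plain-Python check (illustrative, nothing to do with the Metal curve API) that every evaluated point lies on the unit circle:

```python
import math

def rational_quadratic(p0, p1, p2, w, t):
    """Evaluate a rational quadratic Bezier with weights (1, w, 1) at t."""
    b0 = (1 - t) ** 2
    b1 = 2 * (1 - t) * t
    b2 = t ** 2
    denom = b0 + w * b1 + b2  # homogeneous coordinate
    x = (b0 * p0[0] + w * b1 * p1[0] + b2 * p2[0]) / denom
    y = (b0 * p0[1] + w * b1 * p1[1] + b2 * p2[1]) / denom
    return x, y

# Quarter circle from (1, 0) to (0, 1); middle control point (1, 1).
p0, p1, p2 = (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)
w = math.cos(math.pi / 4)
for i in range(11):
    x, y = rational_quadratic(p0, p1, p2, w, i / 10)
    assert abs(x * x + y * y - 1.0) < 1e-12  # exactly on the unit circle
```

The residual x^2 + y^2 - 1 is proportional to 2w^2 - 1, which vanishes for w = sqrt(2)/2, so the representation is exact and the w coordinate is genuinely needed.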