Hello - I have been struggling to find a solution online and I hope you can help me promptly. I have installed the latest tensorflow and tensorflow-metal, and I even went as far as installing tensorflow-nightly. My app generates the following as a result of my fit function on a CNN model with 8 layers.
2023-09-29 22:21:06.115768: I metal_plugin/src/device/metal_device.cc:1154] Metal device set to: Apple M1 Pro
2023-09-29 22:21:06.115846: I metal_plugin/src/device/metal_device.cc:296] systemMemory: 16.00 GB
2023-09-29 22:21:06.116048: I metal_plugin/src/device/metal_device.cc:313] maxCacheSize: 5.33 GB
2023-09-29 22:21:06.116264: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-09-29 22:21:06.116483: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
Most importantly, the learning process is very slow and I'd like to take advantage of all the new features of the latest versions. What can I do?
I only get this error when using the JAX Metal device (the CPU backend is fine). It seems to be a problem whenever I want to modify values of an array in place using .at and .set.
note: see current operation:
%2903 = "mhlo.scatter"(%arg3, %2902, %2893) ({
^bb0(%arg4: tensor<f32>, %arg5: tensor<f32>):
"mhlo.return"(%arg5) : (tensor<f32>) -> ()
}) {indices_are_sorted = true, scatter_dimension_numbers = #mhlo.scatter<update_window_dims = [0, 1], inserted_window_dims = [1], scatter_dims_to_operand_dims = [1]>, unique_indices = true} : (tensor<10x100x4xf32>, tensor<1xsi32>, tensor<10x4xf32>) -> tensor<10x100x4xf32>
blocks = blocks.at[i].set(
...
Warning: apple/apple/game-porting-toolkit 1.0.4 is already installed and up-to-date.
To reinstall 1.0.4, run:
brew reinstall game-porting-toolkit
dmitrxx@MacBook-Pro-Dima ~ % WINEPREFIX=~/Win10 brew --prefix game-porting-toolkit/bin/wine64 winecfg
Error: undefined method __prefix' for Homebrew:Module
Please report this issue: https://docs.brew.sh/Troubleshooting
/usr/local/Homebrew/Library/Homebrew/brew.rb:86:in '
zsh: no such file or directory: /bin/wine64
dmitrxx@MacBook-Pro-Dima ~ %
The release notes for Xcode 14 mention a new AppleTextureConverter library.
https://developer.apple.com/documentation/xcode-release-notes/xcode-14-release-notes
TextureConverter 2.0 adds support for decompressing textures, advanced texture error metrics, and support for reading and writing KTX2 files.
The new AppleTextureConverter library makes TextureConverter available for integration into third-party engines and tools. (82244472)
Does anyone know how to include this library into a project and use it at runtime?
I'm developing a drawing app. I use MTKView to render the canvas. But for some reason, and only for a few users, the pixels are not rendered correctly (pixels have different sizes); the majority of users have no problem with this. Here is my setup:
Each pixel is rendered as 2 triangles
MTKView's frame dimensions are always a multiple of the canvas size (a 100x100 canvas will have a frame size of 100x100, 200x200, and so on)
There is a grid to indicate pixels (a SwiftUI Path) which displays correctly, and we can see that it doesn't align with the pixels.
There is also a checkerboard pattern in the background rendered using another MTKView which lines up with the pixels but not the grid.
Previously, I had a similar issue when my view's frame was not a multiple of the canvas size, but I already fixed that with the setup above.
The issue worsens when the number of points representing a pixel of the canvas becomes smaller. E.g. a 100x100 canvas on a 100x100 view is worse than a 100x100 canvas on a 500x500 view
The vertices have accurate coordinates, this is a rendering issue. As you can see in the picture, some pixels are bigger than others.
I tried changing the contentScaleFactor to 1, 2, and 3 but none seems to solve the problem.
My MTKView setup:
clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
delegate = renderer
renderer.setup()
isOpaque = false
layer.magnificationFilter = .nearest
layer.minificationFilter = .nearest
Renderer's setup:
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineState = try? device.makeRenderPipelineState(descriptor: pipelineDescriptor)
Draw method of renderer:
commandEncoder.setRenderPipelineState(pipelineState)
commandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
commandEncoder.setVertexBuffer(colorBuffer, offset: 0, index: 1)
commandEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: indexCount,
    indexType: .uint32,
    indexBuffer: indexBuffer,
    indexBufferOffset: 0
)
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
Metal file:
struct VertexOut {
    float4 position [[ position ]];
    half4 color;
};

vertex VertexOut frame_vertex(constant const float2* vertices [[ buffer(0) ]],
                              constant const half4* colors [[ buffer(1) ]],
                              uint v_id [[ vertex_id ]]) {
    VertexOut out;
    out.position = float4(vertices[v_id], 0, 1);
    out.color = colors[v_id / 4];
    return out;
}

fragment half4 frame_fragment(VertexOut v [[ stage_in ]]) {
    half alpha = v.color.a;
    return half4(v.color.r * alpha, v.color.g * alpha, v.color.b * alpha, v.color.a);
}
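For what it's worth, here is a minimal sketch of one mitigation, under stated assumptions (a square canvas and view; the function names and the pinning approach are mine, not from the post): disable MTKView's automatic drawable sizing and pin the drawable to an exact integer multiple of the canvas, so each canvas pixel maps to a whole number of device pixels regardless of contentScaleFactor.

```swift
import MetalKit

// Sketch: largest drawable side (in pixels) that is an exact multiple of the
// canvas side and still fits the view's pixel size.
func alignedDrawableSide(viewSidePoints: CGFloat,
                         scale: CGFloat,
                         canvasSide: Int) -> Int {
    let pixels = Int(viewSidePoints * scale)
    let multiple = max(1, pixels / canvasSide)
    return multiple * canvasSide
}

// Pin the MTKView's drawable instead of letting it derive the size from the
// frame and contentScaleFactor.
func pinDrawable(of view: MTKView, toCanvas canvasSide: Int) {
    view.autoResizeDrawable = false
    let side = alignedDrawableSide(viewSidePoints: view.bounds.width,
                                   scale: view.contentScaleFactor,
                                   canvasSide: canvasSide)
    view.drawableSize = CGSize(width: side, height: side)
}
```

With an aligned drawable, the uneven pixel sizes should disappear even on displays where the backing scale makes the default drawable a non-multiple of the canvas.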
I've been using object/mesh shaders via MTLRenderPipelineState objects built from MTLMeshRenderPipelineDescriptor with visible function tables. However, it seems that some functionality that is present on the other MTL*RenderPipelineDescriptor types is missing. Namely, it lacks max<Stage>CallStackDepth() and setSupportAdding<Stage>BinaryFunctions().
The latter isn't too bad: I can always rebuild the pipeline states from scratch if I want to add new linked functions.
However, not being able to set the max call stack depth is limiting. I assume that means I only get a depth of 1 as that is the default value for the other descriptor types. In practice it seems that I can go up to 2 with the functions I'm using before I start getting kIOGPUCommandBufferCallbackErrorSubmissionsIgnored errors due to "prior/excessive GPU errors".
I'm curious if the lack of this functionality on MTLMeshRenderPipelineDescriptor is a simple oversight. In my case I am only using VFTs and linked functions in the fragment stage. I suspect it should be possible, since the other render pipeline descriptor types expose max call depths and adding binary functions for the fragment stage.
FWIW I'm also using Metal-CPP (which is fantastic!) but I don't see this functionality in the Swift/Obj-C docs either.
Hi, I am developing a Metal renderer, and the Metal debugger gives me an error when I try to debug a fragment shader. This has been happening since I updated to Sonoma and Xcode 15; before, everything worked fine.
I also want to mention that I have tried Apple's DeferredLighting demo project and it gives the same error, so it's not my project's fault.
Device: MacbookPro 16 2019 5300M
MacOs: 14.0
Xcode: 15.0
Error:
Unable to create shader debug session
Thread data is corrupt
GPUDebugger error - 15A240c - DYPShaderDebuggerDataErrorDomain (2): Thread data is corrupt
=== GPUDebugger Item ===
API Call: 16 [drawIndexedPrimitives:Triangle indexCount:26652 indexType:UInt32 indexBuffer:MDL_OBJ-Indices indexBufferOffset:0]
Resource: fragment_main
See attached for console output after launch. Search for "err".
Once you get to the point where Origin can launch successfully, the game will attempt to launch for a second or two and then close.
M1 Macbook Air running 14.0 (23A344), using Whisky to handle the creation and linking of bottles
0754: thread_get_state failed on Apple Silicon - faking zero debug registers
0750:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
06a0:fixme:d3d:wined3d_check_device_format_conversion output 0079CCB0, device_type WINED3D_DEVICE_TYPE_HAL, src_format WINED3DFMT_B8G8R8X8_UNORM, dst_format WINED3DFMT_B8G8R8X8_UNORM stub!
0758: thread_get_state failed on Apple Silicon - faking zero debug registers
0758:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
0760: thread_get_state failed on Apple Silicon - faking zero debug registers
0764: thread_get_state failed on Apple Silicon - faking zero debug registers
0768: thread_get_state failed on Apple Silicon - faking zero debug registers
076c: thread_get_state failed on Apple Silicon - faking zero debug registers
0770: thread_get_state failed on Apple Silicon - faking zero debug registers
0688:fixme:kernelbase:AppPolicyGetProcessTerminationMethod FFFFFFFA, 0012FEB8
wine: Unhandled page fault on read access to 0000000000000000 at address 0000000000000000 (thread 06fc), starting debugger...
06a0:fixme:kernelbase:AppPolicyGetProcessTerminationMethod FFFFFFFA, 0012FEB8
wine: Unhandled page fault on read access to 0000000000000000 at address 0000000140003035 (thread 070c), starting debugger...
067c:fixme:file:ReplaceFileW Ignoring flags 2
0798: thread_get_state failed on Apple Silicon - faking zero debug registers
0784:fixme:imm:ImeSetActiveContext (0x36b920, 1): stub
0784:fixme:imm:ImmReleaseContext (0000000000030276, 000000000036B920): stub
0794:fixme:imm:ImeSetActiveContext (0x36b920, 1): stub
0794:fixme:imm:ImmReleaseContext (00000000000202DA, 000000000036B920): stub
0640:fixme:cryptnet:check_ocsp_response_info check responder id
07a0: thread_get_state failed on Apple Silicon - faking zero debug registers
0640:fixme:cryptnet:check_ocsp_response_info check responder id
0778:fixme:imm:ImeSetActiveContext (0x36ed90, 1): stub
0778:fixme:imm:ImmReleaseContext (000000000001030E, 000000000036ED90): stub
07a8: thread_get_state failed on Apple Silicon - faking zero debug registers
07ac: thread_get_state failed on Apple Silicon - faking zero debug registers
078c:fixme:imm:ImeSetActiveContext (0x36ed90, 1): stub
078c:fixme:imm:ImmReleaseContext (0000000000010330, 000000000036ED90): stub
06f0:fixme:d3d:wined3d_guess_card_vendor Received unrecognized GL_VENDOR "Apple". Returning HW_VENDOR_NVIDIA.
06f0:fixme:ntdll:NtQuerySystemInformation info_class SYSTEM_PERFORMANCE_INFORMATION
06f0:fixme:d3d:wined3d_check_device_format_conversion output 000000000027DC30, device_type WINED3D_DEVICE_TYPE_HAL, src_format WINED3DFMT_B8G8R8X8_UNORM, dst_format WINED3DFMT_B8G8R8X8_UNORM stub!
07b0:err:d3d:wined3d_check_gl_call >>>>>>> GL_INVALID_FRAMEBUFFER_OPERATION (0x506) from glClear @ /private/tmp/game-porting-toolkit-20231007-39251-eze8n5/wine/dlls/wined3d/context_gl.c / 2330.
steam-origin-launch.txt
The Apple documentation seems to say RealityKit should obey the autoplay metadata, but it doesn't seem to work. Is the problem with my (hand coded) USDA files, the Swift, or something else? Thanks in advance.
I can make the animations run with an explicit call to run, but what have I done wrong to get the one cube to autoplay?
https://github.com/carlynorama/ExploreVisionPro_AnimationTests
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        VStack {
            // A ModelEntity, not expected to autoplay
            Model3D(named: "cube_purple_autoplay", bundle: realityKitContentBundle)

            // An Entity, actually expected this to autoplay
            RealityView { content in
                if let cube = try? await Entity(named: "cube_purple_autoplay", in: realityKitContentBundle) {
                    print(cube.components)
                    content.add(cube)
                }
            }

            // Scene has one cube that should auto play, one that should not.
            // Neither do, but both will start (as expected) with click.
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    if enlarge {
                        for animation in scene.availableAnimations {
                            scene.playAnimation(animation.repeat())
                        }
                    } else {
                        scene.stopAllAnimations()
                    }
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .toggleStyle(.button)
            }.padding().glassBackgroundEffect()
        }
    }
}
Without autoplay metadata
#usda 1.0
(
    defaultPrim = "transformAnimation"
    endTimeCode = 89
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Y"
)

def Xform "transformAnimation" ()
{
    def Scope "Geom"
    {
        def Xform "xform1"
        {
            float xformOp:rotateY.timeSamples = {
                ...
            }
            double3 xformOp:translate = (0, 0, 0)
            uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

            over "cube_1" (
                prepend references = @./cube_base_with_purple_linked.usd@
            )
            {
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate"]
            }
        }
    }
}
With autoplay metadata
#usda 1.0
(
    defaultPrim = "autoAnimation"
    endTimeCode = 89
    startTimeCode = 0
    timeCodesPerSecond = 24
    autoPlay = true
    playbackMode = "loop"
    upAxis = "Y"
)

def Xform "autoAnimation"
{
    def Scope "Geom"
    {
        def Xform "xform1"
        {
            float xformOp:rotateY.timeSamples = {
                ...
            }
            double3 xformOp:translate = (0, 0, 0)
            uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

            over "cube_1" (
                prepend references = @./cube_base_with_purple_linked.usd@
            )
            {
                quatf xformOp:orient = (1, 0, 0, 0)
                float3 xformOp:scale = (2, 2, 2)
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
            }
        }
    }
}
How can I take the contents (i.e. the stroke and fill) of a CAShapeLayer and draw it into an MTLTexture, which can then be displayed with a normal vertex/fragment shader?
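One possible approach, sketched under assumptions (the helper name is mine, not an established API): rasterize the layer with Core Graphics into a BGRA bitmap, then upload the bytes into an MTLTexture that an ordinary vertex/fragment pipeline can sample.

```swift
import Metal
import QuartzCore
import CoreGraphics

// Sketch: draw a CAShapeLayer's stroke and fill into a CGContext, then copy
// the resulting bytes into a .bgra8Unorm texture.
func makeTexture(from layer: CAShapeLayer, device: MTLDevice) -> MTLTexture? {
    let width = Int(layer.bounds.width), height = Int(layer.bounds.height)
    guard width > 0, height > 0 else { return nil }

    let bytesPerRow = width * 4
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                  | CGBitmapInfo.byteOrder32Little.rawValue)
    else { return nil }

    // Core Graphics uses a bottom-left origin; flip so the layer isn't upside down.
    ctx.translateBy(x: 0, y: CGFloat(height))
    ctx.scaleBy(x: 1, y: -1)
    layer.render(in: ctx)   // rasterizes the layer's stroke and fill

    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    guard let texture = device.makeTexture(descriptor: desc),
          let data = ctx.data else { return nil }
    texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0, withBytes: data, bytesPerRow: bytesPerRow)
    return texture
}
```

premultipliedFirst combined with byteOrder32Little yields BGRA bytes in memory, which matches .bgra8Unorm; if blending looks wrong, check that the pipeline expects premultiplied alpha.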
Hi there,
I'm currently trying to install the Game Porting Toolkit on my M2 Pro Mac mini (macOS Sonoma 14.0).
As the InstallAware AGPT installer always failed with "error 1" (..check disk space blabla), I tried the installation using the console and the steps described here: https://www.applegamingwiki.com/wiki/Game_Porting_Toolkit
But: The general installation step always fails:
brew -v install apple/apple/game-porting-toolkit
It results in the following error:
Error: apple/apple/game-porting-toolkit 1.0.4 did not build
Logs:
/Users/jgruen/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/jgruen/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
/Users/jgruen/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/jgruen/Library/Logs/Homebrew/game-porting-toolkit/01.configure
If reporting this issue please do so to (not Homebrew/brew or Homebrew/homebrew-core):
apple/apple
If you scroll up, you can see the likely source of the problem:
...
checking for ft2build.h... yes
checking for -lfreetype... not found
configure: error: FreeType 64-bit development files not found. Fonts will not be built.
Use the --without-freetype option if you really want this.
...
Note that I also have Xcode 15.0 installed. FreeType is also available under /usr/local/opt/.
All previous steps were ok. This one fails.
How can I "install" the 64-bit development version? By the way, I also have the include files, so some kind of FreeType dev code is here (/usr/local/include/freetype2/ft2build.h, for example).
Any help or idea would be cool.
Device: iPhone 11, OS: iOS 15.6
I have a Metal application on iOS where a series of compute shaders are encoded, then dispatched and committed together at the end. When I capture a GPU trace of my application, however, I notice there are gaps between each compute shader invocation, and these gaps seem to take up a big part of the GPU time.
I'm wondering what these gaps are and what is causing them. Since all compute dispatch commands are committed together at once, these gaps shouldn't be synchronizations between the CPU and the GPU.
PS: In my application, later compute commands mostly depend on earlier ones and use the result buffers from earlier invocations. But as shown in the picture, the bandwidth and read/write buffer limiters are not high, as far as I can tell.
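A hedged sketch of one thing worth checking (the pipeline and buffer names below are placeholders, not from the post): if each dispatch is recorded into its own MTLComputeCommandEncoder, the encoder boundaries themselves can show up as idle gaps in a GPU trace. Encoding all dependent dispatches into a single serial compute encoder lets Metal track the buffer hazard and keeps the passes back to back.

```swift
import Metal

// Sketch: encode dependent compute passes into ONE compute command encoder
// instead of one encoder per pass. Within a single serial encoder, Metal
// orders dispatches that touch the same tracked resource; untracked
// (hazard-tracking-disabled) resources would need an explicit memoryBarrier.
func encodePasses(commandBuffer: MTLCommandBuffer,
                  pipelines: [MTLComputePipelineState],
                  buffer: MTLBuffer,
                  threadgroups: MTLSize,
                  threadsPerGroup: MTLSize) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    for pipeline in pipelines {
        encoder.setComputePipelineState(pipeline)
        encoder.setBuffer(buffer, offset: 0, index: 0)
        encoder.dispatchThreadgroups(threadgroups,
                                     threadsPerThreadgroup: threadsPerGroup)
    }
    encoder.endEncoding()
}
```

If the gaps persist with a single encoder, the trace's limiter counters are the next place to look; dependent dispatches that each read the whole result buffer can still serialize on memory.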
I can only download usdpython from the following website: https://developer.apple.com/augmented-reality/tools/
Where could I get the various versions of usdpython?
Is Apple's usdzconvert (usdpython) open source? I want to learn how Apple achieves the conversion from glTF to USDZ, because I'm currently using version 0.66 and I feel that the conversion of glTF features is not quite sufficient.
This is verified to be a framework bug (occurs on Mac Catalyst but not iOS or iPadOS), and it seems the culprit is AVVideoCompositionCoreAnimationTool?
/// Exports a video with the target animating.
func exportVideo() {
    let destinationURL = createExportFileURL(from: Date())
    guard let videoURL = Bundle.main.url(forResource: "black_video", withExtension: "mp4") else {
        delegate?.exporterDidFailExporting(exporter: self)
        print("Can't find video")
        return
    }

    // Initialize the video asset
    let asset = AVURLAsset(url: videoURL, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
    guard let assetVideoTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaType.video).first,
          let assetAudioTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else { return }

    let composition = AVMutableComposition()
    guard let videoCompTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)),
          let audioCompTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
    videoCompTrack.preferredTransform = assetVideoTrack.preferredTransform

    // Get the duration
    let videoDuration = asset.duration.seconds

    // Get the video rect
    let videoSize = assetVideoTrack.naturalSize.applying(assetVideoTrack.preferredTransform)
    let videoRect = CGRect(origin: .zero, size: videoSize)

    // Initialize the target layers and animations
    animationLayers = TargetView.initTargetViewAndAnimations(atPoint: CGPoint(x: videoRect.midX, y: videoRect.midY), atSecondsIntoVideo: 2, videoRect: videoRect)

    // Set the playback speed
    let duration = CMTime(seconds: videoDuration, preferredTimescale: CMTimeScale(600))
    let appliedRange = CMTimeRange(start: .zero, end: duration)
    videoCompTrack.scaleTimeRange(appliedRange, toDuration: duration)
    audioCompTrack.scaleTimeRange(appliedRange, toDuration: duration)

    // Create the video layer.
    let videolayer = CALayer()
    videolayer.frame = CGRect(origin: .zero, size: videoSize)

    // Create the parent layer.
    let parentlayer = CALayer()
    parentlayer.frame = CGRect(origin: .zero, size: videoSize)
    parentlayer.addSublayer(videolayer)

    let times = timesForEvent(startTime: 0.1, endTime: duration.seconds - 0.01)
    let timeRangeForCurrentSlice = times.timeRange

    // Insert the relevant video track segment
    do {
        try videoCompTrack.insertTimeRange(timeRangeForCurrentSlice, of: assetVideoTrack, at: .zero)
        try audioCompTrack.insertTimeRange(timeRangeForCurrentSlice, of: assetAudioTrack, at: .zero)
    } catch let compError {
        print("TrimVideo: error during composition: \(compError)")
        delegate?.exporterDidFailExporting(exporter: self)
        return
    }

    // Add all the non-nil animation layers to be exported.
    for layer in animationLayers.compactMap({ $0 }) {
        parentlayer.addSublayer(layer)
    }

    // Configure the layer composition.
    let layerComposition = AVMutableVideoComposition()
    layerComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layerComposition.renderSize = videoSize
    layerComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videolayer,
        in: parentlayer)
    let instructions = initVideoCompositionInstructions(
        videoCompositionTrack: videoCompTrack, assetVideoTrack: assetVideoTrack)
    layerComposition.instructions = instructions

    // Creates the export session and exports the video asynchronously.
    guard let exportSession = initExportSession(
        composition: composition,
        destinationURL: destinationURL,
        layerComposition: layerComposition) else {
        delegate?.exporterDidFailExporting(exporter: self)
        return
    }

    // Execute the exporting
    exportSession.exportAsynchronously(completionHandler: {
        if let error = exportSession.error {
            print("Export error: \(error), \(error.localizedDescription)")
        }
        self.delegate?.exporterDidFinishExporting(exporter: self, with: destinationURL)
    })
}
Not sure how to implement a custom compositor that performs the same animations as this reproducible case:
class AnimationCreator: NSObject {

    // MARK: - Target Animations

    /// Creates the target animations.
    static func addAnimationsToTargetView(_ targetView: TargetView, startTime: Double) {
        // Add the appearance animation
        AnimationCreator.addAppearanceAnimation(on: targetView, defaultBeginTime: AVCoreAnimationBeginTimeAtZero, startTime: startTime)
        // Add the pulse animation.
        AnimationCreator.addTargetPulseAnimation(on: targetView, defaultBeginTime: AVCoreAnimationBeginTimeAtZero, startTime: startTime)
    }

    /// Adds the appearance animation to the target.
    private static func addAppearanceAnimation(on targetView: TargetView, defaultBeginTime: Double = 0, startTime: Double = 0) {
        // Starts the target transparent and then turns it opaque at the specified time
        targetView.targetImageView.layer.opacity = 0
        let appear = CABasicAnimation(keyPath: "opacity")
        appear.duration = .greatestFiniteMagnitude // stay on screen forever
        appear.fromValue = 1.0 // Opaque
        appear.toValue = 1.0 // Opaque
        appear.beginTime = defaultBeginTime + startTime
        targetView.targetImageView.layer.add(appear, forKey: "appear")
    }

    /// Adds a pulsing animation to the target.
    private static func addTargetPulseAnimation(on targetView: TargetView, defaultBeginTime: Double = 0, startTime: Double = 0) {
        let targetPulse = CABasicAnimation(keyPath: "transform.scale")
        targetPulse.fromValue = 1 // Regular size
        targetPulse.toValue = 1.1 // Slightly larger size
        targetPulse.duration = 0.4
        targetPulse.beginTime = defaultBeginTime + startTime
        targetPulse.autoreverses = true
        targetPulse.repeatCount = .greatestFiniteMagnitude
        targetView.targetImageView.layer.add(targetPulse, forKey: "pulse_animation")
    }
}
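As a starting point only (a bare skeleton, not a drop-in replacement for the animation tool; the class name and the drawing TODO are mine): a custom compositor adopts AVVideoCompositing, receives each source frame, and draws the overlay state for the request's time itself, which sidesteps AVVideoCompositionCoreAnimationTool entirely.

```swift
import AVFoundation
import CoreVideo

// Minimal AVVideoCompositing skeleton: per frame, fetch the source pixel
// buffer, compose the overlay for that timestamp, and hand back the result.
final class OverlayCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let output = request.renderContext.newPixelBuffer(),
              let trackID = request.sourceTrackIDs.first,
              let source = request.sourceFrame(byTrackID: CMPersistentTrackID(truncating: trackID)) else {
            request.finish(with: NSError(domain: "OverlayCompositor", code: -1))
            return
        }
        _ = source
        // TODO: copy `source` into `output`, then draw the appearance/pulse
        // state for request.compositionTime (e.g. with Core Graphics or Metal).
        request.finish(withComposedVideoFrame: output)
    }
}
```

It would be wired up by setting customVideoCompositorClass on the video composition (instead of animationTool), e.g. layerComposition.customVideoCompositorClass = OverlayCompositor.self.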
Creating an Entity and then changing between different animations (walk, run, jump, etc.) is pretty straightforward when you have individual USDZ files in your Xcode project, and then simply create an array of AnimationResource.
However, I'm trying to do it via the new Reality Composer Pro app because the docs state it's much more efficient versus individual files, but I'm having a heck of a time figuring out how exactly to do it.
Do you have one scene per USDZ file (does that erase any advantage over just loading individual files)? One scene with multiple entities? Something else all together?
If I try one scene with multiple entities within it, when I try to change animation I always get "Cannot find a BindPoint for any bind path" logged in the console, and the animation never actually occurs. This is with the same files that animate perfectly when just creating an array of AnimationResource manually via individual/raw USDZ files.
Anyone have any experience doing this?
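Not an authoritative fix, but one pattern that has helped with "Cannot find a BindPoint for any bind path": play each clip on the entity that actually owns it rather than on the scene root, since a clip's bind paths are relative to its owning entity. A minimal sketch (the helper name is mine):

```swift
import RealityKit

// Sketch: walk the hierarchy and start each entity's own animations in place,
// instead of calling playAnimation on an ancestor whose child paths may not
// match the clip's bind paths.
func playAllAnimations(in root: Entity) {
    if let clip = root.availableAnimations.first {
        root.playAnimation(clip.repeat())
    }
    for child in root.children {
        playAllAnimations(in: child)
    }
}
```

With one Reality Composer Pro scene containing several characters, this means locating each character entity (e.g. via findEntity(named:)) and playing its clip on that entity directly.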
I installed and configured all the environments following the Read Me.rtf and then installed Steam with this command:
gameportingtoolkit ~/my-game-prefix ~/Downloads/SteamSetup.exe
After installation, when I first opened steam.exe, it showed me this and installed an update. It seemed normal at this point.
Then the window disappeared, and so did the Steam icon in the Dock. The terminal window was also unable to accept new commands.
I didn't know how to kill steam.exe, so I just closed that terminal window and opened a new one.
No matter how many times I tried this command, nothing happened:
gameportingtoolkit ~/my-game-prefix 'C:\Program Files (x86)\Steam\steam.exe'
If I try this command, MTL_HUD_ENABLED=1 WINEESYNC=1 WINEPREFIX=~/my-game-prefix $(brew --prefix game-porting-toolkit)/bin/wine64 'C:\Program Files (x86)\Steam\steam.exe', the Steam icon appears in the Dock but it always shows no available window. At the same time, too many logs come up and seem to never stop (I suspect they're duplicate messages from a certain location, but since I can't quite understand what they're saying, I just copied some of them and put them in the attachment).
I was wondering if it might have something to do with the error message shown when creating the new Wine prefix, but that error message didn't seem to affect either the appearance of the "Wine configuration" window or the configuration itself.
Anyone else having the same issues? I really don't know what went wrong. Thank you all for your patience in reading!
part of log
Hello, I am writing an iOS plugin for Unity. Recently the phone was updated, so I had to switch from Xcode 14 to Xcode 15, and now my plugin stops running with the error Thread 1: EXC_BAD_ACCESS (code=1, address=0x0). At the same time, there is no such error if the project also has the AppsFlyer and OneSignal plugins; I can't find what exactly they change so that the problem goes away. I have reviewed my entire project and the problem arises at this stage:
[WebView loadRequest:request];
Just in case, I logged the delegates to the console and checked whether the WKWebView request itself was empty, but no, everything is fine:
2023-10-26 13:42:10.239021+0300 WWebView[40416:4492528] request: <NSURLRequest: 0x282505d40> { URL: https://www.google.com/ }
2023-10-26 13:42:10.239052+0300 WWebView[40416:4492528] WebView.navigationDelegate != nil
2023-10-26 13:42:10.239074+0300 WWebView[40416:4492528] WebView.UIDelegate != nil
Tell me, what could be the cause of such a problem? The plugin is written in Objective-C and it worked before Xcode 15. I can't check on earlier versions, as there is no device on which I could run Xcode 14, since that version does not support the new iOS versions.
I am experimenting with some alternative rendering techniques, where the scene is represented as a mixture of parametrised SDFs and the final shading is done evaluating and mixing the SDFs for each fragment. The basic algorithm divides the screen into tiles, collects and sorts the SDFs intersecting each tile, and then invokes the final compute shader. There can be multiple SDFs affecting each pixel as they are partially transparent.
It seems to me that Apple's TBDR tile shading pipeline would be an ideal fit for this type of algorithm, but I am not quite sure how to utilise it efficiently. Essentially, I was thinking about rendering bounding rects over the SDFs and leveraging the binning hardware to arrange them into tiles for me. What I need the rasterisation pipeline to spit out is simply the list of primitives per tile. But there is no "per-primitive-per-tile" shader stage, so this has to be done in the fragment shader. I could of course record the primitive ID per pixel, but this is complicated by the fact that I can have multiple primitives affecting each pixel. Plus, there will be a lot of duplicates, as there are usually no more than 5-6 primitives per tile, and sorting the duplicates out seems like a waste.
What would be the most efficient way to handle this? Is there a way to utilize the tile shading pipeline to simply build out a list of primitive IDs in the tile?
I followed the instructions from the .dmg to install the Game Porting Toolkit. Everything worked fine until I ran the command brew -v install apple/apple/game-porting-toolkit. It threw the following error:
apple/apple/game-porting-toolkit 1.0.4 did not build
Logs:
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/02.make.cc
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/01.configure
/Users/stepanegorov/Library/Logs/Homebrew/game-porting-toolkit/02.make
If reporting this issue please do so to (not Homebrew/brew or Homebrew/homebrew-core):
apple/apple
I use a MacBook Air M2 and I have no idea how to fix it.
Where is the sample code for hair rendering in wwdc2022-10162?