Hi, I have an issue with jax.numpy.linalg.inv(a).
import jax.numpy as jnp  # needed for jnp.identity below
import jax.numpy.linalg as jnpl

B = jnp.identity(2)
jnpl.inv(B)
Throws the following error:
XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = \"mhlo.triangular_solve\"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>
Any ideas what could be the issue or how to solve it?
Where's the Xcode project for the "World App" referenced in "Build spatial experiences with RealityKit"?
At 3 minutes in, "the world app" is shown with a 2D window and seems to be the expected starting point for the 3-module series.
I see the code snippets below the video, which seem to intend adjustments to the original project.
I've searched a..
I found it by searching GitHub; maybe I'm missing an obvious link on the page.
It is available here: https://developer.apple.com/documentation/visionos/world under the documentation page.
Hope this helps someone.
Namaste!
I'm putting together an FCPX effect that is supposed to increase the resolution with an AI upscale, but the only way to add resolution is by scaling. The problem is that scaling causes the video to clip.
I want to be able to give a 480p video this "Resolution Upscale" effect and have it output a 720p or 1080p AI-upscaled video; however, neither FxPlug nor Motion effects allow such a thing.
The FxPlug is always getting 640x480 input (correct) but only 640x480 output.
What is the FxPlug code or Motion configuration/concept for upscaling the resolution without affecting the scale? Is there a way to do this in Motion/FxPlug?
Scaling up in the FxPlug effect but then scaling down in a parent Motion Group doesn't do anything.
Setting the Group 2D Fixed Resolution doesn't output different dimensions; the debug output from the FxPlug continues to report 640x480 for both input and output, even when the group is set to a fixed resolution of 1920x1080.
A hierarchy of Groups with different settings for 2D Fixed Resolution and 3D Flatten doesn't work either; in these cases the debug output still reports 640x480 for both input and output, so the plug-in isn't aware of the Fixed Resolution change.
Does there need to be a new FxPlug property, via [properties:...], like "kFxPropertyKey_ResolutionChange", and an API for changing the destination image resolution (without changing the dest rect size)?
How do we do this?
I know I watched this, but it is nowhere to be found on Apple's site or in the Developer app. Nor does the Wayback Machine have it.
http://developer.apple.com/wwdc16/608
Graphics and Games #WWDC16
What’s New in GameplayKit
Session 608
Bruno Sommer Game Technologies Engineer
Sri Nair Game Technologies Engineer
Michael Brennan Game Technologies Engineer
Is there a way for an FXPlug to access the Source audio?
Or do we need to make an AU plugin, apply it to an audio source [either a video or an audio track], and feed the info via shared memory to an FXPlug?
Is there an AU plugin for external processes to "listen" to the audio?
On startup I'm getting a "We reached more than 3 frames in flight. That's too many. Did you forget to call cp_frame_end_submission()?" error despite cp_frame_end_submission() being called when needed.
Nothing is rendered in the 1 frame that does go through. Is there something I'm missing that would cause cp_frame_end_submission to not register?
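For reference, here is the per-frame sequence I believe is expected, written in the Swift spelling of the Compositor Services API (a sketch based on Apple's fully immersive Metal sample, not my exact loop; the function name is mine):

import CompositorServices
import Metal

// Sketch of one frame; the point is that endSubmission() must run exactly
// once for every startSubmission(), or frames pile up "in flight".
func renderFrame(layerRenderer: LayerRenderer, commandQueue: MTLCommandQueue) {
    guard let frame = layerRenderer.queryNextFrame() else { return }

    frame.startUpdate()
    // ... update app state for this frame ...
    frame.endUpdate()

    guard let timing = frame.predictTiming() else { return }
    LayerRenderer.Clock().wait(until: timing.optimalInputTime)

    frame.startSubmission()
    guard let drawable = frame.queryDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else {
        frame.endSubmission()   // don't bail out without ending the submission
        return
    }
    // ... encode rendering into commandBuffer ...
    drawable.encodePresent(commandBuffer: commandBuffer)
    commandBuffer.commit()
    frame.endSubmission()
}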
Help me understand this crash report.
It started happening only after the last update.
Translated Report (Full Report Below)
Process: dota2 [7353]
Path: /Users/USER/Library/Application Support/Steam/*/dota2.app/Contents/MacOS/dota2
Identifier: com.valvesoftware.dota2
Version: 1.0.0
Code Type: X86-64 (Translated)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-02-18 18:00:45.9766 -0500
OS Version: macOS 14.3.1 (23D60)
Report Version: 12
Anonymous UUID: 0F5E4D0D-9839-DF78-5C28-93F6D26A5763
Sleep/Wake UUID: 52D18CB1-ADD8-4A75-B6A1-C0CF4CF2A306
Time Awake Since Boot: 85000 seconds
Time Since Wake: 1722 seconds
System Integrity Protection: enabled
Notes:
PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Crashed Thread: 0 MainThrd Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000441f0f660002
Exception Codes: 0x0000000000000001, 0x0000441f0f660002
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [7353]
VM Region Info: 0x441f0f660002 is not in any region. Bytes after previous region: 48357375344643 Bytes before following region: 65536781844478
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
Memory Tag 255 1823fb340000-1823fb380000 [ 256K] rw-/rwx SM=PRV
---> GAP OF 0x67960cc80000 BYTES
MALLOC_MEDIUM 7fba08000000-7fba10000000 [128.0M] rw-/rwx SM=PRV
Error Formulating Crash Report:
PC register does not match crashing frame (0x0 vs 0x1032D1C08)
Kernel Triage:
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
I'm trying to get a video material to work on an imported 3D asset, a USDC file. There's actually an example of this in a WWDC video from Apple (you can see it running on the flag of the airplane at 10:34 in the video below), but there's no sample code for it, and there are no other examples on the internet. Does anybody know how to do this?
https://developer.apple.com/documentation/realitykit/videomaterial
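To make the question concrete, here is roughly what I'd expect to write, based on the VideoMaterial docs (a sketch; "Flag" is a placeholder for the sub-entity name in the USDC file):

import AVFoundation
import RealityKit

// Sketch: replace a sub-entity's materials with a VideoMaterial.
func applyVideoMaterial(to asset: Entity, videoURL: URL) {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    // "Flag" is a placeholder for the mesh that should show the video.
    if let flag = asset.findEntity(named: "Flag"),
       var model = flag.components[ModelComponent.self] {
        model.materials = [material]
        flag.components.set(model)
    }
    player.play()
}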
I've added a simple visionOS Portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace.
For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to turn on & off the ImmersiveSpace which has a portal in it.
So far, the portal isn't showing up.
Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
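For reference, this is roughly what I'm attempting inside the ImmersiveSpace's RealityView (a sketch; the world content is a placeholder):

import RealityKit
import SwiftUI

struct PortalImmersiveView: View {
    var body: some View {
        RealityView { content in
            // The world the portal looks into.
            let world = Entity()
            world.components.set(WorldComponent())
            // ... placeholder: add content to `world` here ...

            // A plane acting as the portal surface.
            let portalPlane = ModelEntity(
                mesh: .generatePlane(width: 1, height: 1, cornerRadius: 0.2),
                materials: [PortalMaterial()]
            )
            portalPlane.components.set(PortalComponent(target: world))
            portalPlane.position = [0, 1.5, -2]

            content.add(world)
            content.add(portalPlane)
        }
    }
}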
Does RealityKit support a clipping plane, where I can define a plane and have all content on one side of the plane not rendered?
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but does not show the content of my room. The cylinder just appears black.
In progressive (or full?) ImmersiveSpace, is it possible to apply occlusion material (or something else), so I can see the room behind the virtual content?
Basically, I want to punch a hole through the virtual content and see the room behind it.
As a practical example, imagine being in a progressive ImmersiveSpace, but you have a plane with an occlusion mesh applied to it above your Apple Magic Keyboard so you can see your keyboard.
Is this possible?
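For reference, the cylinder setup is essentially this (a sketch of what I described above):

import RealityKit

// The cylinder that should "punch a hole" to the room: it occludes virtual
// content as expected, but renders black instead of showing passthrough.
func makeOccluder() -> ModelEntity {
    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 1.0, radius: 0.3),
        materials: [OcclusionMaterial()]
    )
    cylinder.position = [0, 1.0, -1.5]
    return cylinder
}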
I was able to add a spotlight effect to my entities using ImageBasedLightComponent and the sample code. However, I noticed that whenever you set an ImageBasedLightComponent, the environment lighting is completely turned off. Is it possible to merge them somehow?
So imagine you have a toy in the real world and you shine a flashlight on it. The environment light should still have an effect, right?
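For context, the pattern I'm using is essentially this (a sketch; "Spotlight" stands in for my image resource name):

import RealityKit

// Sketch of the spotlight IBL: once this is set, the entity no longer
// appears to receive the environment lighting.
func applySpotlight(to entity: Entity) async throws {
    // "Spotlight" is a placeholder for the equirectangular image in the bundle.
    let environment = try await EnvironmentResource(named: "Spotlight")

    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))

    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
    entity.addChild(lightEntity)
}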
GKLocalPlayer.local.authenticateHandler = {viewController, error in
When authenticating a player using authenticateHandler, the completion handler is only called if the player is already logged in. If the player is not logged in, the authentication window will appear but the completion handler is never called.
If I have content in a volumetric window that obscures the login window (which appears at a slight Z offset in front of the parent window), what can I do? If the completion handler were called, I could adjust my view, but it never gets called if the user is not already logged in.
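For reference, here is the full shape of what I have (a sketch; prints stand in for my real handling, and the presenting view controller is passed in):

import GameKit
import UIKit

func authenticatePlayer(presenter: UIViewController) {
    GKLocalPlayer.local.authenticateHandler = { viewController, error in
        if let viewController {
            // The login window appears, but after the user signs in the
            // handler is never invoked again.
            presenter.present(viewController, animated: true)
            return
        }
        if let error {
            print("Game Center authentication failed: \(error.localizedDescription)")
            return
        }
        print("Authenticated: \(GKLocalPlayer.local.isAuthenticated)")
    }
}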
https://developer.apple.com/documentation/gamekit/authenticating_a_player
Thanks.
Hi,
My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes.
Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes.
This seems rather limiting. Is there a way either of these components can be used? I could build a separate "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open.
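For reference, this is essentially all it takes to reproduce the crash (a sketch; "Toy" is a placeholder asset name, and the WindowGroup uses .windowStyle(.volumetric)):

import RealityKit
import SwiftUI

// Tapping the ColorPicker inside the volumetric window raises
// NSInternalInconsistencyException: "Presentations are not permitted
// within volumetric window scenes."
struct ModelVolumeView: View {
    @State private var materialColor: Color = .blue

    var body: some View {
        VStack {
            Model3D(named: "Toy")   // placeholder model
            ColorPicker("Material color", selection: $materialColor)
        }
    }
}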
Thank you
Hello,
I am building an app that requires players to join an online game session and have a video chat with both of their front cameras turned on. I know that GameKit supports voice chat, but I could not find any source on video chat. Is it possible to set up a video chat in an online game session? What libraries with video-chat support (AVFoundation?) could I use to implement it?
Your assistance is greatly appreciated.
We have a custom photo booth for taking photos of people for use with photogrammetry - the usual vertical cylinder of cameras with the human subject stood in the middle.
We've found that often the lower legs of the subject are missing - this is particularly likely if the subject is wearing dark pants.
The API for PhotogrammetrySession is really very limited, but we've tried every combination of detail, sensitivity, and object masking we can think of (a representative one is sketched below); nothing results in a reliable scan.
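For example, one of the configurations we've run (a sketch; the folder URL is a placeholder):

import RealityKit

// One representative configuration; the missing lower legs show up
// regardless of how these are set.
func makeSession(imagesFolder: URL) throws -> PhotogrammetrySession {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .high        // also tried .normal
    configuration.sampleOrdering = .unordered       // also tried .sequential
    configuration.isObjectMaskingEnabled = false    // also tried true
    return try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
}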
Personally I think this is related to the automatic isolation of the subject, rather than the photogrammetry itself. Often we get just the person, perfectly modelled. Occasionally we get everything the cameras can see - including the booth itself and the room it's in! But sometimes we get this footless result.
Is there anything we can try to improve the situation?
I want to implement an immersive environment similar to the Apple TV app's Cinema environment for the video that plays in my app. Currently I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as the image for an ImageBasedLightComponent, but that class seems to restrict its input to an EnvironmentResource, which looks like it's meant for an equirectangular still image that has to be part of the app bundle.
Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController's player can be cast on 3D objects in the RealityKit scene?
Is the Apple TV app doing something wild like combining an AVPlayerViewController and a VideoMaterial, where the VideoMaterial is layered onto the objects in the scene to simulate a light source?
Thanks in advance!
An SCNNode is created and used for either an SCNView or an SKView.
SceneKit and SpriteKit are using default values.
The SCNView has an SCNScene with the SCNNode attached to its rootNode.
The SKView has an SKScene containing an SK3DNode, whose SCNScene likewise has the SCNNode attached to its rootNode.
There is no other code changing or adding values.
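Concretely, the two setups are roughly the following (a sketch; the node is cloned here because a node can only live in one scene at a time):

import SceneKit
import SpriteKit

// The same node shown two ways; the SCNView version looks less vibrant.
func makeViews(node: SCNNode, size: CGSize) -> (SCNView, SKView) {
    // Plain SceneKit.
    let scnScene = SCNScene()
    scnScene.rootNode.addChildNode(node)
    let scnView = SCNView(frame: CGRect(origin: .zero, size: size))
    scnView.scene = scnScene

    // The same content hosted in SpriteKit via SK3DNode.
    let hostedScene = SCNScene()
    hostedScene.rootNode.addChildNode(node.clone())
    let sk3DNode = SK3DNode(viewportSize: size)
    sk3DNode.scnScene = hostedScene
    sk3DNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
    let skScene = SKScene(size: size)
    skScene.addChild(sk3DNode)
    let skView = SKView(frame: CGRect(origin: .zero, size: size))
    skView.presentScene(skScene)

    return (scnView, skView)
}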
Why are the colors for the SCNView less vibrant than the colors for the SKView?
Is there a default to change to make them equivalent, or another value to add? I have tried changing the default SCNMaterial but only succeeded in making the image black or dark.
Any help is appreciated.
Here is some example fragment shader code (rendering a cube, with texCoord in [0, 1]):
colorSample.x = in.texCoord.x;
which produces this result:
However, if I make a small change to the code like this:
colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;
Then it will produce this result:
If I disable fast-math in the second case, it produces the same image as the first case. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operand in the same expression.
Is this a bug in fast-math mode? How should I work around it?
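In case it helps others reproduce: this is how I toggle fast-math when compiling the shader at runtime (a sketch; in an Xcode target the equivalent is the "Enable Fast Math" / MTL_FAST_MATH build setting):

import Metal

// Compile the shader source with fast-math disabled; the fract() artifact
// disappears when this is off.
func makeLibrary(device: MTLDevice, source: String) throws -> MTLLibrary {
    let options = MTLCompileOptions()
    options.fastMathEnabled = false   // newer SDKs: options.mathMode = .safe
    return try device.makeLibrary(source: source, options: options)
}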
I want to build a panorama sphere around the user. The idea is that the users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map.
So I set up a sphere that works like a skybox and inverted its normals (via a negative scale) so that the material faces inward, using this code I found online:
import Combine
import Foundation
import RealityKit
import SwiftUI
extension Entity {
    /// Loads the skybox texture asynchronously, then attaches an
    /// inside-out textured sphere to this entity.
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                /// flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        // Keep the Combine subscription alive for the lifetime of the entity.
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
This works fine and is looking awesome.
However, I can't get a gesture to work on it.
If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:
import RealityKit
import SwiftUI
struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
But if the user drags it from the inside (i.e. the negative x scale is in place), I get no drag events.
Is there a way to achieve this?
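One idea I'm considering but haven't verified: keep the negative scale on a child that holds the visuals, and put the collision/input components on an unflipped parent, roughly:

RealityView { content in
    let root = Entity()
    // The skybox (and its x = -1 scale) lives on a child...
    let visuals = Entity()
    visuals.addSkybox(for: .worldmap)
    root.addChild(visuals)
    // ...while collision and input stay on the unflipped parent.
    root.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
    root.components.set(InputTargetComponent())
    content.add(root)
}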