I have a drawing app that I've been working on in my free time for the past few years. I recently rebuilt the app in Metal to build out other brushes and improve performance; I need to render tens of thousands of lines in real time.
I'm running into an issue trying to create a uniform opacity per path. I have a solution, but I don't love it, since this is a realtime app and the solution could have some bottlenecks. If I just generate a triangle strip from touch points and do my best to smooth, resample, and handle miters, I will always get some overlaps. See:
To create a uniform opacity, I render the path to an offscreen texture with blending disabled. I then premultiply the color and draw that texture into a composite texture with blending on (I do this per path). This works, but it gets tricky when you introduce a textured brush: the edges of the texture in the frag shader cut out the line.
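For context, the blend setup I'm describing is roughly the following (a minimal Swift sketch, not my exact code; apart from fragment_line, the function names and pixel format are placeholders):
import Metal

// Sketch of the two pipeline states described above (placeholder names).
func makeStrokePipelines(device: MTLDevice, library: MTLLibrary) throws
    -> (offscreen: MTLRenderPipelineState, composite: MTLRenderPipelineState) {
    // Pass 1: draw the whole path into an offscreen texture with blending
    // disabled, so overlapping triangles in the strip can't stack up alpha.
    let offscreenDesc = MTLRenderPipelineDescriptor()
    offscreenDesc.vertexFunction = library.makeFunction(name: "vertex_line")
    offscreenDesc.fragmentFunction = library.makeFunction(name: "fragment_line")
    offscreenDesc.colorAttachments[0].pixelFormat = .bgra8Unorm
    offscreenDesc.colorAttachments[0].isBlendingEnabled = false

    // Pass 2: composite the offscreen texture (premultiplied alpha) over the canvas.
    let compositeDesc = MTLRenderPipelineDescriptor()
    compositeDesc.vertexFunction = library.makeFunction(name: "vertex_quad")
    compositeDesc.fragmentFunction = library.makeFunction(name: "fragment_composite")
    compositeDesc.colorAttachments[0].pixelFormat = .bgra8Unorm
    compositeDesc.colorAttachments[0].isBlendingEnabled = true
    compositeDesc.colorAttachments[0].rgbBlendOperation = .add
    compositeDesc.colorAttachments[0].alphaBlendOperation = .add
    compositeDesc.colorAttachments[0].sourceRGBBlendFactor = .one           // premultiplied source
    compositeDesc.colorAttachments[0].sourceAlphaBlendFactor = .one
    compositeDesc.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
    compositeDesc.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha

    return (try device.makeRenderPipelineState(descriptor: offscreenDesc),
            try device.makeRenderPipelineState(descriptor: compositeDesc))
}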
Solution: I discard fragments below an alpha threshold:
fragment float4 fragment_line(VertexOut in [[stage_in]],
                              texture2d<float> texture [[texture(0)]]) {
    constexpr sampler s(coord::normalized, address::mirrored_repeat, filter::linear);
    float2 texCoord = in.texCoord;
    float4 texColor = texture.sample(s, texCoord);
    if (texColor.a < 0.01) discard_fragment(); // may be slow (from what I read)
    return in.color * texColor;
}
Better but still not perfect.
Question: I'm looking for better ways to create a uniform opacity per path. I tried .max blending, but that prevents any blending with other paths. Any tips or ideas are much appreciated.
Hello,
I want to use flattenedClone() for the trees in my landscape to reduce the number of draw calls. If my texture uses a JPG file, the result is good. If I use a PNG with transparency, there is no draw call reduction.
Can I use PNGs with transparency with flattenedClone()?
Below is the structure I'm using:
for child in scene_temp.rootNode.childNodes {
    let newtree = tree.clone()
    sum_tree.addChildNode(newtree)
}
let sum_tree_flat = sum_tree.flattenedClone()
scene.rootNode.addChildNode(sum_tree_flat)
If the tree texture contains a JPG file (diffuse), I get a draw call reduction (< 200 for my complete map) as expected, but no transparency on my leaves. If I use a PNG file, my draw calls are not reduced (> 2000), but my trees do show their leaves with transparency.
I've tried replacing tree.clone() with tree.flattenedClone() with no results.
Thanks for your help.
Solo Leveling: Arise is a game, but the game mode is not switching on and the game crashes every time while playing.
Topic:
Graphics & Games
SubTopic:
GameKit
Tags:
External Graphics Processors
Games
Graphics and Games
I have a UIView that displays lines, and I zoom in (scaling by 2 via the zoomScale property of the scroll view containing the UIView).
As I zoom in on the Mac version (Designed for iPad), I lose the graphic after a certain number of zooms (the scroll view's maximumZoomScale is set to 10).
To ensure that lines are rendered correctly, I modify the contentScaleFactor property on the UIView; otherwise, the lines' display is pixelated.
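What I mean by modifying contentScaleFactor is roughly this (a minimal sketch with placeholder names, not my exact code):
import UIKit

// Sketch: raise the view's contentScaleFactor as the zoom grows so the
// CoreGraphics line drawing is re-rendered sharply instead of being scaled
// up as a low-resolution bitmap.
final class ZoomingDrawingController: UIViewController, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    let drawingView = UIView()   // stands in for my custom line-drawing UIView

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.frame = view.bounds
        scrollView.delegate = self
        scrollView.maximumZoomScale = 10
        drawingView.frame = scrollView.bounds
        scrollView.addSubview(drawingView)
        view.addSubview(scrollView)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? { drawingView }

    func scrollViewDidEndZooming(_ scrollView: UIScrollView, with view: UIView?, atScale scale: CGFloat) {
        // Re-render the content at the native screen scale times the zoom factor.
        drawingView.contentScaleFactor = UIScreen.main.scale * scale
        drawingView.setNeedsDisplay()
    }
}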
On the iPad (simulator and real device), I do not lose the graphic when zooming.
So the Mac port of the UIView drawing does not behave like the iPad version.
Everything else in the application works fine except for this important detail.
I already submitted a feedback report (FB16829106) with images showing the problem. I need a solution to this problem.
Thanks.
Hi! I just installed GPTK2 on my new Mac, but Terminal gave me the error "OpenSSL 1.1 has been disabled."
How should I fix it? Or should I wait for the GPTK2 beta 4?
Thanks.
Everything works fine, except that when tapping the navigation Back link and returning to the previous view, the AR session inside RealityView does not terminate. The green camera-indicator dot stays on, the session keeps scanning the environment, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel.
I have no issues terminating QuickLook or ARSCNView.
I have a simple NavigationLink opening the RealityView...
NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")
            // Add model to anchor
            anchor.addChild(scene!)
            content.add(anchor)
            // View Settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            //print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}
I have experimented with several ways of trying to close it, and I can't figure it out. I have tried State variables and custom Back buttons. I was also trying to work with pause(), but I can't seem to get that to function either.
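For reference, the custom Back button approach I tried looks roughly like this (a sketch with placeholder names; dismissing the view this way still leaves the session running for me):
import SwiftUI

// Sketch of a custom Back button wrapper (hypothetical). It hides the default
// Back link and dismisses the view explicitly, which is where I expected to
// put cleanup, but the AR session keeps running after dismissal.
struct MyRealityViewWrapper: View {
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        MyRealityView()
            .navigationBarBackButtonHidden(true)
            .toolbar {
                ToolbarItem(placement: .topBarLeading) {
                    Button("Back") {
                        // Cleanup would go here before leaving the AR view.
                        dismiss()
                    }
                }
            }
    }
}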
Anyone else have this issue or know of a solution? What am I missing?
Topic:
Graphics & Games
SubTopic:
RealityKit
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and have gotten as far as having the D-pad and A/B buttons appear on the display. But how do I make them do anything?
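For reference, this is roughly how far I've gotten (a minimal sketch of creating and connecting the virtual controller; not my exact code):
import GameController

// Sketch: a virtual controller with a D-pad and A/B buttons that shows up on screen.
func makeVirtualController() -> GCVirtualController {
    let configuration = GCVirtualController.Configuration()
    configuration.elements = [GCInputDirectionPad, GCInputButtonA, GCInputButtonB]
    let virtualController = GCVirtualController(configuration: configuration)
    virtualController.connect()   // the on-screen controls appear after this
    return virtualController
}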
Topic:
Graphics & Games
SubTopic:
SpriteKit
Hi,
I'm using the latest iPad Pro (13-inch), and I can see that Metal offers an rgb10a2unorm texture format for rendering, but when I render a grey ramp and measure the actual luminance, I get a pattern that I would expect from an 8-bit texture (see below). Before I start ripping apart all my code, is there anything else I need to do to convince iOS to render my texture in 10 bits?
I already tried setting the pixelFormat of my CAMetalLayer to rgb10a2unorm, but that didn't change anything.
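What I tried is roughly this (a sketch; metalLayer stands in for the CAMetalLayer backing my Metal view):
import Metal
import QuartzCore

// Sketch of the CAMetalLayer configuration I tried (placeholder names).
func configureLayer(_ metalLayer: CAMetalLayer, device: MTLDevice) {
    metalLayer.device = device
    metalLayer.pixelFormat = .rgb10a2Unorm   // request a 10-bit drawable
    metalLayer.framebufferOnly = true
}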
Hi,
I've just moved my SpriteKit-based game from UIView to SwiftUI + SpriteView and I'm getting this message:
Adding 'GCControllerView' as a subview of UIHostingController.view is not supported and may result in a broken view hierarchy. Add your view above UIHostingController.view in a common superview or insert it into your SwiftUI content in a UIViewRepresentable instead.
Here's how I'm doing this
struct ContentView: View {
    @State var alreadyStarted = false
    let initialScene = GKScene(fileNamed: "StartScene")!.rootNode as! SKScene

    var body: some View {
        ZStack {
            SpriteView(scene: initialScene, transition: .crossFade(withDuration: 1), isPaused: false, preferredFramesPerSecond: 60)
                .edgesIgnoringSafeArea(.all)
                .onAppear {
                    if !self.alreadyStarted {
                        self.alreadyStarted.toggle()
                        initialScene.scaleMode = .aspectFit
                    }
                }
            VirtualControllerView()
                .onAppear {
                    let virtualController = BTTSUtilities.shared.makeVirtualController()
                    BTTSSharedData.shared.virtualGameController = virtualController
                    BTTSSharedData.shared.virtualGameController?.connect()
                }
                .onDisappear {
                    BTTSSharedData.shared.virtualGameController?.disconnect()
                }
        }
    }
}

struct VirtualControllerView: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let result = PassthroughView()
        return result
    }

    func updateUIView(_ uiView: UIView, context: Context) {
    }
}

class PassthroughView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        for subview in subviews.reversed() {
            let convertedPoint = convert(point, to: subview)
            if let hitView = subview.hitTest(convertedPoint, with: event) {
                return hitView
            }
        }
        return nil
    }
}
I want to use RealityKit to create a custom material that uses my own shader and supports mesh instancing (for rendering 3D Gaussian splatting), but I found that CustomMaterial does not support visionOS. Is there any other interface that can achieve this? Where can I find examples?
Topic:
Graphics & Games
SubTopic:
RealityKit
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app.
The character has customization such as clothing items and hair, and all objects are properly weighted to the rig.
The way the model is set up in Blender is like so: groups of objects that will be swapped (e.g., Shoes -> shoe objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro:
My problem is that when I export with the Armature modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened into the armature, and swapping entities is no longer as simple as removing the entity with the corresponding name.
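By "removing the entity with the corresponding name" I mean something like this (a sketch with hypothetical entity names):
import RealityKit

// Sketch of the simple name-based swap (the entity names are hypothetical).
func swapShoes(on character: Entity, with newShoes: Entity) {
    // Remove the currently equipped shoes by name...
    character.findEntity(named: "Shoes")?.removeFromParent()
    // ...and attach the replacement in their place.
    character.addChild(newShoes)
}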
What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
macOS 15.2, MBP M1, built-in display.
The following code produces a line outside the bounds of my clipping region when drawing to CGLayers, to produce a clockwise arc:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
Drawing other shapes such as rects or ellipses doesn't cause a problem.
I can work around the issue by bringing the path back to the start of the arc:
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
// add a second arc back to the start
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -endRads, -startRads, 0);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
But this does appear to be a bug.
Hello
I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value. My idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory. Finally, the very first thread of the tile would write the value out to a texture with write access.
Now, I can declare the threadgroup memory argument on the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength call, which I am using the following way:
[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];
Unfortunately, all I am getting is the following error (runtime assertion):
-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion `Set Threadgroup Memory Length Validation
offset + length(16) must be <= threadgroupMemoryLength(0).`
What am I doing wrong? How can I get threadgroup memory in the fragment shader? I know I could use tile shading and a compute function, but I would really like to stick with the fragment path here. I'd be grateful for any help.
It looks like there is something wrong with the code, and that is why my images are distorted. Please help me get to the right person so I can get this issue resolved.
Topic:
Graphics & Games
SubTopic:
General
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
which worked with SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it is still in the documentation.
I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession; the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session delegate callbacks (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove SpatialTrackingConfiguration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession's configuration is an ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd callbacks don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without SpatialTrackingConfiguration.
My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and track them live, while also using SwiftUI?
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
Hello,
I created a new project with the provided template for Immersive Environments.
Straight out of the box, I build to both the Simulator and to Vision Pro, and the provided Environment looks like this:
What's interesting is that in Reality Composer Pro, it looks correct so how do I achieve the same look?
Thank you in advance!
Following the post at https://developer.apple.com/documentation/realitykit/custommaterial, it's simple to use a shader for materials and get uniforms and params for each vertex. However, it's not available for visionOS. Is there any alternative to use in this case? I want to write a shader to fill the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
I am trying to convert a JPG image to JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension changes to .jp2, the format of the image does not seem to change. The output image is still being treated as a JPG file, not as a true JP2.
Here is the code:
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];

    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        DestroyMagickWand(wand);
        return;
    }
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        DestroyMagickWand(wand);
        return;
    }
    // Release the wand once the conversion has finished.
    DestroyMagickWand(wand);
    NSLog(@"Image successfully converted.");
}
@end
Topic:
Graphics & Games
SubTopic:
General
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView.
I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085
I've included a crash log with the feedback.
If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior.
Please let me know if there's anything I can do to help.
Thank you.
I am trying to install the Game Porting Toolkit 2.1 according to the Readme file provided with the toolkit.
When I run the following command:
WINEPREFIX=~/my-game-prefix `brew --prefix game-porting-toolkit`/bin/wine64 winecfg
I get an error message:
zsh: no such file or directory: /usr/local/opt/game-porting-toolkit/bin/wine64
I don't know how to resolve this.
When I type in the command which brew, I get the path:
/usr/local/bin/brew
What am I doing wrong?