Graphics and Games

Build captivating gaming experiences for Apple platforms.

Posts under Graphics and Games tag

28 Posts

Mistake in OpenGLView drawRect:
I am rewriting an old project, since OpenGL is deprecated and the downgrade to High Sierra came with the loss of the old project. Here is the drawRect:

- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];
    if (backgroundColor == 1)
        glClearColor(0.95f, 1.0f, 1.0f, 1.0f);
    else
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -10.0);        // After mistake I wrote PushMatrix()
    glTranslatef(-0.01, -0.01, -10.0);    // Screen Shot 0
    // glTranslatef(-0.25, -0.25, -10.0); // Screen Shot 1
    glLineWidth(1.0);
    glRotated(rotationX, 1, 0, 0);
    glRotated(rotationY, 0, 1, 0);
    glCallList(axes);                     // After mistake I wrote PopMatrix()
    // Axes vertices of x, y, z are 1.0
    // With vertices +/-0.99 nothing is drawn
    [self glError];
    [self.openGLContext flushBuffer];
}

I hope you accept my text and hope you can help me!

Best regards,
Uwe

(Attachments: Screen Shot 0, Screen Shot 1)
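
Since the post mentions adding PushMatrix()/PopMatrix() around the transforms, here is a minimal sketch of that pattern in a Swift NSOpenGLView subclass; the class name and the axes display list are stand-ins for illustration, not the original project:

import Cocoa
import OpenGL.GL

class LegacyGLView: NSOpenGLView {
    var rotationX: GLdouble = 0
    var rotationY: GLdouble = 0
    var axes: GLuint = 0   // display list assumed to be built elsewhere

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        glClearColor(0.95, 1.0, 1.0, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT))
        glLoadIdentity()

        glPushMatrix()                    // save the current model-view matrix
        glTranslatef(0.0, 0.0, -10.0)     // move the axes into view
        glRotated(rotationX, 1, 0, 0)
        glRotated(rotationY, 0, 1, 0)
        glCallList(axes)
        glPopMatrix()                     // restore it so later drawing is unaffected

        openGLContext?.flushBuffer()
    }
}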

Replies: 1 · Boosts: 0 · Views: 369 · Activity: Jun ’24

[CAMetalLayer nextDrawable] returning nil because allocation failed.
Why do I get this error almost immediately on starting my rendering pass?

2024-05-29 20:02:22.744035-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:02:22.744455-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:05:54.079981-0500 RoomPlanExampleApp[491:10025] [CAMetalLayer nextDrawable] returning nil because allocation failed.
2024-05-29 20:05:54.080144-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
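
For context, a minimal sketch of defensive drawable handling around CAMetalLayer.nextDrawable(), assuming a metalLayer and commandQueue created elsewhere; this shows the general guard pattern, not a diagnosis of the RoomPlan sample:

import Metal
import QuartzCore

func drawFrame(metalLayer: CAMetalLayer, commandQueue: MTLCommandQueue) {
    // nextDrawable() can return nil when the layer's drawable pool is exhausted
    // or the allocation fails; skip the frame rather than forcing the drawable.
    guard let drawable = metalLayer.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else {
        return
    }
    // ... encode render passes that target drawable.texture ...
    commandBuffer.present(drawable)
    commandBuffer.commit()
}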

Replies: 6 · Boosts: 1 · Views: 765 · Activity: Jun ’24

Eye Tracking as a General API Feature for iPadOS 18+?
I'd like to use the eye tracking feature in the latest iPadOS 18 update as more than an accessibility feature, i.e. as another input modality that can be detected by event + enum checks, similar to how we can detect and distinguish between touch and Apple Pencil inputs.

This might make it a lot easier to control and interact with iPad-based AR experiences that involve walking around, regardless of whether eye tracking is enabled for accessibility. When walking, it's challenging to hold the device and interact with the screen with touch or pencil at all; eye tracking + speech as input modalities could assist here. It would also help us create non-immersive AR experiences that parallel visionOS experiences that use eye tracking.

I propose an API option for enabling eye tracking (and an optional in-app calibration dialogue), as well as a specific UIControl class that simply detects when the eyes look at the control, using the standard (begin/changed/end) events.

My specific use case is that I'd like to treat eye-tracking-enabled UI elements or game objects differently depending on whether something is looked at with the eyes. For example, to select game objects while using speech recognition, suppose we have 4 buttons with the same name in the 4 corners of the screen. Call them "corner" buttons. If I have my proposed invisible UI element for gaze detection, I can create 4 large rectangular regions on the screen. Then if the user says "select the corner", the system could parse this command and disambiguate between the 4 corners by checking which of the rectangular regions I'm currently looking at. (Note: the idea would be to make the gaze regions rather large to compensate for error.)

The above is just a simple example, but the advantage over other methods like dwell is that it could be a lot faster. Another simple example: using the same rectangular regions, instead of speech input, I could hold a button placed in just one spot on the screen and look around the screen with my gaze to produce a laser beam for some kind of game, or draw curves (that I might smooth out to reduce inaccuracy). It could also serve someone who does not have their hands available. This would require the ability to get the coordinates of the eye gaze, but otherwise the other approach of just opting to trigger UIControl elements might work for coarse selection.

Would other developers find this useful as well? I'd like to propose this feature in Feedback Assistant, but I'm also opening up a little discussion in case someone sees this.

In short, I propose:
- a formal eye-tracking API for iPadOS 18+ that allows turning the tracking on/off within the app, with the necessary user permissions
- the API should produce begin/changed/ended events similar to the existing events in UIKit, including screen coordinates, and there should be a way to identify that an event came from eye tracking
- alternatively, we should have at minimum an invisible UIControl subclass that can detect when the eyes enter/leave its region
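
A minimal sketch of the corner-disambiguation logic described above, assuming a hypothetical source that reports the current gaze point in screen coordinates (no such public API exists today; the type name and region sizes are invented for illustration):

import UIKit

struct GazeDisambiguator {
    // Four large corner regions; oversized rectangles compensate for gaze error.
    let cornerRegions: [String: CGRect]

    // "Select the corner" -> resolve which corner by the current gaze point.
    func corner(lookedAtBy gazePoint: CGPoint) -> String? {
        cornerRegions.first { $0.value.contains(gazePoint) }?.key
    }
}

// Example: split an iPad-sized screen into four half-screen regions.
let screen = CGRect(x: 0, y: 0, width: 1180, height: 820)
let half = CGSize(width: screen.width / 2, height: screen.height / 2)
let regions: [String: CGRect] = [
    "top-left":     CGRect(x: 0, y: 0, width: half.width, height: half.height),
    "top-right":    CGRect(x: half.width, y: 0, width: half.width, height: half.height),
    "bottom-left":  CGRect(x: 0, y: half.height, width: half.width, height: half.height),
    "bottom-right": CGRect(x: half.width, y: half.height, width: half.width, height: half.height),
]
let picked = GazeDisambiguator(cornerRegions: regions)
    .corner(lookedAtBy: CGPoint(x: 100, y: 700))   // -> "bottom-left"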

Replies: 0 · Boosts: 2 · Views: 535 · Activity: Jun ’24

SF Symbol License
Hi all, I am a UI/UX designer working on several commercial projects, and I have a few questions:

- Can I use the icons from your SF Symbols set in my application? It is a SaaS application used on various platforms such as macOS and Windows.
- Is SF Symbols only allowed for use in applications running on Apple platforms? That is, if my application is used on a MacBook or iPhone, am I allowed to use your icon set?
- If usage is not permitted on platforms other than Apple's, how can I legally use the symbols on those platforms? Does Apple sell licenses for using SF Symbols on other platforms? If so, what is the cost?

Looking forward to your response.

Replies: 1 · Boosts: 0 · Views: 408 · Activity: Jun ’24

Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session "Explore game input in visionOS".

The game controller connection notification is picking up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}

At the end of the RealityView, I have the modifier:

RealityView { content in
    // stuff
}
.handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
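
For reference, a minimal sketch of the usual GameController observation pattern, assuming the controller exposes an extended gamepad profile (a generic sketch, not a confirmed fix for the visionOS 2 issue above):

import GameController
import os

final class ControllerInput {
    private var connectToken: NSObjectProtocol?

    func start() {
        connectToken = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { notification in
            guard let controller = notification.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }
            // Fires whenever the A button's value changes.
            gamepad.buttonA.valueChangedHandler = { _, value, pressed in
                os_log("Button A value: %.2f pressed: %d", value, pressed ? 1 : 0)
            }
        }
    }
}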

Replies: 1 · Boosts: 0 · Views: 304 · Activity: 4w

How to optimise RealityKit performance with many similar objects
I have code such as the following. The performance on the Vision Pro seems to get quite bad once I hit a few thousand of these models. It feels like I should be able to optimise this somehow, perhaps using instancing. Is that possible with RealityKit in visionOS 2?

let material = UnlitMaterial(color: .white)
let sphereModel = ModelEntity(
    mesh: .generateSphere(radius: 0.001),
    materials: [material])

for index in 0..<5000 {
    let point = generatedPoints[index]
    let model = sphereModel.clone(recursive: false)
    model.position = [point.x, point.y, point.z]
    parent.addChild(model)
}
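
One direction that sometimes helps with this many identical models is collapsing them into a single entity. A minimal sketch, assuming points is [SIMD3<Float>]: merge one tiny octahedron per point into a single MeshResource so the scene holds one ModelEntity instead of thousands (the octahedron stands in for a sphere at this radius; this is a general workaround sketch, not an official RealityKit instancing API):

import RealityKit

func makeMergedPointCloud(points: [SIMD3<Float>], radius: Float = 0.001) throws -> ModelEntity {
    // Unit octahedron: 6 vertices and 8 outward-wound triangle faces.
    let octaVertices: [SIMD3<Float>] = [
        [ 1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]
    ]
    let octaFaces: [UInt32] = [
        0, 2, 4,  0, 5, 2,  0, 4, 3,  0, 3, 5,
        1, 4, 2,  1, 2, 5,  1, 3, 4,  1, 5, 3
    ]

    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(points.count * octaVertices.count)
    indices.reserveCapacity(points.count * octaFaces.count)

    // Append one scaled, translated octahedron per point into a single vertex buffer.
    for (i, point) in points.enumerated() {
        let base = UInt32(i * octaVertices.count)
        positions.append(contentsOf: octaVertices.map { point + $0 * radius })
        indices.append(contentsOf: octaFaces.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "pointCloud")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [UnlitMaterial(color: .white)])
}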

Replies: 0 · Boosts: 0 · Views: 208 · Activity: 2w

Cannot create framework for earlier code?
I want to create a framework for this repo: https://github.com/BradLarson/GPUImage but failed.

I downloaded the repo and ran:

xcodebuild archive -project GPUImage.xcodeproj -scheme GPUImage -destination "generic/platform=iOS" -archivePath "archives/GPUImage"
xcodebuild archive -project GPUImage.xcodeproj -scheme GPUImage -destination "generic/platform=iOS Simulator" -archivePath "archivessimulator/GPUImage"
xcodebuild -create-xcframework -archive archives/GPUImage.xcarchive -framework GPUImage.framework -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework -output xcframeworks/GPUImage.xcframework

There is an error: 'cryptexDiskImage' is an unknown content type and 'com.apple.platform.xros' is an unknown platform identifier.

Replies: 0 · Boosts: 0 · Views: 113 · Activity: 1w

Implementing a bouncing surface
I am trying to simulate a pinball game and I want to use PhysicsBody & PhysicsMotion to achieve that. I tuned the parameters in PhysicsBodyComponent, but the result is not quite ideal for now. Imagine a fully inflated basketball bouncing high off the ground (ground vs basketball). I assign PhysicsBodyComponent and CollisionComponent to both the basketball and the ground.

For the basketball, I set:
- dynamic mode
- mass 1, inertia .one
- Material.Restitution 1
- Angular Damping and Linear Damping to 0
- AddForce to make the basketball move and hit the ground

For the ground, I set:
- static mode
- mass 1, inertia .zero
- Material.Restitution 1
- Angular Damping and Linear Damping to 0

However, when the basketball hits the ground, it isn't that bouncy; the basketball behaves as if it were hitting cotton and the linear speed just drops fast. I wonder how I could achieve a bouncing effect like a real basketball on the ground.
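
For reference, a minimal sketch of the configuration described above in RealityKit, assuming ball and ground are ModelEntity instances that already have collision shapes; the values mirror the post and are not a verified fix for the weak bounce:

import RealityKit

func configureBouncing(ball: ModelEntity, ground: ModelEntity) {
    // Restitution 1 on both bodies, no damping, as described in the post.
    let bouncy = PhysicsMaterialResource.generate(friction: 0.5, restitution: 1.0)

    var ballBody = PhysicsBodyComponent(massProperties: .default,
                                        material: bouncy,
                                        mode: .dynamic)
    ballBody.linearDamping = 0
    ballBody.angularDamping = 0
    ball.components.set(ballBody)

    let groundBody = PhysicsBodyComponent(massProperties: .default,
                                          material: bouncy,
                                          mode: .static)
    ground.components.set(groundBody)

    // Push the ball toward the ground to trigger the bounce.
    ball.addForce([0, -10, 0], relativeTo: nil)
}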

Replies: 4 · Boosts: 0 · Views: 368 · Activity: 1w