I created a feedback report: FB16411517
I created a feedback report: FB16411500
NOTE: I read the spec:
https://www.w3.org/TR/pointerevents3/#the-primary-pointer
"Some devices, operating systems and user agents may ignore the concurrent use of more than one type of pointer input to avoid accidental interactions. For instance, devices that support both touch and pen interactions may ignore touch inputs while the pen is actively being used, to allow users to rest their hand on the touchscreen while using the pen (a feature commonly referred to as "palm rejection"). Currently, it is not possible for authors to suppress this behavior."
Since the iPad can handle simultaneous pen and touch input natively, I don't see why the web version cannot. Please lift this restriction, or at least consider a way for authors to opt out of it.
I am waiting for someone from Apple to reply here. Really, I think this behavior should be documented in the release notes, but it isn't.
Submitting the request gave me a code for my records, but what can I do with it? Is there a way to track the status of the request or contact someone via support?
@DTS Engineer Last thing: do you anticipate replies will roll out next week? I understand this week has been super hectic given the event, of course.
@DTS Engineer Actually, I was able to access the form yesterday, so I filed a request with a very detailed description of what I want to do (for research development only). I hope it will be accepted; it would be incredibly useful.
EDIT: This should be deleted. It turned out that a change in the compilation steps was causing issues with the initialization of color data. The issue has nothing to do with rendering.
EDIT: The augmented reality flag exists, and I'm wondering whether it already worked in v1. If not, does it work in v2?
Unfortunately, the example doesn't show how to integrate scene understanding for realistic lighting; it just seems to show how to add passthrough in the background. Is there a more advanced example that shows how to do occlusion, use the environment texture, light with the scene-reconstruction mesh, and so on?
If not, one is badly needed; this isn't straightforward to piece together.
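In the meantime, here's roughly the pattern I've been using for the mesh side of it, in case it helps. This is a minimal sketch only: it assumes an open ImmersiveSpace and granted world-sensing authorization, and the function name, `root` entity, and bookkeeping are just placeholders. It streams reconstruction anchors and attaches them as static collision geometry; for visual occlusion you'd additionally build a MeshResource from each anchor's geometry and give it an OcclusionMaterial().

```swift
import ARKit
import RealityKit

/// Sketch: stream scene-reconstruction anchors into a RealityKit content root.
func streamSceneMesh(into root: Entity) async throws {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    try await session.run([provider])

    var entities: [UUID: ModelEntity] = [:]

    // Runs for as long as the provider keeps delivering updates.
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added, .updated:
            // Collision shape generated directly from the reconstructed mesh.
            guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor) else { continue }

            let entity: ModelEntity
            if let existing = entities[anchor.id] {
                entity = existing
            } else {
                entity = ModelEntity()
                entities[anchor.id] = entity
                root.addChild(entity)
            }

            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            // For occlusion, you would also set a ModelComponent built from
            // anchor.geometry with OcclusionMaterial() -- omitted here.

        case .removed:
            entities[anchor.id]?.removeFromParent()
            entities[anchor.id] = nil
        }
    }
}
```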
@DTS Engineer
I see this:
"Your account can’t access this page.
There may be certain requirements to view this content.
If you’re a member of a developer program, make sure your Account Holder has agreed to the latest license agreement."
Is this link actually live, or is it planned to work after WWDC?
This is why I thought I needed to be a business; I don't see a way to gain access. I am just an individual who wants to do purely internal research in collaboration with my university. I completely understand that these APIs need to be used with care, and I don't intend to sell or distribute the specific part of the research that would use them.
@Engineer Are you able to use eye tracking as a regular API within an app? I’m interested in triggering an event upon some short dwell time over a specific region of the screen.
@sanercakir I clicked the developer-only request button, and it says I am not allowed to view the page. But I am not the Account Holder. I assumed I needed to be an enterprise with 100+ employees, and so on.
By the way, I am an individual account holder. Might I need to be a “business”?
In any case, please let me know how to resolve this, or whether I need to contact a particular department.
@Matt Cox I have similar qualms about the limitations on custom rendering. I think a lot of this could be partially solved by, as you suggest, allowing for mesh streaming as opposed to just texture streaming.
A better solution would be permitting custom Metal rendering outside of fully immersive mode. I can imagine Compositor Services + Metal having special visionOS CPU-side Metal calls that let the programmer specify where to render the camera data and what to occlude. For custom shaders (which we will really need at some point, since surface shaders are pretty limiting), there would need to be proper sandboxing so the camera's color/depth couldn't leak back to the CPU. Some kind of Metal built-in read/function-pointer support, perhaps?
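To make the ask a bit more concrete, here's a purely hypothetical sketch; none of these types or calls exist in Compositor Services or Metal today, and every name is invented just to illustrate the kind of hook I mean:

```swift
import simd

// Purely hypothetical interface, invented for illustration only; nothing here
// exists in Compositor Services or Metal today.
protocol MixedImmersiveFrame {
    /// Claim a world-space region that the app will render itself, while the
    /// system keeps compositing passthrough around it.
    func claimRegion(min: SIMD3<Float>, max: SIMD3<Float>)

    /// Tell the compositor whether the app's draws in that region should be
    /// occluded by real-world geometry, without the app ever reading camera
    /// pixels back on the CPU (the sandboxing constraint mentioned above).
    func setOccludedByRealWorld(_ occluded: Bool)
}

func renderFrame(_ frame: MixedImmersiveFrame) {
    frame.claimRegion(min: [-0.5, 0.0, -0.5], max: [0.5, 1.0, 0.5])
    frame.setOccludedByRealWorld(true)
    // ... encode the app's own Metal command buffers for the claimed region ...
}
```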
I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point. Pointing to specific examples probably helps a bit.
The solution, I think, is to decompose the concave mesh into convex meshes. If it's a static mesh, you're in luck: you can do the decomposition offline, where optimal performance doesn't matter much as long as you get a result in a reasonable amount of time, and resave it as a collection of convex meshes for reloading later. If it's a dynamic mesh, you're stuck doing the decomposition at runtime. Either way, this is a very normal thing to do; concave collision detection is much more expensive than convex.
Note: I think it would be useful to include a built-in algorithm for handling this in both the Metal Performance Shaders and RealityKit APIs. (Maybe file a feature request?)
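For the runtime side, here's a minimal RealityKit sketch, assuming you already have the convex pieces as MeshResource values; how you produce them (for example with an offline decomposition tool) is up to you, and the function name and `pieces` parameter are placeholders:

```swift
import RealityKit

/// Sketch: build one collision component from pre-decomposed convex pieces.
func applyConvexCollision(to entity: ModelEntity, pieces: [MeshResource]) {
    // Each piece is convex, so its convex hull matches it (nearly) exactly.
    let shapes = pieces.map { ShapeResource.generateConvex(from: $0) }

    // The union of convex shapes approximates the original concave mesh far
    // more cheaply than a concave static-mesh collision shape would.
    entity.collision = CollisionComponent(shapes: shapes, isStatic: true)
}
```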