When trying to build Wireshark I'm getting the following errors. Any idea how to solve it?
[ 13%] Building C object wsutil/CMakeFiles/wsutil.dir/os_version_info.c.o
In file included from wireshark/wsutil/os_version_info.c:23:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CoreFoundation.h:54:
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:676:195: error: expected ','
676 | void *CFAllocatorAllocateTyped(CFAllocatorRef allocator, CFIndex size, CFAllocatorTypeID descriptor, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:679:211: error: expected ','
679 | void *CFAllocatorReallocateTyped(CFAllocatorRef allocator, void *ptr, CFIndex newsize, CFAllocatorTypeID descriptor, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:682:165: error: expected ','
682 | void *CFAllocatorAllocateBytes(CFAllocatorRef allocator, CFIndex size, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
| ^
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFBase.h:685:181: error: expected ','
685 | void *CFAllocatorReallocateBytes(CFAllocatorRef allocator, void *ptr, CFIndex newsize, CFOptionFlags hint) API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0));
| ^
In file included from wireshark/wsutil/os_version_info.c:23:
In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CoreFoundation.h:73:
/Library/Developer/CommandLineTools/SDKs/MacOSX15.0.sdk/System/Library/Frameworks/CoreFoundation.framework/Headers/CFNumberFormatter.h:144:147: error: expected ','
144 | CF_EXPORT const CFNumberFormatterKey kCFNumberFormatterMinGroupingDigits API_AVAILABLE(macos(15.0), ios(18.0), watchos(11.0), tvos(18.0), visionos(2.0)); // CFNumber
| ^
5 errors generated.
make[2]: *** [wsutil/CMakeFiles/wsutil.dir/os_version_info.c.o] Error 1
make[1]: *** [wsutil/CMakeFiles/wsutil.dir/all] Error 2
make: *** [all] Error 2
Posts under the visionOS tag: discussing development for spatial computing and Apple Vision Pro.
I'm trying to use a club to strike a ball and have it roll on the turf in RealityKit, but currently I can only make it slide, not roll.
I added collision to the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
From these parameters I calculate the linear damping and inertia; in addition, I use the time between frames and the club position to calculate speed. The code looks like this:
let radius: Float = 0.025
let mass: Float = 0.04593 // mass, in kg
var inertia = 2/5 * mass * pow(radius, 2)
let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime
let C_d: Float = 0.47 // drag coefficient
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 is the air density)
entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping
// force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier
entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
I also tried applyImpulse instead of addForce, like:
let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static and dynamic) and restitution, with either addForce or applyImpulse the ball only slides. How can I solve this problem?
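For reference, a minimal sketch of one thing to try (assuming the ball is a ModelEntity so the HasPhysicsBody impulse APIs are available, and that hitPoint, forceDirection, speed, mass and radius come from the existing collision handling above): apply the impulse at the contact point rather than at the center of mass, or add an explicit angular impulse, so the ball receives spin that ground friction can turn into rolling; a large linear damping value also tends to kill any roll.
import RealityKit

// Sketch only: the parameter names are placeholders for values computed above.
func strike(ball: ModelEntity, hitPoint: SIMD3<Float>, forceDirection: SIMD3<Float>,
            speed: Float, mass: Float, radius: Float) {
    // Keep damping small so the ball stays free to roll after the hit.
    ball.components[PhysicsBodyComponent.self]?.linearDamping = 0.05
    ball.components[PhysicsBodyComponent.self]?.angularDamping = 0.05

    // An impulse applied away from the center of mass produces both
    // linear and angular motion.
    let impulse = forceDirection * mass * speed
    ball.applyImpulse(impulse, at: hitPoint, relativeTo: nil)

    // Alternatively, add spin explicitly about the axis perpendicular to the
    // travel direction; friction against the turf then converts spin into rolling.
    let up = SIMD3<Float>(0, 1, 0)
    let spinAxis = normalize(cross(up, forceDirection))
    let angularImpulse = spinAxis * (mass * speed * radius)
    ball.applyAngularImpulse(angularImpulse, relativeTo: nil)
}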
Function introduction: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/
When I use this function, my video player has no back action in the player.
We also did not find any system-provided method named addChildViewControllerAndView(form).
Referencing this document did not work either: https://developer.apple.com/documentation/avkit/adopting-the-system-player-interface-in-visionos
As soon as I add these lines of code:
let playerController = AVPlayerViewController()
// Enable the multiview experience along with the default recommended set.
playerController.experienceController.allowedExperiences = .recommended(including: [.multiview])
there is no back button, only the full-screen and zoom-out controls.
In visionOS 2.1 and 2.2, I'm encountering a significant limitation when using .immersionStyle(selection: .constant(.mixed), in: .mixed), i.e. the mixed immersion style. Here's a breakdown of the behavior:
In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches.
However, in mixed immersive mode, using the exact same layout “inside” a 3D model (which doesn’t visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion.
From a usability perspective, this restriction seems unnecessary, as the software should ideally allow for similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion.
The scene in question is a House in which the user is placed during the immersion that's why I am referring to the user being "Inside" of the scene.
Has anyone else experienced this or found a workaround?
I've encountered an issue when trying to transcribe audio during a SharePlay session on visionOS. Specifically, AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and the .mixWithOthers option, which is supposed to let multiple audio sources coexist without interference.
Here’s the relevant code snippet that throws the error:
private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}
The setup is designed to initialize an AVAudioEngine and a SFSpeechAudioBufferRecognitionRequest for real-time transcription, but fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn’t appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear solution to proceed.
Has anyone else faced similar issues with AVAudioSession and SharePlay on visionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!
The specific error is:
The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
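For reference, a small helper (a sketch, not part of the snippet above) that decodes a Core Audio status into its four-character code can make these errors easier to look up; 561145187 decodes to "!rec", which I believe corresponds to AVAudioSession's cannotStartRecording error.
// Sketch: decode a Core Audio OSStatus-style error code into its four-char form.
func fourCharCode(from status: Int) -> String {
    let value = UInt32(bitPattern: Int32(truncatingIfNeeded: status))
    let bytes: [UInt8] = [
        UInt8((value >> 24) & 0xFF),
        UInt8((value >> 16) & 0xFF),
        UInt8((value >> 8) & 0xFF),
        UInt8(value & 0xFF)
    ]
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

// fourCharCode(from: 561145187) == "!rec"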
I am trying to rotate topEntity around the origin point of shapeEntity, but I have not found a way to do so.
topEntity is an entity group that also contains shapeEntity, so I cannot make topEntity a child of shapeEntity.
In Blender I set the correct origin for topEntity, but when I import the USD model into Reality Composer Pro the origin point is not preserved, and there is no way to set the origin in Reality Composer Pro.
DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
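For reference, a minimal sketch of rotating one entity around another entity's origin without reparenting. It uses the generic rotate-about-a-pivot math rather than any project-specific API, and it applies an incremental rotation, so in an .onChanged handler you would pass the change in angle since the previous update:
import RealityKit

// Sketch: rotate `topEntity` about the origin of `shapeEntity` without reparenting.
func rotate(topEntity: Entity, around shapeEntity: Entity, angleDegrees: Float) {
    guard let parent = topEntity.parent else { return }

    // Pivot expressed in topEntity's parent space.
    let pivot = shapeEntity.position(relativeTo: parent)
    let rotation = simd_quatf(angle: angleDegrees * .pi / 180,
                              axis: SIMD3<Float>(0, 0, 1))

    // Rotate the position about the pivot, then compose the orientation.
    let offset = topEntity.position(relativeTo: parent) - pivot
    topEntity.setPosition(pivot + rotation.act(offset), relativeTo: parent)
    topEntity.setOrientation(rotation * topEntity.orientation(relativeTo: parent), relativeTo: parent)
}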
Despite being enrolled, I am unable to locate any option to download the visionOS 2.2 beta on my Vision Pro. All I see in the system update screen is that I'm up to date with 2.1.
How do I locate the beta download option?
Thanks
I have a problem: in AR (mixed) mode, objects and people in the real environment do not dynamically occlude my virtual 3D model. What should I do if I want to mimic real-world occlusion relationships?
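For reference, a rough sketch of the usual workaround on visionOS: run SceneReconstructionProvider and cover the reconstructed real-world mesh with OcclusionMaterial, so real geometry hides virtual content behind it (people are not part of the reconstruction mesh, so this does not give true people occlusion). The makeMeshResource(from:) helper is hypothetical; MeshAnchor.Geometry has to be converted into a MeshResource by hand:
import ARKit
import RealityKit

// Sketch: hide virtual content behind real-world geometry by covering the
// reconstructed scene mesh with OcclusionMaterial.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func runOcclusion(root: Entity) async throws {
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates where update.event == .added {
        let meshAnchor = update.anchor

        // makeMeshResource(from:) is a hypothetical helper: it would build a
        // MeshResource from meshAnchor.geometry's vertex/index buffers.
        let mesh = makeMeshResource(from: meshAnchor.geometry)

        // OcclusionMaterial draws nothing itself but occludes entities behind it.
        let occluder = ModelEntity(mesh: mesh, materials: [OcclusionMaterial()])
        occluder.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
        root.addChild(occluder)
        // Handle .updated and .removed events similarly in a real implementation.
    }
}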
When using a trackpad (or screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But in circumstances where there is a viewer-only window, that window jumping gets in the way. Imagine a 3d object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3d object with the mouse in the editor gets continually interrupted when looking at the live viewer because the cursor jumps to the viewer window.
Is there any way to reject that focus change?
Hi,
I'm building a windowed game on visionOS which requires a gamepad, and I'm using a PS5 controller for this.
However, I found a few problems:
When the user looks at the window's close button and presses X on the gamepad, the window is closed. This can happen accidentally while the user is playing.
When the PS button (the home button) is pressed, I want my app to handle it, e.g. take the user to the title screen. But visionOS always captures this input and opens the home screen (App Library).
How can I avoid these two issues? Or, in general, how can I make gamepad input available only to my app and not to the visionOS system?
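For reference, a sketch of one direction (the exact modifier name is an assumption on my part, based on the visionOS game-input additions; verify it against the GameController documentation): declaring that a scene handles game-controller events is meant to keep gamepad buttons from doubling as system look-and-press input, so pressing X while glancing at the close button should no longer close the window. The PS/home button is reserved by the system, and I would not expect an app to be able to intercept it.
import SwiftUI
import GameController

@main
struct GamepadGameApp: App {
    var body: some Scene {
        WindowGroup {
            GameView()
                // Assumption: this modifier routes gamepad events to the app
                // instead of letting them act as system "tap" input.
                .handlesGameControllerEvents(matching: .gamepad)
        }
    }
}

struct GameView: View {
    var body: some View {
        Text("Game content")
            .onAppear {
                NotificationCenter.default.addObserver(
                    forName: .GCControllerDidConnect, object: nil, queue: .main
                ) { _ in
                    guard let pad = GCController.controllers().first?.extendedGamepad else { return }
                    // On a DualSense, the cross (X) button maps to buttonA.
                    pad.buttonA.pressedChangedHandler = { _, _, pressed in
                        if pressed { /* handle the X button in-game */ }
                    }
                }
            }
    }
}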
I am working on adding synchronized physics properties to EntityEquipment in TableTopKit, to allow seamless coordination during GroupActivities sessions between players.
Setting EntityEquipment's state to DieState is not an option, because it doesn't support custom collision shapes.
I have also tried adding a PhysicsBodyComponent and a CollisionComponent to EntityEquipment's Entity. However, the main issue is that the position of the EntityEquipment itself does not stay synchronized with the Entity's physics body, resulting in two separate instances of one object.
struct PlayerPawn: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id

        let massProperties = PhysicsMassProperties(mass: 1.0)
        let material = PhysicsMaterialResource.generate(friction: 0.5, restitution: 0.5)
        let shape = ShapeResource.generateBox(size: [0.4, 0.2, 0.2])
        let physicsBody = PhysicsBodyComponent(massProperties: massProperties, material: material, mode: .dynamic)
        let collisionComponent = CollisionComponent(shapes: [shape])

        entity.components.set(physicsBody)
        entity.components.set(collisionComponent)

        self.entity = entity
        initialState = .init(parentID: .tableID, pose: .init(position: .init(), rotation: .zero), entity: self.entity)
    }
}
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
I'm having trouble re-setting the position of a child entity during app re-load even though it appears that I am correctly obtaining and persisting the correct translation values after a drag gesture.
The problem exists when I drag a child element to a new location (persist those new values) then reload the app to force re-positioning from persisted translation values.
I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem.
My thought process: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we persisting and applying the wrong values?
Project Link: Private
STEPS TO REPRODUCE
Run the app.
Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
Tap the button in the timeline to create a new project.
Drag the only visible element from the left panel onto the timeline (element is labeled f_works_entity_1).
There should now be a green 3d model added to the stage.
Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
Re-run the app to see that the green element is offset to a new location, not the last dragged location.
To reset and try again, delete the project canvas next to the project name (trash button) then restart the app.
Areas of concern:
RealityKitView is the only file you may need.
Line 119 is where we create new child entities.
Lines 185-219 are where we persist and apply persisted values.
You can also search FIXME in the file to see areas of concern.
Tip:
I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent including IDs.
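For reference, a small sketch of the mitigation implied above: persist the translation against a stable reference entity instead of the entity's current parent, so a parent swap during re-rendering cannot skew the saved values (stageRoot and the dictionary store here are placeholders for whatever anchor entity and persistence layer the project already has):
import RealityKit

// Sketch: save and restore a dragged entity's position relative to a
// stable reference entity rather than relative to its current parent.
func persistPosition(of entity: Entity, relativeTo stageRoot: Entity,
                     into store: inout [String: SIMD3<Float>]) {
    store[entity.name] = entity.position(relativeTo: stageRoot)
}

func restorePosition(of entity: Entity, relativeTo stageRoot: Entity,
                     from store: [String: SIMD3<Float>]) {
    guard let saved = store[entity.name] else { return }
    entity.setPosition(saved, relativeTo: stageRoot)
}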
I'm building a streaming app on visionOS that plays sound from audio buffers every frame. The audio has a sample rate of 48000, and each buffer has 480 samples.
I noticed that when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
the memory keeps increasing at roughly 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and does a hard reset, at which point the audio stops temporarily and the memory usage changes (see attached screenshot).
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
the memory leak issue is gone, but the audio is broken (it sounds as if it has been shortened).
Below is the full code snippet; does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// Callback every frame
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }

        // memory leak here
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
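For reference, a sketch of one way to keep the schedule queue bounded (assuming the stream can tolerate dropping a buffer when the queue is full): use the completion-handler variant of scheduleBuffer to track how many buffers are still pending and refuse new ones past a small limit, instead of letting the node accumulate minutes of queued audio.
import AVFoundation

// Sketch: bound the number of buffers queued on the player node.
final class BoundedScheduler {
    private let node: AVAudioPlayerNode
    private let maxPending = 8            // roughly 80 ms of audio at 480-frame buffers
    private var pending = 0
    private let lock = NSLock()

    init(node: AVAudioPlayerNode) { self.node = node }

    func schedule(_ buffer: AVAudioPCMBuffer) {
        lock.lock()
        guard pending < maxPending else { lock.unlock(); return } // drop when full
        pending += 1
        lock.unlock()

        node.scheduleBuffer(buffer, completionCallbackType: .dataPlayedBack) { [weak self] _ in
            guard let self else { return }
            self.lock.lock()
            self.pending -= 1
            self.lock.unlock()
        }
    }
}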
I'm building a streaming app on visionOS that plays sound from audio buffers every frame. The source audio buffer has 2 channels and is in Float32 interleaved format.
However, when setting up the AVAudioFormat with interleaved set to true, the app crashes with a memory issue:
AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)
But if I set up the AVAudioFormat with interleaved set to false and manually fill the AVAudioPCMBuffer, it plays audio as expected.
Could you please help me fix it? The code snippet is below.
@Observable
final class MyAudioPlayer {
    private var audioEngine: AVAudioEngine = .init()
    private var audioPlayerNode: AVAudioPlayerNode = .init()
    private var audioFormat: AVAudioFormat?

    init() {
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try? AVAudioSession.sharedInstance().setActive(true)
        audioEngine.prepare()
        try? audioEngine.start()
        audioPlayerNode.play()
    }

    // more code...

    /// This crashes
    private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: true),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData?[0] {
            data.update(from: buf, count: samples * Int(format.channelCount))
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }

    /// This works
    private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
        guard let buf,
              let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: false),
              let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
        else { return }

        audioBuffer.frameLength = AVAudioFrameCount(samples)
        if let data = audioBuffer.floatChannelData {
            for channel in 0 ..< Int(format.channelCount) {
                for frame in 0 ..< Int(audioBuffer.frameLength) {
                    data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                }
            }
        }
        audioPlayerNode.scheduleBuffer(audioBuffer)
    }
}
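For reference, a sketch of another way to handle an interleaved source (assuming the node is connected with format: nil, i.e. the mixer's non-interleaved float format; 48 kHz is used here just for illustration): keep the interleaved samples in a buffer with an interleaved AVAudioFormat and let AVAudioConverter produce a matching non-interleaved buffer to schedule, rather than scheduling the interleaved buffer directly.
import AVFoundation

// Sketch: convert interleaved Float32 frames into a non-interleaved buffer.
func makeDeinterleavedBuffer(from buf: UnsafeMutablePointer<Float>, frames: Int) -> AVAudioPCMBuffer? {
    guard
        let interleaved = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000,
                                        channels: 2, interleaved: true),
        let deinterleaved = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000,
                                          channels: 2, interleaved: false),
        let input = AVAudioPCMBuffer(pcmFormat: interleaved, frameCapacity: AVAudioFrameCount(frames)),
        let output = AVAudioPCMBuffer(pcmFormat: deinterleaved, frameCapacity: AVAudioFrameCount(frames)),
        let converter = AVAudioConverter(from: interleaved, to: deinterleaved)
    else { return nil }

    input.frameLength = AVAudioFrameCount(frames)
    // For an interleaved float buffer, floatChannelData[0] points at the interleaved samples.
    input.floatChannelData?[0].update(from: buf, count: frames * Int(interleaved.channelCount))

    do {
        try converter.convert(to: output, from: input)   // same sample rate, layout change only
        return output
    } catch {
        return nil
    }
}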
If I put an alpha image texture on a model created in Blender and run it in RCP or visionOS, the rendering of the front and back surfaces through the alpha results in unintended output. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
I use this piece of code in Unity to get the length by which my model has entered another model. I set collision markers at the tip and the base of the model and perform a raycast, but Unity currently does not support object tracking on visionOS, so I plan to do native development with SwiftUI. In Reality Composer Pro I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body, but cannot manually adjust or visualize its shape and size.
I want to achieve similar functionality using SwiftUI: to calculate and display the distance that my model A (like a needle or ruler) penetrates into another model or the interior of a physical object. Is similar functionality available, or are there other coding approaches to achieve this?
void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);
    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
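For reference, a rough RealityKit translation of the Unity snippet above (a sketch: probeBaseEntity and probeTipEntity are assumed stand-ins for the base/tip markers, and the organ model needs a CollisionComponent for the ray to hit it):
import RealityKit

// Sketch: measure how far the probe has penetrated into the organ model
// using a scene raycast between the probe's base and tip markers.
func lengthInsideOrgan(probeBaseEntity: Entity, probeTipEntity: Entity) -> Float {
    guard let scene = probeBaseEntity.scene else { return 0 }

    let base = probeBaseEntity.position(relativeTo: nil)
    let tip = probeTipEntity.position(relativeTo: nil)
    let direction = tip - base
    let probeLength = length(direction)
    guard probeLength > 0 else { return 0 }

    // Cast from base toward tip; use a CollisionGroup mask to restrict hits to the organ.
    let hits = scene.raycast(origin: base,
                             direction: direction / probeLength,
                             length: probeLength,
                             query: .nearest,
                             mask: .all,
                             relativeTo: nil)

    guard let first = hits.first else { return 0 }
    return probeLength - first.distance
}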
We use real-time object tracking, and with enterprise permissions we can raise the tracking rate to 30 Hz, but there are still noticeable delays. On the one hand, we want to know why this delay occurs; is it due to performance considerations? We found that the delay in hand tracking is actually very low.
On the other hand, we considered that it may be due to the complexity of 3D objects, so we looked at image tracking instead. However, we found that the delays in image tracking and QR-code tracking are even worse, and we hope this can be optimized. Currently the frame rate for recognizing images for tracking seems to be about one frame per second, and we hope it can be increased, because object recognition and tracking are very smooth on other Apple platforms such as iOS.
Additionally, could interfaces for depth sensing be provided so that we can obtain depth data?
We want to know what accuracy Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering on the screen. We wonder if this is related to hardware sensors such as LiDAR. Also, what accuracy can we achieve in tracking the movement distance of objects?
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling immersiveSpace, the same image is detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add and subsequent detections of the same image remain .update, even after toggling immersiveSpace. Could this be due to a change in the processing flow?
Hello,
Is there a way to have the attachments of a RealityView always face the user?
For example, in a visionOS app, in an immersive space, we have an attachment. When the user either walks around the attachment, or rotates the parent entity, we would like the attachment to automatically rotate to face the user.
How do we do this?
I anticipated this to be a trivial feature to implement, since I thought I remembered seeing this feature as a built-in/opt-in option for attachments. But, I cannot find that feature.
All and any recommendations are appreciated, thanks.
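For reference, a minimal sketch of the built-in option you may be remembering: RealityKit on visionOS 2 provides BillboardComponent, which keeps an entity oriented toward the viewer, and it can be set directly on an attachment entity (the identifier and layout here are placeholders):
import SwiftUI
import RealityKit

struct BillboardedAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "label") {
                // BillboardComponent keeps the entity facing the user.
                panel.components.set(BillboardComponent())
                panel.position = [0, 1.2, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Always facing you")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}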
Since visionOS 2.0 we can access Apple Vision Pro's main camera, but only with an Enterprise account, as it is an enterprise-only API. I have a normal developer account, and I want to use the main camera to add a video-call feature to my app using the AVP's main camera. Is it possible to do this with a developer account only? Currently, with that account, I am not able to create the required entitlement, as the option is not available.