Hello,
There isn't quite enough information here to determine whether you should file a bug report for this issue. It is possible that your code is the cause of the memory growth. My recommendation is that you put together a small sample project that demonstrates the issue and post it back here.
Also, I did not observe any significant memory growth with the following snippet:
class GameViewController: NSViewController {

    @IBOutlet var arView: ARView!

    let modelEntity = ModelEntity()

    override func awakeFromNib() {
        super.awakeFromNib()

        let anchor = AnchorEntity()
        anchor.position = .init(0, 0, -30)
        anchor.addChild(modelEntity)
        arView.scene.addAnchor(anchor)

        Timer.scheduledTimer(withTimeInterval: 0.4, repeats: true) { _ in
            let mesh = MeshResource.generateSphere(radius: .random(in: 5...10))
            let model = ModelComponent(mesh: mesh, materials: [SimpleMaterial(color: .red, isMetallic: true)])
            self.modelEntity.model = model
        }
    }
}
So this does not appear to be an issue with MeshResource in every case, though perhaps there is something more specific about your case.
Hello,
Please file an enhancement request using Feedback Assistant to request API that predicts whether an observation is a title or not.
Assuming you set up a physics body for your entity in Reality Composer, you can actually directly get the mass of the physics body:
// Load the scene from your .rcproject.
let sceneAnchor = try! MyRealityComposerProject.loadScene()

// Get a reference to the entity that you configured a physics body for.
let box = sceneAnchor.box

// Get the physics body from the entity's component set.
guard let physicsBody = box?.components[PhysicsBodyComponent.self] as? PhysicsBodyComponent else {
    fatalError("box has no PhysicsBodyComponent.")
}

// Access the mass properties of the physics body.
print(physicsBody.massProperties.mass)
Hello,
Based on the format info, is it possible to make bufferSize for .installTap be just the write size for whatever time I wish to record for?
AVAudioNode.h states the following about the bufferSize parameter:
the requested size of the incoming buffers in sample frames. Supported range is [100, 400] ms.
Additionally, the documentation states:
The size of the incoming buffers. The implementation may choose another size.
Given this info, you would not be able to do what you want. You certainly will not be able to get a 10-15 second buffer, and you should not rely on the API always respecting the bufferSize that you pass in anyway.
All that being said, for your high-level goal of recording 10-15 seconds of audio, you might find this API simpler to use: https://developer.apple.com/documentation/avfaudio/avaudiorecorder/1389378-record
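A minimal sketch of what that could look like with AVAudioRecorder, assuming a hypothetical output location and AAC settings:

```swift
import AVFAudio

// Hypothetical output location; use a URL appropriate for your app.
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("recording.m4a")

let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 1
]

do {
    let recorder = try AVAudioRecorder(url: outputURL, settings: settings)
    recorder.prepareToRecord()
    // Records for exactly 15 seconds, then stops automatically.
    recorder.record(forDuration: 15)
} catch {
    print("Failed to start recording: \(error)")
}
```

Note that record(forDuration:) handles the fixed-length requirement for you, so there is no need to size buffers yourself.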
Hello @addy239,
The place to file a feature request is Feedback Assistant.
Even if another developer has already requested the same feature, duplicates are helpful for us to gauge interest in certain functionality!
Hello,
There is no way to do that in Reality Composer; please file an enhancement request using Feedback Assistant.
Note that this is possible in RealityKit via a VideoMaterial.
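A minimal sketch of the RealityKit approach, assuming a hypothetical video file bundled with the app:

```swift
import RealityKit
import AVFoundation

// Hypothetical bundled video file.
let url = Bundle.main.url(forResource: "myVideo", withExtension: "mp4")!
let player = AVPlayer(url: url)

// Create a material backed by the video player and apply it to a plane.
let videoMaterial = VideoMaterial(avPlayer: player)
let plane = ModelEntity(mesh: .generatePlane(width: 1.6, depth: 0.9),
                        materials: [videoMaterial])
player.play()
```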
Hello,
If you print the Image.transferRepresentation, you will see that it only supports a contentType of "public.png", which means loadTransferable(type: Image.self) will only return a non-nil success value if the PhotosPickerItem's supportedContentTypes also contains "public.png". That is not always the case; oftentimes the supportedContentTypes contains jpeg or heic instead of png.
So, my recommendation is that you file an enhancement request for Image to support a contentType that is more flexible than just "public.png" using Feedback Assistant.
Once you've done that, I recommend that you continue to load the PhotosPickerItems as Data, and then create an image from this Data, for example:
import SwiftUI
import PhotosUI
import CoreTransferable

struct ContentView: View {

    @State var imageData: Data?
    @State var selectedItems: [PhotosPickerItem] = []

    var body: some View {
        VStack {
            if let imageData, let uiImage = UIImage(data: imageData) {
                Image(uiImage: uiImage).resizable()
            }
            Spacer()
            PhotosPicker(selection: $selectedItems,
                         maxSelectionCount: 1,
                         matching: .images) {
                Text("Pick Photo")
            }
            .onChange(of: selectedItems) { selectedItems in
                if let selectedItem = selectedItems.first {
                    selectedItem.loadTransferable(type: Data.self) { result in
                        switch result {
                        case .success(let imageData):
                            if let imageData {
                                self.imageData = imageData
                            } else {
                                print("No supported content type found.")
                            }
                        case .failure(let error):
                            fatalError(error.localizedDescription)
                        }
                    }
                }
            }
        }
    }
}
As noted in https://developer.apple.com/videos/play/wwdc2022/110429/?time=231, the builtInLiDARDepthCamera device is available on iPhone 12 Pro, iPhone 13 Pro, and iPad Pro (5th generation).
The model you have described is iPad Pro (4th generation).
I think what is most likely going on here is that Core Image is clamping the depth values at 1, because it doesn't know that it needs to preserve values greater than 1 in this case.
If you want to produce a displayable image representation of the depthMap, you can normalize the values in the depthMap: find the largest depth value in the map (or choose some distant arbitrary threshold, like 10 meters), and then divide every depth value by that threshold value.
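A minimal Core Image sketch of that normalization, assuming a hypothetical 10-meter threshold:

```swift
import CoreImage

// Scale depth values into [0, 1] by dividing by a chosen maximum depth,
// then clamp anything beyond the threshold to 1 so it stays displayable.
func normalizedDepthImage(from depthMap: CVPixelBuffer, maxDepth: CGFloat = 10) -> CIImage {
    let depthImage = CIImage(cvPixelBuffer: depthMap)
    let scale = 1 / maxDepth
    return depthImage
        .applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": CIVector(x: scale, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: scale, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: scale, w: 0)
        ])
        .applyingFilter("CIColorClamp") // defaults clamp components to [0, 1]
}
```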
Hello,
My recommendation is that you implement a UIViewControllerRepresentable that would wrap a UIViewController where you have set up the tap gesture.
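A minimal sketch of that wrapper; TapViewController and its gesture handling are hypothetical names:

```swift
import SwiftUI
import UIKit

// A UIViewController that owns the tap gesture recognizer.
final class TapViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        print("Tapped at \(recognizer.location(in: view))")
    }
}

// The SwiftUI-facing wrapper.
struct TapView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> TapViewController {
        TapViewController()
    }

    func updateUIViewController(_ uiViewController: TapViewController, context: Context) {}
}
```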
I'm not able to reproduce the behavior you have described. When I move an Entity via an EntityTranslationGestureRecognizer, the movement is correctly reflected in the transform of that Entity.
Maybe you are checking the transform of the wrong entity?
Note that entity.transform.matrix gives you the matrix relative to the entity's parent, so maybe you are moving the parent (and its child moves too), and then checking the matrix of the child (which would always be the same relative to the parent in this case). If that is what is happening, try logging entity.transformMatrix(relativeTo: nil), which gives you the matrix in world space. If that value changes, then it is likely you have been checking the transform of a child of the Entity that you actually moved.
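A small sketch of that check; `movedEntity` is a hypothetical reference to the entity you are dragging:

```swift
import RealityKit

func logTransforms(of movedEntity: Entity) {
    // Relative to the parent; unchanged if only the parent is being moved.
    print("local:", movedEntity.transform.matrix)
    // Relative to world space; changes whenever the entity moves at all.
    print("world:", movedEntity.transformMatrix(relativeTo: nil))
}
```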
Hello,
The iphone-ipad-minimum-performance-a12 key would be the closest match to what you are looking for.
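For reference, a sketch of how that key would appear in an app's Info.plist under UIRequiredDeviceCapabilities:

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>iphone-ipad-minimum-performance-a12</string>
</array>
```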
This appears to be working for me on macOS and iOS at least (I didn't check tvOS, but I suspect it works there too).
I slightly modified your code:
struct VideoView: View {

    @State var mainScene = SKScene()

    var body: some View {
        HStack {
            Spacer()
            VStack {
                Spacer()
                SpriteView(scene: mainScene)
                Spacer()
            }.onAppear {
                let scene = SKScene(size: .init(width: 500, height: 500))
                scene.scaleMode = .aspectFit
                scene.backgroundColor = .blue
                scene.view?.allowsTransparency = true
                let player = AVPlayer(url: Bundle.main.url(forResource: "output", withExtension: "mov")!)
                let video = SKVideoNode(avPlayer: player)
                video.size = CGSize(width: 500, height: 500)
                video.anchorPoint = .init(x: 0, y: 0)
                scene.addChild(video)
                player.play()
                mainScene = scene
            }
            Spacer()
        }.background(Color.red)
    }
}
I end up seeing the video play, and the background color is whatever the scene's background color is (in this case, blue), which is the expected behavior.
if let imageRef = context.createCGImage(image, from: image.extent) {
    let png = context.pngRepresentation(of: image, format: .BGRA8, colorSpace: image.colorSpace!)
    try? png?.write(to: documentsURL.appending(component: "captured-image-corrected.png"))
}
I'm not sure what the purpose of the createCGImage call is here. Can you log the CIImage's colorSpace both before and after the transform, to make sure it hasn't changed, and then post the results to this thread?
Hello,
I recommend leveraging CoreImage for this, for example:
session.captureHighResolutionFrame { frame, error in
    guard let frame else { return }
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    do {
        try ciContext.writePNGRepresentation(of: ciImage, to: outputURL, format: .RGBA8, colorSpace: ciImage.colorSpace!)
    } catch {
        fatalError(error.localizedDescription)
    }
}