Replies

Reply to iOS BLE questions
In our tests, with iOS devices acting as peripherals:

iPhone 16 (peripheral) to NB (central):
- 15 ms connection interval: throughput is about 500 kbps
- 60 ms connection interval: throughput is about 200 kbps

iPhone 16 (peripheral) to iPhone 13 Pro (central):
- throughput is roughly 200 kbps

In these cases, are we still limited to only 4 packets per connection event, and is the throughput related to that limit?
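For what it's worth, a back-of-the-envelope check lands close to what we measured at 15 ms. This sketch assumes the 4-packets-per-event limit and a ~244-byte notification payload with DLE; both numbers are our assumptions, not confirmed iOS behavior:

    // Rough BLE notification throughput: packets/event * payload bits * events/second.
    // The 4-packet limit and 244-byte payload are assumptions, not confirmed values.
    func estimatedThroughputKbps(packetsPerEvent: Int,
                                 payloadBytes: Int,
                                 connectionIntervalMs: Double) -> Double {
        let bitsPerEvent = Double(packetsPerEvent * payloadBytes * 8)
        return bitsPerEvent * (1000.0 / connectionIntervalMs) / 1000.0
    }

    print(estimatedThroughputKbps(packetsPerEvent: 4, payloadBytes: 244, connectionIntervalMs: 15)) // ~520 kbps
    print(estimatedThroughputKbps(packetsPerEvent: 4, payloadBytes: 244, connectionIntervalMs: 60)) // ~130 kbps

The 15 ms estimate is close to the ~500 kbps we saw; the 60 ms case doesn't match as cleanly, which is part of why we're asking.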
2w
Reply to iOS Peripheral Throughput is low due to updateValue return false
I'm moving my comment out here: Sorry, I forgot to mention that the peripheralManagerIsReady callback latency can range from 0.01 ms to 250 ms. Is it normal for it to be that unstable? If iOS limits the packet count to 4 per connection event, and the connection interval is 30 ms, does that mean I can only send 4 packets per 30 ms? I'm trying to track down the low-throughput issue. Also, does iOS support DLE (Data Length Extension)? If so, how do I enable it?
3w
Reply to iOS Peripheral Throughput is low due to updateValue return false
Additionally, we've observed that even when the data is only 1 byte, the first call to updateValue returns true but every subsequent call returns false. We must wait for the peripheralManagerIsReady callback before calling updateValue again; otherwise the next call still returns false. Even if we wait 5 ms (using usleep(5000)) after the first updateValue call before retrying, it still fails. Could you please confirm whether this behavior is expected?
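For reference, our send path looks roughly like this (a simplified sketch; ChunkSender, pendingChunks, and the stored characteristic are stand-ins for our actual code):

    import CoreBluetooth

    final class ChunkSender: NSObject, CBPeripheralManagerDelegate {
        var peripheralManager: CBPeripheralManager!
        var characteristic: CBMutableCharacteristic!
        var pendingChunks: [Data] = []

        func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
            // Required delegate method; sending is only valid in .poweredOn.
        }

        // Push queued chunks until updateValue reports the transmit queue is full.
        func drainQueue() {
            while let chunk = pendingChunks.first {
                // false means the internal queue is full; stop and wait for the
                // isReady callback instead of sleeping and retrying.
                guard peripheralManager.updateValue(chunk,
                                                    for: characteristic,
                                                    onSubscribedCentrals: nil) else { return }
                pendingChunks.removeFirst()
            }
        }

        // Fires when the transmit queue has room again.
        func peripheralManagerIsReady(toUpdateSubscribers peripheral: CBPeripheralManager) {
            drainQueue()
        }
    }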
3w
Reply to iOS Peripheral Throughput is low due to updateValue return false
Hi, Thanks for your response. I understand that connection parameters are negotiated automatically and we can't directly control them from the app. We're testing with BLE 5.2 using 2M PHY, but our throughput (~206 kbps) is far below the expected values (around 1300 kbps). We haven't identified any implementation errors on our side. Could this low throughput be due to iOS’s internal flow control when acting as a Peripheral? Or is there something else we're missing? Would implementing L2CAP channels be the recommended solution to boost throughput? Any guidance is greatly appreciated. Thanks!
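If L2CAP is the recommended route, is the peripheral-side setup below roughly the right shape? This is only a sketch; stream scheduling and error handling are omitted:

    import CoreBluetooth

    final class L2CAPPeripheral: NSObject, CBPeripheralManagerDelegate {
        lazy var manager = CBPeripheralManager(delegate: self, queue: nil)

        func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
            guard peripheral.state == .poweredOn else { return }
            // Ask the system to allocate a PSM for an L2CAP channel.
            peripheral.publishL2CAPChannel(withEncryption: true)
        }

        func peripheralManager(_ peripheral: CBPeripheralManager,
                               didPublishL2CAPChannel PSM: CBL2CAPPSM,
                               error: Error?) {
            // The central needs this PSM (usually shared via a GATT
            // characteristic) in order to open the channel.
            print("published PSM \(PSM), error: \(String(describing: error))")
        }

        func peripheralManager(_ peripheral: CBPeripheralManager,
                               didOpen channel: CBL2CAPChannel?,
                               error: Error?) {
            // Writing to channel.outputStream bypasses the GATT notification
            // queue; the stream still has to be scheduled on a run loop and
            // observed via its delegate (omitted here).
            channel?.outputStream.open()
        }
    }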
3w
Reply to Apple Watch CMMotionManager acceleration direction
Sorry, I uploaded the wrong file; it contained acceleration data affected by gravity and other factors, including noise, which should be why the acceleration held a residual value. Here is the raw data logged by CMMotionManager.startDeviceMotionUpdates using userAcceleration: move_left_to_right_motion.txt. You can see that when I move the watch from left to right, the initial value of acceleration.x is negative; then, once the watch stops moving, acceleration.x becomes positive.
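For reference, the logging is essentially this (simplified; the update interval here is just an arbitrary example value):

    import CoreMotion

    let motionManager = CMMotionManager()

    func startLogging() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 50.0 // example rate
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let motion else { return }
            // userAcceleration is gravity-compensated: x goes one way while the
            // watch speeds up and the other way while it slows down, with the
            // sign depending on orientation.
            let a = motion.userAcceleration
            print("x: \(a.x), y: \(a.y), z: \(a.z)")
        }
    }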
Jan ’25
Reply to MultiThreaded rendering with actor
Hi, thanks for your explanation! Ideally, I would like to draw as many captured frames as possible. However, I understand that if processing speed isn’t sufficient, I may need to drop some frames to keep up with real-time rendering. That said, my goal is definitely not to draw only the latest frame, as I want to preserve as much of the original capture data as possible. Let me know if this aligns with what you’re asking!
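Concretely, something like this buffering actor is what I have in mind (a sketch only; IOSurface stands in for whatever frame type we end up using, and the capacity is arbitrary):

    import IOSurface

    actor FrameQueue {
        private var frames: [IOSurface] = []
        private let capacity: Int

        init(capacity: Int = 8) { // arbitrary example capacity
            self.capacity = capacity
        }

        // Keep every frame while there's room; drop the oldest only when the
        // renderer genuinely can't keep up, rather than always keeping just
        // the latest frame.
        func enqueue(_ frame: IOSurface) {
            if frames.count >= capacity {
                frames.removeFirst()
            }
            frames.append(frame)
        }

        func dequeue() -> IOSurface? {
            frames.isEmpty ? nil : frames.removeFirst()
        }
    }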
Nov ’24
Reply to Core ML Async API Seems to Not Work Properly
I updated to a version that runs without crashing, but the prediction speed is almost the same as with the sync API. createFrameAsync is called from the ScreenCaptureKit stream.

    private func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
        if let surface = getIOSurface(for: sampleBuffer) {
            Task {
                do {
                    try await runModelAsync(surface)
                } catch {
                    os_log("error: %@", error.localizedDescription)
                }
            }
        }
    }

    func runModelAsync(_ surface: IOSurface) async throws {
        try Task.checkCancellation()
        guard let model = mlmodel else { return }
        do {
            // Resize input
            var px: Unmanaged<CVPixelBuffer>?
            let status = CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &px)
            guard status == kCVReturnSuccess, let px2 = px?.takeRetainedValue() else { return }
            guard let data = resizeIOSurfaceIntoPixelBuffer(
                of: px2,
                from: CGRect(x: 0, y: 0, width: InputWidth, height: InputHeight)
            ) else { return }

            // Model prediction
            var results: [Float] = []
            let inferenceStartTime = Date()
            let input = model_smallInput(input: data)
            let prediction = try await model.model.prediction(from: input)

            // Copy the result into the output format
            if let output = prediction.featureValue(for: "output")?.multiArrayValue {
                if let bufferPointer = try? UnsafeBufferPointer<Float>(output) {
                    results = Array(bufferPointer)
                }
            }

            // Set render data for Metal rendering
            await ScreenRecorder.shared
                .setRenderDataNormalized(surface: surface, depthData: results)
        } catch {
            print("Error performing inference: \(error)")
        }
    }

Since the async prediction API cannot speed up the prediction, is there anything else I can do? The prediction time is almost the same on a MacBook M2 Pro and a MacBook M1 Air!
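One thing I'm wondering about (just an idea, not a confirmed fix): the async prediction API should mainly pay off when several predictions are actually in flight at once. If each frame's Task finishes before the next frame arrives, the pipeline is effectively serial and the timing would match the sync API. A minimal sketch of issuing predictions concurrently with a task group, where model is an MLModel and inputs is a hypothetical batch of MLFeatureProvider values:

    import CoreML

    func predictConcurrently(model: MLModel,
                             inputs: [MLFeatureProvider]) async throws -> [MLFeatureProvider] {
        try await withThrowingTaskGroup(of: (Int, MLFeatureProvider).self) { group in
            for (index, input) in inputs.enumerated() {
                group.addTask {
                    // Several predictions run concurrently here; this is the
                    // case the async API is designed to speed up.
                    (index, try await model.prediction(from: input))
                }
            }
            var outputs = [MLFeatureProvider?](repeating: nil, count: inputs.count)
            for try await (index, output) in group {
                outputs[index] = output
            }
            return outputs.compactMap { $0 }
        }
    }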
Oct ’24