According to a very helpful DTS Engineer I was recently in contact with, the order is guaranteed across all observations.
Thank you, Quinn!
I am going to use the location background mode because I actually have a use case for it. Capturing the UDP packets is essential, and being able to correlate them with the user's location is even better.
I already ran a test by combining the code base with the UDP listener and the code responsible for tracking the user's location. I started the app yesterday morning and put it in the background. Then I sent UDP packets throughout the day, the last time this morning. My app successfully captured all packets but one. All in all, I received around 60k location updates as well.
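For reference, the UDP side of that test looks roughly like this. A minimal sketch using the Network framework; the port (5005), the class name, and the queue choices are placeholders, not from my actual code:

```swift
import Network

// Hypothetical minimal UDP capture, assuming the telemetry device
// sends datagrams to a known port on the WiFi network.
final class UDPCapture {
    private var listener: NWListener?

    func start() throws {
        let listener = try NWListener(using: .udp, on: 5005)
        listener.newConnectionHandler = { connection in
            connection.start(queue: .global())
            self.receiveLoop(on: connection)
        }
        listener.start(queue: .global())
        self.listener = listener
    }

    private func receiveLoop(on connection: NWConnection) {
        connection.receiveMessage { data, _, _, error in
            if let data = data {
                // Timestamp each packet so it can later be matched
                // against the location updates.
                print("Got \(data.count) bytes at \(Date())")
            }
            // Keep receiving as long as no error occurred.
            if error == nil { self.receiveLoop(on: connection) }
        }
    }
}
```

The timestamping is what makes the correlation with location updates possible later; `CLLocation` already carries its own `timestamp` for the other side of the match.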
Now I am just unsure about which activityType I need to set and how far this property changes the behaviour.
What I have found through my research is that activityType determines when the system stops sending location updates once the position no longer changes.
I am struggling to understand what this means in the case of otherNavigation, other, automotiveNavigation and airborne.
My assumption is: the faster you move in your activity type, the sooner it will pause updating locations.
Is my assumption correct?
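For context, this is the kind of setup I am asking about. A sketch only, assuming a vehicle-mounted device; the class name and accuracy choice are illustrative:

```swift
import CoreLocation

// Hypothetical tracker for a device that rides along in a vehicle.
final class TelemetryLocationTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        // Tells the system the device moves with a vehicle, which feeds
        // into its decision about when the device counts as stationary.
        manager.activityType = .automotiveNavigation
        // Disabling automatic pausing avoids updates stopping while
        // parked; the extra power cost matters less on external power.
        manager.pausesLocationUpdatesAutomatically = false
        // Requires the location background mode capability.
        manager.allowsBackgroundLocationUpdates = true
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Correlate each fix with the most recent UDP packets here.
    }
}
```

My understanding is that setting `pausesLocationUpdatesAutomatically = false` sidesteps the pausing question entirely, at the price of battery, which may be acceptable for powered devices.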
What would you recommend for those devices connected to power all the time?
Use navigation to prevent the app from being suspended?
Some vehicles do have a power supply, but some do not have one.
How about those that have one?
Thank you, Quinn!
The app shall record telemetry data from a device in a vehicle. The device continuously sends out the UDP packets over WiFi.
It is a specific user in a managed environment. Nothing the general public would use or come in contact with.
The user enters the vehicle and, ideally, the app already starts recording, or the user presses a "Start Recording" button after entering the vehicle. Either way, the user should not have to keep the app active all the time, since there might be other apps the user needs in that situation. So there is always the danger that the app might get suspended.
Navigation could be an option, and if it's the only viable one, there might be a use case that could involve navigation.
Reading the documents you provided made me think about background tasks again.
Might it be possible to start a background task, e.g. every X seconds, that captures a few UDP packets and then finishes until the next X seconds have passed?
Naively thinking, I somehow would have to put

```swift
connection.receiveMessage { (data, context, isComplete, error) in
    // Decode and continue processing data
}
```

into a background process.
Is it doable in some way?
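What I imagine it might look like with BGTaskScheduler, as a sketch only. The task identifier is a placeholder (it would also have to appear under BGTaskSchedulerPermittedIdentifiers in Info.plist), and the big caveat is that the system, not the app, decides when app refresh tasks actually run, so "every X seconds" is at best a hint:

```swift
import BackgroundTasks
import Network

// Hypothetical identifier; must match an entry in Info.plist.
let refreshID = "com.example.udpcapture.refresh"

func registerUDPRefreshTask(connection: NWConnection) {
    // Register before the app finishes launching.
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshID, using: nil) { task in
        guard let task = task as? BGAppRefreshTask else { return }
        scheduleUDPRefresh() // re-arm for the next window

        task.expirationHandler = {
            task.setTaskCompleted(success: false)
        }

        // Grab one datagram, then finish the task.
        connection.receiveMessage { (data, _, _, error) in
            // Decode and process `data` here.
            task.setTaskCompleted(success: error == nil)
        }
    }
}

func scheduleUDPRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshID)
    // Only an earliest date, not a guaranteed interval.
    request.earliestBeginDate = Date(timeIntervalSinceNow: 60)
    try? BGTaskScheduler.shared.submit(request)
}
```

Given the scheduling is opportunistic and the run windows are short, I suspect this would miss most packets, which is why I am asking whether there is a better-suited mechanism.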
Finally, it works!
I had to re-introduce the AppDelegate with @UIApplicationDelegateAdaptor, as explained with some code examples in my answer on Stack Overflow - https://stackoverflow.com/a/67429447/1065468 - and also below.
It works, which is nice. Still trying to figure out why it works though.
If anyone has any deeper understanding of this, feel free to post here. I really would like to understand.
So, before, I had this:
```swift
@main
struct MyApp: App {
    var someClass = SomeClass()
    @Environment(\.scenePhase) private var scenePhase

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { (newScenePhase) in
            switch newScenePhase {
            case .active:
                print("active")
                someClass.doSomethingThatWouldNotWorkBefore()
            case .background:
                print("background")
            case .inactive:
                print("inactive")
            @unknown default:
                print("default")
            }
        }
    }
}
```
And after the bugfix, the working solution:
```swift
import SwiftUI

class AppDelegate: UIResponder, UIApplicationDelegate {
    var someClass = SomeClass()

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        someClass.doSomethingThatWouldNotWorkBefore()
        return true
    }
}

@main
struct MyApp: App {
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate
    @Environment(\.scenePhase) private var scenePhase

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { (newScenePhase) in
            switch newScenePhase {
            case .active:
                print("active")
            case .background:
                print("background")
            case .inactive:
                print("inactive")
            @unknown default:
                print("default")
            }
        }
    }
}
```
I have made an observation: After restarting my phone and then starting the app directly by tapping the app icon everything works as it should: the app starts, gets connected to the drone, the connection icon turns green and the camera feed is being updated as well. I was able to repeat this several times.
I have looked there before, but could not find anything that would indicate any difference in how the app runs depending on where it has been started from.
The question that comes up when I look at the "Run" configuration: when started from Xcode, does it use the "Debug" config, and when started directly from the phone, does it then use the "Release" config?
How can I check which configuration is being used
- when the app is started from Xcode
- when the app is started directly from the phone?
Is there a difference? The binary (IPA) itself should be the same, but yes, maybe some configuration and/or assets might be different.
But where to check?
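One way I could imagine checking this at runtime, assuming the project keeps Xcode's default build settings, where the DEBUG compilation condition is set only for the Debug configuration:

```swift
// Reports which configuration the installed binary was compiled with.
// This relies on DEBUG being defined in Active Compilation Conditions
// for Debug builds, which is the Xcode template default.
func currentBuildConfiguration() -> String {
    #if DEBUG
    return "Debug"
    #else
    return "Release"
    #endif
}
```

As far as I understand it, launching from the Home screen re-runs whatever binary was last installed, so a Debug build stays a Debug build; the configuration only changes when a different build is installed.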
Yes, I did contact DJI:
- sending dev support an email
- posting in their dev support forum
- opening an issue in the corresponding GitHub repo

Let's hope that at least one will lead to success.
I have also opened a DTS incident with Apple.
One more observation:
- I remove the app completely from the phone,
- then start it again from Xcode,
- and acknowledge the dialogs asking permission for Bluetooth and Location usage, which I grant.

Then the app again does not react and stays in its initial state. Only when I restart it again via Xcode does the connection icon turn green and the video feed get shown.