Hello,
I am attempting to simultaneously stream video to a remote client and run inference on a neural network locally, using the same video frames. I have done this on other platforms, using GStreamer on Linux and libstreaming on Android for compression and packetization. I've now attempted it on iPhone using FFmpeg to stream and a capture session to feed the neural network, but I run into the problem that both parts need access to the camera at the same time.
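For context, the kind of capture setup I'm aiming for looks roughly like this (a sketch, not my actual code; names like `onFrameForStreaming` are just placeholders): a single AVCaptureSession whose sample-buffer delegate hands each frame to both the streamer and the network, so only one thing ever owns the camera.

```swift
import AVFoundation
import Foundation

// Sketch: one AVCaptureSession whose video output fans every frame out to two
// consumers (the streamer and the neural network), so nothing else has to open
// the camera.
final class FrameDistributor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    enum CameraError: Error { case noDevice }

    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    // Hypothetical hooks for the two consumers.
    var onFrameForStreaming: ((CVPixelBuffer) -> Void)?
    var onFrameForInference: ((CVPixelBuffer) -> Void)?

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else {
            throw CameraError.noDevice
        }
        let input = try AVCaptureDeviceInput(device: device)

        session.beginConfiguration()
        defer { session.commitConfiguration() }

        if session.canAddInput(input) { session.addInput(input) }

        // BGRA keeps the raw bytes easy to hand to both consumers.
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // The same frame goes to both consumers; no second camera session needed.
        onFrameForStreaming?(pixelBuffer)
        onFrameForInference?(pixelBuffer)
    }
}
```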
Most of the posts I see are concerned with receiving RTP streams on iOS, but I need to do the opposite. As I am new to iOS and Swift, I was hoping someone could suggest a method for RTP packetization. Any library recommendations or example code for something similar?
Best,
Replying to my own post since it doesn't seem to be getting much traction. I was able to successfully stream video to my client via FFmpeg, but it's a bit of a duct-tape solution. In the FFmpeg pipeline, I write my pixel buffers via FileHandle (forUpdatingAtPath) to a temporary file, then use the -stream_loop -1 option so FFmpeg constantly re-reads that file. FFmpeg pretty much takes care of the rest (encoding, packetizing, etc.). One other thing to note: I write just the raw bytes to the file handle and read them back in the 'rawvideo' format. If anyone can think of a better way to do real-time video streaming, I am all ears. Right now my latency is on the order of ~170 ms.
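In case it helps anyone attempting the same thing, the frame-writing side looks roughly like this (a sketch with illustrative names such as `rawFrameURL`, not my exact code):

```swift
import AVFoundation
import Foundation

// Sketch of the frame-writing side: dump the raw bytes of each CVPixelBuffer
// into a temporary file that FFmpeg re-reads with -stream_loop -1.
final class RawFrameWriter {
    private let handle: FileHandle

    init?(rawFrameURL: URL) {
        // Create the temporary file once, then keep an updating handle open so
        // each new frame overwrites the previous one in place.
        FileManager.default.createFile(atPath: rawFrameURL.path, contents: nil)
        guard let h = FileHandle(forUpdatingAtPath: rawFrameURL.path) else { return nil }
        handle = h
    }

    func write(_ pixelBuffer: CVPixelBuffer) {
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }

        // Note: CVPixelBufferGetDataSize includes any row padding, so the frame
        // size you tell FFmpeg for its rawvideo input has to match what actually
        // lands in the file.
        let data = Data(bytes: base, count: CVPixelBufferGetDataSize(pixelBuffer))

        // Rewind and overwrite so the looping reader always sees the most
        // recent frame.
        handle.seek(toFileOffset: 0)
        handle.write(data)
    }
}
```

On the FFmpeg side, the same file is read back as rawvideo input with -stream_loop -1, and FFmpeg handles the encoding and RTP packetization from there.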