First up, just how many connections are you expecting to be wrangling here? I regularly see folks get distracted by this stuff when they’re building an app that is never going to benefit from it. Unless you’re processing tens, perhaps even hundreds, of I/O operations per second, this level of parallelism just doesn’t matter.
And if you are targeting that sort of workload, my advice is that you design something simple and then profile it. Because, as with most performance problems, it’s hard to predict the ultimate behaviour you’ll see on real systems.
Anyway, back to your questions:
why would that happen?
Consider what happens internally to the NWConnection. Let’s say you have a connection with an outstanding receive. On the wire, data comes in and then the connection closes. NWConnection tells you about that by queuing two blocks on the queue that you supplied. If you use a serial queue then those blocks are serialised, that is, your receive completes and then your state update handler is called. If you use a concurrent queue then those blocks can run in parallel, and now you need your own internal locking to manage your connection state. Worse yet, these blocks can arrive out of order.
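To make that concrete, here’s a minimal sketch (the host, port, and queue label are arbitrary) showing both handlers targeting one serial queue, so they can never overlap and no extra locking is needed:

```swift
import Foundation
import Network

// Serial by default, so the two handlers below are serialised.
let queue = DispatchQueue(label: "com.example.connection")
let connection = NWConnection(host: "example.com", port: 80, using: .tcp)

connection.stateUpdateHandler = { state in
    // Runs on `queue`, never concurrently with the receive completion.
    print("state is now \(state)")
}

connection.receive(minimumIncompleteLength: 1, maximumLength: 65536) { data, _, isComplete, error in
    // Also runs on `queue`; plain, unsynchronised connection state is
    // safe to touch here because the queue is serial.
    if let data = data { print("received \(data.count) bytes") }
    if isComplete { print("connection closed by peer") }
}

connection.start(queue: queue)
```

With a concurrent queue instead, both closures would need to take a lock before touching any shared connection state.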
Dispatch queues guarantee FIFO order and for serial queues that’s an important property. But for concurrent queues the FIFO guarantee is not helpful. Yes, Dispatch removes the blocks from the queue and passes them to the scheduler in FIFO order, but then the scheduler can run the blocks as it sees fit.
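A toy illustration of that last point, using plain Dispatch (the label and block count are arbitrary): blocks are dequeued in FIFO order, but on a concurrent queue the scheduler can then run, and finish, them in any order.

```swift
import Dispatch
import Foundation

let concurrent = DispatchQueue(label: "com.example.work", attributes: .concurrent)
let group = DispatchGroup()

for i in 1...4 {
    concurrent.async(group: group) {
        // These four blocks are handed to the scheduler in order 1…4,
        // but the prints can interleave arbitrarily.
        print("block \(i) running")
    }
}
group.wait()
```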
Why would one like to serialize processing of incoming connections on a NWListener using a serial queue … ?
Because you’re not processing the whole connection, you’re just processing the acceptance of that connection. When the listener receives a connection it passes that connection to your handler, which starts it and chooses what queue it uses. The act of starting a connection is fast, to the point where there’s no point trying to do it in parallel.
Now, I’d argue that it probably makes sense to run the entire network subsystem — that is, the listener and all the connections — on a single serial queue, because it simplifies your code and you’re unlikely to benefit from significant parallelism. If you run into a situation where parallelism is important — say your server needs to run some CPU bound image processing task — then explicitly parallelise that.
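As a sketch of that single-serial-queue approach (the port and queue label here are arbitrary), the listener and every connection it accepts all run on the one queue:

```swift
import Foundation
import Network

// One serial queue for the whole network subsystem.
let networkQueue = DispatchQueue(label: "com.example.network")

let listener = try NWListener(using: .tcp, on: 12345)
listener.newConnectionHandler = { connection in
    // Runs on `networkQueue`. Accepting is cheap: just set up the
    // connection and start it on the same serial queue.
    connection.stateUpdateHandler = { state in
        // Also on `networkQueue`, so shared state needs no locking.
        print("connection state: \(state)")
    }
    connection.start(queue: networkQueue)
}
listener.start(queue: networkQueue)
```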
Which brings us to a general guideline: In the Dispatch model, it’s important to separate CPU bound work from I/O bound work. Networking is very likely to be I/O bound [1], and thus it’s fine to serialise it; doing so will radically simplify your life.
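One way to apply that guideline, sketched below; `processImage` and `send` are hypothetical stand-ins for your CPU bound work and your reply path:

```swift
import Dispatch
import Foundation

// Serial queue for I/O bound networking work.
let networkQueue = DispatchQueue(label: "com.example.network")
// Concurrent global queue for CPU bound work.
let cpuQueue = DispatchQueue.global(qos: .userInitiated)

// Hypothetical placeholders for expensive work and for sending a reply.
func processImage(_ data: Data) -> Data { return data }
func send(_ data: Data) { print("sending \(data.count) bytes") }

func handleReceivedImageData(_ data: Data) {
    // Called on `networkQueue`. Bounce the expensive part off to the
    // concurrent queue so it doesn’t stall other connections…
    cpuQueue.async {
        let result = processImage(data)
        // …then hop back to the serial queue to touch connection state.
        networkQueue.async {
            send(result)
        }
    }
}
```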
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] I’m talking about networking at the scale typically done on Apple devices. If you’re building a server that’s processing thousands of I/O operations a second, networking starts to hit the CPU hard. However, very few folks use Apple hardware for such tasks.