[quote='824638022, DTS Engineer, /thread/773326?answerId=824638022#824638022']
Create a no-copy DispatchData with the init(bytesNoCopy:deallocator:) initialiser. If you pass the .custom(…) deallocator, you get called back when it’s safe to free the memory.
IMPORTANT Your current approach, which relies on send completion callbacks, is incorrect and could lead to subtle memory corruption bugs.
[/quote]
Hi @DTS Engineer ,
I'm trying to use the DispatchData initializer with bytesNoCopy and a custom deallocator, as you suggested above. However, when I try the following code, I encounter a build error: "No exact matches in call to initializer". Below is the code I'm using:
let serial_dispatch = DispatchQueue(label: "com.custom.serial.queue")
let bhagavadGitaExcerpt = """
Your right is to perform your duty only, but never to its fruits. Let not the fruits of action be your motive, nor let your attachment be to inaction.
"""
let oneString = String(repeating: bhagavadGitaExcerpt, count: 1024 / bhagavadGitaExcerpt.count)
if let utf8Data = oneString.data(using: .utf8) {
    let rawPointer = UnsafeMutableRawPointer(mutating: utf8Data.withUnsafeBytes { $0.baseAddress! })
    let dispatchData = DispatchData(bytesNoCopy: rawPointer, deallocator: .custom(serial_dispatch, {
        print("Data is safe to de-allocate")
    }))
}
I am using macOS Sonoma (14.5 and 14.2.1), and it seems that the init(bytesNoCopy:deallocator:) initializer is not available. Can someone confirm whether this initializer is part of the public API, or whether there is a different way to use DispatchData with no-copy functionality?
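For reference, my current reading of the Swift overlay is that the no-copy initializer takes an UnsafeRawBufferPointer rather than an UnsafeMutableRawPointer, so I would expect a sketch along these lines to compile (the buffer allocation here is purely illustrative, and I may be misreading the API):
import Dispatch

let serialQueue = DispatchQueue(label: "com.custom.serial.queue")

// Illustrative buffer that I own; it must stay valid until the deallocator runs.
let byteCount = 1024
let buffer = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: MemoryLayout<UInt8>.alignment)

let dispatchData = DispatchData(
    bytesNoCopy: UnsafeRawBufferPointer(start: buffer, count: byteCount),
    deallocator: .custom(serialQueue, {
        // Called once Dispatch no longer needs the memory.
        print("Data is safe to de-allocate")
        buffer.deallocate()
    })
)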
Also,
When I use Data with bytesNoCopy and a custom deallocator, I receive the send completion callbacks, but the deallocator is never called. I have verified (using Wireshark) that the data is actually delivered to the other machine, yet the deallocator never fires.
import Foundation
import Network

var connect = NWConnection(host: "10.20.5.190", port: NWEndpoint.Port(rawValue: 28000)!, using: .udp)
connect.stateUpdateHandler = { state in
    print("Connection did change state: \(state)")
}

let bhagavadGitaExcerpt = """
Your right is to perform your duty only, but never to its fruits. Let not the fruits of action be your motive, nor let your attachment be to inaction.
"""
let oneMBString = String(repeating: bhagavadGitaExcerpt, count: 1024 / bhagavadGitaExcerpt.count)
let utf8Data = oneMBString.cString(using: .utf8)!
let data = Data(bytesNoCopy: UnsafeMutableRawPointer(mutating: utf8Data),
                count: utf8Data.count,
                deallocator: .custom({ ptr, size in
                    print("Data is safe to de-allocate")
                }))

for i in 1...1000 {
    connect.send(content: data, completion: NWConnection.SendCompletion.contentProcessed({ error in
        print("Send \(i) completed with error code \(error?.errorCode)")
    }))
}

let serial_dispatch = DispatchQueue(label: "com.custom.serial.queue")
connect.start(queue: serial_dispatch)
RunLoop.main.run()
Could you help me understand why the deallocator isn't being triggered in this case? Is there something in the Data or NWConnection lifecycle that I might be overlooking?
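One thing I suspect, though I am not sure it is the whole story, is that my top-level data variable keeps the no-copy backing store alive for the entire life of the process, so the deallocator never gets a chance to run. The variant I plan to test next scopes the Data inside a function (buffer is assumed to be memory I allocated myself, for example with malloc):
import Foundation
import Network

// The Data value lives only inside this function, so once the framework is done
// with it there is no remaining reference keeping the no-copy backing store
// alive, and the .custom deallocator can run.
func sendOnce(over connection: NWConnection, buffer: UnsafeMutableRawPointer, count: Int) {
    let payload = Data(bytesNoCopy: buffer, count: count, deallocator: .custom({ pointer, _ in
        print("Data is safe to de-allocate")
        free(pointer)   // matches the malloc assumption above
    }))
    connection.send(content: payload, completion: .contentProcessed({ error in
        print("Send completed, error: \(String(describing: error))")
    }))
    // `payload` goes out of scope here; the deallocator fires once the framework
    // has also released its references.
}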
Hi @DTS Engineer ,
Thanks for the information. It really helps!
Coming back to what you mentioned here
In practice, however, it’s reasonable to assume that, once you receive the completion handler for the last send, the others are on their way.
Just to follow up on something we've discussed previously: if I'm sending 100 requests in a batch, with the first 99 being idempotent and only the last request having a completion callback, is it guaranteed that all the earlier sends have completed, from the send side, once I receive the callback for the final send?
Hi @DTS Engineer ,
[quote='824638022, DTS Engineer, /thread/773326?answerId=824638022#824638022']
The non-copy initialiser for Data is unlikely to benefit you here. If you want to go down this path, use a DispatchData.
You send DispatchData using the normal send method:
[/quote]
I'm not entirely clear on what you meant here. Even with the no-copy initializer for Data, I can still specify a .custom deallocator, so I don't fully see the advantage of DispatchData. For Data, the deallocator hands me back the UnsafeMutableRawPointer and the size I passed in, which should allow me to reuse the same approach while pointing to different memory each time, right?
IMPORTANT Your current approach, which relies on send completion callbacks, is incorrect and could lead to subtle memory corruption bugs.
I have a question regarding the send callback. If the callback is triggered, is it safe to modify or deallocate the data within it? Is there a chance the OS is still referencing the data, even after the callback has been triggered? This is what I inferred from your earlier statement.
Also, I’m still unclear about your answer to my original question. If I use a batch API to perform 100 sends, where 99 are marked as idempotent (with no callbacks) and only the last one has a callback, does receiving the callback guarantee that the first 99 sends are already complete? In other words, can I safely deallocate their data since it's already been written to the OS buffer? If I use a deallocator for each data object, will I end up managing the same number of callbacks as if I had used a callback for every send, instead of just the final one?
Hi @DTS Engineer ,
I don’t know what you mean by that. Network framework works in terms of Swift Data values [1]. When you call send(…), the framework makes a copy of the value [2]. As soon as the send(…) returns you are free to do whatever you want with the Data value you supplied [3].
To clarify, in my case, I’m creating the actual memory to be sent in C++ and then using Data's bytesNoCopy initializer to wrap this C++ memory for the send operation.
Since I'm sending multiple packets in a batch operation (in this case, 100 sends), with the first 99 marked as idempotent and the last one carrying a completion callback, I want to ensure that I deallocate the memory at the right time.
Sample code:
import Foundation
import Network

public class MyClientClass {
    public var uConnection: NWConnection
    // init and other methods

    private func GetDataAndPerformIdempotentSend() -> Void {
        var dataSize: Int = 0
        // CppGetMemory is my C++ allocator bridged into Swift; it returns the
        // buffer and fills in its size.
        let cppMemoryPointer: UnsafeMutableRawPointer = CppGetMemory(&dataSize)
        let data = Data(bytesNoCopy: cppMemoryPointer, count: dataSize, deallocator: .none)
        uConnection.send(content: data, completion: .idempotent)
    }

    private func GetDataAndPerformSend() -> Void {
        var dataSize: Int = 0
        let cppMemoryPointer: UnsafeMutableRawPointer = CppGetMemory(&dataSize)
        let data = Data(bytesNoCopy: cppMemoryPointer, count: dataSize, deallocator: .none)
        uConnection.send(content: data, completion: .contentProcessed(CallbackFunction))
    }

    public func TriggerBatchSends() -> Void {
        uConnection.batch {
            for _ in 1...99 {
                GetDataAndPerformIdempotentSend()
            }
            GetDataAndPerformSend()
        }
    }
}
In the above case, since my actual memory is allocated in C++, when can I free it? Is it safe to deallocate the memory for all 100 packets once I receive the callback for the last send?
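For completeness, the alternative I am weighing looks roughly like this (my own sketch; CppGetMemory and CppFreeMemory stand in for my real C++ bridge functions): give each Data its own .custom deallocator, so that every C++ buffer is freed when its backing store is released, instead of tying all 100 buffers to the last send's callback.
import Foundation
import Network

// Sketch only; CppGetMemory/CppFreeMemory are placeholders for my C++ bridge.
private func sendOneBuffer(over connection: NWConnection,
                           completion: NWConnection.SendCompletion) {
    var dataSize: Int = 0
    let cppMemoryPointer = CppGetMemory(&dataSize)
    let data = Data(bytesNoCopy: cppMemoryPointer,
                    count: dataSize,
                    deallocator: .custom({ pointer, _ in
                        // Runs once neither my code nor the framework still
                        // references this backing store.
                        CppFreeMemory(pointer)
                    }))
    connection.send(content: data, completion: completion)
}

func triggerBatchSends(on connection: NWConnection) {
    connection.batch {
        for _ in 1...99 {
            sendOneBuffer(over: connection, completion: .idempotent)
        }
        sendOneBuffer(over: connection, completion: .contentProcessed({ error in
            print("Last send completed, error: \(String(describing: error))")
        }))
    }
}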
Hi @DTS Engineer ,
[quote='824207022, DTS Engineer, /thread/771207?answerId=824207022#824207022']
So step 2 suggests that your accessory is 10.20.16.45 and the Apple device is 10.20.16.144. Is that right?
[/quote]
Yes, that's correct. The accessory is 10.20.16.45, and the Apple device is 10.20.16.144.
Apologies for the incorrect example in my previous reply.
Here are the Step 2 and Step 5 tuples in the multicast context:
Step 2:
Local IP: 10.20.16.45
Local Port: 5000
Remote IP: 239.0.0.25
Remote Port: 5000
Step 5 (working tuple):
Local IP: 10.20.16.28 (a Linux device joined to the same multicast IP and group)
Local Port: 5000
Remote IP: 239.0.0.25
Remote Port: 5000
Step 5 (failing tuple):
Local IP: 10.20.16.144 (Apple device running Network framework using NWConnectionGroup)
Local Port: 53000 (because responses are sent from ephemeral ports)
Remote IP: 239.0.0.25
Remote Port: 5000
Hi @DTS Engineer ,
Technically you can’t. In practice, however, it’s reasonable to assume that, once you receive the completion handler for the last send, the others are on their way.
In the above scenario, assume I am sending 100 datagrams using "batch", where the first 99 are marked idempotent and the last one has a completion callback attached. When I get the completion callback for that last send, is it safe to assume that all the previous sends have completed, either successfully or with failure, and that I can safely deallocate the buffers for all 100 sends?
Sending data on a TCP connection from multiple threads simultaneously is bananas. Don’t do that. So, assuming that you’re doing all the sends from the same thread [1] then the data will be serialised on the wire in order. That’s true regardless of whether or not you use batching.
Now, in the above case, if I am doing TCP sends from "n" different threads, not simultaneously but ensuring that one thread goes after another without waiting for completion, will the order of the sends still be maintained?
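To make the "one thread after another" part concrete, this is the shape I have in mind (just a sketch of my intent; the queue label and helper name are mine): every producer thread hands its payload to a single serial queue, and only that queue calls send, so the enqueue order defines the send order.
import Foundation
import Network

// Sketch: all producer threads enqueue onto this serial queue; only the queue
// itself issues the sends, so they are started in enqueue order.
let sendQueue = DispatchQueue(label: "com.example.tcp.sends")

func enqueueSend(_ payload: Data, over connection: NWConnection) {
    sendQueue.async {
        connection.send(content: payload, completion: .contentProcessed({ error in
            if let error = error {
                print("send failed: \(error)")
            }
        }))
    }
}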
Also, assume I have a TCP client performing 10 sends, with a few pending receives already issued and waiting for data on the client. If the other end closes the connection after receiving, say, the 3rd packet, what happens to the remaining sends and receives? Will I get an error for all of the pending sends and receives on my client?
Hi @DTS Engineer ,
Yes, when I mentioned the firewall, I was referring to a non-Apple firewall — one that's external to macOS and potentially blocking unsolicited traffic.
As for the address tuples you requested:
Step 2 (Recorded Tuple):
Local IP: 10.20.16.45
Local Port: 5000
Remote IP: 10.20.16.144
Remote Port: 5000
Step 5 (Working Tuple):
Local IP: 10.20.16.45
Local Port: 5000
Remote IP: 10.20.16.144
Remote Port: 5000
Step 5 (Failing Tuple):
Local IP: 10.20.16.45
Local Port: 5000
Remote IP: 10.20.16.144
Remote Port: 53000 (the response is sent from an ephemeral port in the case of Network framework)
Let me know if you need any more details!
Thanks @DTS Engineer !
This really helps to clear things up.
I have one more question:
In the case of TCP, if I trigger three sends simultaneously, either with or without batching, does this guarantee that the order in which I triggered the sends is the same order in which they are delivered? Or should I wait for the completion of the “n-1”th send before triggering the “nth” send?
Hi @DTS Engineer ,
Thanks!
I was using the below code to test this out:
import Foundation
import Network

class Networkconnection {
    var vConnection: NWConnection

    public init() {
        vConnection = NWConnection(to: NWEndpoint.hostPort(host: "10.20.16.144", port: NWEndpoint.Port(rawValue: 45000)!), using: .udp)
        vConnection.stateUpdateHandler = { state in
            print("did change state: \(state)")
        }
    }

    public func sendData() -> Void {
        vConnection.batch {
            vConnection.send(content: "Send1".data(using: .utf8), completion: NWConnection.SendCompletion.idempotent)
            vConnection.send(content: "Send2".data(using: .utf8), completion: NWConnection.SendCompletion.idempotent)
            vConnection.send(content: "Send3".data(using: .utf8), completion: NWConnection.SendCompletion.contentProcessed({ error in
                print(Thread.current)
                print("Send 3 completed with error code: \(error?.errorCode)")
            }))
        }
    }

    public func StartConnection() -> Void {
        vConnection.start(queue: .global())
    }
}

func Main() -> Void {
    let conn_obj = Networkconnection()
    conn_obj.StartConnection()
    conn_obj.sendData()
}

Main()
RunLoop.main.run()
Now, I have a couple of questions:
How can I determine whether "Send 1" and "Send 2" were successfully written to the OS buffer? Since only the callback for "Send 3" is triggered, is there any way to track the success of the previous sends?
When I mark "Send 1" and "Send 2" as idempotent, what does that actually mean? I understand that marking an operation as idempotent usually means it can be retried without causing issues if it fails. But how do I know whether the operation succeeded or failed? Is the behavior the same as for a normal send, or does idempotency work differently in the batch API?
When would I actually want to use idempotent sends within a batch as opposed to using callbacks for each operation? What is the specific use case for marking operations as idempotent within a batch, and how does it impact performance or behavior?
If for some of my application logic I am still interested in receiving callbacks for each individual send operation, would using the batch API provide any benefits compared to just triggering the sends individually without batching them? Would the batch API improve efficiency or callback management in this scenario, or is it better to handle each send separately?
Hi @DTS Engineer ,
Thanks for the advice! I’ve been trying to apply the batch(_:) method for both send and receive operations, but I’m having trouble figuring out how to do it, since both methods have their own completion handlers. Could you please provide a demonstration of how to apply batching in this context?
Also, when you mention attaching a completion handler only to the last send operation in the batch, I’m not clear on how failures from earlier send operations are communicated to me. Since each send and receive has its own callback, how would I be notified if an earlier send fails if I’m only handling the final completion?
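For reference, this is roughly the shape I have been attempting, so you can see where I am stuck (connection is an existing, started NWConnection, and I am not at all sure the completion handlers are placed correctly):
import Foundation
import Network

// Rough sketch of my attempt: group several sends and a receive inside batch(_:),
// keeping each operation's own completion handler.
func batchedSendAndReceive(on connection: NWConnection) {
    connection.batch {
        for i in 1...3 {
            let completion: NWConnection.SendCompletion = (i == 3)
                ? .contentProcessed({ error in
                      print("last send finished, error: \(String(describing: error))")
                  })
                : .idempotent
            connection.send(content: "chunk \(i)".data(using: .utf8), completion: completion)
        }
        connection.receive(minimumIncompleteLength: 1, maximumLength: 65536) { content, _, isComplete, error in
            print("receive finished: \(String(describing: error)), bytes: \(content?.count ?? 0)")
        }
    }
}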
Hi @DTS Engineer ,
Thanks for your insights on the changes in macOS. I have a couple of follow-up questions:
Can you share more details on the changes made in macOS 15 regarding the user-space network stack and Network framework?
I’ve been testing network stack behavior on two different macOS machines—one running Sonoma 14.5 and another running Sequoia 15.3—and observed the following:
On the Sonoma machine, when the firewall is inactive, the network flow goes to the user-space stack, and I see that skywalkctl is enabled. However, when the firewall is active, it falls back to using the kernel-space BSD sockets instead of the user-space stack.
On the Sequoia machine, the firewall status doesn’t seem to affect this, and it always uses the user-space stack.
In terms of managing network connections and the send/receive processes, are there any differences we, as developers, should be aware of when the application uses the kernel stack vs the user-space stack? Specifically, are there any new practices or considerations introduced in macOS 15 that would impact how we handle these aspects?
Hi @DTS Engineer ,
I need to resolve DNS through unicast IP-based resolution rather than multicast DNS (mDNS). Based on your previous post, I understand that mDNSResponder typically handles mDNS queries and uses the kDNSServiceFlagsMoreComing flag to manage responses.
However, I’m wondering if mDNSResponder is involved in any way when performing unicast DNS resolution or if there is a separate mechanism to resolve DNS using the traditional query flow: local resolver (stub resolver) → recursive resolver → TLD servers → authoritative servers.
Is there a way to perform DNS resolution through this traditional unicast route while bypassing mDNSResponder entirely? Or does mDNSResponder still play a role even for unicast queries?
@DTS Engineer
Thank you for your feedback, and I apologize for the oversight. It's something I tend to do out of habit, but I completely understand the importance of keeping topics in their appropriate areas. I’ll make sure to keep that in mind going forward and ensure my posts are in the correct threads from now on.
Thanks again for your patience!
Contrary to what Eskimo stated in this reply:
QUOTE
"Upon initializing NWConnectionGroup, it appears that packets are received on all interfaces without the ability to control this at the interface level. Is this correct?"
Eskimo: Correct.
UNQUOTE
I wrote the code below:
import Network
import Foundation

var multicast_group_descriptor: NWMulticastGroup
var multicast_endpoint: NWEndpoint

multicast_endpoint = NWEndpoint.hostPort(host: NWEndpoint.Host("234.0.0.1"), port: NWEndpoint.Port(rawValue: 49154)!)

do {
    multicast_group_descriptor = try NWMulticastGroup(for: [multicast_endpoint], disableUnicast: false)

    var connection_group: NWConnectionGroup
    var multicast_params: NWParameters
    multicast_params = NWParameters.udp
    multicast_params.allowLocalEndpointReuse = true

    var ip = multicast_params.defaultProtocolStack.internetProtocol! as! NWProtocolIP.Options
    ip.disableMulticastLoopback = true

    connection_group = NWConnectionGroup(with: multicast_group_descriptor, using: multicast_params)

    connection_group.stateUpdateHandler = { state in
        print(state)
        if (state == .ready) {
            connection_group.send(content: "Hello from machine on 15".data(using: .utf8)) { error in
                print("send to mg1 completed with error \(error?.errorCode)")
            }
        }
    }

    connection_group.newConnectionHandler = { connection in
        print("received new connection")
        print(connection.endpoint)
        connection.stateUpdateHandler = { state in
            print("Received connection state: \(state)")
            if (state == .ready) {
                connection.receive(minimumIncompleteLength: 1, maximumLength: 65000) { content, contentContext, isComplete, error in
                    print("received message with error \(error?.errorCode) is: ")
                    print(String(data: content!, encoding: .utf8))
                }
            }
        }
        connection.start(queue: .global())
    }

    connection_group.setReceiveHandler { message, content, isComplete in
        print("Received on Receive Handler: \(String(data: content!, encoding: .utf8))")
        print(message.remoteEndpoint)
        message.reply(content: "hello back from 15 P2".data(using: .utf8))
    }

    connection_group.start(queue: .global())
    RunLoop.main.run()
} catch (let err) {
    print(err)
}
When I send packets from a device connected via Ethernet (Network 2), they are delivered to the Receive Handler. However, packets sent from a device on Wi-Fi (Network 1) are not received by the handler, despite both packets being captured on my machine (confirmed via Wireshark).
Wireshark Trace:
10 130.653672 X.Y.Z.Q 234.0.0.1 UDP 60 64737 → 49154 Len=18
11 134.479501 A.B.C.D 234.0.0.1 UDP 60 56643 → 49154 Len=18
Output from Code:
Received on Receive Handler: Optional("\u{01}\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0X")
Optional(X.Y.Z.Q:60504)
Note:
X.Y.Z.Q is the IP for the machine connected to Ethernet (Network 2).
A.B.C.D is the IP for the machine connected to Wi-Fi (Network 1).
When I run the networksetup -listallnetworkservices command, I get the following output:
An asterisk (*) denotes that a network service is disabled.
AX88179A
Belkin USB-C LAN // my ethernet adaptor
Wi-Fi // my wifi adaptor
Thunderbolt Bridge
GlobalProtect
This confirms that both the Wi-Fi and Ethernet services are active.
How can I ensure that I receive all the multicast packets from all available interfaces (Ethernet and Wi-Fi) on the Receive Handler?
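One workaround I have been considering, though I do not know whether it is supported or sensible, is to join the group once per physical interface, pinning each NWConnectionGroup with requiredInterface, along these lines:
import Foundation
import Network

// Sketch only: join the same multicast group once per physical interface,
// pinning each NWConnectionGroup with requiredInterface. De-duplication across
// repeated path updates is ignored for brevity.
let endpoint = NWEndpoint.hostPort(host: "234.0.0.1", port: 49154)
let descriptor = try! NWMulticastGroup(for: [endpoint], disableUnicast: false)

var groups: [NWConnectionGroup] = []   // keep strong references alive

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    for interface in path.availableInterfaces
        where interface.type == .wifi || interface.type == .wiredEthernet {
        let params = NWParameters.udp
        params.allowLocalEndpointReuse = true
        params.requiredInterface = interface

        let group = NWConnectionGroup(with: descriptor, using: params)
        group.setReceiveHandler { message, content, isComplete in
            print("[\(interface.name)] received: \(String(describing: content))")
        }
        group.start(queue: .global())
        groups.append(group)
    }
}
monitor.start(queue: DispatchQueue(label: "path.monitor"))
RunLoop.main.run()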
Regards,
Harshal
Continuing this discussion here.
https://developer.apple.com/forums/thread/772363#772363021