Developer ID signing identities are precious, which means that you need to take good care of them (see this post). Any significant organisation has all their developers using Development signing and then has a publication step where a restricted set of folks, those authorised to ship code on the organisation’s behalf, re-signs the product for distribution.
I think it's worth emphasizing this point, and thank you for it: it does make sense. Automatic code signing worked flawlessly for an application even though I don't code in Swift and had no idea what I was doing: it produced a built product that was safe to distribute through the App Store. It didn't matter whether I was a significant organisation: what I was doing was parseable by the security checks regardless.
None of this depends on my ability as a developer, or on a developer's superior, to personally audit the code and confirm it's not problematic. The system rightly rejects it if it's not up to snuff. I'd be hesitant to endorse anything that tries to 'work around' this system, except that I'm truly not working around it. For a one-person operation that will never be anything other than a one-person operation, the distinction in workflow comes down to: write a Terminal shell script that iterates over hundreds of files running codesign -s "Developer ID Application" -f --timestamp PPP.bundle (substituting the project name for PPP), something I could also do one project at a time by simply typing it. And that would result in code signing, if I'm not mistaken, that does NOT identify me and my signing identity?
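That batch loop can be sketched as a small shell script. This is only a sketch, not a recommendation: the $HOME/build/plugins directory and the .bundle naming are hypothetical stand-ins for your own build output, and it assumes a "Developer ID Application" identity is already in your keychain.

```shell
#!/bin/sh
# Sketch of the batch-signing loop described above. Directory and bundle
# names are hypothetical; adjust them for your own project layout.
SIGN_ID="Developer ID Application"   # short name; codesign resolves it when unambiguous
PLUGIN_DIR="$HOME/build/plugins"     # hypothetical build-output directory

for bundle in "$PLUGIN_DIR"/*.bundle; do
    [ -e "$bundle" ] || continue     # glob matched nothing; skip cleanly
    # -f replaces any existing signature; --timestamp requests a secure timestamp
    codesign -s "$SIGN_ID" -f --timestamp "$bundle"
done
echo "done"
```

The same invocation typed one project at a time is equivalent; the loop just saves the typing when there are hundreds of bundles.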
From:

Note In the above line TTT identifies your team. For example, for my individual team this expands to Developer ID Application: Quinn Quinn (SKMME9E2Y8). If you only work in one team, using just Developer ID Application is fine.
it sounds a bit like you can execute a more generic code signing? Or rather: if you don't specify the team explicitly, the tool will still tie the signature to your Developer ID signing identity, 'expanding' the short name to the only matching identity it has on tap. That seems like correct behavior.
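One way to watch that expansion happen is to list the code-signing identities in your keychain. The security tool exists on macOS only, so this sketch guards for that; when exactly one "Developer ID Application: ..." entry is listed, the short name is unambiguous and codesign can resolve it on its own.

```shell
#!/bin/sh
# List code-signing identities (macOS only). Each line of output shows a
# full identity name that a short name like "Developer ID Application"
# can expand to. Guarded so the snippet is harmless on other systems.
if command -v security >/dev/null 2>&1; then
    security find-identity -v -p codesigning
else
    echo "security tool not found; this check is macOS-only"
fi
```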
Forgive me for belaboring this: my experience trying to work through this problem as an open source developer, whose code is used by other projects and treated as an 'on ramp' by totally naive developers who truly don't know what they're doing, puts me into a position where I MUST correctly understand the details of how all this works, and why. What I've seen out there is that this subject is treated as a black art and largely something to 'work around', which I don't feel serves anybody's purposes well. Now that I've found a genuine authority on the subject, and apparently solved my own problem several different possible ways, there's a temptation on my part to be intimidated by language like 'any significant organization' and to back off, treating the overall system as hostile to me and mine: I have captured the secret invocations to bypass Gatekeeper, and must flee lest I anger Apple and have all my work revoked! :D
When in fact I am doing no such thing. Either way I choose to do it (even configuring Xcode for 'basic V2 audio unit manual code signing' for that case only), I am working WITH the larger Gatekeeper system the way it is intended, and producing a workflow where I send plugin code, in shipping form, to be audited by Apple so that it can run on user machines without USER workarounds.
I've got a user who has sold records in the double-digit millions, who is not himself a programmer or anything like that, and when he was repeatedly trying to get his 'secret weapon plugin' working on his new M1 laptop running Big Sur, one of the things he knew to try was opening Terminal and manually stripping the quarantine bits from the downloads. Neither of us thought anything of it beyond 'oh good, you're a power user'. But that's not a power user; that is somebody who is normalizing the execution of code that hasn't been checked for malware. It seems like the less of that behavior out in the wild, the better.
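For recognition's sake, 'stripping quarantine bits' usually means a one-liner like the following. The download path is a hypothetical stand-in, and the snippet is shown to illustrate exactly the behavior being questioned, not to endorse it.

```shell
#!/bin/sh
# What "manually stripping quarantine bits" typically looks like: removing
# the com.apple.quarantine extended attribute so Gatekeeper skips its
# first-launch check of the download. The path is hypothetical; guarded
# so this is inert when the tool or file is absent.
DOWNLOAD="$HOME/Downloads/SecretWeapon.bundle"   # hypothetical download
if command -v xattr >/dev/null 2>&1 && [ -e "$DOWNLOAD" ]; then
    xattr -dr com.apple.quarantine "$DOWNLOAD"   # -r recurses into the bundle
else
    echo "skipping: xattr unavailable or no such download"
fi
```

Properly signed and notarized downloads make this step unnecessary, which is the whole point.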
It sounds like it would be best for me to correctly understand how the overall Gatekeeper/codesign system works, so I can communicate the right way to view it: not as an opaque extra layer of difficulty meant to turn away small indie devs, about which little is told or known, but as a way to establish a trusted third party (that would be Apple) who can guarantee with reasonable certainty that the stuff you run, which doesn't throw up objections or barriers, also doesn't contain nefarious code that will hurt you or others.
In sound engineering, we've got a discipline known as mastering. It's sound-processing the two-track audio after it's already been mixed, to match it to user environments better. But traditionally it serves another purpose: to let a third party, another set of ears, into the picture and get their 'take' on how the mix sounds. This is crucial: listening to your own mix over and over can lead you to a place where you come to accept things that are actually quite weird, and the mastering engineer isn't fooled by your expectations: if you have gone awry, they'll catch it.
Am I right in thinking the path toward doing the code signing is less important than doing, and normalizing, code signing so Apple's security systems can serve as that third party 'mastering engineer' to check the work of coders, whether or not they're working in large organizations or consider themselves to be small and insignificant?
It seems like making this as seamless as possible is the most important part, and I play a role in that to the extent that I'm producing a library of open source code which can on-ramp naive developers. They will acquire my attitudes towards Apple code signing, and my difficulties in doing this have come in part from the body of developers already out there, who seem prone to getting mad at Gatekeeper and code signing when it's not easy for them. I wish to acquire, and propagate, a different attitude toward required code signing :)