After more investigation, I think I've solved this issue! Hope this helps others who are investigating performance with XCTest and SIMD or other Accelerate technologies...
XCTest performance tests can work great for benchmarking and investigating alternate implementations, even at the micro-benchmark level, but the trick is to make sure you're not measuring code that was built for Debug or is running under the debugger.
I now have XCTest running the performance tests from my original post and showing meaningful (and actionable) results. On my current machine, the 100,000-iteration plain Double test block averages 0.000328 s, while the simd_double2 test block averages 0.000257 s, about 78% of the non-SIMD time and very close to the difference I measured in my release build. So now I can reliably measure what performance gains I'll get from SIMD and other Accelerate APIs as I decide whether to adopt them.
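For reference, here's a minimal sketch of the kind of test pair I'm comparing. The arithmetic, class, and method names are illustrative stand-ins, not the exact code from my original post:

```swift
import XCTest
import simd

final class SIMDPerformanceTests: XCTestCase {

    func testScalarDoublePerformance() {
        let iterations = 100_000
        measure {
            var x = 0.0
            var y = 0.0
            for i in 0..<iterations {
                let d = Double(i)
                x += d * 1.5 + 2.0
                y += d * 0.5 - 1.0
            }
            // Use the results so the optimizer can't delete the loop.
            XCTAssertFalse(x.isNaN || y.isNaN)
        }
    }

    func testSIMDDoublePerformance() {
        let iterations = 100_000
        measure {
            var v = simd_double2()
            for i in 0..<iterations {
                // The same two multiply-adds, done as one two-lane operation.
                let d = simd_double2(repeating: Double(i))
                v += d * simd_double2(1.5, 0.5) + simd_double2(2.0, -1.0)
            }
            XCTAssertFalse(v.x.isNaN || v.y.isNaN)
        }
    }
}
```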
Here's the approach I recommend:

1. Put all of your performance XCTests in files separate from your functional tests, so a separate target can compile them.
2. Create a separate Performance Test target in the Xcode project. If you already have a Unit Test target, it's easiest to duplicate it and rename the copy.
3. Split your tests between the two targets: functional tests only in the original Unit Test target, performance tests only in the Performance Test target.
4. Create a new Performance Test scheme associated with the Performance Test target.
5. THE IMPORTANT PART: Edit the Performance Test scheme's Test action: set its Build Configuration to Release, uncheck Debug Executable, and uncheck everything under Diagnostics. This ensures that when you run Product > Test, it's Release-optimized code that gets run for your performance measurements.
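(If you also run tests from the command line or in CI, the same scheme carries those settings over. A hypothetical invocation, assuming a macOS project and placeholder names:

```
xcodebuild test -project MyApp.xcodeproj -scheme "PerformanceTests" -destination "platform=macOS"
```

Since no -configuration is passed, xcodebuild's test action uses the configuration set in the scheme's Test action, so this measures Release code just like Product > Test does.)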
There are a couple of additional steps if you want to be able to run performance tests ad hoc from the editor with your main app set as the current scheme. First, add the Performance Test target to your main app scheme's Test section.
The problem now is that your main app's scheme has only one setting for test configuration (Debug vs. Release), so if it's set to Debug, running a performance test ad hoc will reproduce the behavior in my OP, with SIMD code in particular running orders of magnitude slower.
I do want my main app's test configuration to remain Debug for working with functional unit test code. So, to make performance tests work tolerably in this scenario, I edited the build settings of the Performance Test target (only) so that its Debug settings are closer to Release. The key setting is Optimization Level under Swift Compiler - Code Generation, which I changed for Debug from No Optimization [-Onone] to Optimize for Speed [-O]. While I don't think this is going to be quite as accurate as running under the Performance Test scheme with the Release configuration and all other debug options disabled, I'm now able to run the performance test under my main app's scheme and see reasonable results: it again shows the SIMD time measurement in the 75-80% range compared to non-SIMD for the test in question.
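If you'd rather script that change than click through the build settings editor, the equivalent in a hypothetical xcconfig file attached to the Performance Test target's Debug configuration would be:

```
// PerformanceTests-Debug.xcconfig (hypothetical name)
// Optimize Swift code even in Debug builds of this target.
SWIFT_OPTIMIZATION_LEVEL = -O
```

Either way, this affects only the Performance Test target; the app and the functional tests still build as unoptimized Debug code.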