Hi all! We're actively working to improve the guardrails and reduce false positives. Apologies for all the headaches.
The #1 workaround I can offer: for summarizing content like news articles, you can use .permissiveContentTransform to turn the guardrails off.
Please check out the article I wrote, Improving the safety of generative model output, because there are some caveats about when it's appropriate to turn the guardrails off, as well as cases where the model may refuse to answer anyway.
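For context, here's roughly what that looks like in code. This is a minimal sketch: the `summarize(article:)` helper is illustrative, and the exact initializer labels may differ from what ships, so double-check the Foundation Models documentation for your SDK version.

```swift
import FoundationModels

// Sketch: create a session with relaxed guardrails for transforming
// existing content (e.g. summarizing a news article). The `guardrails:`
// and `instructions:` labels are assumed; verify against the docs.
func summarize(article: String) async throws -> String {
    let session = LanguageModelSession(
        guardrails: .permissiveContentTransform,
        instructions: "Summarize the user's news article in three sentences."
    )
    let response = try await session.respond(to: article)
    return response.content
}
```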
If you feel comfortable doing so, send us any prompts that falsely trigger the guardrails.
We're actively working to reduce guardrail false-refusals, and it's incredibly helpful to see prompts from real developers (like you) so we can identify blind spots in our guardrail evaluations.
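One way to capture those prompts is to log whatever the session refuses, then paste that into your report. A hedged sketch, assuming the session throws a LanguageModelSession.GenerationError when a guardrail trips (match the error type and cases against what your build actually throws):

```swift
import FoundationModels

// Sketch: wrap respond(to:) so refused prompts get logged with the
// details a Feedback Assistant report needs, then rethrow the error.
func respondLoggingRefusals(
    session: LanguageModelSession,
    prompt: String
) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch let error as LanguageModelSession.GenerationError {
        print("Generation error: \(error)")
        print("Prompt: \(prompt)")
        // Device locale for the report; note your Siri language setting
        // separately, since it's configured in Settings and may differ.
        print("Locale: \(Locale.current.identifier)")
        throw error
    }
}
```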
Check out this post on sending us feedback. It shows a handy way to send feedback from Xcode. Make sure to file your Feedback Assistant bug reports against the Foundation Models framework so your issue gets routed to us.
Include your prompt in your report. If you feel comfortable, please also include your Siri language/locale setting, e.g. "Spanish (Mexico)", since the guardrails are influenced by locale. Thanks!