Hello everyone,
We are the developer of PhotoMagic – AI Photo Editor, and we would appreciate guidance from the community regarding a pending enforcement case currently under review by the App Review Board.
Case Information
Case Number: 102812414888
App ID: 6746860005
Notice Date: January 25, 2026
Current Status: Pending review by the App Review Board
Reason cited: Section 3.2(f) of the Apple Developer Program License Agreement
The notice stated that our app submissions were considered to have “repeatedly violated the App Review Guidelines in an attempt to evade the review process.”
Background & Acknowledgement
Upon receiving the notice, we immediately conducted a comprehensive internal audit. During this process, we identified that certain user-uploaded template content (for example, templates involving concepts such as “kiss” or “hug”) posed potential policy risks.
We would like to clearly state that we did not intentionally hide or disguise content to evade App Review. However, we fully acknowledge that our previous content moderation mechanisms were insufficient, and we take full responsibility for this oversight.
Corrective Actions Taken
Following the audit, we implemented the following remediation measures. Some items are already live, while others will take effect in the next app update once submission restrictions are lifted.
Already Implemented
1. Expanded and strengthened our prohibited-keyword library to block searches for, or generation of, sensitive content (e.g., “kiss”, “hug”, “nude”). (We can provide the keyword list on request.)
2. Added detection and filtering mechanisms for public figures (including political figures and other well-known individuals) to prevent potential deepfake misuse.
3. Raised AI model safety thresholds to prioritize risk prevention, even at the cost of blocking borderline or otherwise benign content.
4. Fully restructured our content moderation team to ensure independence, accountability, and redundancy.
5. Implemented an in-app reporting mechanism that allows users to report any content; reported content is reviewed immediately and removed if found non-compliant.
6. Discontinued use of Grok and all other AI services deemed potentially risky; all AI generation capabilities have been migrated to Gemini.
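For context, the prohibited-keyword check described in item 1 works roughly along these lines. This is an illustrative sketch only, not our production code; the function name and the three-word sample list are hypothetical stand-ins for the real library, which is much larger:

```python
import re

# Hypothetical sample of the prohibited-keyword library (the real list is far larger).
PROHIBITED_KEYWORDS = {"kiss", "hug", "nude"}

def contains_prohibited_keyword(prompt: str) -> bool:
    """Return True if the prompt contains any prohibited keyword as a whole word.

    Matching is case-insensitive and token-based, so "KISS" or "a hug," are
    caught, but unrelated words like "hugely" are not flagged.
    """
    tokens = re.findall(r"[a-z]+", prompt.lower())
    return any(token in PROHIBITED_KEYWORDS for token in tokens)
```

In practice the same check runs on both template search queries and generation prompts, so a blocked term cannot reach the model by either path.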
Pending Next App Update
7. Updated Terms of Use to restrict access to adults only and explicitly prohibit generating any prohibited content. Repeated violation attempts result in warnings and eventual denial of service.
8. Removed all user-uploaded custom templates. Only internally designed, fixed templates are provided, with no user-editable prompts.
9. Added both visible and invisible watermarks to all generated content to ensure clear AI attribution and traceability.
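As a rough illustration of the invisible-watermark idea in item 9, the sketch below embeds a watermark into the least significant bit of successive pixel bytes. This is a minimal toy over a raw byte buffer with hypothetical helper names; production schemes are typically more robust (e.g., frequency-domain watermarks that survive compression), and ours differs in the details:

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Embed each bit of `mark` (MSB first) into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    for i, byte in enumerate(mark):
        for b in range(8):
            bit = (byte >> (7 - b)) & 1
            idx = i * 8 + b
            # Clear the pixel's least significant bit, then set it to the mark bit.
            out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes of watermark from the pixel LSBs."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for b in range(8):
            byte = (byte << 1) | (pixels[i * 8 + b] & 1)
        mark.append(byte)
    return bytes(mark)
```

Because only the lowest bit of each byte changes, the embedded mark is visually imperceptible while still allowing generated images to be traced back to our app.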
Questions for the Community
We would appreciate any guidance or shared experience regarding the following:
What additional steps, if any, are appropriate or recommended for bringing remediation details to the App Review Board’s attention while a case is under review?
Is there any specific type of supporting documentation (e.g., technical architecture explanations, data-cleanup confirmations, policy mappings) that has proven helpful in similar situations?
In general, once a case is under review by the App Review Board, is it best to wait for further instructions, or are proactive clarifications sometimes appropriate?
We respect Apple’s review process and are fully committed to long-term compliance with the App Store Review Guidelines and the Apple Developer Program License Agreement. Any insight would be greatly appreciated.
Thank you for your time and support.