Guideline 1.2 - Safety - User-Generated Content

Hi,

I keep getting this app rejection:

Next Steps

To resolve this issue, please revise your app to implement the following precautions:

  • Require that users agree to terms (EULA) and these terms must make it clear that there is no tolerance for objectionable content or abusive users
  • A method for filtering objectionable content
  • A mechanism for users to flag objectionable content
  • A mechanism for users to block abusive users
  • The developer must act on objectionable content reports within 24 hours by removing the content and ejecting the user who provided the offending content

I implemented a EULA with a zero-tolerance policy that is shown on first launch and must be accepted before the app can be used. I have a method for filtering content: users can specify for themselves what they want to filter. I have a mechanism for reporting content and a backend for deleting messages and banning users. Users can also block other users if they want to.
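For anyone implementing something similar, the first-launch gate can look roughly like this in SwiftUI. This is a minimal sketch; `RootView`, `EULAView`, `MainAppView`, and the storage key are illustrative names, not my actual code:

```swift
import SwiftUI

// Minimal sketch of a first-launch EULA gate. All names here are
// illustrative, not the app's actual code.
struct RootView: View {
    @AppStorage("hasAcceptedEULA") private var hasAcceptedEULA = false

    var body: some View {
        if hasAcceptedEULA {
            MainAppView()
        } else {
            EULAView { hasAcceptedEULA = true }
        }
    }
}

struct EULAView: View {
    let onAccept: () -> Void

    var body: some View {
        VStack(spacing: 16) {
            ScrollView {
                // Full EULA text, including the zero-tolerance policy
                // for objectionable content and abusive users.
                Text("EULA text…")
                    .padding()
            }
            // Deliberately the only way forward: there is no path into
            // the app without accepting the terms.
            Button("Accept", action: onAccept)
        }
    }
}

struct MainAppView: View {
    var body: some View { Text("Main app") } // placeholder
}
```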

So, at this point I'm not sure what else I need to change.

Did you explain in the notes to the reviewer everything you did (as described in your post) to satisfy the requirements?

The only remaining point, it seems, is the 24-hour requirement. How do you handle it?

Yes, I did. I am currently waiting for a reply.

The 24-hour requirement relies on human interaction; nobody can guarantee a reaction within that window.
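What can reduce the risk is to hide reported content immediately, so the moderator's 24-hour review only has to confirm the removal and eject the user. A rough client-side sketch of that pattern, where `Message`, `FeedStore`, and the endpoint URL are made-up names and not my actual code:

```swift
import Foundation
import Combine

// Sketch of "hide on report": the message disappears from the reporter's
// feed immediately, and the backend is asked to hide it for everyone and
// queue it for a moderator. All names here are illustrative.
struct Message: Identifiable, Codable {
    let id: UUID
    let authorID: UUID
    let text: String
}

@MainActor
final class FeedStore: ObservableObject {
    @Published var messages: [Message] = []

    func report(_ message: Message) async {
        // Remove locally right away, before any moderator gets involved.
        messages.removeAll { $0.id == message.id }

        // Flag it server-side; the 24-hour clock then covers only the
        // moderator's confirm/eject decision, not the takedown itself.
        var request = URLRequest(url: URL(string: "https://example.com/api/reports")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try? JSONEncoder().encode(["messageID": message.id.uuidString])
        _ = try? await URLSession.shared.data(for: request)
    }
}
```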

Thank you for your post. We're investigating and will contact you in App Store Connect to provide further assistance. If you continue to experience issues during review, please contact us.

Hello, I am glad that you are looking into this matter. Thank you very much.

The EULA opens when you first launch the app. To report a message, you need to press and hold the message until the menu opens. Then you can report it. Events can be reported via a button. I have also provided access to the backend with a username and password for testing purposes. Messages can be filtered via the settings under “Profile.” Users can be blocked by opening the user's profile.
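In SwiftUI, the press-and-hold menu is essentially a `contextMenu` on the message row. A minimal sketch, with `MessageRow` and the two actions as illustrative names:

```swift
import SwiftUI

// Sketch of the press-and-hold report/block menu on a message row.
// `MessageRow`, `report`, and `blockAuthor` are illustrative names.
struct MessageRow: View {
    let text: String
    let report: () -> Void
    let blockAuthor: () -> Void

    var body: some View {
        Text(text)
            .contextMenu {
                Button(role: .destructive, action: report) {
                    Label("Report", systemImage: "exclamationmark.bubble")
                }
                Button(role: .destructive, action: blockAuthor) {
                    Label("Block User", systemImage: "hand.raised")
                }
            }
    }
}
```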

Hello,

My app was reviewed again. I provided backend admin access in the "Notes" section along with a full guide on how to test the app, and I can see in the backend that the app was not really tested.

The app was in review for five minutes and was immediately rejected with new reasons, some of which were already fixed.

The new reasons are:

  • Age rating must reflect 18+ (new)
  • Require that users agree to terms (EULA) and these terms must make it clear that there is no tolerance for objectionable content or abusive users (fixed)
  • A method for filtering objectionable content (fixed)
  • A mechanism for users to flag objectionable content (fixed)
  • A mechanism for users to block abusive users (fixed)
  • A mechanism for users to immediately remove posts from the feed (new; see the sketch after this list)
  • Developer must act on objectionable content reports within 24 hours by removing the content and ejecting the user who provided the offending content (fixed)
  • Developer must provide contact information in the app itself, giving users the ability to report inappropriate activity (new, but fixed since users can already report content)
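For the new "immediately remove posts" point, one way to satisfy it together with blocking is to render the feed only after filtering out hidden posts and blocked authors, so both actions take effect instantly. A sketch under those assumptions, with all names illustrative:

```swift
import Foundation
import Combine

// Sketch of immediate local removal plus block filtering: the UI only
// ever renders `visibleMessages`, which is the raw feed minus hidden
// posts and blocked authors. All names here are illustrative.
struct Message: Identifiable {
    let id: UUID
    let authorID: UUID
    let text: String
}

@MainActor
final class FilteredFeed: ObservableObject {
    @Published private(set) var visibleMessages: [Message] = []

    private var allMessages: [Message] = [] { didSet { refresh() } }
    private var hiddenMessageIDs: Set<UUID> = []
    private var blockedUserIDs: Set<UUID> = []

    func hide(_ message: Message) {
        hiddenMessageIDs.insert(message.id)  // takes effect immediately
        refresh()
    }

    func block(_ userID: UUID) {
        blockedUserIDs.insert(userID)        // drops all of their posts at once
        refresh()
    }

    private func refresh() {
        visibleMessages = allMessages.filter {
            !hiddenMessageIDs.contains($0.id) && !blockedUserIDs.contains($0.authorID)
        }
    }
}
```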

To resolve this issue, provide a user name and password in the App Review Information section of App Store Connect. (I did) It is also acceptable to include a demonstration mode that exhibits the app’s full features and functionality. (I did) Note that providing a demo video showing the app in use is not sufficient to continue the review.

I wonder how an app like Bitchat got approved.

#EDIT: I have now fixed all the issues again and will resubmit the new build. I'll keep you updated here.
