Desperate with constant Guideline 1.1 Rejection and possible unfair treatment

Hi everyone,

I am really not sure where else to go for peace of mind, because our recent review experience has been far from clear (and probably far from fair).

We are photo editing app developers who work with AI models, effects, and transformations on a constant basis. Last year we (like many other apps in the category) introduced effects that take two people from different photos and unite them in one cute/fun/cozy/emotional picture. Be it a couple, a mother and daughter - you name it. Of course, all the templates are pre-set; there is no option for users to generate arbitrary scenes via text-to-image or image-to-image models. Otherwise it would be unsafe.

When submitting our app for review in May 2025, we first faced the situation that these kinds of effects are not welcome because of:

Guideline 1.1 - Safety - Objectionable Content

The app references or includes features that some users may find objectionable or could be used to create objectionable content. Specifically:

  • The app includes templates for generating content showing people making intimate contact with each other, such as hugging, kissing, or other intimate templates. While these templates alone may not be objectionable, they could be used to create objectionable content.

Apps on the App Store should be safe, appropriate for a general audience, and should not include features that are objectionable or could be used to create objectionable content.

At first we took it as a new general requirement and complied. It took us about a month to delete the "objectionable" content and finally get approved.

Time passed, and in the middle of summer we saw that ALL the huge players in the market were still providing this "objectionable content" freely, both inside their apps and in their store graphics. So we re-added the content, submitted, and got approved almost immediately. Whether it was a miracle or a matter of a particular reviewer, I cannot say; in any case, after that we didn't get any rejection for months.

Then around October we submitted our app for another review and history repeated itself - the same rejection as in May, for the same reason. Nothing helped us get through. Nothing. 5-6 appeals about the review and 1-2 about unfair treatment - no response.

Did I mention that all the huge players (5-6 apps) keep posting the same content freely? All of them released updates for Christmas last week with Group Photos / Shared Shots / Photobooth Lab features built on exactly the same 1+1=united pic concept.

Our latest appeal did get a reply, however it was more of the same:

We understand your position and consideration of your app's compliance with Guideline 1.1. However, we found that the app references or includes features that some users may find objectionable or could be used to create objectionable content.

The app includes templates for generating content showing people making intimate contact with each other, such as hugging, kissing, or other intimate templates. While these templates alone may not be objectionable, they could be used to create objectionable content.

The strange thing is that the attached screenshot wasn't even connected to the case itself: it showed a filter applied to a photo that already contained two people.

I don't see any logic in the responses anymore:

  • Nothing in the text of Guideline 1.1 - Safety - Objectionable Content mentions the stated reasons for rejection. We have carefully reviewed the entire guideline, including sections 1.1.1 through 1.1.7, and we are still unable to determine which specific part the rejected templates violate.

The rejection notice states that our “couple effects” (hugging, close poses, romantic themes) could be used to create objectionable content. However, when mapped against the text of Guideline 1.1, none of the listed categories appears to apply:

  • The effects are not defamatory, discriminatory, or mean-spirited (1.1.1)
  • They do not depict violence, harm, or abuse (1.1.2)
  • They do not involve weapons or illegal activity (1.1.3)
  • They do not contain sexual or pornographic material, nor explicit activity (1.1.4)
  • They do not involve inflammatory religious commentary (1.1.5)
  • They do not supply false information or trick functionality (1.1.6)
  • They do not capitalize on harmful current events (1.1.7)

And so on.

In reply to our question "Why are other apps allowed to have this content while we aren't?", they returned this:

On occasion, there may be apps on the App Store that don't appear to be in compliance with the App Store Review Guidelines.

We work hard to ensure that the apps on the App Store are in compliance, and we try to identify any apps currently on the App Store that may not be. It takes time to identify these occurrences, but another app being out of compliance is not a reason for your app to be.

It has been 6 months since our first rejection on this matter. None of our competitors have removed such content from their apps, while we are constantly being forced to do so.

Does this imply they have a different review experience? Or do they hide the mentioned features before review and bring them back right after? It's a mystery...

I would really appreciate any advice on how we should deal with this matter.

Thank you in advance

Thank you for your post and appeal. We're investigating this currently. A representative from App Review will contact you to provide further assistance. If you continue to experience issues during review, please contact us.
