Hello everyone
The application we developed is for pet medical care and is aimed at veterinarians. Its main function is an AI conversation: the user chats with the AI, and the AI's answers may include disease diagnoses and treatment suggestions for pets. When we submitted it to the App Store for review, we received a Guideline 1.4.1 rejection notice (the app must clearly disclose the data and methodology that support accuracy claims relating to health measurements).
Our solutions
- Before entering the app, we added a pop-up window explaining that the app's output is generated by AI and cannot replace professional veterinary consultation and diagnosis, and that users whose pets have health problems should promptly consult a licensed offline veterinarian. Users must agree before they can proceed to use the app.
- Our AI model has completed the required algorithm registration with the regulator, and we uploaded screenshots of the registration with our submission.
- Every AI reply in the app displays the notice "This answer is generated by AI; the content is for reference only, please verify it carefully," reminding users that answers are AI-generated and should be checked. (A simplified sketch of both disclosures is shown after this list.)
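For context, here is a minimal SwiftUI sketch of the two disclosures described above. The view names, storage key, and copy are illustrative placeholders, not our exact production code:

```swift
import SwiftUI

// Consent gate shown before the rest of the app can be used.
// "didAcceptAIDisclaimer" is an illustrative storage key.
struct ConsentGate<Content: View>: View {
    @AppStorage("didAcceptAIDisclaimer") private var didAccept = false
    let content: () -> Content

    var body: some View {
        if didAccept {
            content()
        } else {
            VStack(spacing: 16) {
                Text("Before you continue").font(.headline)
                Text("Answers in this app are generated by AI and cannot replace professional veterinary consultation or diagnosis. If your pet has a health problem, please consult a licensed veterinarian promptly.")
                Button("I understand and agree") { didAccept = true }
                    .buttonStyle(.borderedProminent) // requires iOS 15+
            }
            .padding()
        }
    }
}

// Chat bubble for AI replies; the caption is the per-message disclaimer.
struct AIMessageView: View {
    let text: String

    var body: some View {
        VStack(alignment: .leading, spacing: 6) {
            Text(text)
            Text("This answer is generated by AI and is for reference only. Please verify it carefully.")
                .font(.caption)
                .foregroundStyle(.secondary)
        }
        .padding(12)
        .background(Color(UIColor.secondarySystemBackground))
        .cornerRadius(12)
    }
}

// Usage (illustrative): ConsentGate { ChatScreen() }
```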
Even though we made all of the above disclosures prominent, the app was rejected again on review.
Our questions
- A large language model generates content via a deep learning algorithm. It is not possible to accurately identify the source or a supporting link for every piece of content the AI produces in a reply.
- If this review logic is applied consistently, then AI apps built on language models such as ChatGPT will also produce medical diagnostic suggestions in their replies. How is that scenario handled?
- Our model provides diagnostic suggestions for pets. Does Guideline 1.4.1 apply only to humans, or to animals as well?