Hi,
I wondered if I could ask a question which may have a fundamental impact on how I model some data I'm working on.
I'm working with some medical data which I want to model with SNOMED CT terms (https://en.wikipedia.org/wiki/SNOMED_CT for an overview).
SNOMED describes each medical object as a 'concept'. Concepts are hierarchical, i.e. a concept can be a subclass of another concept.
For each concept, I'm going to want to store a set of different values. I'm also going to want to query those concepts to find out which ones relate to my patient.
So - in a nutshell I think I'm describing entities, attributes and relationships.
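To make that concrete, here's roughly the kind of query I expect to run. This is only a sketch: the entity, attribute and relationship names (Concept, code, patients, identifier) are placeholders I've made up, not a settled design.

import CoreData

// Sketch only: assumes a "Concept" entity with a "code" string attribute
// and a to-many "patients" relationship whose destination entity has an
// "identifier" attribute. All of these names are invented placeholders.
func concepts(for patientID: String,
              in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Concept")
    // Find every concept linked to the given patient.
    request.predicate = NSPredicate(format: "ANY patients.identifier == %@", patientID)
    request.sortDescriptors = [NSSortDescriptor(key: "code", ascending: true)]
    return try context.fetch(request)
}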
The difficulty is that SNOMED allows MANY different codes: there are 311,000 concept codes to date, and that number is still growing.
I'm hopefully only dealing with a small subset of those codes for now. I was thinking about the feasibility of modelling 'known' concepts as entities with a parent entity of 'Concept', each of which could hold all the detail we expect to store. Unknown concepts would have to be stored as the generic parent entity, which would lose much of Core Data's ability to manage their data efficiently, but would mean we could store the data without a precise model.
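To show what I mean, here's a rough sketch of the model I'm picturing, built programmatically just so it's easy to paste here. The sub-entity (BloodPressureFinding) and its attributes are invented examples, not real SNOMED modelling.

import CoreData

// Illustration only. "Concept" is the parent entity every concept shares;
// a 'known' concept becomes a sub-entity with its own typed attributes,
// and anything we haven't modelled precisely is stored as a plain Concept.
func makeSketchModel() -> NSManagedObjectModel {
    // Parent entity: the SNOMED concept identifier is common to everything.
    let concept = NSEntityDescription()
    concept.name = "Concept"

    let conceptID = NSAttributeDescription()
    conceptID.name = "conceptID"
    conceptID.attributeType = .stringAttributeType
    conceptID.isOptional = false
    concept.properties = [conceptID]

    // One example of a 'known' concept with extra detail we expect to store.
    let bloodPressure = NSEntityDescription()
    bloodPressure.name = "BloodPressureFinding"

    let systolic = NSAttributeDescription()
    systolic.name = "systolic"
    systolic.attributeType = .integer32AttributeType
    bloodPressure.properties = [systolic]

    // Make it a sub-entity of Concept so it inherits conceptID. As I
    // understand it, Core Data flattens an entity hierarchy like this into
    // a single table in the SQLite store, which is part of why I'm asking
    // about scaling.
    concept.subentities = [bloodPressure]

    let model = NSManagedObjectModel()
    model.entities = [concept, bloodPressure]
    return model
}

In practice I'd build the model in the Xcode model editor rather than in code; this is just to show the shape of the hierarchy I have in mind.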
This plan rests on the idea that Core Data scales reasonably well to a model with lots of entities. Has anyone had experience of how well Core Data scales with the number of entities in a model, or with a large hierarchy of entities under parent entities?