Hi,
I've recently started prototyping my first SceneKit app, and I have a sphere which I want to render as a surface of hexagons (a truncated icosahedron).
Now, in order to get the hexagon resolution I need, there are around 4500 hexagons to be displayed. I managed to get this working by creating a SCNGeometry to represent each hexagon, and adding it to an SCNNode.
This works; however, it's terribly slow, especially on older devices. On my iPhone 5s the frame rate is just under 30fps, and on an iPad Mini it's around 4fps.
I suspect I could render it all as a single geometry; however, that (I think) means losing the ability to animate individual hexagons, change their colours/materials, etc.
So for 4500 hexagons I'm looking at 27,000 vertices. That sounds like a lot, but I see and read about apps and games that seem to have far larger vertex counts, so it seems to me that I must be doing something wrong.
At this point the materials are all just a basic colour; I'm not trying to do anything fancy. Below is the code that creates the geometry objects and adds them to the SCNNode.
Is there another way to be able to have individually addressable hexagons (facets) like this that is more efficient?
Thanks in anticipation.
// 12 indices describe 4 triangles fanning out from vertex 0.
UInt16 indices[] = {0, 1, 2, 0, 2, 3, 0, 3, 4, 0, 4, 5};
NSData *indexData = [NSData dataWithBytes:indices
                                   length:sizeof(indices)];
SCNGeometryElement *element =
    [SCNGeometryElement geometryElementWithData:indexData
                                  primitiveType:SCNGeometryPrimitiveTypeTriangles
                                 primitiveCount:4 // 12 indices / 3 per triangle
                                  bytesPerIndex:sizeof(UInt16)];
for (HSHex *hex in internalSphere.hexs) {
    SCNVector3 vertices[[hex boundaryLength] + 1];
    for (NSUInteger index = 0; index < [hex boundaryLength]; index++) {
        vertices[index] = [hex boundaryPointAtIndex:index];
    }
    vertices[[hex boundaryLength]] = [hex boundaryPointAtIndex:0];

    SCNGeometrySource *vertexSource =
        [SCNGeometrySource geometrySourceWithVertices:vertices
                                                count:[hex boundaryLength] + 1];
    SCNGeometry *hexGeometry =
        [SCNGeometry geometryWithSources:@[vertexSource]
                                elements:@[element]];
    hexGeometry.firstMaterial.diffuse.contents = [UIColor greenColor];
    hexGeometry.firstMaterial.doubleSided = YES;

    SCNNode *node = [SCNNode nodeWithGeometry:hexGeometry];
    [self addChildNode:node];
}
There is a fundamental difference to grasp.
An SCNGeometry defines the relative position of vertices to one another in a model.
An SCNNode defines the transform (position, rotation, scale) of the geometry it displays in the world (or in its parent node if attached), relative to its pivot point.
The same geometry instance can be assigned to thousands of different nodes, and therefore displayed in many different locations.
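As a minimal sketch of that sharing (using a built-in SCNSphere as a stand-in for your hexagon geometry, since the exact shape doesn't matter here), every node references the same geometry instance instead of building a new one per hexagon:

```objc
// One geometry instance, shared by every node that displays it.
SCNGeometry *sharedGeometry = [SCNSphere sphereWithRadius:0.1]; // stand-in shape
sharedGeometry.firstMaterial.diffuse.contents = [UIColor greenColor];

for (NSUInteger i = 0; i < 4500; i++) {
    // No geometry copy here — each node only carries its own transform.
    SCNNode *node = [SCNNode nodeWithGeometry:sharedGeometry];
    node.position = SCNVector3Make(i * 0.2, 0.0, 0.0); // placeholder layout
    [self addChildNode:node];
}
```

Because the geometry is shared, SceneKit has far less per-object data to manage than when each node owns a distinct SCNGeometry.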
Let's take an example: you're creating a car racing game.
You need a Ford GT model, so you create its geometry in Maya. The model defines the relative positions of the vertices that make up the car, as well as its materials.
Now you import the model into Xcode. You can instantiate the same geometry n times (e.g. 15 Ford GT cars). Each car is assigned to a different SCNNode instance and is therefore displayed at a different position (and rotation, on all three axes) on the race track.
There are 5 racing teams, and each team has 3 cars, so you need distinct materials for the 5 teams; the 3 cars within a team share the same materials. You can copy the geometry so you can assign different materials and change the look of each team's cars.
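A sketch of that copy-per-team idea (the names carGeometry and raceTrackNode are hypothetical; copying an SCNGeometry gives you a new geometry object that can carry its own materials):

```objc
// One copy of the geometry per team, so each team can have its own materials.
SCNGeometry *teamGeometry = [carGeometry copy];
teamGeometry.firstMaterial = [SCNMaterial material];
teamGeometry.firstMaterial.diffuse.contents = [UIColor redColor]; // team colour

// The three cars in this team all share the team's copy.
for (NSUInteger car = 0; car < 3; car++) {
    SCNNode *carNode = [SCNNode nodeWithGeometry:teamGeometry];
    [raceTrackNode addChildNode:carNode];
}
```

Applied to your hexagons, the same trade-off holds: share one geometry wherever the material is identical, and copy it only for the hexagons that need to look (or animate) differently.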