During the main lifetime of the N64, the most important fact to note is that SGI and Nintendo never released detailed graphics specifications for the N64. Until recently, it wasn't even known how the RCP functioned; back when all the arguments raged, one could be certain that SGI was not going to release detailed performance figures.
The rest of this page I shall leave as it was when the arguments about the N64's performance still raged on the newsgroups. Much more detailed information is available now about how the N64 works, so some of the comments below no longer apply. However, it's interesting to see what the situation was at the time, from a historical perspective.
I said back in May 1999...
Thus, if you ever see anyone quoting a polygon performance number for the N64, I'm afraid they're talking rubbish. Challenge them to state the press release from SGI or NOA which officially states the performance numbers they're using - they won't be able to. If you're interested, the following list details those aspects of the N64 system which have not been released by SGI or NOA:
Anyway, since people do talk about polygon performance numbers so much, this page will detail some of the reasons why such numbers are, on the whole, completely useless. If you ever see a polygon performance comparison between two systems, be very careful: if full details of the tests carried out are not given, there's no way you can tell for sure that the test conditions were the same - and given such an unknown, the quoted numbers are worthless.
If two tests are run at different resolutions, the test at the lower resolution will be able to achieve a higher frame rate because the available pixel fill rate is being used to fill a smaller area.
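The effect of resolution on a fill-limited test can be sketched with some simple arithmetic. The fill rate figure below is purely hypothetical (it is not a real N64 number), and this is only the upper bound imposed by fill rate, ignoring geometry work entirely:

```python
# A minimal sketch of how resolution caps frame rate for a fixed pixel
# fill rate. FILL_RATE is a made-up illustrative figure, not an N64 spec.

def max_fill_limited_fps(fill_rate_px_per_s, width, height, depth_complexity=1.0):
    """Upper bound on frame rate imposed by fill rate alone."""
    pixels_per_frame = width * height * depth_complexity
    return fill_rate_px_per_s / pixels_per_frame

FILL_RATE = 100e6  # hypothetical 100M pixels/sec

print(max_fill_limited_fps(FILL_RATE, 640, 480))  # high resolution
print(max_fill_limited_fps(FILL_RATE, 320, 240))  # quarter the pixels: 4x the bound
```

Halving the resolution in each dimension quarters the pixels per frame, so the fill-limited frame rate bound quadruples - which is exactly why quoting two tests at different resolutions side by side is meaningless.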
You'll often see phrases like '50 pixel triangles' or '25 pixel triangles'. A larger triangle size in a performance figure may sound more impressive, but it lets a test flatter systems with poor geometry processing abilities: the same number of pixels is filled using fewer triangles, and hence fewer vertices need to be transformed and lit.
Are the polygons meshed or not? 'Meshed' means that the triangles are connected, ie. they share common vertices. Meshing reduces the geometry calculations that need to be done. If polygons are separate (not meshed) then more calculations are involved.
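The saving from meshing is easy to count. In a triangle strip (one common form of meshing, where each new triangle shares an edge with the previous one), N triangles need only N+2 vertices, versus 3N vertices for separate triangles - the sketch below simply counts them:

```python
# Vertex counts for N triangles: meshed as a triangle strip (each new
# triangle shares an edge with the previous one) vs. fully separate.

def strip_vertices(n_triangles):
    """A strip of N triangles uses N + 2 vertices."""
    return n_triangles + 2 if n_triangles > 0 else 0

def separate_vertices(n_triangles):
    """Separate triangles use 3 vertices each."""
    return 3 * n_triangles

n = 1000
print(strip_vertices(n))     # 1002 vertices to transform and light
print(separate_vertices(n))  # 3000 vertices: nearly 3x the geometry work
```

So a benchmark run on meshed polygons does roughly a third of the per-vertex work of one run on separate polygons - another reason the two figures cannot be compared.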
Spacing and Depth Complexity
Are the polygons randomly spaced? On average, how many polygons are in front of or behind each polygon in the scene? (ie. what is the average depth complexity?) It would be easy to lay each polygon edge-on to one's viewpoint, thus requiring a much lower pixel fill rate. Random positioning ensures that some polygons will be in front of others, making for a better test. True real-world situations can be far more complex, eg. a street scene will have dozens of objects occluding each other, requiring repeated Z buffer computations for many pixel locations. A very sneaky trick of PC graphics card makers is to always quote polygon performance figures for a depth complexity of 1, thus ensuring that polygons do not overlap in the test.
By contrast, when SGI quotes polygon performance figures for visual simulation applications, they often assume a depth complexity of 4, ie. if a system can do 840000 full-featured triangles/second (such as RealityEngine, which is the technology the N64 is based on at an equivalent 1/4 resolution), SGI will divide that figure by 4 and then divide it again by 30 or 60 to give a scene complexity rating at 30Hz or 60Hz respectively. This is why old RealityEngine documents from the early 1990s (eg. 'RealityEngine Host Integrated Computer Image Generator') quote performance figures of "7000 textured antialiased polygons per pipeline at 30Hz". Systems like this have huge pixel fill rates in order to be able to handle scenes with severe depth complexities (RE is rated at 320M pixels/sec, which still has not been beaten by any PC even though RE technology is almost a decade old).
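The arithmetic behind SGI's rating, as described above, is just two divisions - raw triangle throughput divided by the assumed depth complexity, then by the frame rate:

```python
# SGI's visual-simulation scene rating: divide raw triangle throughput by
# an assumed depth complexity, then by the target frame rate.

def scene_complexity(tris_per_sec, depth_complexity, hz):
    return tris_per_sec / depth_complexity / hz

print(scene_complexity(840_000, 4, 30))  # 7000.0 triangles/frame at 30Hz
print(scene_complexity(840_000, 4, 60))  # 3500.0 triangles/frame at 60Hz
```

Plugging in RealityEngine's 840000 triangles/second recovers exactly the "7000 at 30Hz" figure quoted in the old documents - and shows how honest a depth-complexity-4 rating is compared to a PC vendor's depth-complexity-1 number.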
Is the polygon flat shaded? If so, the number is useless. See my description of flat shading for good reasons why.
Is the polygon Gouraud shaded? If so, it then becomes important to note the kind of lighting used (see below).
Is the polygon textured? If so, what kind of texture? Colour quality is important, as is the nature of the texture data, eg. does each polygon have a separate texture or is the same texture used for every polygon? (extreme cases) Is there transparency? The answers to these questions will determine how much work is being done by the graphics hardware's texture loading system and processing abilities. This is one area where a CD-based system can be very slow: rapid texture changes.
Does the polygon have inherent vertex colour data? If so, the lighting and/or texture mapping calculations can be more complex.
First make sure you read my description of the different types of light that can be used in a scene and note especially the con trick described at the end concerning 'Phong lighting'.
A test that uses a single directional light involves far fewer calculations (ie. is faster) than a test that employs a spotlight or pointlight. So much so, in fact, that a performance figure for just 1 spotlight is probably of more use than a performance figure for 2 directional lights.
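The difference in per-vertex cost is visible even in a toy diffuse calculation. The sketch below is purely illustrative (it is not how any console's microcode actually works): a directional light needs one dot product against a constant direction, while a pointlight must compute and normalise a per-vertex light direction - which involves a square root - before that same dot product; a spotlight would add a cone test on top:

```python
# Rough per-vertex diffuse lighting sketch: directional vs. point light.
# Illustrative only - not modelled on any real hardware or microcode.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def directional_diffuse(normal, light_dir):
    # Constant light direction: just one dot product per vertex.
    return max(0.0, dot(normal, light_dir))

def point_diffuse(vertex, normal, light_pos):
    # Extra per-vertex work: subtract, normalise (a sqrt!), then the dot.
    to_light = normalize(tuple(l - v for l, v in zip(light_pos, vertex)))
    return max(0.0, dot(normal, to_light))

n = (0.0, 1.0, 0.0)
print(directional_diffuse(n, (0.0, 1.0, 0.0)))             # 1.0
print(point_diffuse((0.0, 0.0, 0.0), n, (0.0, 5.0, 0.0)))  # 1.0, but costlier
```

Same visual result for this vertex, several times the arithmetic - so a '2 directional lights' figure says much less about a system than a '1 spotlight' figure does.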
And be wary of tests that mix properties: is a test for unlit textured polygons faster or slower than a test for lit non-textured polygons? The answer depends on the details of the test: texture complexity, lighting type, etc.
Double Buffered vs. Single Buffered
This is very important indeed. Follow the link above for a complete explanation and also my page on frame rates.
Real games are always double buffered, but benchmarks should not be, for reasons given in the articles referenced above.
This makes a huge difference. Anti-aliasing involves a great deal of computation, and comparing a test which uses it to one that does not is just crazy. Note: any visual simulation specialist will tell you that anti-aliasing is essential for visual realism in a virtual environment. The RealityEngine in Visual Simulation Technical Report says:
A less powerful main CPU will have less geometry processing ability. Using larger polygon sizes can hide this.
The Saturn and PSX consoles simply do not have many of the abilities supported by the N64's RCP. Given this, comparing the N64 to the Saturn or PSX is just pointless. For example, N64 games will always use TLMMI for greater realism, something that the Saturn and PSX systems cannot do. This means that any meaningful performance figure for the N64 simply will not be comparable to a performance figure for the Saturn or PSX since the latter consoles do not have TLMMI. Way back in 1995, people were arguing over a 100000/sec figure. George Zachary of SGI told me at the time:
In other words, when it comes to polygon performance figures, you must compare like with like. PR people in every major company break this rule time and again in the constant war of words. If you're to properly understand the capabilities of a system, you have to accept that graphics rendering is an inherently complex process involving many different factors, and as such it demands an understanding of what the graphical terms and numbers mean in order for performance comparisons to be of any value.
For a further discussion on this subject, read the excellent article on the PSX2 by Douglas Rensch, entitled "Are the specs for real?"