Super-sized problems - like tomorrow's weather

One of the prime tasks of the 50 or so supercomputers in the world is weather prediction. No longer a matter of simply reading a barometer or looking at a few clouds, weather forecasting has become an intensely mathematical operation. At any point in the atmosphere, the physical properties which go to make up the weather - temperature, pressure, wind and so on - depend largely on the state of the surrounding air. Complex mathematical expressions have been developed to express these interrelationships: if you know the values of the various physical properties at some point in the atmosphere, you can calculate the values at other points - and so find out what the weather is like somewhere else. The problem is that the number of calculations required is enormous. For an accurate prediction, equations have to be solved for millions of points throughout the atmosphere - and each point requires many separate equations. Conventional computers would take hours or even days to work through problems of this complexity - and by then the result would describe weather that had already happened, not a forecast.
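The flavour of this grid-point arithmetic can be shown with a toy sketch. This is not a real forecast model - genuine ones solve elaborate systems of equations for temperature, pressure and wind together - but even a single made-up rule, "each point drifts towards its neighbours", shows how one sweep of the grid means one calculation per point, and why millions of points mean millions of calculations per time step.

```python
# Toy grid-point calculation: each point's next temperature depends on
# the state of its neighbours (here, a simple smoothing rule standing in
# for the real atmospheric equations). One forecast step = one update
# per grid point.

def step(temps, alpha=0.1):
    """Advance a ring of grid-point temperatures by one time step."""
    n = len(temps)
    return [t + alpha * (temps[(i - 1) % n] + temps[(i + 1) % n] - 2 * t)
            for i, t in enumerate(temps)]

grid = [15.0, 15.0, 25.0, 15.0, 15.0]   # a warm spot in cooler air
for _ in range(50):                      # fifty time steps
    grid = step(grid)
# the warm spot spreads out to its neighbours; the total heat is unchanged
```

Scale this up from five points to the millions a real model uses, with many equations per point instead of one, and the need for a supercomputer becomes obvious.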

**The next generation**

Only supercomputers have the power to perform enough calculations in time to produce useful results. They perform their miracles by a combination of existing technology, with some new tricks of their own.

One of the first supercomputers - Illiac IV - contained 32000 memory chips, each using a technology known as medium-scale circuit integration (MSI) to hold 256 bits (each bit, or binary digit, represents a 0 or 1 of the binary code in which computers operate). By contrast, the central memory of today's supercomputers - Cray-1 and Cyber 205 - uses large-scale integration (LSI) to give a transistor density allowing 4096 bits per chip. And the next generation of supercomputers will have sixteen times more memory! The Cray X-MP could be five times faster than Cray-1.

Increasing packing density in this way not only gives more memory at lower cost. It also speeds up operations. Digital computers operate in fixed pulses, regulated by internal clocks: only at each 'tick' of the clock can the states of memories be changed and data moved from one part of the computer to another. The clock cycle - the time between ticks - is now as short as one hundred-millionth of a second (ten nanoseconds). But no signal can travel faster than light, which covers only about 30 centimetres in a nanosecond. So circuits within the computer that must exchange signals near-simultaneously (in less than one nanosecond) have to be placed very close together.

But pure speed is not the only yardstick of computing power - the human brain is about 100000 times slower than Cray-1, yet it still manages feats of data processing that computer programmers can only dream about. So to increase their computing power, supercomputers are designed to use a range of techniques that are more subtle than mere increases in speed and size. Supercomputers, unlike conventional computers, are multiprocessors. Instead of having a single central processor or 'brain', they possess many processing units. The S-1 supercomputer, developed by the US Navy, is made up of sixteen individual uniprocessors, each a computer in its own right, but all sharing a main memory. Each uniprocessor can operate independently or, on large problems, jointly.
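The multiprocessor idea can be sketched in a few lines. The details here are illustrative - a pool of worker threads standing in for the S-1's sixteen uniprocessors, and a sum of squares standing in for a real large problem - but the shape is the same: the job is split into chunks, each 'uniprocessor' works on its own share of the shared data, and the partial answers are combined.

```python
# Sketch of a multiprocessor at work: one big job, sixteen workers,
# one shared pool of data.
from concurrent.futures import ThreadPoolExecutor

def uniprocessor(chunk):
    """One processor's share of the work (here, summing squares)."""
    return sum(x * x for x in chunk)

data = list(range(1_000))              # the 'large problem'
n_procs = 16                           # the S-1 had sixteen uniprocessors
chunks = [data[i::n_procs] for i in range(n_procs)]

with ThreadPoolExecutor(max_workers=n_procs) as pool:
    total = sum(pool.map(uniprocessor, chunks))

print(total)   # same answer as one processor doing all the work
```

Sixteen workers do not automatically mean sixteen times the speed - the chunks must be combined, and the processors compete for the shared memory - but on large, divisible problems the gain is substantial.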

Another supercomputer technique is pipelining. A simple computer instruction (for example, to add together two numbers) requires up to ten steps, most concerned with fetching and carrying rather than with arithmetic. The two numbers to be added have to be located in the memory, transferred to the processor for addition, and then the result has to be stored in some other numbered memory location. In simple computers this is carried out one step at a time, with each instruction finished before the next begins. With supercomputers, the instructions are cut up into these small stages and the stages handled independently, like an assembly line: while one instruction is being added, the next is already being fetched. It is this overlap that makes them so superfast.
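The arithmetic of the assembly line makes the gain concrete. Taking the article's figure of up to ten steps per instruction: done one step at a time, a batch of instructions costs ten ticks each; pipelined, a new instruction enters the line every tick, so only the first one pays the full ten-tick price.

```python
# Pipelining as an assembly line. With s stages and k instructions:
#   unpipelined: every instruction takes all s ticks     -> s * k
#   pipelined:   a new instruction enters each tick      -> s + k - 1

def unpipelined_ticks(stages, instructions):
    return stages * instructions

def pipelined_ticks(stages, instructions):
    return stages + instructions - 1

s, k = 10, 1_000_000      # a ten-step instruction, a million additions
print(unpipelined_ticks(s, k))   # 10,000,000 ticks
print(pipelined_ticks(s, k))     # 1,000,009 ticks
```

For long runs of instructions the speed-up approaches the number of stages - here, nearly tenfold - without the clock ticking any faster.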

Figure 1 [Next to a picture of the Cray-1] Costing over $10 million and weighing over 4 tonnes, the Cray-1 is one of the most powerful supercomputers commercially available. Designed to solve the complex spatial problems encountered in aerodynamics, Cray-1 can compute at a peak rate of 100 million arithmetical operations per second. The computer's 'brain' and memory are housed in the twelve-sided structure in the foreground. The input/output system is made up of three small, fast computers with extended memory and is housed within the four-sided segment to the left. The padded 'seats' which surround both units contain the power systems.

Figure 2 [Next to 4 images, each a screenshot of a rendered partially transparent airflow simulation] Supercomputer simulation of the airflow over the rear of a rocket travelling just below the speed of sound, predicted at a quarter of a million points. The simulation required over 10 billion arithmetical operations and took 18 hours. The simulation presents the rocket from all angles and depicts a horseshoe-shaped region where the airflow over the rocket reaches the speed of sound. Computer simulations of airflow closely resemble results from wind tunnel tests, but are much cheaper and allow designs to be changed and tested simply and swiftly.

Figure 3 [Next to a small photo of the internal wiring] The maze of wires which interconnect the 1600 circuit boards contained within the central processor of Cray-1. Each of the 300000 wires is cut to the same length to ensure that signal transmission times between two points are constant. This allows the computer to tackle thousands of calculations at the same time with different circuits working in parallel.

Figure 4 [Next to an architecture diagram of supercomputer compared to simple computer] Simultaneous calculation characterizes the supercomputer and can be seen in its basic structure, or architecture. Unlike conventional computers, supercomputers do not have one or two central processors but possess many 'uniprocessors' - each a computer in its own right. Furthermore, the uniprocessors are linked to the memory in such a way that when the computer is given a complex problem it can instantly break it up and begin tackling it from all sides at once. Combined with pipelining - overlapping the stages of each instruction - this is the fundamental difference between supercomputers and conventional 'step-by-step' computers and is the key to their incredible ability to crunch numbers.

Figure 5 [Next to six images - two rows of three - of simulated airflow; four are wireframe, the other two are wireframe but with hidden lines removed (one on each row)] Simulating the airflow over a three-dimensional object requires a computer of immense power and flexibility. In these projections of the aerodynamics of the rear part of a rocket, supercomputer Illiac IV was programmed to predict the effects on the airflow of altering the angle at which the rocket pierces the air - the angle of attack. By comparing the top displays, which represent a four-degree angle of attack, with the lower series, which correspond to a 12-degree angle, it can be seen that turbulence increases with angle. The computations are further broken down to illustrate the changes in air pressure over the rocket, the horseshoe of accelerated air, and the shearing effect of the air adjacent to the rocket body. Simulations of this level of complexity afford engineers the opportunity to fly their rockets even before they are built.