The Long-Awaited Release of ILLIAC IV

As people turned to computers for more and more of their work, many found these machines too slow to keep up with their needs, so manufacturers began looking into new possibilities. One approach was orthogonality in instruction set design, in which any operation can be combined with any addressing mode; some embraced it, but the richer instruction sets it implied were costly to build and slowed the performance of numerous machines. So manufacturers began brainstorming other ideas and started experimenting with massively parallel computers that could process huge data sets. One of the first such attempts was the ILLIAC IV.

It started when the U.S. Defense Advanced Research Projects Agency tapped the University of Illinois to produce a parallel computer. Headed by Daniel Slotnick, the team began designing the machine. The goal was for ILLIAC IV to perform one billion floating-point operations per second, a feat no other machine had achieved. The design called for four control units directing 256 processing elements (PEs) in groups of 64, ECL integrated circuits, a Burroughs B6500 mainframe as a front end and a 64-bit word, although only a single 64-PE quadrant was ultimately built. New programming languages were also created for parallel computing, namely IVTRAN, TRANQUIL and GLYPNIR.
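The design described above is what would now be called SIMD (single instruction, multiple data): one control unit broadcasts each instruction, and every processing element applies it to its own operands in lockstep. The sketch below is a minimal Python illustration of that idea; the instruction names are invented for the example, and only the 64-PE count comes from the machine itself.

```python
# Minimal sketch of SIMD-style lockstep execution, loosely inspired
# by ILLIAC IV: one control unit broadcasts an instruction and every
# processing element (PE) applies it to its own local operands.
# Illustration only; this is not the machine's real instruction set.

NUM_PES = 64  # one ILLIAC IV quadrant had 64 processing elements

def broadcast(instruction, local_a, local_b):
    """Apply one broadcast instruction across all PEs in lockstep."""
    ops = {
        "add": lambda x, y: x + y,
        "mul": lambda x, y: x * y,
    }
    op = ops[instruction]
    # Each PE works on its own element; conceptually all of these
    # happen during the same machine cycle.
    return [op(a, b) for a, b in zip(local_a, local_b)]

# Each PE holds one element of each vector in its local memory.
a = [float(i) for i in range(NUM_PES)]
b = [2.0] * NUM_PES

result = broadcast("add", a, b)  # one instruction, 64 additions
print(result[:4])                # [2.0, 3.0, 4.0, 5.0]
```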

When ILLIAC IV was almost complete, it was moved to NASA's Ames Research Center because the university had become hounded by controversy. The public grew alarmed and suspicious that the university was working closely with the U.S. Department of Defense and playing a part in its nuclear programs. Fearing for the project's safety, the team moved the machine to NASA, where the first simulation tests were run. From then on, ILLIAC IV was instrumental in countless operations at NASA; it was well suited to processing the data required for NASA's computational fluid dynamics work.

During its run, ILLIAC IV ranked among the fastest computers in the world. Its design and development inspired other manufacturers to build their own parallel computers, and it laid the groundwork for the vector processors behind some of the most successful computers of the era.

The Invention of the Apollo Guidance Computer

The government and a number of research centers saw the potential and benefits of using computers in their projects and analyses, so it was no wonder that NASA tapped computer engineers to build an onboard digital computer. It was around the time the space race was at its peak: the United States and the Soviet Union were each trying to outdo the other with the best spacecraft and the best innovations in the field. NASA believed that an advanced computer would propel it forward and seal its spot at the top.

NASA tapped the engineers at the MIT Instrumentation Laboratory. Led by Charles Draper, the team concluded that discrete transistors were not practical for the design. Luckily, Fairchild Semiconductor had recently released integrated circuits to a positive reception, and Draper and his team decided it was worth the risk to use them instead. Built with resistor-transistor logic, the integrated circuits worked and sped up the building of Apollo's first digital computer. The Apollo Guidance Computer was completed in 1966.

The computer's interface, called the DSKY (short for display and keyboard), had three parts: the keyboard, the indicator lights and the numeric display. Commands were entered in numeric form, with users keying in set sequences to represent word commands. But Draper and his team encountered another problem. Speed was a priority, and they could not achieve it with the word size originally planned, so they reduced the word from 24 bits to 16 bits to gain speed, relying on multiple-word arithmetic wherever higher precision was needed.
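As a rough illustration of multiple-word arithmetic, the sketch below splits a value too wide for one short word across two words and reassembles it. It is deliberately simplified, using unsigned 14-bit halves, and ignores the AGC's actual one's-complement, sign-plus-parity word format.

```python
# Simplified sketch of double-precision storage on a short-word
# machine: a 28-bit unsigned value is kept as an upper and a lower
# 14-bit word, then reassembled. The real AGC used 15-bit words with
# one's-complement sign conventions; this example ignores all that.

WORD_BITS = 14
WORD_MASK = (1 << WORD_BITS) - 1  # 0x3FFF

def split_double(value):
    """Split a 28-bit value into (upper, lower) 14-bit words."""
    return (value >> WORD_BITS) & WORD_MASK, value & WORD_MASK

def join_double(upper, lower):
    """Reassemble a 28-bit value from two 14-bit words."""
    return (upper << WORD_BITS) | lower

value = 123_456_789                 # fits in 28 bits
upper, lower = split_double(value)
assert join_double(upper, lower) == value
print(f"upper={upper:#06x}, lower={lower:#06x}")
```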

The digital computer was fairly successful upon its release, but it was discontinued a few years after its initial launch, with its limited memory a persistent problem. When NASA tapped MIT, it merely provided a set of general tasks and requirements; no specific details were given, so MIT built the computer to those broad specifications. During actual missions, however, the computer's memory proved very limiting despite the use of core rope memory. Even after fixed memory grew from roughly 4,000 to 36,000 words and erasable memory from 256 to 2,000 words, it still wasn't enough to fully carry out all the requirements of the Apollo missions.

The CDC 6600 Supercomputer

The 1960s saw vast improvements in the speed and capacity of computers. Manufacturers tried to outdo one another by hiring the best engineers in the field and asking them to build a better computer than anything the industry then had. That rivalry spurred many computer models that served the growing needs of different industries and companies. One of these was the CDC 6600 supercomputer, which outclassed every other model of its time and became the most powerful and fastest computer in the world immediately upon its release.

Seymour Cray was tasked by the Control Data Corporation to build the fastest computers possible. He had previously designed the CDC 1604, which became the fastest transistorized machine on the market. Following that success, CDC wanted him to deliver a faster and far more capable machine. Cray took on the task, but he knew that to achieve the goal he had to look beyond the industry's current capabilities. What followed was a series of closed-door meetings and brainstorming sessions with his team. He detailed everything that was needed and estimated that the task would take five years to complete.

But Cray was undeterred. Despite management objections, he proceeded as planned. He used newly released silicon transistors in place of the germanium transistors he had used for the 1604, a change that contributed hugely to the machine's increased speed. With the help of system architect Jim Thornton, the CDC 6600 was delivered in 1964.

There were a number of breakthroughs in the 6600. Its central processor no longer handled a multitude of tasks: Cray and his team designed it to perform only logic and arithmetic operations, and built separate peripheral processors to handle everything else, such as input and output. This increased the 6600's speed by a huge margin over other models, and it was sought after by a number of institutions, including the CERN laboratory, university computing centers and laboratories engaged in nuclear weapons research.
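That division of labor can be sketched in miniature: a "central processor" loop that does nothing but arithmetic, while separate "peripheral processor" workers service I/O requests from a queue. The Python threads below are a loose analogy under that assumption, not a model of the 6600's actual hardware.

```python
# Loose analogy to the CDC 6600's division of labor: the "central
# processor" does only arithmetic, while "peripheral processors"
# service I/O requests from a queue. Threads stand in for hardware.
import queue
import threading

io_requests = queue.Queue()

def peripheral_processor(pp_id):
    """Handle I/O so the central processor never has to."""
    while True:
        message = io_requests.get()
        if message is None:          # shutdown signal
            break
        print(f"PP{pp_id}: {message}")
        io_requests.task_done()

# Start a few peripheral processors (the real 6600 had ten).
pps = [threading.Thread(target=peripheral_processor, args=(i,))
       for i in range(3)]
for pp in pps:
    pp.start()

# The "central processor": pure arithmetic, with I/O handed off.
total = 0
for n in range(1, 1001):
    total += n * n                   # compute-only work
io_requests.put(f"sum of squares = {total}")

io_requests.join()                   # wait for pending I/O to drain
for _ in pps:
    io_requests.put(None)            # stop each peripheral processor
for pp in pps:
    pp.join()
```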

IBM Invents System/360

In 1964, IBM released a computer system that revolutionized the history of computing: the IBM System/360. It became such a huge hit that even today few would deny that System/360 remains one of IBM's most influential inventions, and many succeeding computers were designed in its image.

It all started when chairman Thomas Watson Jr. discussed with Fred Brooks the diverse needs of users and the inability of the company's current line of products to address them all. True, IBM's products were sought after by its clients, but to expand the business the company needed to expand its clientele, and the problem lay in the fact that the existing product line was limited. Brooks had an idea: why not design a family of computers that would cater to the needs of every one of their clients? The series would share the same instruction set, which would let users start with one machine and move up to a more powerful system when the need arose.

Brooks immediately went to work. With Gene Amdahl as architect, Brooks led the team in using microcode across the computer models. The family of computers covered both commercial and scientific use, with models ranging from high-performing systems down to slower, cheaper ones, deliberately matched to client preferences: users could choose whether to invest in an expensive, fast-performing system or in a cheaper model, albeit a slower one with fewer capabilities.
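The idea of one instruction set implemented differently on each model can be sketched as follows: the same tiny program runs unchanged on a "slow" and a "fast" implementation that differ only in how much internal work each instruction costs. The instruction set here is invented for the example and is not System/360's real one.

```python
# Toy illustration of a shared instruction set running on two models
# of differing speed, in the spirit of the System/360 family concept.
# The instructions below are invented for this example.

PROGRAM = [
    ("LOAD", "r0", 7),      # r0 <- 7
    ("LOAD", "r1", 5),      # r1 <- 5
    ("ADD",  "r0", "r1"),   # r0 <- r0 + r1
    ("PRINT", "r0"),
]

def run(program, steps_per_instr):
    """Execute the shared instruction set; `steps_per_instr` stands
    in for how many internal microcode steps a given model spends on
    each instruction."""
    regs, steps = {}, 0
    for instr in program:
        op = instr[0]
        if op == "LOAD":
            regs[instr[1]] = instr[2]
        elif op == "ADD":
            regs[instr[1]] += regs[instr[2]]
        elif op == "PRINT":
            print(regs[instr[1]])
        steps += steps_per_instr     # slower models burn more steps
    return steps

# The same program, unchanged, on a low-end and a high-end "model".
print("low-end model steps:", run(PROGRAM, steps_per_instr=30))
print("high-end model steps:", run(PROGRAM, steps_per_instr=1))
```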

By targeting the low-, mid- and high-end markets, IBM was able to capture the public's interest. It released Models 30, 40, 50, 60, 62, 65, 70 and 75 to huge success, with the models suited to different applications such as scientific calculation and process control. IBM likewise offered emulation programs on the models, which allowed users to run their old programs on the new system if and when they chose to upgrade to a faster model.

System/360 proved to be a cut above its contemporaries, containing a number of advances that no other machine had at the time of its launch. It used byte-addressable memory, unlike rivals that could only address whole words; it also used floating-point architecture, the bus-and-tag I/O channel and the EBCDIC character set. Most importantly, it was among the first major product lines built on microcoded CPUs, a feature that allowed IBM to span an entire product range and ensure constant demand from its customers.
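To make the addressing point concrete, the sketch below contrasts byte addressing, where every byte has its own address, with word addressing, where only whole words can be fetched and stored. A Python bytearray stands in for memory, and the 4-byte word size is chosen just for the example.

```python
# Sketch of byte- vs. word-addressable memory, with a bytearray as
# the backing store. The 4-byte word size is an arbitrary choice.

memory = bytearray(16)           # 16 bytes of "memory"
WORD = 4                         # bytes per word

# Byte-addressable: any single byte can be read or written directly.
memory[5] = 0xAB
print(hex(memory[5]))            # 0xab

# Word-addressable: only whole words move. To change one byte you
# must fetch the containing word, modify it, and store it back.
def load_word(addr):
    base = (addr // WORD) * WORD
    return bytearray(memory[base:base + WORD])

word = load_word(5)              # fetch the word holding byte 5
word[5 % WORD] = 0xCD            # modify the byte inside the word
memory[4:8] = word               # store the whole word back
print(memory[4:8].hex())         # 00cd0000
```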

MIT Invents the Whirlwind Computer

MIT played a significant role in the development of computers; a number of innovative designs and inventions took shape in its laboratories. One of the most important was the Whirlwind computer, which became a basis and foundation for the design of succeeding machines.

The design was conceived during World War II. The U.S. Navy needed a computer to run simulations that would train its crews to deliver bombs accurately, and the machine had to be able to represent the aerodynamics of any type of plane, aiding in training the men toward accuracy and proficiency. MIT deemed it possible, and the researchers and engineers at the Servomechanisms Laboratory took on the challenge.

But during the early stages, the team, headed by Jay Forrester, realized that the task looked impossible with the analog computer they had in mind; it would not stand up to the U.S. Navy's expectations. They even thought of withdrawing from the project, but it had already been funded. Luckily, they came across the ENIAC and saw that switching to a digital computer might solve the problem. All they needed was a program that would ensure the accuracy of the simulation.

Speed was a critical element. Whirlwind could not make do with the bit-serial mode used in most computers of the day, in which the bits of a word pass through the arithmetic circuits one at a time. The team needed something more and eventually settled on a bit-parallel design, which made Whirlwind far faster than any other computer at that time, hence the name. It works this way: instead of streaming a word through the machine bit by bit, Whirlwind operated on all 16 bits of a word at once, so an operation such as an addition took a single step rather than one step per bit.
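The difference can be sketched in a few lines of Python: a ripple-carry loop stands in for bit-serial hardware that handles one bit per cycle, while Python's native addition stands in for a parallel adder. The 16-bit word width matches the description above; the cycle counts are only illustrative.

```python
# Sketch of bit-serial vs. bit-parallel addition on 16-bit words.
# The serial version mimics hardware that handles one bit per cycle;
# the parallel version handles all 16 bits in a single step.

WORD_BITS = 16
MASK = (1 << WORD_BITS) - 1

def add_bit_serial(a, b):
    """Add two 16-bit words one bit per 'cycle', like a serial ALU."""
    result, carry, cycles = 0, 0, 0
    for i in range(WORD_BITS):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        result |= (total & 1) << i
        carry = total >> 1
        cycles += 1                   # one cycle per bit position
    return result & MASK, cycles

def add_bit_parallel(a, b):
    """Add all 16 bits at once, like a parallel ALU: one step."""
    return (a + b) & MASK, 1

a, b = 0x1234, 0x0FF0
print(add_bit_serial(a, b))    # (8740, 16) -> same sum, 16 cycles
print(add_bit_parallel(a, b))  # (8740, 1)  -> same sum, 1 cycle
```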

During this time, the Whirlwind project attracted the attention of the Air Force, which wanted to use it to track aircraft. So Forrester and his team increased its memory capacity to 1,024 words and raised the number of vacuum tubes used to around 5,000.

The Whirlwind computer was finally completed in 1951 for the Air Force's use. It was able to track aircraft signals from a number of radars: a radar would send data about an aircraft, and the signal would travel through telephone lines before finally reaching the Whirlwind computer. It helped the Air Force's operations so significantly that it became the most important computer of its time.