History

The history of computing at CERN is complex, driven by need, resources and a certain amount of serendipity. For a broader view on our computing history, with resources, testimonials and original sources, see CERN Computing History.

For the IT Department Corridor "History" posters, please see http://cds.cern.ch/record/2750600

Below, we outline this history for the benefit of the Data Centre VisitPoint.

Networking

From using a bicycle to hosting Europe's first internet exchange point, CERN was a pioneer in developing international connectivity.

Local Area Networking - LAN

Before networks as we know them today were in wide use at CERN, data was moved by transporting tapes to the Computer Centre by bicycle. The bicycle was such a popular means of carrying tape that it was officially known as "BOL"... Bicycle On-Line (show bicycle video).


Wide-Area Networking - WAN

Data exchange between CERN and other laboratories/universities was done by sending the data via the postal service or crating it up and shipping it by lorry.

Homegrown network: CERNET

By the 1970s, the quantities of data produced had exploded and moving data by bike was no longer fast enough.

CERN decided to create its own internal network in 1974, called CERNET, and it was one of the world's fastest networks. All the network equipment had to be developed and built here at CERN - our engineers liked to include their own portraits on the cards in the space between components. The CERNET card you can see here was part of the network hubs.

Adopting standard network technology and trailblazing international connectivity

In the 1980s and early 1990s, Ethernet became more popular and provided a much faster data transmission rate. This cable is one of the first Ethernet cables from 1983, a thick, bulky affair. Computers were attached via "vampire taps", connectors screwed straight through the shielding of the cable.

On an international level, CERN spearheaded the development of networking in Europe. In 1989, CERN created the CERN Internet Exchange Point, CIXP, which was the first ever European internet hub, and the following year we set up the first ever transatlantic link to the USA. By 1991, 80% of Europe's international network traffic was passing through CERN.

Networks today and Big Data

Experiments at the LHC produce big data, and we need to transmit this data to hundreds of computer centres around the world. All transmissions rely on fibre-optic cables and routers like the ones you see here. These fibre-optic cables can carry hundreds of Gbit/s per fibre. All this data is routed with the help of high-speed routers such as the Brocade MLX, which can handle the equivalent of 100,000 connections per second.
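To give a feel for these numbers, here is a back-of-the-envelope sketch of how long one year of LHC data would take to move over a single link. The 30 PB/year figure comes from the tape table below; the link speeds are illustrative assumptions, not a description of CERN's actual network topology.

```python
# Illustrative arithmetic only: time to move ~30 PB over a single link
# at a few assumed link speeds (decimal units: 1 PB = 10**15 bytes).

PETABYTE = 10**15   # bytes
GIGABIT = 10**9     # bits

data_bytes = 30 * PETABYTE

for link_gbps in (10, 100, 400):
    seconds = data_bytes * 8 / (link_gbps * GIGABIT)
    print(f"{link_gbps:>3} Gbit/s link: ~{seconds / 86400:.0f} days")
```

At an assumed 100 Gbit/s, 30 PB takes roughly a month of continuous transfer, which is why the data is spread across many links to many centres in parallel.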

See also 

Ben Segal - "History of Network Protocols" (or http://ref.web.cern.ch/ref/CERN/CNL/2001/001/tcpip/Pr/ )

François Flückiger - "The role of CERN in the internet"

François Flückiger - "The European Researcher's Network"

 

CPU/Processing

From super-computers, to mainframes, to commodity mass-produced CPUs, the evolution of processing at CERN has closely followed market trends.

CERN's first computer, a huge vacuum-tube Ferranti Mercury, was installed in 1958, starting the 40-year reign of the mainframe at CERN. It took three months to install and filled a huge room. The complexity of its circuitry is nowadays matched by the electronic chip in a musical birthday card.

More powerful computing was needed to deal with the increase in data. An IBM 709 was installed in 1961. Its CPU was five times faster than that of the Mercury, but it came with a price tag equivalent to 50 million Swiss francs. Shortly afterwards it was replaced by the IBM 7090, a transistorized version of the same machine. This marked the end of the valve machines, as machines with transistors were more reliable, compact and efficient.

More powerful computing was needed again... in 1965 the first CDC machine arrived at CERN - the 6600, designed by computer pioneer Seymour Cray, with a processing power ten times that of the IBM 7090. It was joined by the CDC 7600, the most powerful computer of its time. After a very painful run-in period, the CDC 7600 provided the bulk of the computing power needed by CERN for almost 12 years. No system has lasted so long since.

The IBM 370/168 was the starting point for 25 years of IBM-based services in the computer centre. This machine had dependable tape drives, as well as other new automated peripherals such as robotic storage and a laser printer. Here you can see one of the multi-chip CPUs from the water-cooled IBM 3090. At its peak around 1995, the IBM service provided the processing power of around a quarter of today's top PCs.

The colourful CRAY X-MP arrived at CERN in 1988 to provide computing for the LEP program, and was particularly suited to accelerator design and other engineering challenges.

The era of the supercomputer was brought to an end by a new computing model: a single big, expensive machine was replaced by large numbers of computers. Cheap to buy, easy to repair and replace, these thousands of PCs form the backbone of today's computing power - PCs just like the ones you have at home. Here you can see a sample of today's CPU+disk units - the CERN computer centre has hundreds of racks of these, as you will soon see. Together, they are over a million times more powerful than our first computer from 1958.

Storage

Storage at CERN saw the evolution from manual to automatic search/retrieve, packing more and more data into increasingly dense cartridges. We have had to keep storage as inexpensive as possible due to the vast quantities of data which need to be stored.

Tape Reels - the only technology available. Manual and slow...

From the beginning of computing at CERN in the 1960s onwards, the 9-track tape reel was the standard storage medium. Reels did not hold very much data compared with today's media, and they needed manual intervention to retrieve and load. From requesting a particular tape to being able to read it, you could wait up to one or two days.

Move to automated retrieval

Physicists wanted to get their results more quickly. To make retrieval faster, industry introduced automated tape retrieval systems, such as this honeycomb/robot system. 9-track reels were replaced by round cartridges, and while a single cartridge held less data than a single 9-track reel, the cartridges were much more compact and could be retrieved automatically. Access time went from days or hours down to minutes.

LEP - more data, more users, faster. Standardisation of tape cartridges to today's familiar form-factor

Better tape technology and even more advanced robots meant more data could be stored and retrieved even faster - vital during the LEP days. The tape cartridge was standardised to this square shape to be compatible with different robots, and the tape inside could hold more and more data for the same size. Compare the "old" honeycomb cartridges, which could hold 50 MB, with this "new" square cartridge, which can hold the equivalent of 1100 DVDs.
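The "1100 DVDs" comparison can be checked with quick arithmetic, assuming a standard single-layer DVD capacity of about 4.7 GB (an assumption for illustration, not a figure from the exhibit):

```python
# Rough arithmetic behind the cartridge comparison.
DVD_GB = 4.7                                  # assumed single-layer DVD capacity
cartridge_tb = 1100 * DVD_GB / 1000           # modern cartridge, ~5.2 TB
honeycomb_mb = 50                             # "old" honeycomb cartridge

ratio = cartridge_tb * 10**6 / honeycomb_mb   # capacity growth factor
print(f"modern cartridge: ~{cartridge_tb:.1f} TB, "
      f"about {ratio:,.0f}x a honeycomb cartridge")
```

So one modern square cartridge holds roughly 100,000 times more than one honeycomb cartridge.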

Sophisticated and compact data storage technology

Often people wonder why we still use tape, as they see it as an old-fashioned technology. Tape is actually a very sophisticated storage medium and it can store huge volumes of data. For example, the data which was stored on thousands of reels for the 1990s OPAL experiment now fits on one of today's cartridges. Tape is inexpensive, compact, doesn't consume much electricity, and is durable for long-term storage. With the data tsunami from the LHC, being able to quickly retrieve petabytes of stored data is essential for physicists to make ground-breaking discoveries.

Further info

| Typical yearly data flow from CERN experiments | Tape drive technology | Capacity | Transfer speed | Number of tapes in storage in the CC | Note |
|---|---|---|---|---|---|
| Around 1 GB per experiment (PS / ISR / advent of wire chambers) | 7- and 9-track 1/2-inch tape reels | Up to 160 MB | 1.25 MB/s | 50,000 | Tape system seen as an inseparable part of the computer. Tapes exchanged with other institutions. |
| Around 12 TB per experiment? (SPS) | IBM 3850 MSS (honeycomb) | Up to 50 MB (used in pairs to match the disk size for backup) | N/A | ~2,000 | First automatic tape-mounting system in use at CERN. Seen as very reliable compared with the previous generation. Used in parallel with the above. |
| ? (LEP) | IBM 3480 | 200 MB | 3 MB/s | Over 100,000 | Humans used to mount tapes. Cartridge cover form factor established and later used by other vendors. |
| Up to 30 PB/year (LHC) | IBM TS11X0, Oracle StorageTek T10000X | Up to 8.5 TB | 250 MB/s | Over 50,000 | All tapes in fully automated tape libraries. |

  • the significance of the "honeycomb" is that it marked the start of robot mounting of tapes at a significant scale at CERN. (MS)
  • the two Oracle systems filled with early cartridges, whose combined capacity is comparable with one new tape, make a good visual comparison. (MS)